We did have a volunteer [update: two] present at the hearing on Google's motion to snip off and discard chunks of Oracle's 3rd attempt at an acceptable damages report in the Oracle v. Google case. This is about what, if anything, the jury gets to hear from Oracle's expert as to his opinion of how damages should be calculated, if Oracle should prevail, which frankly is looking less and less certain. A huge thank you to Steve Finney for making such an effort to be there.
Oracle's expert, Dr. Ian Cockburn, was in attendance, as ordered by Judge William H. Alsup, but he didn't get asked anything. The judge had asked the parties to be ready to answer some very specific questions [PDF], and he did ask both sides plenty of questions.
Here are the questions the judge said he wanted answered:
For the hearing on Wednesday, the Court would like to learn the
following. How did Dr. Ian Cockburn choose which studies to rely on for
the patent-value curves? Do studies not chosen, such as those listed in
Harhoff et al., have less skewed curves? Please bring copies of
all references and studies with patent-value curves, not just the three
selected by Dr. Cockburn. Based on the three studies cited, what is the
confidence interval for the proposition that the top 0.5% of patents are
worth 32.7% of the value (also, that the top 4% of patents are worth
10.2%)? What would be the value of the ’104, ’205, and ’720 patents if
they ranked as the bottom three of the “top” 22 patents? For all
statistical analysis, including the conjoint analysis, the Court is
interested in the confidence intervals of the results. Under the group
and value approach, what is the separate value of each patent in suit?
Both sides shall exchange whatever illustrative materials they plan to
use by 5:00 P.M. ON TUESDAY. Please come prepared to hand up
precise evidence to back up assertions.
That gives you a hint as to what the judge thought wasn't so great about earlier attempts at damages reports, I think. He wants specifics.
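For context on these "patent-value curve" studies: they estimate how a portfolio's total value is distributed across its patents, and the distributions are famously skewed. Here is a minimal sketch, using a hypothetical lognormal portfolio (not data from any study Dr. Cockburn cites), of how a figure like "the top 0.5% of patents hold X% of the value" gets computed:

```python
import random

random.seed(0)

# Hypothetical portfolio: 10,000 patent values drawn from a lognormal
# distribution, a common stand-in for heavily skewed value data.
# None of this reflects the actual studies cited in the Cockburn report.
values = sorted((random.lognormvariate(0, 2.0) for _ in range(10_000)),
                reverse=True)
total = sum(values)

def top_share(pct):
    """Fraction of total portfolio value held by the top pct% of patents."""
    k = max(1, int(len(values) * pct / 100))
    return sum(values[:k]) / total

for pct in (0.5, 1, 4, 20):
    print(f"top {pct}%: {top_share(pct):.1%} of value")
```

The takeaway is that modest changes in the assumed distribution move the tip-of-the-tail shares around dramatically, which is exactly what the judge's confidence-interval questions are probing.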
Steve has sent the first half of his notes and will now take a moment to breathe and eat something, and then he'll work on the next half. It's not, of course, a verbatim account, but we'll get that in 90 days in the transcript. This is to provide the overall picture and the highlights. He has put his comments about the proceedings in brackets.
The first report takes us up to the break in the proceedings today in Judge Alsup's courtroom in US District Court for the Northern District of California in San Francisco:
Hi PJ and Mark: Here is my description of the 1st half
of the morning hearing. None (except for one marked quote)
is verbatim; much is close paraphrase, but in some places
I summarized discussion in what is more of a loose paraphrase
(which doesn't put arguments in anyone's mouth, but may disrupt
the flavor). It's obviously not good when a judge tells you you aren't convincing him.
However, since it does appear as if I'm putting words in
people's mouths, perhaps this should best be viewed as
a dramatic reenactment involving 3 characters, TC (The Court),
Oracle, and Google, which bears some resemblance to (but is not
identical with) this morning's proceedings.
My comments about the proceedings are in [brackets].
The hearing started promptly at 7:30 AM. I didn't get all the lawyers'
names. For Oracle, Michael Jacobs of Morrison & Foerster was present, and I believe Steve Holtzman of Boies Schiller, but
it seems as if someone named "Norton" did the bulk of the talking.
(He had a reddish beard and mustache, and light-rimmed
glasses; no other Oracle lawyers had facial hair).
For Google, Robert Van Nest was present, but I believe a Mr. Purcell did
the bulk of the talking.
[Courtroom 8, SF Federal Building]
Judge Alsup: I want to start with a big picture question. When we talk
about the 2006 100 million dollar offer, was that per year, or a one-time payment?
Oracle: This was to be paid over a 3-year period, but this was still
under negotiation. It was not a full payment and would be
renegotiated after 3 years.
Judge: This is an important question. Was this a one-time paid license,
or would more money be paid over time? This affects the issue of
damage calculation on a royalty basis. What did the 100 million pay for?
Google (Van Nest, this time): I'm not sure. This was something
that should have been covered in the Cockburn and Leonard reports.
I believe it was a one-time payment that covered everything.
[Judge Alsup seemed concerned that the point isn't known for sure.]
Judge: Next question: Are current 2012 damages numbers the same as in earlier reports?
Oracle: Not identical, but compatible. It depends on whether you look at
independent significance analysis or group and values analysis. [Oracle
refers to provided charts.] The '476 patent has been dropped
so that changes things a little.
Judge: Question 3: Could you check my math on the 10% line item?
I came up with $32 million for patents, but Oracle's number was
different (44.8 million for all 5 patents).
Oracle: [Discussion I couldn't follow, referencing slides and various curves.
Somehow the discussion transitioned into the issue of patent value
curves. In the patent curve discussion,
I'm summarizing things Judge Alsup and Oracle said over the course of 5 or 10
minutes of discussion. It's not necessarily in order, but the
content should be accurate.]
Judge: Here is a fundamental question. Oracle cites 3 studies
about patent value. If you look at the top 20% of the 3
different curves, they're quite similar, and I'm willing to
accept that. It matches the standard 80/20 rule.
Even the 10% numbers are at least similar.
But when you go below that, it diverges a lot.
When you look at just the top 1%, the numbers for the
3 studies are quite different (42%-78.4%). And three studies
is a very small number. Can you
say, with any statistical confidence, that a fourth
study would even fall within that range? We are not
just looking at the tail of the distribution; we're looking
at the tip of that tail. Don't we need more samples?
What kind of confidence interval?
Oracle: [provides a list of about 5 studies, omitting
Barney (it was not a "survey" study), but a few more than originally].
We only looked at studies that surveyed patent value that also
provided sufficient data to do a patent distribution curve.
The Cockburn report cited multiple studies in support of the
concept of skew, but only the PatVal report was used
for Cockburn's quantitative analysis.
Even five studies are not sufficient to calculate a confidence interval.
Putnam [some case reference] is an example where a single
curve was used to calculate damages, and it was accepted.
Judge: Was that a Federal case?
Oracle: No. [PJ: We think after putting all our heads together that it was a Federal case, but not a *jury* trial, rather one with only a judge making the decision based on the law only.]
Judge: So, just a bench judge. Did the judge address the issue
of the tip of the tail?
Judge: OK, it sure looks like Cockburn used three studies [cites paragraphs
in Cockburn report], but you're saying that only the PatVal study was
used for numbers. So that's only one data point.
Oracle: It was the best study -- 23,000 data points. If there's a gold-standard
study [makes a hypothetical reference to a hypothetical well-funded NIH cancer
study], that's all you need.
Judge: Where in the Cockburn report is the defense of the PatVal study?
Oracle: It's not in the report. It's in the depositions.
Judge: But by Rule 26, all the relevant information has to be in
the report itself. [Something about the depositions not being ...]
Oracle: It would be an insurmountable obstacle to comprehensively
defend the methodology in the report itself.
Judge: It is critical that the statistics of the patent value
calculation be correct.
"You're not convincing me". [That's a direct quote.]
Oracle: Cites Lucent. Some vagueness is acceptable. Cites Dainhu Tire (sp?)
case: "reasonable approximation" is acceptable.
These issues relate to weight, not methodology, so a jury
question, not Daubert. Providing a range of results is not
unsound. Cites a case about nuclear radiation exposure.
Judge: Google, what is your response?
Google: [Much of what Google said here duplicates what's in the briefs,
I may have not noted all of it.]
PatVal applies to a random patent selection, not to this
narrowly focused patent selection.
Judge: So what is so special about Sun?
Google: It's not a question of "Sun's portfolio". It's the fact that the
patents were explicitly selected from Sun's total portfolio (about
14,000 patents in 2006) to be of *particular* value. The chaff
has been removed.
Judge: Would that make the 500 patents more or less valuable?
Google: The 597 patents are presumably the most valuable. Any calculation
involving 100% of the deal value should be more evenly distributed
across the 597 patents.
Cites Schenkerman [sp?] study, which addressed specific technology
areas. There is still an obvious skew, but it's less than the studies
Cockburn cites. Top 1% had 24% of value; top 5% had 55% of value.
Oracle: The patent selection was done looking for patents relevant
to Java smartphones. It was patent selection done for a technology
area, not for value.
The Schenkerman study showed that electronics had a higher
[something: skew?] than other industries. If the PatVal study had
been restricted to electronics, it might be even more skewed towards the top.
Schenkerman was based on patent renewals. This will tend to
flatten the curve, and artificially reduce skew.
[Topic changes to issues about Oracle's use of in-house engineers to
do patent ratings. After a short exchange between Google and Judge Alsup, Judge Alsup
basically states that this is a cross-examination issue, not a Daubert issue.]
Google: Our next point is the indeterminacy of the Cockburn
report. In the independent significance analysis, Cockburn
uses a vague methodology and uses the phrase "at least", which
doesn't provide a limit.
Judge Alsup: Is "at least" used in the Cockburn report?
Google: No, it's in the depositions.
Judge: Only the original report can be used at trial. However,
there are no limits at cross-examination. It is Google that
will be at risk if they introduce issues relating to "at least"
during cross examination.
[Google moves on to groups and value.]
Google: The Oracle engineers chose the top 22 patents, but
rated those patents as of approximately equal value (they
could not distinguish them). This gives a lower bound of 17.7 million
if the 22 patents are rated equally, but 51 million if the
claim is that these 3 patents are the 3 most important
patents. The Oracle engineer data does not support the upper bound.
Judge: Where is the reference about engineers' inability to rate the patents?
[Google can't find it; the judge says they were supposed to be prepared.]
Oracle: We concede [possibly didn't use that exact word] that
the engineers couldn't distinguish technical
significance among the 22 patents in 2006, given the Google
[this is presumably whatever Android product requirements document that
was provided to the Oracle engineers in 2006].
However, Dr. Cockburn is an economist, and it's possible to distinguish
among the 22 patents based on economics/financial stuff.
Judge: In 2006, was there any reason to believe that these
3 patents [presumably the '104, '205, and '720] were the most important?
[Here and above, there was some discussion that I really couldn't follow.
Issues related to what happened in 2006 and what's happening
now, when the infringement occurred, hypothetical 2006 negotiations,
post-2006 development of Android, patents vs specific technology,
and the fact that the '720 patent wasn't issued till 2008 (!?)].
[9:11 AM: break (court reporter needed one)]
Some background for anyone new:
Google objections to the report,
here and here, and on who contributed to the third report
here. Does all this make you want to be a lawyer? Probably not. Now you know why lawyers are the second most sleep-deprived profession, or so I read the other day. All the above is done on deadline, and then the judge throws some extra questions at them a day or so before the hearing and says, be ready.
Oracle's side is here, and
Cockburn's declaration in support is here.
Two engineers swear on the Bible that Oracle is right
And if you are curious about the earlier reports that the judge wouldn't accept:
Dr. Cockburn's 2d report, not allowed in, is here.
The parties answer the judge's questions about the report:
Google's objections and the judge's response to that report are here.
Oracle's viewpoint is here.
The last hearing transcript about damages is here.
Fighting over the first report, which also was not accepted by the judge, is here.
It all began with a motion by Google, a type of motion called a motion
in limine, which is just the name for the kind that asks that
something in a report, or testimony by someone, not be
allowed at trial. So the issue is what does the jury get to hear?
Swing back by in a couple of hours, and we'll finish the report. We had a second volunteer tell us he'd try to attend as well, and we may hear from him by then too. Mark will also try to stop by and provide any explanations he thinks would help us understand some of the finer points.
Update: I just heard from our second eyewitness, Groklaw member mirrorslap, and here is his first report, with the second coming after he types up the rest of his notes:
Oracle v. Google
Update 2: And here's Steve's part 2:
Hearing on 3rd damages report
March 7, 2012
U.S. District Court
450 Golden Gate
I arrived at the court at 7:35AM and the proceedings had already started,
so I missed the introductions.
Mr. Van Nest was up for Google, against (I believe) Mr. Jacobs for Oracle.
Judge Alsup (JA) presiding.
My asides are in [brackets].
This is my first time trying to follow the proceedings, and my appreciation for the skills of court reporters has increased. I was able to grab some of the more pointed and interesting interactions but this is nowhere near a transcript of what happened. I have 20 pages of notes that I am transcribing.
They were discussing the Sun-Google negotiations and the terms of the proposed contract.
Judge: “Wouldn't that have been smart?”
Oracle: It would have been a 3-year, paid-up license.
Judge: [appears to be getting vexed with Oracle] “I hear you saying the sky is dark; then you say the sky is light.”
$100M is a 3-year term?
Oracle: It was a 3-year term for $100M. Google's infringement is greater than $100M.
Judge: So you pick $100M as a starting point and adjust for all the variables. Has this been adequately briefed?
Oracle: Not [at?] all.
Judge: Dr. Cockburn (pronounced co-burn) has dropped 2012 from the current damages report (in order to be able to ask for damages for 2012 separately). Are the new numbers after dropping 2012 the same, lower, or higher?
Oracle: Mostly the same. The lower bound is lower, the upper is higher.
[produces a large binder for the judge; there are no visual aids for the gallery so we cannot see any of this.]
Tab 1 of the binder shows the steps that Dr. Cockburn used to prepare the damages for the patent and copyright infringement.
Upper bound, patent infringements: $46.7M
Lower bound, patent infringements: $17.7M
(excluding the '476 patent, which has been dropped)
$57.1M damages upper bound
$43.7M damages lower bound
Judge: This will do... very helpful.
Oracle: Tab 2 not related to copyright, similar to “independent significance” approach.
Judge: Yesterday I sent out a request for a check on my math in calculating the 10% line on item 34.
Oracle: Items 6 and 7 address this; we did have to make adjustments for the Court's math [very diplomatically put], but there is some question regarding whether the Court used an average of the 3 curves or simply chose the middle one. The total should add up to $597M [not beeellions].
So the value for the patents-in-suit ...
Judge: A range of $32M to $44M for 5 patents, assuming equal value. $44M is the total for all 5?
Oracle: Under those assumptions, yes. The problem we [Oracle] have is that the curve is not linear; by treating the top 4% as if they are in the top 22%, it significantly reduces their value. This is the primary problem with the approach.
Judge: You have 3 sample [patent] portfolios. No one doubts that these curves have a disproportionate value at the ends. But Dr. Cockburn is trying to derive huge amounts based on 3 data points. When you get down to 1% [of the most valuable patents], the numbers aren't even close, and some statisticians would treat them as outliers. But Dr. Cockburn is using them as a basis. Looking at the wide variation in the curves at the top of their range. We are not talking about the top 20% [where the curves apparently are somewhat in agreement], we are looking at the top 1% [of some 576 patents]. Don't we need more data [to have a statistically significant result]? Aren't we looking at the tail, or the tiny tip of the tail? I see absolutely zero analysis of this from Dr. Cockburn.
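To illustrate why three studies troubled the judge, here is a rough sketch of a 95% confidence interval computed from just three top-1% figures. The endpoints 42% and 78.4% were mentioned at the hearing; the middle value of 52% is entirely my invention, since the third study's exact figure isn't in the notes:

```python
import math
import statistics

# Top-1% value shares from the three studies discussed at the hearing.
# 42.0 and 78.4 were the endpoints mentioned; 52.0 is a made-up stand-in
# for the third study, whose exact figure isn't in the notes.
samples = [42.0, 52.0, 78.4]

mean = statistics.mean(samples)
sd = statistics.stdev(samples)        # sample standard deviation (n - 1)
se = sd / math.sqrt(len(samples))
t_crit = 4.303                        # t(0.975, df=2), from standard tables
lo, hi = mean - t_crit * se, mean + t_crit * se

print(f"mean {mean:.1f}%, 95% CI ({lo:.1f}%, {hi:.1f}%)")
# With only 3 samples, the interval's upper limit exceeds 100% -- an
# impossible value share -- which is one way to read the judge's
# "don't we need more samples?" question.
```

Whatever the real third figure is, three data points leave an interval so wide it says almost nothing about where a fourth study would land.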
Oracle: [Provides another set of documents (smaller) to the Court about this (as a part of the Court-ordered documents for this hearing). Again, the folks in the gallery cannot see the documents.]
The documents referred to all patent curves.
Dr. Cockburn relies upon the “patent val survey” method.
Similarly, this method was used by Dr. Putnam in the LG Display case.
Judge: Was that in a District Court?
Judge: Well, that's just some District Court judge talking [some self-deprecating humor?]
Did that judge address the “tip of the tail” issue?
Oracle: That issue was not addressed.
Please look at item 5 in the small handout. The Court states that a 4th sample [portfolio] could be wildly different. Dr. Cockburn is here and is available for questions. The 3 curves intersect at various points. Looking at the available curves, they look similar. Now the “confidence interval” question...
Judge: [Interrupts] Where is Barney on here?
Oracle: Barney isn't on this document.
Judge: You have chosen curves that support your case. You have conveniently left Barney out.
Oracle: Barney is not a survey.
Judge: It looks like he (Dr. Cockburn) used it until it was inconvenient for him and then he dropped it like a hot potato.
Oracle: Starts explaining why the “patent-val” method is the best survey method; strongly disagrees that Barney is applicable here.
Judge: I don't know what you've been reading, but it says right here in paragraph 405 that he (Dr. Cockburn) used Barney.
Oracle: Patent curves are skewed. Patent-val best predicts the values.
Judge: Ref. P. 406: In each of these studies, ref exhibit 34, this is where he (Dr. Cockburn) gets 42%?
Oracle: Perhaps I am not being clear. Dr. Cockburn uses Barney as a demonstration of patent curve skew.
Judge: That's one study, and you rely on it to predict portfolio value based on the tip of the tail. Where is his (Dr. Cockburn's) analysis in the report? Any portfolio's top 0.5% to 1% will conform to these curves -- that is what you are saying?
[Exchange missed. The fur is flying. Judge Alsup appears to be sorely vexed.]
Oracle: It would be an insurmountable barrier for experts to explain every basis.
Judge: At least the top 20% of the portfolio will comprise disproportionate value, let's concede that. But once we get to the top 10%, they diverge wildly. In my mind, this cries out for more samples. You are basing this on 3 portfolios. What is the purpose of a Daubert report? Where did you find that in the literature?
Oracle: We do not assert, nor do we need to assert that the tip of the tail be identical. They are in a tight range.
Judge: I can't believe that you thought I'd be dumb enough not to notice that you'd left out Barney. [verbatim]
Oracle: We are prepared and have addressed that. The analysis suggests a range of results. There is a reasonable approximation of a range of results, from X to 3X [!!!].
Case law: in the District of Colorado, Cook v. Rockwell. Exposure to plutonium and the effect of radiation dosage. The court said that within the meaning of rule 702, it is possible to make a reasonable approximation. The jury can decide which curve is a better fit.
Judge: It's like you are trying to bring in another report. I want to see the other reports. Shouldn't we just toss this one (Dr. Cockburn's) out?
Oracle: Cites example of one of the studies where the patent-val study is cited as the “gold standard” study methodology.
Judge: Are there flaws [in the execution of] this [particular] study? Is it not problematic to be drawing conclusions from a single survey?
[Now the judge turns to Google.]
Judge: What's your answer to Tab 4?
Google: These studies are a bad fit for these portfolios. Perhaps if Oracle had looked at the entire 14,000 patents in Sun's portfolio, but Oracle winnowed them down. Refers to Shankerman study (in the larger binder).
Judge: [holds up a chart] I don't think that you are coming to grips with Oracle counsel's point. Their top range is from 42% to 52%.
Google: They are looking at the wrong portfolio.
Judge: If we looked at the 14,000 patent portfolio, but we aren't, we are looking at 597 patents [actually 569]. Would they be more valuable or less valuable?
Google: They [Oracle] have gotten rid of all the chaff and have selected the most valuable patents. The 569 patents in the narrow portfolio are already in the top 10% [of value] in the 14,000 patent portfolio.
[… more later...]
[Court reconvened after break] A 706 expert means the court appointed expert, as in Rule 706.
Judge: Google, what's your next objection?
Google: There's a lot of things that are adequately covered
in the brief, so I'd like to make 4 points about the conjoint analysis.
One, it's not an accepted tool for damages.
Judge: But surveys certainly have been used to calculate damages.
Google: But not conjoint surveys. There, you're going from features
to market share to a damages number. It's not mathematically sound.
Two, in Shugan's methodology you have to hold the other
features constant. The data showing that 24% of the respondents
preferred or were indifferent to the same phone for $200 vs
$100 indicates a fundamental problem with the methodology.
Three, the focus group identified 39 important smartphone
features, but Shugan only/arbitrarily chose 7 of these factors for
his conjoint analysis, focusing only on certain issues important
to Oracle. Omitted factors include such important ones as network
provider and brand.
Judge: Availability of WiFi was also not included, correct?
Google: Yes. And you can't simply do the survey with the
supposedly infringing features; that will inflate the value
of the patents.
[The next portion follows my rough notes, and there was some
rehashing of the briefs].
Judge: [Tries to create a simple example with car and radio vs
car without radio (and cheaper). Discussion about whether all
features need to be included]
Judge: Why doesn't the "assuming all else constant" take care of that
objection? Why do you have to question it?
Google: Shugan did not ask in his focus group what were the
most important factors; he has no data on that. And you need
to know that.
Judge: It makes sense that they'd only test litigated features.
[Judge seems to be dubious of Google's arguments here.]
The conjoint analysis bears on two somewhat separate issues:
a) The 24% of the survey population that appeared indifferent to
a price increase.
b) Consumer feature choice/preference.
Could Oracle simply remove the problematic 24%?
[I lost track of Google's Point #4. Apparently Shugan had said confidence
intervals could not be calculated for this sort of conjoint
analysis. Google's Leonard claimed it could. Supposedly Oracle
has now supplied confidence intervals].
[Oracle responds to Google's points on conjoint analysis.]
Oracle: Google's own expert, Alan Cox, repeatedly cites choice modelling
(which is the same thing as conjoint analysis) as a basis for damages calculations.
Even if such an analysis hasn't been used in a court before,
Lucent tells us it is appropriate.
Judge: Which of the 7 features in the conjoint analysis survey
relate to copyrights rather than patents?
Oracle: Availability of applications, which depends upon the copyrighted
APIs. It was of value to Google for application developers to use the
APIs they were familiar with. Although the Java language is free to
use, the API's are copyrighted.
Judge: How did the application availability feature rate among the 7
features tested in the conjoint survey?
Oracle: The Shugan survey shows that consumers value device speed/performance twice as much as application availability. Performance is an issue of patents,
not copyrights. That's the basis of our 2:1 ratio of patent damages
to copyright damages.
Re Google Point 2 [?], the point of conjoint analysis is that you
don't need to test every feature; what matters is the relative
importance of features. However, it is important that the most important
features be included. That's why Shugan incorporated two additional
features, price and OS [e.g., Apple/iOS, WebOS, Android]
in the survey in addition to the litigation-relevant features.
[Note that Google pointed out above that the initial focus group
did not rate features, and Oracle doesn't dispute that].
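For readers unfamiliar with conjoint analysis, here is a toy sketch of the mechanics being argued over: each feature level gets a "part-worth" utility estimated from respondents' choices, those ranges are converted into relative-importance percentages, and a ratio like Oracle's 2:1 can then be read off. Every number below is invented; none comes from the Shugan survey or the Cockburn report:

```python
# Toy conjoint-analysis sketch. Every number is invented for illustration.
# A feature's relative importance is the range of its part-worth utilities
# divided by the total range across all tested features.

part_worths = {
    # feature: {level: estimated utility} -- all hypothetical
    "app startup time": {"fast": 1.2, "slow": -1.2},
    "# of apps":        {"many": 0.6, "few": -0.6},
    "screen size":      {"large": 0.4, "small": -0.4},
}

ranges = {f: max(u.values()) - min(u.values()) for f, u in part_worths.items()}
total = sum(ranges.values())
importance = {f: r / total for f, r in ranges.items()}

for feature, imp in importance.items():
    print(f"{feature}: {imp:.0%}")

# With these made-up utilities, the patent-related "app startup time" is
# weighted twice the copyright-related "# of apps" -- the shape of
# Oracle's claimed 2:1 ratio of patent to copyright damages.
ratio = importance["app startup time"] / importance["# of apps"]
```

Google's objection, in these terms, is that importances are only relative to the features you chose to test; leave out a dominant feature (carrier, brand, WiFi) and every tested feature's share is inflated.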
Judge: The Cockburn report has multiple uses of the conjoint
analysis. Ignoring features in the analysis might not be valid
for all uses (market share vs the 2:1 calculation).
Oracle: [Something about performance share and market share, and
the method used in the survey: participants were given choices
between small sets of phones described as having particular features,
all else supposedly held constant. The data was then computer-analyzed.]
Judge: What about the 24% number? [includes indifference plus illogical responses]
Oracle: It's explained in Shugan's report and declarations. It's
not a rater error, or fatal flaw.
Judge: What does the 8.8% mean? [This is the percent of survey respondents
who preferred a phone for $200 over an identical phone for $100.]
Oracle: Shugan uses a "hierarchical Bayesian" approach. "Uh, Bayes was a mathematician..."
Judge: I know who Bayes was.
Oracle: Anyway, in this method you can't do simple counts of irrational
respondents. You have to look at more complex approaches covered in
the Shugan report.
[Somewhere in here both the Google and Oracle lawyers
admit they're starting to get out of their depth on the math.]
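For the curious, the "hierarchical Bayesian" idea Oracle gestured at can be sketched with a toy normal-normal shrinkage model: each respondent's raw estimate gets pulled toward the population mean, more strongly when that respondent contributed less data, which is why simply counting "irrational" respondents doesn't map cleanly onto the model's estimates. All numbers here are invented:

```python
# Toy illustration of hierarchical ("partial pooling") shrinkage, the idea
# behind hierarchical Bayes estimation. All numbers are invented.

pop_mean = 0.0   # population-level mean utility (hypothetical)
tau2 = 1.0       # between-respondent variance (hypothetical)
sigma2 = 4.0     # within-respondent noise variance (hypothetical)

def shrink(raw_estimate, n_obs):
    """Posterior mean of a respondent's utility under a normal-normal model."""
    w = (n_obs / sigma2) / (n_obs / sigma2 + 1 / tau2)
    return w * raw_estimate + (1 - w) * pop_mean

# A respondent whose raw answers look "irrational" (-3.0) is pulled toward
# the population mean rather than simply counted as an error:
few = shrink(-3.0, n_obs=2)    # heavy shrinkage with little data
many = shrink(-3.0, n_obs=20)  # mild shrinkage with more data
print(few, many)
```

The design point: extreme individual answers are tempered by the rest of the sample, which is roughly what Oracle means by "you can't do simple counts of irrational respondents."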
Oracle: Some consumers give irrational answers; that doesn't mean the survey is invalid.
[I'm not sure from my notes whether the Judge or Oracle made the
next point; I'm pretty sure it wasn't Google. It seems to play right
into showing that survey respondents were not holding all factors equal.]
??: E.g., consumers might associate higher price with increased durability.
Oracle: Shugan redid the study removing the 24%. The results were
only minimally different.
[Google rebuttal of Oracle points:]
Google: I understand why Oracle likes to cite Dr. Cox, but he is
an economist, and these are not legal references. Oracle still
provides no citation of previous use of conjoint analysis to
calculate damages in a courtroom.
Oracle concedes that the most important features should be included
in a conjoint analysis, but their method did not collect consumer
data on what features consumers valued.
[Something about the 24% number again.]
Judge: Google, any other objections?
Google: We'll rest on the briefs.
Judge: I have a question. The Cockburn report is sort of an
algebra equation with
variables, e.g., "patent_value = 2 * api_value". What happens
to this calculation if the jury or patent office knocks out
all the patent claims [or: all claims except for one patent?
Not sure]. Does the conjoint analysis fall apart?
Google: This goes back to the discussion about the two different
uses of the conjoint analysis.
Oracle: Copyright value is independent of patent value.
Judge: I have a question for Dr. John Cooper [a man in the
audience]. If we set the trial date for April XX [I think he
said either 16th or 22nd, not sure, sorry], will the Cooper
report be ready?
Judge: Any last questions on the Daubert motion?
Oracle: Dr. Cockburn is available in the courtroom today.
Judge: Google, do you want to cross-examine Dr. Cockburn?
Judge: We will adjourn, and I'll get an order out. Our 706
expert must get his report ready.
[Adjourned at 10:25 AM]
Update 3: And here's mirrorslap's final report:
Google: The studies are a bad fit for the small selection from the Sun portfolio. Shankerman looked at a particular technology area and looked at how the skew worked across 4 industries (chemical, mechanical, pharmacological, and electronics). The resulting range is different and lower than any other study. (References summary on chart 3, pp. 95-96 in the Shankerman report.) It shows significantly lower values for patents in particular technological areas.
I'm sure you join me in thanking both our reporters for such excellent coverage. This may be the hardest hearing we've ever covered, in that patents are so weird anyway, and the lingo is hard to follow, and the detail our reporters were able to provide is phenomenal.
Judge: What is your answer to
a) Shankerman and
b) the 14,000 patents?
Oracle: b) 14,000 is worldwide.
Judge: How many in the US?
Oracle: I don't know that number.
Judge: More than 576?
Oracle: Yes. The [subset of the Sun] portfolio is limited [by Oracle] to the tech area of smartphones; they were not chosen according to patent value. Regarding a), electronics have a higher degree of skew than the other 3 areas. The Shankerman survey is a “renewal” survey, which does not break down the value of patents in the top end of the range. It asks: how many patents are worth “greater than x”, not how much are the most valuable patents worth. This approach tends to flatten the curve at the top. Patent-val is a conservative approach.
Judge: [Getting into the ranking of patents by in-house Oracle/Sun engineers.]
Google: It is fine for Oracle to use in-house people. But what Oracle did was to select patents related to smartphones. They used engineers who are used to preparing for lawsuits, and the engineers were not able to avoid having that mindset.
Barney was a renewal study.
Judge: That sounds like a cross-exam question.
Google: [I missed the response.]
Judge: I will take that as a concession. Next objection.
Google: The main one is “indeterminacy”. We need an independent significance approach. We (Google) do not want to be in a situation where the values approach the value of the entire portfolio.
Judge: Counsel must stick to the report. I will strike testimony that strays beyond what is in the report.
On cross, the attorneys would be able to open up issues beyond, but at their own risk.
Google: We want to put an upper bound on the values; the Cockburn report specifies “at least”, but no upper bound.
Judge: I have high confidence that the “at least” is the rock-solid basis.
Google: We want to move to a “group and value” approach, where the upper bound depends on the value of the top 3 patents ('104, '205, '720). Paragraph 409 of Cockburn's report makes my point: “There is no data -- the 22 patents are equally valuable.” References p. 3 of Cockburn, paragraph 5, line 7 in chart. This leads to a range from $17.7M–$57M.
Judge: What accounts for this [wide] range?
Google: There is no support for the upper bound to articulate that the 104, 205, and 720 patents are the most valuable.
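The arithmetic behind the two bounds can be sketched with a made-up group value (the actual dollar figure for the top-22 group isn't in the notes): if the 22 patents are indistinguishable, the 3 patents-in-suit get a pro-rata 3/22 share; if a skewed curve puts them at the very top, they get far more:

```python
# Sketch of the "group and value" bounds dispute, with made-up numbers.
# Suppose the top-22 patent group is worth V; $100M is invented here.
V = 100.0  # $M, hypothetical

# Lower bound (Google's reading): the engineers rated the 22 patents as
# indistinguishable, so the 3 patents-in-suit get an equal 3/22 share.
lower = V * 3 / 22

# Upper bound (Oracle's reading): a skewed value curve puts the 3
# patents-in-suit at the top; assume, hypothetically, they carry 40%.
upper = V * 0.40

print(f"lower ${lower:.1f}M, upper ${upper:.1f}M")
```

The roughly 3x spread between the two readings of the same engineer data is what the judge is probing with his "what accounts for this range?" question.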
Judge: Looking at my own chart... there are 22 groups and 22 patents. Is there any correlation between the two or is this just a coincidence that the numbers are the same?
Oracle: It is a coincidence.
Judge: This will confuse the jury.
Google: That's not my report [Judge Alsup might have chuckled here.]
Top 22 patents, top 22 technology categories.
Oracle chose to take the top 3 categories.
There is no way to be able to rank the value of the 22 patents.
Citation: p. 105 of Reinholt (sp?) deposition.
“It is intellectually unfeasible to rank the patents.”
Judge: Answer from Oracle?
Oracle: The engineers could not distinguish a ranking of the importance of the patents. [He then points out the difference between an economist (i.e. Cockburn) and an engineer (Oracle/Sun) in assigning rankings.] “In the absence of data, we cannot make a ranking, but in paragraph 410 of the Cockburn report, he shows how such data is available.”
“Google designed Android free from constraint and could have done it in a way that didn't infringe. They chose to use these technologies” [the API to leverage developers and create lots of apps, and the patents to improve speed of loading and multitasking].
Android product description in 2006 was shared with Sun during the negotiations over licensing.
Judge: Are there any internal Google emails that show that Google decided to infringe?
Oracle: Not as such, but Google did choose to use the technology.
Judge: In 2006.
Oracle: Not all of them [the patents]. The '104 yes, that was 2006.
Judge: Google response?
Google: The argument is circular, based on the patents they say were infringed. All Oracle [Sun] had was the Product Description, which had no specifics regarding technologies.
[Missed an interaction here]
Google: The '720 patent was issued in 2008.
[Court breaks for 15 minutes at 9:10AM]
Google: Regarding the “Conjoint survey”
Four objections regarding the Shugan (sp?) survey:
1) The Conjoint survey is a market research tool for product design, not a tool for assessing damages.
If one is trying to weigh consumer features, by limiting the scope you skew the results.
2) Shugan's methodology disproves itself. While the consumers who were polled were supposed to [in their minds] vary only one feature, keeping all others the same in expressing preference, 22% opted for a phone that was 2x as expensive with the same features. This makes no logical sense.
3) Based on the next-to-last page of Oracle's handout, the survey leaves out critical items, such as choice of network. Shugan selected 7 of 39 features to test, including:
• app startup time
• # of apps available
• OS choice
• screen size
but didn't select carrier/network, WiFi, or camera.
Judge: Example given of a car with and without a radio.
Google: Exactly. References Apple v. Microsoft under Judge Posner. [explanation missed]
Product that was being tested was a smartphone, and it has hundreds of features.
Respondents were asked to assume that only one feature changed and to express a preference either way, all other features remaining the same. The selection of features to test was done by Dr. Cockburn for Shugan to test. What he is purporting to demonstrate is that consumers would stop buying Android without these features, and the extent to which they would stop.
Judge: Maybe the instructions were not clear enough?
Is the 24% conceded? (paying more for the same feature set) Either the consumer was agnostic or preferred a more expensive phone.
Google: How can a survey that returns such illogical results be trusted?
4) Confidence intervals.
Oracle rebuttal:
1) “Conjoint is not appropriate” is nonsense. Google's own copyright damages expert, Dr. Cox, has said that it is indeed used to determine damages. Quoting Exhibit I(?): 2003 “choice modeling” is the same as Conjoint Analysis, and Dr. Cox says that Choice Modeling is “a rigorous methodology for calculating damages”.
2) Shugan said that, due to using a Bayesian estimation/analysis, confidence intervals cannot be computed. This is contradicted by other experts.
Judge: Which of these 7 things measure infringement?
Oracle: Apps availability. Must have an established programming language.
Judge: Language was pled away in this case.
Oracle: APIs are copyrighted and Google copied them. These are the APIs that developers expect to see.
Judge: How did the number of apps rank to consumers?
Oracle: In 2010, compared PalmOS, Blackberry, Apple at 6K, 40K and 100K apps to determine the effect on Android market share of inability to provide a robust app universe.
Judge: Copyright is one-half [of some part of the damages equation].
Oracle: Speed, memory and performance from patents is in this suit. Shugan shows that incremental speed is 2x as important as incremental # of apps. This speaks to the apportionment and relative value. Dr. Cockburn establishes that the speed and number of apps are important and verifies that these features impact sales.
Turning to chart #4 from Google's presentation: Conjoint Analysis doesn't have to test every feature. Paragraph 25 of opposition brief, Prof. Shugan: “It is not necessary to test every feature”, but he did include additional items not under litigation: OS (Android) and price. The OS is more important than price. There is no declaration that one has to test 36 features.
Judge: Questions the validity of the testing, based on the 22% response for a more expensive phone. How does Dr. Cockburn use market share?
Oracle: Distinguishes Preference Share vs. Market Share. December report, Shugan assigned a dollar value to patents.
Judge: What about the 24% irrational answers (more expensive phone for the same feature set)?
Oracle: Docket 740 explains why the 24% reference is wrong.
Judge: What is the correct rate of error?
Oracle: 8.8% of responses show preference of more expensive phone with same features.
This test was done using a Hierarchical Bayesian Approach -- you don't test an HBA looking at individual choices. The test is very good at predicting market share changes. Ref: pages 34-44. Says that Google cannot draw this conclusion. There is no testimony from Google to rebut this. The number is not 24%.
In any survey, you get some irrational choices. Consumers are not necessarily rational. Example: status symbol. The Court asked for a recalculation. Docket 740, footnote 44, but-for sales: at least 7.6% lower rather than the [previous?] 7.9% lower sales.
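[An aside for readers: the judge kept pressing for confidence intervals on the survey numbers. For a simple proportion like the 8.8% "irrational response" rate, a standard confidence interval is easy to sketch. The sample size below is purely our own assumption for illustration; the actual size of Shugan's survey was not stated at the hearing, and this frequentist sketch is not the Hierarchical Bayesian method his report used.]

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion (z=1.96 -> ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical numbers: if 88 of 1,000 respondents gave the
# "more expensive phone, same features" answer, the observed
# rate is 8.8% and the 95% interval is roughly 7.2% to 10.7%.
lo, hi = wilson_interval(88, 1000)
print(f"{lo:.3f} - {hi:.3f}")
```

The point of the exercise is the one the judge was making: without knowing the sample size and the resulting interval, a figure like 8.8% (or 24%) tells you very little about how much weight it can bear.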
Google: Dr. Cox's finding is not a court case. There is no cited precedent in law for Conjoint analysis. Oracle has created and attacked a straw man. Dr. Leonard (Google) did rebut Shugan in the Feb. 24 response.
[Missed some exchanges here.]
Judge: Dr. Cockburn's analysis is like an algebra problem. Half of the patents-in-suit value is the API value (the other half is speed).
What happens if the USPTO knocks out more patents? What then happens to the algebraic representation? Do we then lose apportionment?
Google: Exhibit 37 has percentages.
Judge: Does the other side agree?
Oracle: The value of the copyright is different than the value of the patents.
Judge: Question for John Cooper -- if we set a trial for March 16, will the expert be prepared?
John Cooper: Yes.
Judge: Will it include a critique?
John Cooper: Yes.
Oracle: If there is anything that the Court needs to ask Dr. Cockburn, he is here.
Judge: Google gets a “free shot” to cross-examine.
[ Adjourn 10:25AM ]
The judge didn't rule from the bench today, so we'll have to wait for his decision to be published. It does sound like he'll allow some of the report to survive, but it's also apparent that he sees Oracle trying to push for higher numbers than are reasonable.
Update 4: We have the court minutes [PDF] which includes the names of all the lawyers present at the hearing:
For Oracle: Michael Jacobs, Andrew Temkin, Fred Norton, and Steven Holtzman.
The minutes list Mr. Cooper as being with the defense, but while I would love that, I don't think, judging from the hearing reports, that it's accurate. John L. Cooper is an attorney with Farella Braun & Martel in San Francisco. He's a partner, and he lists this assignment on his page: "Appointed by Judge Alsup, N.D. California to represent a court appointed expert witness in a large patent infringement action between major Silicon Valley companies." He was appointed to represent the Rule 706 court-appointed independent expert, Dr. James Kearl.
For Google: Robert Van Nest, Daniel Purcell, Michael Kwun, Christa Anderson, Bruce Baber, and Renny Hwang.
For the independent Rule 706 expert: John Cooper.
Cooper was also appointed special master in the Microsoft antitrust case by Judge Motz, the same judge who presided over the Novell v. Microsoft WordPerfect antitrust trial, and Cooper wrote the amicus brief for Dolby Laboratories, which his bio page says was "cited and followed by the U.S. Supreme Court in Bilski". The brief argued against the machine-or-transformation test as an appropriate test for patentability and argued that business-method patents are just fine. I don't think the court went that far, but you can read the brief [PDF] and the court's ruling, and make up your own minds.
One thing is for sure: the man loves patents, or at least the brief he wrote does.