GROKLAW
When you want to know more...

An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture
Thursday, October 16 2008 @ 09:41 AM EDT

It's been more than a year since we had an update on the Copiepresse litigation against Google. There was supposed to be another court hearing in September, but it was postponed and last I heard it will be in November. In case it actually happens this time, Sean Daly has done the Copiepresse summons to Google for us in English as text.

Most of us have wondered from day one why Copiepresse didn't just use robots.txt to tell Google and other search engines what it wants left alone. Google did show them how, if you recall, and they agreed to use it. It turns out that the publishers Copiepresse represents felt robots.txt was not sufficiently fine-tuned. So Copiepresse is now busy evangelizing for a system called ACAP, which stands for Automated Content Access Protocol.

I'm sure you love the very sound of it.

Well, if you are a publisher of the old fashioned variety, you may start drooling on your tie when I tell you it's a use and permissions system. I'll tell you all about it, but the short version is that it's an extension to robots.txt designed to give publishers full control over their work by regulating what search engines can and can't do. It's a concept trying to solve a real problem, namely how to set up machine-interpretable permissions so that neither search engines nor publishers have to negotiate permissions individually.

But it's also Larry Lessig's Code and Other Laws of Cyberspace come true. They read his book and thought it was a great idea, I gather, and they are seriously trying to implement total control for publishers and the death of fair use as we know it.

Lessig wrote, "It is not hard to understand how the Net could become the perfect space of regulation or how commerce would play a role in that regulation." Fair use? Like a weed sprayed with Roundup, in such a universe it will quickly wilt and die. For that reason, I can't see how it can be legal in the US. But no doubt they'll try to find a way.

So far, the way seems to be to get publishers to just switch to ACAP, leaving readers to make the best of it. Will publishers make allowance for fair use if they have total control? Hardy har. They'll never enable it if they can help it, and so how in the world would fair use exist if publishers get to define what is fair? Their definition is no access unless they let you. ACAP makes publishers the preemptive copyright police. Also judge and jury. You don't get to speak or protest or do anything at all. Just pay and consume and shut up. The whole point of fair use is that they don't get to decide that question unilaterally.

By "they", I include the RIAA. They are on board, of course, along with other Internet Neanderthals like AP, Random House, and the Motion Picture Association. You can see the list in this slide presentation [PDF]. The list of claimed "official supporters" includes the EU Commission's Viviane Reding, sadly.

I have a better idea, I think. How about everyone who wants to use such a system gets categorized by search engines and put in a separate area? You'd get results without ACAP by default. If you wanted paid content, you could search for it separately in the ACAP search results. That would give us little people some control too over what we are obliged to be exposed to while we wait for someone with a clue to sue to get rid of this system. Those of us looking for paid content would know where to find it, and those of us who want these dudes to leave the Internet alone and quit trying to make it go back to the goode olde dayes of print publishing could surf in peace as if they don't even exist. Why wouldn't that work? I'm seriously suggesting it.

Meanwhile, a court in Germany has decided two cases against Google on the question of image thumbnails, Bloomberg and paidContent are reporting. Google says it will appeal both. The general US position on thumbnails is that they are fair use, which is a wonderful aspect of US copyright law. You can read about that position in an article entitled "All Rights Reserved: Does Google's 'Image Search' Infringe Vested Exclusive Rights Granted Under the Copyright Law?" by Eugene Goryunov, John Marshall Law School, Chicago, published in the John Marshall Law Review, Vol. 41, No. 2, 2008. But fair use is a US concept, and it's a lot trickier elsewhere. So German Google users could find that they can't have Google image search any more, which would really be a shame from their standpoint. With ACAP, we could *all* find we have no thumbnails available any more, because it's one of the permissions publishers can zap.

Mme. Boribon of Copiepresse has recently been to Spain, speaking against search engines and their allegedly toxic effect on traditional print newspapers and their relationship to their readers, if my rusty Spanish is not failing me. My understanding of the article is that the journalist writing about her speech disagrees; he feels Google isn't to blame if publishers wedded to the old business model are struggling to figure out the new one.

AFP reports, again in Spanish, that she and a representative from Google were on a panel in Madrid, where she moaned about the effect of search engines on newspapers, alleging that they are diminishing the authority of newspapers as the source of information. She was pushing ACAP because it gives publishers control over their content by adding extensions and fine-tuning to the robots exclusion protocol.

The extensions, for example, would let search engines know when they can scrape text but not images. And with fine-tuning, you could let the search engine know whether it can crawl, follow, index, preserve, present, and so on, and then fine-tune further: when it can present a snippet, a thumbnail, only the original or the current version, or both, and with what time limits and length limits. A publisher's paradise of control.

I gather that it is like those truly annoying newspaper archives sites, where they show a few words and then, in the middle of a sentence, ask for money if you want to read more. I hate those sites, if anyone cares, and one reason is that they want to charge me for articles that I read for free when they were new. That's like giving away a new book, but charging for it used.

That bothers me enough that I never pay, deciding to do without just because I find their greed beyond offensive. And it's annoying to expect to read an article you know was online for free and then find you can't access it unless you pay $5 or something inexplicably expensive for one article, when you didn't pay even one penny when the article was fresh and newsworthy. How do they justify it?

Worse, it messes up research on the Internet. I do a lot of that, and I can't tell you how irritating it is to do a search engine search, see something that looks useful, only to find you can't really read it unless you cross the publisher's palm with silver. That's why I'd like to isolate and aggregate all that paid junk in some corner of the Internet where I don't have to bump into it.

That doesn't mean I won't pay for content. I have done so in the past, but I go to sites that I value enough to pay on purpose, and I don't want to be roped in by a search engine when I don't expect it. So, seriously, could you please rope that kind of stuff off if you use this system? Groklaw is noncommercial, and I can't afford to pay for content right and left. And if I never read a single bit of content on the list of entities using ACAP [PDF], it'd be no loss to me.

Now, I should be fair and let ACAP tell us what is so great about their system. Here's the best explanation I could find about why someone might want to use ACAP:

What types of new business might result from the development of ACAP, for search engines and publishers working in partnership?

1. Beginning with content that is freely available on the web, ACAP will allow publishers to be more confident about the use to which their content is put, allowing discrimination (for example) between trusted and untrusted partners and between different usages. ACAP will allow (again as an example) time‐based factors to be taken into account in spidering rules, giving publishers much finer control over dissemination of content at different stages in its life‐cycle

2. With content that is currently not publicly available, ACAP will create the technological framework for web site owners to allow access to content behind firewalls (book content, for example) with much finer control of the conditions under which it can be spidered – giving confidence to publishers that they can retain a direct influence over what is displayed to users and other access conditions – thus increasing the publishers’ confidence that in making their content available for search they are not damaging their core business models

It's better than litigation, I guess, and in principle, I think fine-tuning permissions makes some sense up to a point. Here on Groklaw, we came up with our own system to figure out what to let search engines use and what not to, so I do understand wanting to fine-tune such things. There can be very valid reasons unrelated to making money.

On page 14 of the slide presentation, I see the work plan for 2008 included considering the creation of an "ACAP" organization in collaboration with existing standards organizations. Like ISO, perchance? I hear they have a fast track. Once you hop on it, via Ecma, it seems there's no way to fail. I see also the ACAP people want an automated "take down" process, but so far they can't figure out how to do that. They never will be able to, either, I don't think. Google explained what is wrong with that concept in its recent letter [PDF] to the McCain/Palin folks. How would you automate the complex fair use analysis?:

Your letter raises important issues relating to the Digital Millennium Copyright Act (DMCA) that directly affect the YouTube community. As your letter acknowledges, the DMCA provides a statutory safe harbor for service providers such as YouTube that host content at the direction of users. Without this safe harbor, sites like YouTube could not exist. To strike the proper balance between rights holders and content uploaders, Congress had the foresight to implement a notice-and-takedown regime that allows rights holders to submit takedown notices for uploaded content that the rights holders believe infringes their rights. If service providers remove the content in response to a notice, they maintain their safe harbor and avoid potential copyright infringement liability. If, on the other hand, service providers do not remove the content in response to such notice, they do so at their own risk because they lose their safe harbor.

The DMCA protects content uploaders from erroneous or abusive takedown notices in two distinct ways. First, it allows uploaders to file a counter-notification in response to a takedown notice they believe to have been made in error. Once the uploader files a counter-notification, the statute allows the service provider to reinstate the content after a waiting period of 10 business days without jeopardizing its safe harbor, provided that the rights owner does not file a copyright infringement lawsuit against the content uploader during that waiting period. Second, Section 512(f) of the DMCA allows parties injured by fraudulent takedowns to sue the claimant for damages.

Despite penalties of perjury, the counter-notification process and the very real possibility of lawsuits for damages, some parties still abuse the DMCA takedown process and seek the removal of content that does not infringe their rights. Because of the DMCA's structure, an abusive takedown notice may result in the restriction of non-infringing speech during the statutory 10-day waiting period. We recognize this potential for abuse, and have a number of measures in place to combat it. Indeed, we have spent numerous hours tracking down abuse, terminating offending accounts and reinstating affected videos....

Some have suggested that YouTube mitigate abuse by performing a substantive legal review of every DMCA notice we receive prior to processing a takedown. For a number of reasons, this is not a viable solution. As you recognize in your letter, a detailed substantive review of every DMCA notice is simply not possible due to the scale of YouTube's operations. Any such review would have to include a determination of whether a particular use is a "fair use" under the law, which is a complex and fact-specific test that requires the subjective balancing of four factors. Lawyers and judges constantly disagree about what does and does not constitute fair use. No number of lawyers could possibly determine with a reasonable level of certainty whether all the videos for which we receive disputed takedown notices qualify as fair use.

More importantly, YouTube does not possess the requisite information about the content in user-uploaded videos to make a determination as to whether a particular takedown notice includes a valid claim of infringement. The claimant and the uploader, not YouTube, hold all of the relevant information in this regard, including the actual source of any content used, the ownership rights to that content, and any licensing arrangements in place between the parties. YouTube is merely an intermediary in this exchange, and does not have direct access to this critical information. When two parties disagree, we are simply not in a position to verify the veracity of either party's claims.

If humans are needed to evaluate case-by-case, and they are, and if they can't even do so because they lack all the relevant information, how in the world would a computer be able to automate such a process?? In Europe, where fair use isn't the norm, they may think ACAP sounds great. But I simply can't see how it could work in the DMCA context here.

Of course, the ACAP people would like to simply avoid all those complexities by not allowing fair use access at all and getting to define what they think is fair, thus making code the law. I think they'd best alter the DMCA first, and in the meantime live by it, as all the rest of us must.

If you'd like to know who uses ACAP, here's a search engine, Exalead, that joined the ACAP pilot project in July of 2007. The press release describes the purpose of ACAP:

ACAP, which has been endorsed by the European Commission, will provide permissions information in a form that can be recognized and interpreted by a search engine spider so that the search engine operator is enabled systematically to comply with the permissions granted by the owner. The new standard will remove the need for proprietary mechanisms that would oblige every publisher or content owner to negotiate their own agreement with each different online relationship. Publishers and other content providers invest huge sums in their content. ACAP gives them control over who gets to use that content, and under what condition.
I see on page 33 of the slide presentation that much of what ACAP does is simply use different language. Instead of writing in your robots.txt file "Disallow", you would write "ACAP-disallow-crawl".
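As a sketch of the difference, assuming the directive spelling shown in the slides (the published ACAP specification may use different field names, and the fine-tuning directives at the end are purely hypothetical illustrations of the kind of control described above):

```
# Classic Robots Exclusion Protocol: a blunt yes/no per path
User-agent: *
Disallow: /archives/

# The ACAP restatement per the slide presentation, plus hypothetical
# fine-tuning directives (illustrative only, not confirmed syntax):
ACAP-crawler: *
ACAP-disallow-crawl: /archives/
ACAP-allow-present-snippet: /news/
ACAP-disallow-present-thumbnail: /news/
```

The point of the design is that a crawler which doesn't understand ACAP still sees a plain robots.txt file, while an ACAP-aware crawler reads the extra permissions.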

Here are their answers to the questions and objections on your mind, like the robots.txt argument. Why isn't it enough? They say it's unsophisticated:

We recognise that robots.txt is a well-established method for communication between content owners and crawler operators. This is why, at the request of the search engines, we worked to extend the Robots Exclusion Protocol not to replace it (although this posed us substantial problems). The Robots Exclusion Protocol was first defined at a time when the internet was extremely young and is simply not sophisticated enough for today's search models, let alone content and publishing models. Its original purpose was to manage bandwidth when that was a scarce commodity – a very different situation from today’s world. The simple choices that robots.txt offers are inconsistently interpreted. As well as that, a number of proprietary extensions have been implemented by the major search engines, but not all search engines recognise all or even any of these extensions. ACAP provides a standard mechanism for expressing conditional access which is what is now required. At the beginning of the project, search engines made it clear that ACAP should be based on robots.txt. ACAP therefore works smoothly with the existing robots.txt protocol.
Well. It's better than suing people, I guess, but I think all publishers should be forced to take a class on how the Internet works before they are allowed to publish on it. I'm not seriously suggesting that, but I get so tired of bulls in the china shop trying to take it over while breaking everything that makes people want to be there.
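For contrast, the "simple choices" robots.txt offers can be seen with Python's standard-library parser; a minimal sketch, with invented rules and URLs:

```python
from urllib import robotparser

# Classic robots.txt offers only a blunt allow/deny per path --
# no way to say "index, but show no snippet or thumbnail".
rules = """\
User-agent: *
Disallow: /archives/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A crawler honoring the protocol is barred from /archives/ only.
print(rp.can_fetch("ExampleBot", "http://example.com/archives/2008.html"))  # False
print(rp.can_fetch("ExampleBot", "http://example.com/news/today.html"))     # True
```

Which is ACAP's complaint in a nutshell: nothing in that model expresses snippet, thumbnail, or time-limit permissions.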

Here's the summons, along with some context from Sean.

*********************************

Copiepresse Summons to Google for Damages Up to 49 Million Euros, as text,
by Sean Daly

You may have seen the News Pick last May reporting that Copiepresse, an association of French and German language Belgian publications, had served Google with a summons for copyright violations and is demanding payment of up to 49 million euros.

When we last updated you on the Google litigation a year ago, the parties were talking, had asked the court to delay the appeals hearing and the Copiepresse titles had agreed to use robots.txt tagging again after Google returned the newspapers to the main search engine. However, Google's appeal of the February 2007 Brussels Court of First Instance ruling [PDF] stood and apparently negotiations have broken down since. Copiepresse seeks damages in this separate action which they have posted on their website, in French and English, with the following statement:

Since the negotiations with Google have not led to an agreement the appeal proceedings are therefore going on. The pleas have been lodged by both sides but no hearing date has been set. In addition the lack of agreement between Google and Copiepresse has obviously led the latter to launch a damage suit complementary to the injunction proceedings.

What kind of agreement do the Copiepresse publications want? Margaret Boribon of Copiepresse told us in October 2006:

We want the respect of the European legal framework, meaning prior authorization; that this authorization should involve remuneration seems completely logical, since the Google News service constitutes really a loss leader for Google and it's a way for them to generate very, very, very large revenue... If Google was reasonable, they could understand that we have an interest in coming to an understanding, and doing a fair deal, a win-win deal, because indeed our content without a very good search engine would not be the most efficient thing on the Internet, but their search engine, if all the content producers refuse to go along, is no longer worthwhile either, or much less, in any case.

So Copiepresse wants remuneration for its content. Although Google has the deepest pockets, they are by no means Copiepresse's only target; even the European Commission's news aggregator has been attacked, unsuccessfully so far as we reported. But as the World Association of Newspapers statement tells us, Copiepresse is not alone; as it happens, Google has previously faced opposition and in some cases litigation from news content producers such as the Associated Press, Agence France-Presse, the New York Times, the Washington Post, and the UK Press Association. Google dampened criticism and ended litigation -- with the exception of Copiepresse -- by agreeing to license content from the major B2B wire services, saying: "This change will provide more room on Google News for publishers' most highly valued content: original content."

What about fair use? you might ask. Can't Google just index freely on the Internet whatever isn't excluded with robots.txt, and aggregate news article titles, leads, brief excerpts, and tiny thumbnails? In the case of Copiepresse, there's a catch: Fair use is a US concept, and Belgian and other European countries' copyright law is different. Dr. Séverine Dusollier, Creative Commons lead for Belgium, professor at Belgium's University of Namur and head of the Department of Intellectual Property Rights at the Research Center for Computer and Law there, presented a paper [PDF] a few years ago discussing fair use -- "or exceptions to copyright, as we say in Europe" -- in the context of DRM. Dr. Dusollier's expertise, as you will see, was called upon by Copiepresse: an article she authored is cited in the Google summons, on not a minor point: the application of the CFI ruling to all of Europe as jurisprudence, a goal clearly stated by Mme Boribon in our 2006 interview.

Copiepresse presents an infringement expert assessment by another Belgian professor in the summons, Alain Berenboom. However, Professor Berenboom is not a detached expert operating in a vacuum; no stranger to the case, he is a lawyer and represented [French] the Société Multimédia des Auteurs des Arts Visuels (SOFAM), a photographers rights association, as an intervener in the Copiepresse lawsuit which ultimately negotiated a confidential deal with Google and withdrew from the case. If you read French, there is an analysis of the Copiepresse judgement by an associate of Professor Berenboom here [PDF]. Professor Berenboom may be described as the Renaissance Man of Belgian copyright law: he advised the Belgian Parliament on the transposition of the EU Copyright Directive, is the editor in chief of a revue cited several times in the Court of First Instance's Google ruling, wrote Luxembourg's copyright laws, runs a law firm, is a regular columnist for Le Soir (a Copiepresse title), and last but not least, is a novelist and... blogger.

To sum up, some points worth mentioning about this summons, including a curious oddity. First, you will notice that the summons fixes a court date, September 18th, 2008, but that was later changed. Next, you will see that Copiepresse does not rehash any of the arguments; the summons is really just an invoice for one year of copyright infringement, for 4 million euros upfront and a total of between 32.8 million and 49.2 million euros to be paid later; Copiepresse magnanimously offers to accept the lower figure while reminding the court that they had issued only a single, limited permit (!) to Google to index Copiepresse newspaper sites. Should Google wish to contest the calculation methodology, they are free to do so, but in that case Copiepresse wants Google's Belgian logs from April 13, 2001 to the present, to be perused by a panel of experts, perhaps Mr. Golvers whose confidential, unpublished report forms the basis of the February 2007 ruling. Copiepresse cites damages in light of the 1994 copyright law which in fact was amended on May 10, 2007 (after the CFI ruling therefore) with more favorable language concerning damages. Was that amendment influenced by the Copiepresse case? And, Copiepresse wants Google to publish the ruling like a scarlet letter on its Belgian homepage and news page, in Arial 10, for a period of 20 days, something that might look like this.

Finally, the oddity: to justify serving the summons at Google headquarters, the document states:

considering that this party is based/domiciled in the UNITED STATES OF AMERICA and considering that I do not know any residence nor elected domicile in Belgium of this party,

Now, if they had just consulted a search engine -- Yahoo would do nicely -- they would know that a Belgian was named country director for Belgium last year, that he runs a sales office in Brussels, and that Google is investing 300 million dollars or so building a datacenter in Belgium's Wallonia region (photos) which will come online later this year. They could even watch a video, courtesy of the Belgian government, of Google's country director extolling the virtues of investing in... Belgium.




Linda REYNAERT* - Jules CALLEBAUT
Licenciaten in de Rechten - Licenciés en Droit
GERECHTSDEURWAARDERS - HUISSIERS DE JUSTICE
Ortwin VERSCHUERE*
Kandidaat-Gerechtsdeurwaarder - Candidat Huissier de justice
[address]

REFERENCE: A15346 / GT

SUMMONS

(ART. 86bis of the law dd. 30th June 1994 on copyright and related rights)


Considering that my hereafter better described plaintiff, COPIEPRESSE, is the management company of the rights of the Belgian publishers of the daily French- and German-speaking press, authorized by the M.D. dd. 14th February 2000 and 20th June 2003 (MB dd. 10/03/2000 and MB dd. 14/08/2003)[PDF] to carry out its activities on Belgian territory from the date an excerpt of its articles of association are published in the Moniteur belge [Belgian State Gazette];

That my plaintiff's company objective is to defend the copyright of its members (actual rights of the publishers and acquired rights of the journalists) and to regulate the use of the protected work of its members by third parties.

Considering that the COPIEPRESSE directory is available on its website (http://www.copiepresse.be).

That the plaintiff is moreover entitled to go to court.

Considering that my plaintiff has discovered that the hereafter better described company under American law, GOOGLE INC published entire articles or article excerpts from its list of editors to be read by the public at large through:

  • "GOOGLE NEWS": reproduction and partial publication,
  • "GOOGLE SEARCH": Full reproduction and publication via its "cached" pages,

Considering that to date, GOOGLE INC has only received one permit to reproduce the contents of the websites of the publishers featuring in the Belgian COPIEPRESSE directory with the sole purpose of allowing these latter parties to be referred to on the search engine (GOOGLE SEARCH).

That this permit does not cover the other services offered by GOOGLE INC, i.e. notably "GOOGLE NEWS". Neither does it cover access to the "GOOGLE SEARCH" "cached" pages.

Considering that the dispute between my plaintiff and GOOGLE INC gave rise to a judgment handed down by the President of the Court of First Instance of Brussels sitting in injunction proceedings on 13th February 2007 (Civ. Brussels (inj.), 13th February 2007, GR no. 06/10928/A)



That even if this judgment were to be appealed by GOOGLE INC, its quality on a legal level has been unanimously recognized by various doctrine articles.

That COPIEPRESSE therefore sees itself forced to uphold its position, which was also followed by the President of the court of first instance of Brussels in his judgment dd. 13th February 2007 and in respect of which doctrine concludes: "This concerns the correct application of copyright which has to be made quite clear, before the Belgian decision is copied in all the countries..." (S. DUSOLLIER, The clay-footed giant: Google News and copyright, Lamy, Intangible Rights).

That these proceedings seek to have GOOGLE INC ordered to redress the loss suffered by the COPIEPRESSE principals as a result of the violation of copyright.

Considering that compensation for the loss covers various positions and that its foremost objective is to restore the injured party to the situation he was in as if the offence had never been committed.

That the compensation must cover all the loss items.

That on grounds of article 86 A of the law dd. 30th June 1994 concerning copyright and related rights introduced by the law dd. 10th May 2007 concerning the civil aspects of the protection of intellectual rights: "§ 1st. Without prejudice to § 3, the injured party is entitled to be compensated for any loss he has suffered as a result of the violation of copyright or related right. § 2. If the extent of the loss incurred cannot be determined in any other way, the judge may set a reasonable and fair fixed amount for damages."

That my plaintiff has asked Professor Alain Berenboom (Université Libre de Bruxelles [Free University of Brussels]) to put the extent of the loss incurred into figures.

That at the end of a 26-page report Professor Berenboom concludes that the loss suffered ranges between a minimum of 32,793,366.00 euro and a maximum of 49,190,049.00 euro.

That Professor Berenboom's calculation is based on the number of articles the judicial officer appointed by COPIEPRESSE discovered after the "flagship" websites of the Belgian publishers represented by COPIEPRESSE had been blacklisted.

Considering that in a note drawn up in conjunction with Mr. Magrez [lawyer for Copiepresse - Ed.], another method of calculation was used which sets the loss at an amount of 39,751,146 euro.

That this other method is based on an estimate of the traffic in relation to the newspaper articles on "GOOGLE SEARCH" and "GOOGLE NEWS".

That once the figures from Mr. Magrez and Professor Berenboom were reconciled the latter concludes that "the additional damages awarded by the courts usually range between 100 and 200 % of the amount of unpaid royalties. In view of the amounts at stake, we think that the judges will lean towards applying damages of 100 %. It is therefore the minimum amount which should be sought. On that basis it is appropriate to set the loss suffered at the amount of 32,793,366 euro."

That it must be noted that these assessments have been made for one single year only and do not target, as far as "GOOGLE SEARCH" is concerned the entire period not covered by statutory limitation (i.e. 5 years).

Considering that these bills have already been forwarded to Counsel of the summonsed party.

That my plaintiff leaves it up to Your Court to decide whether additional damages of 100% on the evaded royalties should be awarded which would bring the overall amount to 49,190,049.00 euro.

Considering that if GOOGLE INC were to contest the number of articles or the estimate of the traffic in relation to the newspaper articles, Your Court cannot take these claims in support of their own case into account so that an expert's appraisal will be unavoidable.

That in fact statements made by one party in support of his own cause are merely claims on which the judge cannot base himself if these claims are not backed up by other elements or some other form of presumption.



So that proper justice would be done it would be appropriate to ask GOOGLE INC. before any ruling is made whether they contest the data on the basis of which the calculations were made at the behest of COPIEPRESSE.

Considering that in the event the summonsed party was to contest, it would be appropriate to appoint a panel of experts whose task is further specified hereafter.

That in that case, COPIEPRESSE suggests that as date of the interruption of statutory limitation the date on which the ruling by the Distraint Judge with the Court of First Instance dd. 23rd March 2006 was served, in which Expert [Luc] GOLVERS was appointed, i.e. 13th April 2006, should be considered.

Considering that in view of the assessments already made (i.e., between 32,793,366.00 and 49,190,049.00 euro), my plaintiff deems they should already be provisionally awarded a sum of 4,000,000.00 euro.

FOR THESE REASONS:

In the year two thousand and eight, on TWENTY-TWO MAY.

AT THE PETITION OF:

The association under the form of a Limited Liability Co-operative Society COPIEPRESSE, registered with the Crossroads Bank for Enterprises under number 0471.612.218, RPM [Register of Legal Entities] BRUSSELS, with registered offices in [address],

With Counsel Mr. Bernard MAGREZ, Solicitor, with chambers in [address],

I, the undersigned, Ortwin VERSCHUERE, Judicial Officer temporarily replacing Linda REYNAERT, Judicial Officer, with chambers in [address].

HAVE SUMMONSED:

The company under American law GOOGLE Inc., with registered offices in [address],

SERVING MY WRIT AS DESCRIBED HEREAFTER.

To appear on THURSDAY EIGHTEENTH of SEPTEMBER 2008 at NINE O'CLOCK in the morning before the FIRST CHAMBER OF THE COURT OF FIRST INSTANCE OF BRUSSELS, sitting in its normal hearing rooms, ROOM 0.10, at the Palais de Justice [Court House], Place Poelaert, in said BRUSSELS,


IN ORDER TO:

For the aforementioned reasons and all others to be enforced in place and in time and here under explicit reserves.

Pronounce the claim admissible and founded;



Rule in law that by running the information portal GOOGLE NEWS without prior permission and by giving access to the "cached" pages on its search engine "GOOGLE SEARCH", GOOGLE INC has violated the Belgian legislation on copyright and related rights.

Rule in law that GOOGLE INC cannot invoke any legal exception: neither article 10 of the European Convention on Human Rights, nor the liability exemption granted to technical operators by the law on e-commerce.

Before ruling in the event that GOOGLE INC were to contest the number of articles and the assessment of the traffic in relation to the newspaper articles, data which form the basis for the assessments carried out at the behest of COPIEPRESSE, to appoint a panel of experts, at the exclusive expense of GOOGLE INC., consisting of at least one IT-specialist and a certified public accountant or company auditor who shall have the task to:

  • During the inaugural meeting:
    • Define the methodology to be used
    • Determine the cost provision to be deposited with the court registry

  • Attempt to reconcile the parties;
  • Draw up a list of the articles which were published on GOOGLE NEWS before GOOGLE INC gets rid of them;
  • Draw up a list of the articles which were published on GOOGLE SEARCH prior to and after the black-listing, from 13th April 2001 onwards;
  • These first two lists shall specify for each article:
    • The publication,
    • The year of publication,
    • The title of the article,
    • If possible, the author of the article,
    • Whether the article was published in full or whether only an extract was published,
    • Whether GOOGLE added any information or made any changes to the original texts.

  • To have the logs of the GOOGLE server hits forwarded in order to establish:
    • The number of "cached" articles which were looked at on GOOGLE SEARCH since 13th April 2001
    • The number of visits to GOOGLE NEWS since it was launched in Belgium and since the Belgian newspaper articles were withdrawn
    • The number of visits by GOOGLE NEWS to the publishers since its launch in Belgium until all the newspaper articles on the websites listed in the COPIEPRESSE directory which were published on the COPIEPRESSE website were withdrawn.
    • Any information GOOGLE INC retains on GOOGLE SEARCH and GOOGLE NEWS in relation to the searches carried out and visits paid and more generally, in relation to the visitors who were redirected to the newspaper sites.

  • File their report six months from the date of the inaugural meeting

Order GOOGLE INC to pay my plaintiff the provisional amount of 4,000,000 euro on an amount which has provisionally been estimated to lie between 32,793,366.00 and 49,190,049.00 euro.

Finally, to order the summonsed party to publish in a visible and clear manner (typeface and size: Arial 10) and without any commentary on their part, the entire intervening judgment on the home pages of GOOGLE.BE and NEWS.GOOGLE.BE for a continuous period of 20 days from the date of its service, under penalty of a fine for nonperformance of one million euro per day of delay.



Order GOOGLE INC to pay compensatory interests dating back to the moment the violation of copyright was established.

Award all the costs of the proceedings, including the litigation expenses which, in view of the significance of this case, have been set at € 30,000 against GOOGLE INC.

Order the provisional enforcement of the intervening judgment notwithstanding any arrestation or appeal and without surety or cantonment.

Action based on the above adduced reasons, the laws and decrees on the matter and on all other grounds to be enforced in place and time and which are here fully and expressly reserved and without any prejudicial acknowledgement.

And in order that the addressee thereof should not plead ignorance, but considering that this party is based/domiciled in the UNITED STATES OF AMERICA and considering that I do not know any residence nor elected domicile in Belgium of this party, I the undersigned and aforesaid Judicial Officer, have sent, pursuant to the International CONVENTION with regard to the service and notification abroad of judicial and extra-judicial documents in civil and commercial cases, drawn up in THE HAGUE on 15 November 1965 (approved by the law of 24 January 1970 - Belgian Official Gazette of 9 February 1971), by registered mail with acknowledgment of receipt, deposited today at the post office in UCCLE, [address]

  1. one application, properly completed in English, corresponding to the model form that is appended, in enclosure, to this Convention;
  2. two copies of the present writ, as well as the documents mentioned therein, each copy of the writ accompanied
    1. with a form that describes the summary of the document to be served, drawn up in English;
    2. with a translation in English

  3. with proof of payment of the amount of US $95

to the following private company appointed by the United States of America, empowered to act on behalf of the Central Authority, to wit:

PROCESS FORWARDING INTERNATIONAL
[address]

asking the latter to:

  1. to serve on the company under American Law GOOGLE Inc., whose registered offices are based at [address], one of the copies of this writ, as stated in subsection 2 above, accompanied by the translation thereof, as well as the form that describes the nature and the subject matter of the document, in accordance with the methods of procedure laid down in the legal texts of the petitioned country for the service or notification of documents drawn up in that Country and meant for individuals living there, notably in pursuance of article 5, sub-section 1a of the aforesaid Convention;
  2. to kindly return to me the other copy, along with the declaration provided for in article 6 of the Convention, confirming that the application has been implemented, at the same time stating in what form, in which place and at what point in time this was carried out, as well as the person to whom the document was issued, or, should the occasion arise, stating the circumstances which have obstructed the application;



And whereas article 10 of the aforesaid Convention allows, among other things, for judicial and extra-judicial documents to be sent directly by post to individuals who are located abroad, and whereas the UNITED STATES OF AMERICA does not oppose this possibility, I have also sent one copy of the present writ (as well as the documents notified therein), together with a form containing the summary of the document to be served and a translation into English, under registered cover with acknowledgement of receipt, to the address of the addressee in the UNITED STATES OF AMERICA, at the aforesaid post office of UCCLE, [address]

And I have likewise attached the receipts of these registered letters to the original of the present writ;

WHEREOF RECORD.

Costs: four hundred and thirteen euro and forty-two cents, to be increased by the costs of the translation into English, to wit: 383.33 EUR

The Judicial Officer,





Certified a true translation from French into English,
L.VANPARIJS Sworn Translator.

[signature]
[cost receipt]





Groklaw © Copyright 2003-2013 Pamela Jones.
All trademarks and copyrights on this page are owned by their respective owners.
Comments are owned by the individual posters.

PJ's articles are licensed under a Creative Commons License. ( Details )