GROKLAW
When you want to know more...
An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture
Thursday, October 16 2008 @ 09:41 AM EDT

It's been more than a year since we had an update on the Copiepresse litigation against Google. There was supposed to be another court hearing in September, but it was postponed, and last I heard it will be in November. In case it actually happens this time, Sean Daly has prepared an English text version of the Copiepresse summons to Google for us.

Most of us have wondered from day one why Copiepresse didn't just use robots.txt to tell Google and other search engines what it wants left alone. Google did show them how, if you recall, and they agreed to use it. It turns out that the publishers Copiepresse represents felt robots.txt was not sufficiently fine-tuned. So Copiepresse is now busy evangelizing for a system called ACAP, which stands for Automated Content Access Protocol.
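
For those who haven't seen one, robots.txt is just a plain text file at a site's root that well-behaved crawlers read before indexing. Here's a minimal sketch of the kind of file Google showed them how to use; the site and paths are made up for illustration:

    # robots.txt at http://publisher.example/robots.txt (hypothetical site)
    # Keep Google's crawler out of the paid archive
    User-agent: Googlebot
    Disallow: /archives/

    # Keep all well-behaved crawlers out of this directory
    User-agent: *
    Disallow: /internal/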

I'm sure you love the very sound of it.

Well, if you are a publisher of the old-fashioned variety, you may start drooling on your tie when I tell you it's a use and permissions system. I'll tell you all about it, but the short version is that it's an extension to robots.txt that gives publishers complete control over their work by regulating what search engines can and can't do. It's a concept trying to solve a real problem, namely how to set up machine-interpretable permissions so that neither search engines nor publishers have to individually negotiate them.

But it's also Larry Lessig's Code and Other Laws of Cyberspace come true. They read his book and thought it was a great idea, I gather, and they are seriously trying to implement total control for publishers and the death of fair use as we know it.

Lessig wrote, "It is not hard to understand how the Net could become the perfect space of regulation or how commerce would play a role in that regulation." Fair use? Like a weed sprayed with Roundup, in such a universe it will quickly wilt and die. For that reason, I can't see how it can be legal in the US. But no doubt they'll try to find a way.

So far, the way seems to be to get publishers to just switch to ACAP and readers can do the next best thing. Will publishers make allowance for fair use if they have total control? Hardy har. They'll never enable it if they can help it, and so how in the world would fair use exist if publishers get to define what is fair? Their definition is no access unless they let you. ACAP makes publishers the preemptive copyright police. Also judge and jury. You don't get to speak or protest or do anything at all. Just pay and consume and shut up. The whole point of fair use is that they don't get to decide that question unilaterally.

By "they", I include the RIAA. They are on board, of course, along with other Internet Neanderthals like AP, Random House, and the Motion Picture Association. You can see the list in this slide presentation [PDF]. The list of claimed "official supporters" includes the EU Commission's Viviane Reding, sadly.

I have a better idea, I think. How about everyone who wants to use such a system gets categorized by search engines and put in a separate area? You'd get results without ACAP by default. If you wanted paid content, you could search for it separately in the ACAP search results. That would give us little people some control too over what we are obliged to be exposed to while we wait for someone with a clue to sue to get rid of this system. Those of us looking for paid content would know where to find it, and those of us who want these dudes to leave the Internet alone and quit trying to make it go back to the goode olde dayes of print publishing could surf in peace as if they don't even exist. Why wouldn't that work? I'm seriously suggesting it.
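
To make the suggestion concrete, here is a small sketch of how a search engine might implement that partition; the result records and the "acap" flag are purely hypothetical, invented for illustration:

    # Python sketch of the idea: ACAP-tagged (paid) content goes into a separate
    # bucket that users only see if they ask for it. The "acap" flag stands in
    # for whatever marker a crawler would record when a site publishes ACAP terms.
    def partition_results(results):
        free = [r for r in results if not r.get("acap")]
        paid = [r for r in results if r.get("acap")]
        return free, paid

    results = [
        {"url": "http://example.org/story", "acap": False},
        {"url": "http://paper.example/archive/123", "acap": True},
    ]
    free, paid = partition_results(results)
    print("Default results:", [r["url"] for r in free])
    print("ACAP (paid) results, shown on request:", [r["url"] for r in paid])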

Meanwhile, a court in Germany has decided two cases against Google on the question of image thumbnails, Bloomberg and paidContent are reporting. Google says it will appeal both. The general US position on thumbnails is that they are fair use, which is a wonderful aspect of US copyright law. You can read about that position in an article entitled "All Rights Reserved: Does Google's 'Image Search' Infringe Vested Exclusive Rights Granted Under the Copyright Law?" by Eugene Goryunov, John Marshall Law School, Chicago, published in the John Marshall Law Review, Vol. 41, No. 2, 2008. But fair use is a US concept, and it's a lot trickier elsewhere. So German Google users could find that they can't have Google image search any more, which would really be a shame from their standpoint. With ACAP, we could *all* find we have no thumbnails available any more, because it's one of the permissions publishers can zap.

Mme. Boribon of Copiepresse has recently been to Spain, speaking against search engines and their allegedly toxic effect on traditional print newspapers and their relationship with their readers, if my rusty Spanish is not failing me. My understanding of the article is that the journalist writing about her speech disagrees; he feels Google isn't to blame if the old-business-model publishers are struggling to figure out the new one.

AFP reports, again in Spanish, that she and a representative from Google were on a panel in Madrid, where she moaned about the effect of search engines on newspapers, alleging that they are diminishing the authority of newspapers as the source of information. She was pushing ACAP because it gives publishers control over their content by adding extensions and fine-tuning to the robots exclusion protocol.

The extensions, for example, would let search engines know when they can scrape text but not images. And with the fine-tuning, you could let the search engine know whether it can crawl, follow, index, preserve, present, etc., and then fine-tune further: when it can present a snippet, a thumbnail, only the original, or the current version, or both, and with what time limits and length limits. A publisher's paradise of control.

I gather that it is like those truly annoying newspaper archive sites, where they show a few words and, in the middle of a sentence, ask for money if you want to read more. I hate those sites, if anyone cares, and one reason is that they want to charge me for articles that I read for free when they were new. That's like giving away a new book, but charging for it used.

That bothers me enough that I never pay, deciding to do without just because I find their greed beyond offensive. And it's annoying to expect to read an article you know was online for free and then find you can't access it unless you pay $5 or something inexplicably expensive for one article, when you didn't pay even one penny when the article was fresh and newsworthy. How do they justify it?

Worse, it messes up research on the Internet. I do a lot of that, and I can't tell you how irritating it is to do a search, see something that looks useful, only to find you can't really read it unless you cross the publisher's palm with silver. That's why I'd like to isolate and aggregate all that paid junk in some corner of the Internet where I don't have to bump into it.

That doesn't mean I won't pay for content. I have done so in the past, but I go to sites that I value enough to pay on purpose, and I don't want to be roped in by a search engine when I don't expect it. So, seriously, could you please rope that kind of stuff off if you use this system? Groklaw is noncommercial, and I can't afford to pay for content right and left. And if I never read a single bit of content on the list of entities using ACAP [PDF], it'd be no loss to me.

Now, I should be fair and let ACAP tell us what is so great about their system. Here's the best explanation I could find about why someone might want to use ACAP:

What types of new business might result from the development of ACAP, for search engines and publishers working in partnership?

1. Beginning with content that is freely available on the web, ACAP will allow publishers to be more confident about the use to which their content is put, allowing discrimination (for example) between trusted and untrusted partners and between different usages. ACAP will allow (again as an example) time-based factors to be taken into account in spidering rules, giving publishers much finer control over dissemination of content at different stages in its life-cycle

2. With content that is currently not publicly available, ACAP will create the technological framework for web site owners to allow access to content behind firewalls (book content, for example) with much finer control of the conditions under which it can be spidered – giving confidence to publishers that they can retain a direct influence over what is displayed to users and other access conditions – thus increasing the publishers' confidence that in making their content available for search they are not damaging their core business models

It's better than litigation, I guess, and in principle, I think fine-tuning permissions makes some sense up to a point. Here on Groklaw, we came up with our own system to figure out what to let search engines use and what not to, so I do understand wanting to fine-tune such things. There can be very valid reasons unrelated to making money.

On page 14 of the slide presentation, I see the work plan for 2008 included considering the creation of an "ACAP" organization in collaboration with existing standards organizations. Like ISO, perchance? I hear they have a fast track. Once you hop on it, via Ecma, it seems there's no way to fail. I see also the ACAP people want an automated "take down" process, but so far they can't figure out how to do that. They never will be able to, either, I don't think. Google explained what is wrong with that concept in its recent letter [PDF] to the McCain/Palin folks. How would you automate the complex fair use analysis?

Your letter raises important issues relating to the Digital Millennium Copyright Act (DMCA) that directly affect the YouTube community. As your letter acknowledges, the DMCA provides a statutory safe harbor for service providers such as YouTube that host content at the direction of users. Without this safe harbor, sites like YouTube could not exist. To strike the proper balance between rights holders and content uploaders, Congress had the foresight to implement a notice-and-takedown regime that allows rights holders to submit takedown notices for uploaded content that the rights holders believe infringes their rights. If service providers remove the content in response to a notice, they maintain their safe harbor and avoid potential copyright infringement liability. If, on the other hand, service providers do not remove the content in response to such notice, they do so at their own risk because they lose their safe harbor.

The DMCA protects content uploaders from erroneous or abusive takedown notices in two distinct ways. First, it allows uploaders to file a counter-notification in response to a takedown notice they believe to have been made in error. Once the uploader files a counter-notification, the statute allows the service provider to reinstate the content after a waiting period of 10 business days without jeopardizing its safe harbor, provided that the rights owner does not file a copyright infringement lawsuit against the content uploader during that waiting period. Second, Section 512(f) of the DMCA allows parties injured by fraudulent takedowns to sue the claimant for damages.

Despite penalties of perjury, the counter-notification process and the very real possibility of lawsuits for damages, some parties still abuse the DMCA takedown process and seek the removal of content that does not infringe their rights. Because of the DMCA's structure, an abusive takedown notice may result in the restriction of non-infringing speech during the statutory 10-day waiting period. We recognize this potential for abuse, and have a number of measures in place to combat it. Indeed, we have spent numerous hours tracking down abuse, terminating offending accounts and reinstating affected videos....

Some have suggested that YouTube mitigate abuse by performing a substantive legal review of every DMCA notice we receive prior to processing a takedown. For a number of reasons, this is not a viable solution. As you recognize in your letter, a detailed substantive review of every DMCA notice is simply not possible due to the scale of YouTube's operations. Any such review would have to include a determination of whether a particular use is a "fair use" under the law, which is a complex and fact-specific test that requires the subjective balancing of four factors. Lawyers and judges constantly disagree about what does and does not constitute fair use. No number of lawyers could possibly determine with a reasonable level of certainty whether all the videos for which we receive disputed takedown notices qualify as fair use.

More importantly, YouTube does not possess the requisite information about the content in user-uploaded videos to make a determination as to whether a particular takedown notice includes a valid claim of infringement. The claimant and the uploader, not YouTube, hold all of the relevant information in this regard, including the actual source of any content used, the ownership rights to that content, and any licensing arrangements in place between the parties. YouTube is merely an intermediary in this exchange, and does not have direct access to this critical information. When two parties disagree, we are simply not in a position to verify the veracity of either party's claims.

If humans are needed to evaluate case-by-case, and they are, and if they can't even do so because they lack all the relevant information, how in the world would a computer be able to automate such a process?? In Europe, where fair use isn't the norm, they may think ACAP sounds great. But I simply can't see how it could work in the DMCA context here.

Of course, the ACAP people would like to simply avoid all those complexities by not allowing fair use access at all and getting to define what they think is fair, thus turning code into law. I think they'd best alter the DMCA first, and in the meantime live by it, as all the rest of us must.

If you'd like to know who uses ACAP, here's a search engine, Exalead, that joined the ACAP pilot project in July of 2007. The press release describes the purpose of ACAP:

ACAP, which has been endorsed by the European Commission, will provide permissions information in a form that can be recognized and interpreted by a search engine spider so that the search engine operator is enabled systematically to comply with the permissions granted by the owner. The new standard will remove the need for proprietary mechanisms that would oblige every publisher or content owner to negotiate their own agreement with each different online relationship. Publishers and other content providers invest huge sums in their content. ACAP gives them control over who gets to use that content, and under what condition.
I see on page 33 of the slides that what ACAP does is simply use its own directive language. Instead of writing "Disallow" in your robots.txt file, you would write "ACAP-disallow-crawl".
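
Side by side, and going only by what that slide shows, the change would look something like this; the path is made up, and the ACAP-crawler line is my assumption about how the user-agent line gets renamed:

    # Classic robots.txt:
    User-agent: *
    Disallow: /archive/

    # ACAP's version of the same rule, per page 33 of the slides:
    ACAP-crawler: *
    ACAP-disallow-crawl: /archive/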

Here are their answers to the questions and objections on your mind, like the robots.txt argument. Why isn't it enough? They say it's unsophisticated:

We recognise that robots.txt is a well-established method for communication between content owners and crawler operators. This is why, at the request of the search engines, we worked to extend the Robots Exclusion Protocol not to replace it (although this posed us substantial problems). The Robots Exclusion Protocol was first defined at a time when the internet was extremely young and is simply not sophisticated enough for today's search models, let alone content and publishing models. Its original purpose was to manage bandwidth when that was a scarce commodity – a very different situation from today’s world. The simple choices that robots.txt offers are inconsistently interpreted. As well as that, a number of proprietary extensions have been implemented by the major search engines, but not all search engines recognise all or even any of these extensions. ACAP provides a standard mechanism for expressing conditional access which is what is now required. At the beginning of the project, search engines made it clear that ACAP should be based on robots.txt. ACAP therefore works smoothly with the existing robots.txt protocol.
Well. It's better than suing people, I guess, but I think all publishers should be forced to take a class on how the Internet works before they are allowed to publish on it. I'm not seriously suggesting that, but I get so tired of bulls in the china shop trying to take it over while breaking everything that makes people want to be there.

Here's the summons, along with some context from Sean.

*********************************

Copiepresse Summons to Google for Damages Up to 49 Million Euros, as text,
by Sean Daly

You may have seen the News Pick last May reporting that Copiepresse, an association of French- and German-language Belgian publications, had served Google with a summons for copyright violations, demanding payment of up to 49 million euros.

When we last updated you on the Google litigation a year ago, the parties were talking, had asked the court to delay the appeals hearing, and the Copiepresse titles had agreed to use robots.txt tagging again after Google returned the newspapers to the main search engine. However, Google's appeal of the February 2007 Brussels Court of First Instance ruling [PDF] stood, and apparently negotiations have since broken down. Copiepresse seeks damages in this separate action, which they have posted on their website, in French and English, with the following statement:

Since the negotiations with Google have not led to an agreement the appeal proceedings are therefore going on. The pleas have been lodged by both sides but no hearing date has been set. In addition the lack of agreement between Google and Copiepresse has obviously led the latter to launch a damage suit complementary to the injunction proceedings.

What kind of agreement do the Copiepresse publications want? Margaret Boribon of Copiepresse told us in October 2006:

We want the respect of the European legal framework, meaning prior authorization; that this authorization should involve remuneration seems completely logical, since the Google News service constitutes really a loss leader for Google and it's a way for them to generate very, very, very large revenue... If Google was reasonable, they could understand that we have an interest in coming to an understanding, and doing a fair deal, a win-win deal, because indeed our content without a very good search engine would not be the most efficient thing on the Internet, but their search engine, if all the content producers refuse to go along, is no longer worthwhile either, or much less, in any case.

So Copiepresse wants remuneration for its content. Although Google has the deepest pockets, they are by no means Copiepresse's only target; even the European Commission's news aggregator has been attacked, unsuccessfully so far as we reported. But as the World Association of Newspapers statement tells us, Copiepresse is not alone; as it happens, Google has previously faced opposition and in some cases litigation from news content producers such as the Associated Press, Agence France-Presse, the New York Times, the Washington Post, and the UK Press Association. Google dampened criticism and ended litigation -- with the exception of Copiepresse -- by agreeing to license content from the major B2B wire services, saying: "This change will provide more room on Google News for publishers' most highly valued content: original content."

What about fair use? you might ask. Can't Google just index freely on the Internet whatever isn't excluded with robots.txt, and aggregate news article titles, leads, brief excerpts, and tiny thumbnails? In the case of Copiepresse, there's a catch: Fair use is a US concept, and Belgian and other European countries' copyright law is different. Dr. Séverine Dusollier, Creative Commons lead for Belgium, professor at Belgium's University of Namur and head of the Department of Intellectual Property Rights at the Research Center for Computer and Law there, presented a paper [PDF] a few years ago discussing fair use -- "or exceptions to copyright, as we say in Europe" -- in the context of DRM. Dr. Dusollier's expertise, as you will see, was called upon by Copiepresse: an article she authored is cited in the Google summons, on not a minor point: the application of the CFI ruling to all of Europe as jurisprudence, a goal clearly stated by Mme Boribon in our 2006 interview.

Copiepresse presents an infringement expert assessment by another Belgian professor in the summons, Alain Berenboom. However, Professor Berenboom is not a detached expert operating in a vacuum; no stranger to the case, he is a lawyer and represented [French] the Société Multimédia des Auteurs des Arts Visuels (SOFAM), a photographers rights association, as an intervener in the Copiepresse lawsuit which ultimately negotiated a confidential deal with Google and withdrew from the case. If you read French, there is an analysis of the Copiepresse judgement by an associate of Professor Berenboom here [PDF]. Professor Berenboom may be described as the Renaissance Man of Belgian copyright law: he advised the Belgian Parliament on the transposition of the EU Copyright Directive, is the editor in chief of a revue cited several times in the Court of First Instance's Google ruling, wrote Luxembourg's copyright laws, runs a law firm, is a regular columnist for Le Soir (a Copiepresse title), and last but not least, is a novelist and... blogger.

To sum up, some points worth mentioning about this summons, including a curious oddity. First, you will notice that the summons fixes a court date, September 18th, 2008, but that was later changed. Next, you will see that Copiepresse does not rehash any of the arguments; the summons is really just an invoice for one year of copyright infringement, for 4 million euros upfront and a total of between 32.8 million and 49.2 million euros to be paid later; Copiepresse magnanimously offers to accept the lower figure while reminding the court that they had issued only a single, limited permit (!) to Google to index Copiepresse newspaper sites. Should Google wish to contest the calculation methodology, they are free to do so, but in that case Copiepresse wants Google's Belgian logs from April 13, 2001 to the present, to be perused by a panel of experts, perhaps by Mr. Golvers, whose confidential, unpublished report forms the basis of the February 2007 ruling. Copiepresse cites damages in light of the 1994 copyright law, which in fact was amended on May 10, 2007 (after the CFI ruling, therefore) with more favorable language concerning damages. Was that amendment influenced by the Copiepresse case? And Copiepresse wants Google to publish the ruling like a scarlet letter on its Belgian homepage and news page, in Arial 10, for a period of 20 days, something that might look like this.

Finally, the oddity: to justify serving the summons at Google headquarters, the document states:

considering that this party is based/domiciled in the UNITED STATES OF AMERICA and considering that I do not know any residence nor elected domicile in Belgium of this party,

Now, if they had just consulted a search engine -- Yahoo would do nicely -- they would know that a Belgian was named country director for Belgium last year, that he runs a sales office in Brussels, and that Google is investing 300 million dollars or so building a datacenter in Belgium's Wallonia region (photos) which will come online later this year. They could even watch a video, courtesy of the Belgian government, of Google's country director extolling the virtues of investing in... Belgium.




Linda REYNAERT* - Jules CALLEBAUT
Licentiates in Law [Licenciaten in de Rechten - Licenciés en Droit]
JUDICIAL OFFICERS [Gerechtsdeurwaarders - Huissiers de Justice]
Ortwin VERSCHUERE*
Candidate Judicial Officer [Kandidaat-Gerechtsdeurwaarder - Candidat Huissier de justice]
[address]

REFERENCE: A15346 / GT

SUMMONS

(ART. 86bis of the law dd. 30th June 1994 on copyright and related rights)


Considering that my hereafter better described plaintiff, COPIEPRESSE, is the management company of the rights of the Belgian publishers of the daily French- and German-speaking press, authorized by the M.D. dd. 14th February 2000 and 20th June 2003 (MB dd. 10/03/2000 and MB dd. 14/08/2003)[PDF] to carry out its activities on Belgian territory from the date an excerpt of its articles of association are published in the Moniteur belge [Belgian State Gazette];

That my plaintiff's company objective is to defend the copyright of its members (actual rights of the publishers and acquired rights of the journalists) and to regulate the use of the protected work of its members by third parties.

Considering that the COPIEPRESSE directory is available on its website (http://www.copiepresse.be).

That the plaintiff is moreover entitled to go to court.

Considering that my plaintiff has discovered that the hereafter better described company under American law, GOOGLE INC published entire articles or article excerpts from its list of editors to be read by the public at large through:

  • "GOOGLE NEWS": reproduction and partial publication,
  • "GOOGLE SEARCH": Full reproduction and publication via its "cached" pages,

Considering that to date, GOOGLE INC has only received one permit to reproduce the contents of the websites of the publishers featuring in the Belgian COPIEPRESSE directory with the sole purpose of allowing these latter parties to be referred to on the search engine (GOOGLE SEARCH).

That this permit does not cover the other services offered by GOOGLE INC, i.e. notably "GOOGLE NEWS". Neither does it cover access to the "GOOGLE SEARCH" "cached" pages.

Considering that the dispute between my plaintiff and GOOGLE INC gave rise to a judgment handed down by the President of the Court of First Instance of Brussels sitting in injunction proceedings on 13th February 2007 (Civ. Brussels (inj.), 13th February 2007, GR no. 06/10928/A)



That even if this judgment were to be appealed by GOOGLE INC, its quality on a legal level has been unanimously recognized by various doctrine articles.

That COPIEPRESSE therefore sees itself forced to uphold its position, which was also followed by the President of the court of first instance of Brussels in his judgment dd. 13th February 2007 and in respect of which doctrine concludes: "This concerns the correct application of copyright which has to be made quite clear, before the Belgian decision is copied in all the countries..." (S. DUSSOLIER, The clay-footed giant: Google News and copyright, Lamy, Intangible Rights).

That these proceedings seek to have GOOGLE INC ordered to redress the loss suffered by the COPIEPRESSE principals as a result of the violation of copyright.

Considering that compensation for the loss covers various positions and that its foremost objective is to restore the injured party to the situation he was in as if the offence had never been committed.

That the compensation must cover all the loss items.

That on grounds of article 86 A of the law dd. 30th June 1994 concerning copyright and related rights introduced by the law dd. 10th May 2007 concerning the civil aspects of the protection of intellectual rights: "§ 1st. Without prejudice to § 3, the injured party is entitled to be compensated for any loss he has suffered as a result of the violation of copyright or related right. § 2. If the extent of the loss incurred cannot be determined in any other way, the judge may set a reasonable and fair fixed amount for damages."

That my plaintiff has asked Professor Alain Berenboom (Université Libre de Bruxelles [Free University of Brussels]) to put the extent of the loss incurred into figures.

That at the end of a 26-page report Professor Berenboom concludes that the loss suffered ranges between a minimum of 32,793,366.00 euro and a maximum of 49,190,049.00 euro.

That Professor Berenboom's calculation is based on the number of articles the judicial officer appointed by COPIEPRESSE discovered after the "flagship" websites of the Belgian publishers represented by COPIEPRESSE had been blacklisted.

Considering that in a note drawn up in conjunction with Mr. Magrez [lawyer for Copiepresse, ndlr], another method of calculation was used which sets the loss at an amount of 39,751,146 euro.

That this other method is based on an estimate of the traffic in relation to the newspaper articles on "GOOGLE SEARCH" and "GOOGLE NEWS".

That once the figures from Mr. Magrez and Professor Berenboom were reconciled the latter concludes that "the additional damages awarded by the courts usually range between 100 and 200 % of the amount of unpaid royalties. In view of the amounts at stake, we think that the judges will lean towards applying damages of 100 %. It is therefore the minimum amount which should be sought. On that basis it is appropriate to set the loss suffered at the amount of 32,793,366 euro."

That it must be noted that these assessments have been made for one single year only and do not target, as far as "GOOGLE SEARCH" is concerned, the entire period not covered by statutory limitation (i.e. 5 years).

Considering that these bills have already been forwarded to Counsel of the summonsed party.

That my plaintiff leaves it up to Your Court to decide whether additional damages of 100% on the evaded royalties should be awarded which would bring the overall amount to 49,190,049.00 euro.

Considering that if GOOGLE INC were to contest the number of articles or the estimate of the traffic in relation to the newspaper articles, Your Court cannot take these claims in support of their own case into account so that an expert's appraisal will be unavoidable.

That in fact statements made by one party in support of his own cause are merely claims on which the judge cannot base himself if these claims are not backed up by other elements or some other form of presumption.



So that proper justice would be done it would be appropriate to ask GOOGLE INC. before any ruling is made whether they contest the data on the basis of which the calculations were made at the behest of COPIEPRESSE.

Considering that in the event the summonsed party was to contest, it would be appropriate to appoint a panel of experts whose task is further specified hereafter.

That in that case, COPIEPRESSE suggests that as date of the interruption of statutory limitation the date on which the ruling by the Distraint Judge with the Court of First Instance dd. 23rd March 2006 was served, in which Expert [Luc] GOLVERS was appointed, i.e. 13th April 2006, should be considered.

Considering that in view of the assessments already made (i.e., between 32,793,366.00 and 49,190,049.00 euro), my plaintiff deems they should already be provisionally awarded a sum of 4,000,000.00 euro.

FOR THESE REASONS:

In the year two thousand and eight, on TWENTY-TWO MAY.

AT THE PETITION OF:

The association under the form of a Limited Liability Co-operative Society COPIEPRESSE, registered with the Crossroads Bank for Enterprises under number 0471.612.218, RPM [Register of Legal Entities] BRUSSELS, with registered offices in [address],

With Counsel Mr. Bernard MAGREZ, Solicitor, with chambers in [address],

I, the undersigned, Ortwin VERSCHUERE, Judicial Officer temporarily replacing Linda REYNAERT, Judicial Officer, with chambers in [address].

HAVE SUMMONSED:

The company under American law GOOGLE Inc., with registered offices in [address],

SERVING MY WRIT AS DESCRIBED HEREAFTER.

To appear on THURSDAY EIGHTEENTH of SEPTEMBER 2008 at NINE O'CLOCK in the morning before the FIRST CHAMBER OF THE COURT OF FIRST INSTANCE OF BRUSSELS, sitting in its normal hearing rooms, ROOM 0.10, at the Palais de Justice [Court House], Place Poelaert, in said BRUSSELS,


IN ORDER TO:

For the aforementioned reasons and all others to be enforced in place and in time and here under explicit reserves.

Pronounce the claim admissible and founded;



Rule in law that by running the information portal GOOGLE NEWS without prior permission and by giving access to the "cached" pages on its search engine "GOOGLE SEARCH", GOOGLE INC has violated the Belgian legislation on copyright and related rights.

Rule in law that GOOGLE INC cannot invoke any legal exception; not of article 10 of the European Convention on Human Rights; nor the liability exemption granted to technical operators by the law on e-commerce.

Before ruling in the event that GOOGLE INC were to contest the number of articles and the assessment of the traffic in relation to the newspaper articles, data which form the basis for the assessments carried out at the behest of COPIEPRESSE, to appoint a panel of experts, at the exclusive expense of GOOGLE INC., consisting of at least one IT-specialist and a certified public accountant or company auditor who shall have the task to:

  • During the inaugural meeting:
    • Define the methodology to be used
    • Determine the cost provision to be deposited with the court registry

  • Attempt to reconcile the parties;
  • Draw up a list of the articles which were published on GOOGLE NEWS before GOOGLE INC gets rid of them;
  • Draw up a list of the articles which were published on GOOGLE SEARCH prior to and after the black-listing and this from 13th April 2001 onwards;
  • These first two lists shall specify for each article:
    • The publication,
    • The year of publication,
    • The title of the article,
    • If possible, the author of the article,
    • Whether the article was published in full or whether only an extract was published
    • if GOOGLE added any information or made any changes to the original texts.

  • To have the logs of the GOOGLE server hits forwarded in order to establish:
    • The number of "cached" articles which were looked at on GOOGLE SEARCH since 13th April 2001
    • The number of visits to GOOGLE NEWS since it was launched in Belgium and since the Belgian newspaper articles were withdrawn
    • The number of visits by GOOGLE NEWS to the publishers since its launch in Belgium and this until all the newspaper articles on the websites listed in the COPIEPRESSE directory which were published on the COPIEPRESSE website were withdrawn.
    • Any information GOOGLE INC retains on GOOGLE SEARCH and GOOGLE NEWS in relation to the searches carried out and visits paid and more generally, in relation to the visitors who were redirected to the newspaper sites.

  • File their report six months from the date of the inaugural meeting

Order GOOGLE INC to pay my plaintiff the provisional amount of 4,000,000 euro on an amount which has provisionally been estimated to lie between 32,793,366.00 and 49,190,049.00 euro.

Finally, to order the summonsed party to publish in a visible and clear manner (character and font size: Arial: 10) and without any commentary on their part, the entire intervening judgment on the home pages of GOOGLE.BE and NEWS.GOOGLE.BE for a continuous period of 20 days from the date of its service, under penalty of a daily fine for nonperformance of one million euro per day of delay.



Order GOOGLE INC to pay compensatory interests dating back to the moment the violation of copyright was established.

Award all the costs of the proceedings, including the litigation expenses which, in view of the significance of this case, have been set at € 30,000 against GOOGLE INC.

Order the provisional enforcement of the intervening judgment notwithstanding any arrestation or appeal and without surety or cantonment.

Action based on the above adduced reasons, the laws and decrees on the matter and on all other grounds to be enforced in place and time and which are here fully and expressly reserved and without any prejudicial acknowledgement.

And in order that the addressee thereof should not plead ignorance, but considering that this party is based/domiciled in the UNITED STATES OF AMERICA and considering that I do not know any residence nor elected domicile in Belgium of this party, I the undersigned and aforesaid Judicial Officer, have sent, pursuant to the International CONVENTION with regard to the service and notification abroad of judicial and extra-judicial documents in civil and commercial cases, drawn up in THE HAGUE on 15 November 1965 (approved by the law of 24 January 1970 - Belgian Official Gazette of 9 February 1971), by registered mail with acknowledgment of receipt, deposited today at the post office in UCCLE, [address]

  1. one application, properly completed in English, corresponding to the model form that is appended, in enclosure, to this Convention;
  2. two copies of the present writ, as well as the documents mentioned therein, each copy of the writ accompanied
    1. with a form that describes the summary of the document to be served, drawn up in English;
    2. with a translation in English

  3. with the proof of amount of $ US 95

to the following private company appointed by the United States of America, empowered to act on behalf of the Central Authority, to wit:

PROCESS FORWARDING INTERNATIONAL
[address]

asking the latter to:

  1. to serve on the company under American Law GOOGLE Inc. registered offices of which are based [address], one of the copies of this writ, as stated in subsection 2 above, accompanied by the translation thereof, as well as the form that describes the nature and the subject matter of the document, in accordance, as such, of the methods of procedure, in the legal texts of the petitioned country, laid down for the service or notification of documents drawn up in that Country and meant for individuals living there, notably in pursuance of article 5, sub-section 1a of the aforesaid Convention;
  2. to kindly return to me the other copy, along with the declaration, provided for in article 6 of the Convention, amounting to the fact that the application has been implemented, at the same time stating in what form, in which place and at what point in time this was carried out, as well as the person to whom the document is issued, or, should the occasion arise, stating the circumstances which have obstructed the application;



And whereas article 10 of the aforesaid Convention allows, among others, for the unimpeded authority to send judicial and extra-judicial documents, directly by post, to individuals who are located abroad, and that the UNITED STATES OF AMERICA do not oppose to this possibility, I have also sent one copy of the present writ (as well as the documents notified therein), as well as a form containing the summary of the document to be served, and a translation into English, under registered cover with acknowledgement of receipt, to the address of the addressee in the UNITED STATES OF AMERICA, at the aforesaid post office of UCCLE, [address]

And I have likewise attached the receipts of these registered letters to the original of the present writ;

WHEREOF RECORD.

Costs: four hundred and thirteen Euro and forty-two Cent, to be raised with the costs of the translation into English, to wit: 383,33 EUR

The Judicial Officer,





Certified a true translation from French into English,
L.VANPARIJS Sworn Translator.

[signature]
[cost receipt]



An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture | 288 comments
Comments belong to whoever posts them. Please notify us of inappropriate comments.
Corrections Thread
Authored by: artp on Thursday, October 16 2008 @ 09:49 AM EDT
Eror -> Error in Title block, please


---
Userfriendly on WGA server outage:
When you're chained to an oar you don't think you should go down when the galley
sinks ?


News Picks Discussions here.
Authored by: Erwan on Thursday, October 16 2008 @ 10:23 AM EDT
Please, quote the article's title.

---
Erwan


OT, the Off topic thread
Authored by: Erwan on Thursday, October 16 2008 @ 10:25 AM EDT
As usual.

---
Erwan


Off Topic Thread
Authored by: artp on Thursday, October 16 2008 @ 10:25 AM EDT

Please change Title Block.

Clickable links are appreciated. Instructions at Groklaw's HTML How To page. Preview is your friend.

Please stay firmly off topic......

---
Userfriendly on WGA server outage:
When you're chained to an oar you don't think you should go down when the galley sinks ?


Having their cake, and eating it too
Authored by: Anonymous on Thursday, October 16 2008 @ 10:33 AM EDT
Seems to me ACAP will be ram-rodded through either some for-sale legislature, or
an off-the-wall court somewhere. But Google has a built-in solution.

Just put *every* ACAP-listed website and supporter website reference at the END of the returned search list. Once the idiots see themselves at the end of a list of 1,250,000 sites (the number you get if you Google for "Mickey Mouse", for example), and watch their Internet-derived profits plummet, maybe they'll get a clue: they need the Internet more than we need their close-minded and draconian "protections".

Don't let them have the cake (search priority) and eat it too (with draconian
violation of *our* rights), Google. You have the power to let them get
*exactly* what they want - and then let them sit in their own swill.


Doesn't sound too bad
Authored by: TFBW on Thursday, October 16 2008 @ 10:48 AM EDT
The beauty of a system like ACAP (from the little I can surmise about it) is
that it allows exactly the kind of segregation of the Internet which PJ seeks.
It is a system by which a copyright holder can express policies in relation to
his work, and this becomes part of the document metadata on which a search can
be based. I can imagine that Google will, in future, allow you to filter out
search results with particular ACAP policies. This will be a very good thing,
and the publishers that thought ACAP was da bomb will find themselves hoist with
their own petard.


An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture
Authored by: Anonymous on Thursday, October 16 2008 @ 10:54 AM EDT
I don't see the problem. What they want to do with their content is entirely up
to them.


On a similar theme.
Authored by: Anonymous on Thursday, October 16 2008 @ 10:55 AM EDT
I was doing some research on the web and had just started to read a relevant page when the site opened a pop-up in the center of my screen telling me that I was using ad-blocking software, and that if I turned it off I could view their content. Needless to say, I was out of there pronto.

Does anyone know of a Firefox add-on that could get round this situation?


Robots.txt
Authored by: Anonymous on Thursday, October 16 2008 @ 10:58 AM EDT
Historical Note:

'robots.txt' was never intended to be used as a data protection system. It was developed in a heartbeat to notify a search engine of static web pages back when transmission speeds were s l o w and bandwidth was expensive.


Confusion of two things
Authored by: Anonymous on Thursday, October 16 2008 @ 01:52 PM EDT
This discussion (and certainly the agitation of Copiepresse) confuses two things:

1) What language/means should be used for (voluntary) access control to website internals?

2) What language/means should be used for INVOLUNTARY access to website internals?

Robots.txt, as it stands, is a voluntary protocol. Major search engines agree to honor it; however nothing requires them to. There are numerous "hostile" spiders crawling the web which happily ignore robots.txt, and right now there is no restriction on them doing so--other than copyright law. Given that you can specify "disallow" on content that you don't have copyright to but is hosted on your server, I'm not sure exactly what robots.txt has to do with anything.
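
That voluntariness is easy to see from the crawler side: honoring robots.txt is an explicit step the crawler author chooses to code in. A minimal sketch using Python's standard urllib.robotparser module, against a hypothetical site:

    # A well-behaved crawler chooses to consult robots.txt before fetching.
    # Nothing in HTTP forces this check; a hostile spider simply skips it.
    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser()
    robots.set_url("http://example.com/robots.txt")  # hypothetical site
    robots.read()

    url = "http://example.com/archive/story.html"
    if robots.can_fetch("MyCrawler", url):
        print("Allowed to fetch", url)
    else:
        print("robots.txt disallows", url, "- a polite crawler stops here")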

A finer grained mechanism than robots.txt might be a useful thing.

The question of import, though, is to what extent such things should have LEGAL force. Should it be a crime for a search engine spider to disregard a "robots.txt" file or an ACAP configuration file--either under an intellectual property regime (which assumes, I suppose, that one has copyright to the material in question), or under a "hacking"/"unauthorized access" regime (which doesn't require copyright)? Lots of arguments on the latter go both ways--do people or companies have a right to privacy (or selective privacy/access--the right to exclude part of the public they don't like) on an unprotected public server?

I don't think that an access control can--or should--have any bearing on fair use, for the simple reason that the whole concept of "fair use" implies a lack of explicit permission (license) to use the material in question. If I have the copyright holder's permission to excerpt a work, then I don't need the fair use defense. The interesting legal question, then, becomes this:

If a copyright holder grants permission to use a particular excerpt of a work, does that undermine a fair use defense for portions of a work other than the permitted excerpt?

I would think, at least under US law, the answer would be no.


Why can't Google just refuse to index ACAP content?
Authored by: Jim Olsen on Thursday, October 16 2008 @ 01:53 PM EDT
If the publishers want to 'fine-tune' their permissions, they are free to do so. Google could take them up on their offer if it wanted to, or it could just treat an ACAP file as equivalent to a robots.txt file that refuses them access.

I never understood the argument that Copiepresse could force Google to index
their content, on whatever terms they dictate.


Fair Use ? - An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture
Authored by: Anonymous on Thursday, October 16 2008 @ 02:15 PM EDT
Strange.

Occasionally I'll encounter a link to an article, only to get there and discover
I need to register to use the source.

Currently there are dozens of registrations I have for various sites, most of which I don't use. Each one has a different profile, name, age, city, username, password, etc. Try remembering how many different first pets' names you have when each one is unique.

The amount of stuff out there is so voluminous that I've quit the practice. If
you don't want me to see it, fine. If it is important, I'll find it somewhere
else. Otherwise, so much for your product.

If it is not published with the creative commons license or the GPL, that's
fine. I'll respect your copyrights. Life is too short to read everything on
the net.

I have bought some DVDs, a few of which I've watched once, a couple maybe twice. I've a number of VHS tapes, some of which I've watched a number of times.

But I fail to see the purpose of publishing something on the internet if you
don't want people to read it. And if you expect people to read it, why on earth
make it hard to do so? And if you are going to put something on the internet
for people to use, how can any sane person expect it not to be copied at least
once, since almost all browsers cache the downloaded content?

Oh, I forgot, these are the people who sue grandmothers and intimidate 12 year
old girls. Obviously insane. Perhaps the correct procedure is to move to have
them involuntarily committed!!!!


Autozone - An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture
Authored by: Anonymous on Thursday, October 16 2008 @ 02:21 PM EDT
PJ

Want some irony? Autozone.be is one of the publishers that has implemented
ACAP.


An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture
Authored by: Anonymous on Thursday, October 16 2008 @ 02:35 PM EDT
I find it ironic that the prevalent attitude here is that you have some sort of
god given right to their content. Yet, when M$, Tivo, SCO and others try to
assert the same thing over GPL code, the same people are up in arms at the
thought of their copyright being violated.

Can't have it both ways..

Not that it matters, I think Copiepresse are idiots, and I hope they sink
because of this millstone they are placing around their necks. It still doesn't
give me the right to demand that they do what I want with their content.


An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture
Authored by: Tufty on Thursday, October 16 2008 @ 05:04 PM EDT
Am I reading this right? All that ACAP does is the same as robots.txt in that
the spider is supposed to take notice of it. Any spider that wished to could
ignore it and carry on scraping? Sounds like a chocolate fireguard to me.
Perhaps Google would be better off ignoring all Belgian queries and Belgian web
sites. Belgium, the black hole of the internet.

Tufty


---
Linux powered squirrel.


Does the Wayback Machine see or respect ACAP??
Authored by: Anonymous on Thursday, October 16 2008 @ 05:10 PM EDT
I'm not sure if this is On or Off Topic:
We have just been advised by our campus IT "not to be alarmed" if we see sweeps of our servers from a couple of IP nrs. The National Library of New Zealand is fulfilling its obligations in respect of the national documentary heritage by "harvesting" the .nz domain. They have contracted the work out to archive.org. We are told that the harvest will not respect robots.txt, but will respect password protected content. The intent is to collect all material embedded in pages.


Me and my local paper
Authored by: Crocodile_Dundee on Thursday, October 16 2008 @ 08:14 PM EDT
Quite some time ago my local paper (The West Australian) used to have fairly
easy access to a selection of its articles on its web page. I tended to buy the
newspaper when I felt like something "throw-away" to read (during a
long compile), and the web site had enough of the content that it was useful.
Indeed there were occasions where after browsing the web site I decided to go
and get a paper.

A few years ago they started to narrow down the material on offer and just make
it unpleasant to view it. I stopped going to their site and I stopped buying
their newspaper.

I believe they have reverted to something like their old behaviour on the web
site, but I really don't know for certain because I don't go there any more.

If Google don't like the terms and conditions the web sites require then it
should simply stop spidering their sites.

Alternatively Google could use as a measure of page ranking the amount of
freedom granted by the content holder. I know that this is a measure that I
implicitly use. If it's hard to get at then I don't go back unless it's the
only place left.

Another option is for Google to remove their cache and hit the web site(s) in
question each and every time that somebody's search returns a hit on one of
these pages. (Is that evil?)

---
---
That's not a law suit. *THIS* is a law suit!


They can't use .htaccess?
Authored by: katayamma on Thursday, October 16 2008 @ 08:27 PM EDT
So are these people so incompetent that they can't manage to write some kind of access rule for whatever web server they're running? It would be trivial to add a block for anything with Google's crawler name in it, keeping it from reading the pages they don't want it to have access to.
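
For instance, a minimal sketch of such a rule in an Apache .htaccess file, assuming Apache 2.2-era directives; the idea of targeting Googlebot's User-Agent string is the commenter's, the exact directives are my illustration:

    # .htaccess sketch: deny any request whose User-Agent mentions Googlebot.
    SetEnvIfNoCase User-Agent "Googlebot" blocked_crawler
    Order Allow,Deny
    Allow from all
    Deny from env=blocked_crawler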

This is just another piece of crap they're trying to foist off on the industry
to replace the fact that they're too lazy to use existing technologies.

---
Never underestimate the power of human stupidity.


I don't think they understand the implications of their actions
Authored by: The Mad Hatter r on Friday, October 17 2008 @ 12:13 AM EDT

Google drives traffic to the news sites. The news sites make money off advertising. The more visitors they have, the higher their advertising revenues. If they limit what Google can show, they will get less traffic from Google, which will hurt revenues. It doesn't take a rocket scientist like the ones TSCOG used to work this out.

I've had my arguments with newspaper sites in the past. One site displayed a
prominent picture of my wife in low res. I wanted a copy so I called them.
$60.00 was what they wanted, and it would be a print copy, they wouldn't supply
an electronic one, which was what I wanted. I told them that since the picture
was of my wife, and that she had given permission, that the picture should be
free. They disagreed, and I no longer buy that paper.

Another site decided to lock down stories older than 5 days, and charge $13.00 per copy. Two years later they stopped locking down the older stories, and everything, including the stories from the time when they were charging, became freely available. I've found this happening with more than one news site, and while I don't know exactly why, I strongly suspect the reason was economic. By locking down older stories, they forced their readers to look elsewhere, or to pay. If the cost was higher than the customer thought reasonable, they went elsewhere to get the story. The last thing any business wants to do is send customers to a competitor, and that is exactly what they were doing.

If the Belgian publishers succeed in locking out Google it's going to damage
their earnings. I doubt that they will notice anything immediately, but I would
be surprised if it took longer than six months. And then they have to make a
decision. Keep things locked down and live with lower revenues, or open up and
hope the readers come back.

ACAP is a solution for a need that doesn't exist. As a standard it may end up
getting approved, but I wouldn't expect it to have wide adoption, because anyone
who adopts it will find their revenues dwindling. Anyone who persists in using
it will probably drive themselves out of business.

Economic Darwinism. Think of it as Evolution in Action.


---
Wayne

http://sourceforge.net/projects/twgs-toolkit/


An Update on the Copiepresse/Google Dispute -- ACAP Enters the Picture
Authored by: iraskygazer on Friday, October 17 2008 @ 01:18 AM EDT
PJ,

ACAP has one purpose and only one: provide a foot in the door for the elimination of fair use. It is another attempt at DRM under a different name. That's why there are so many consortia wanting this implemented. This type of activity should be completely disallowed through law, simply because publishers can prevent access to their products by providing a password-protected web site. Simply put, there is absolutely NO need for ACAP. This type of access control mechanism follows the same business principles used by Microsoft: embrace, extend, and overcome, or buy out or eliminate.

It is very easy to see that corporations represented by a consortium have no
concern for moral obligation to society. Profit and money are the only driving
concern and societal benefit through unfettered access to content cuts into
their profit margins. I just don't understand why large content providers are
attempting to control the Internet rather than create 2 web sites. One site
would contain limited portions of a published work and the other web site would
be password protected with provision to access all content of a published work.
The password protected site would be completely invisible to Google but the
paying customer would have access to whatever tier of product had been
purchased. The 2 web site approach provides full content control by the
publisher without having to make any alterations to the current 'openness' of
Internet processes. This is why I suggest that there is a much less desirable reason for the consortia wanting ACAP.

If ACAP continues, the openness of the Internet will suffer. As a number of other comments on this topic have said, why can't the publishers simply use the technology already available to protect their content? I find it hard to believe the push for ACAP has been generated by ignorance of the content control processes presently available to publishers.


Copiepresse - life support for ailing business models
Authored by: NigelWhitley on Friday, October 17 2008 @ 09:45 AM EDT
Thanks to Sean (and PJ) as always for an informative article.

Since ACAP is being actively discussed elsewhere, I hope it is worth commenting on the financial aspect of the summons. I notice first that their claim is based on articles discovered since Google "blacklisted" what Copiepresse describe as their "flagship" websites, although it does not indicate whether these were articles from those sites. In other words, it is not clear from the summons whether the articles are from other websites run by Copiepresse which it knows about but Google did not associate with them (and therefore did not exclude). More likely it is for cached articles and those on Google News, but the summons does not specify.

Secondly, they have obtained an expert report from someone sympathetic to their position, supported those figures with a report by their own lawyer asking for slightly more, and then asked the court to accept the lower figure. I somehow doubt that Google will fail to challenge that figure with their own expert report (if permitted), and I doubt they will choose one sympathetic to Copiepresse. I don't know whether the court is then bound to appoint a panel of experts (as the summons suggests) or whether the court can reach an assessment based on the information before it (IANAL).

As PJ alludes to, it has all the appearances of Copiepresse attempting to make
money from their old business model i.e. sell current copies of a magazine at
one price then charge a premium for shipping old copies (not unreasonably since
paper is expensive to store and transport). Of course, electronic copies are
vastly cheaper to keep and to distribute so the prospect of applying the same
business model to the digital world must have seemed a highly lucrative
prospect.

Further speculation starts here. Copiepresse have websites set up with old articles that they expect people to pay through the nose for. I guess that they haven't had people queuing up to use the service and are looking for an alternative revenue stream. I would be very interested to see how the bill they have sent Google holds up against the actual revenue they received for articles on their websites: I strongly suspect the Google bill is higher.

Quite simply, IMHO, if being listed on Google meant Copiepresse made more money from the pay-for-antiques service than not being listed, this case would have been stillborn. Instead I believe they are not making what they hoped and are looking to get Google to make up the difference by leveraging copyright law. In other words, they hope to force Google to pay for a model which readers in general have rejected. (ACAP also seems keen to support this model, although I'm not aware of any clamour from consumers for it - quite the opposite.)

Google need to keep news organisations sweet in order to keep Google News viable, so expect to see them continue to negotiate an agreement with Copiepresse. Copiepresse seem to think Google need them more than they need Google and, right now, that may be so. However, if Google can cut them loose without alienating other publishers, Copiepresse may yet get what it wants: it may have none of its copyrighted content appearing on Google, ever.

A brief point on the time-limiting aspect of ACAP, FWIW. I suspect the intention
is that after the time-limit expires, the search company should delete the
cached copy but still present the link to it. A simple alternative would be to
remove the link entirely when the time limit expired rather than feeding people
to a pay site, or at least significantly lowering the rating of the link since
it is effectively no longer accessible for most people. This is less draconian
than a blanket ban on ACAP sites and more reflective of the usefulness of such
content to the users of the engine. Similarly for "protected" content
which Google must crawl to as a "privileged" browser.

---------------------
Nigel Whitley


Some "standard". . .
Authored by: Anonymous on Saturday, October 18 2008 @ 12:50 AM EDT
Here's what I love about this "ACAP". It claims to be an internet/www
open standard, and yet they don't seem to be trying to get this approved by
either the IETF or the W3C. If it's not an IETF RFC or W3C Recommendation, it's
not an Internet Standard.


lawsuit is confusing
Authored by: mobrien_12 on Saturday, October 18 2008 @ 04:44 AM EDT
So this newspaper company is really suing google for money because google won't
use ACAP, not because of copyright infringement?

Google is a private company. They said "look you can use robots.txt."


The newspaper company seems to have said "no, we don't want to use
robots.txt. You have to index us on our own terms."

This seems fairly cut and dry then. The newspaper company thinks google is
infringing their copyright, but doesn't want to stop Google from indexing them.
So they fail to mitigate damages.

Ridiculous greed.


this protocol would allow filtering
Authored by: Anonymous on Saturday, October 18 2008 @ 06:59 AM EDT
This protocol would allow filtering. So if you are for net neutrality, let these noobs know how you feel.


What Google and the court should do
Authored by: LaurenceTux on Saturday, October 18 2008 @ 12:39 PM EDT
Google should hard-drop from the index any content from an ACAP member or from a site with an ACAP-compatible robots.txt.


The court should tell Copiepresse to "relocate to the nether world" after paying all of Google's costs and any court fees.

They knew the rules as they stood when they started, so now they are in a hissy fit because it's not what they wanted.


Groklaw © Copyright 2003-2013 Pamela Jones.
All trademarks and copyrights on this page are owned by their respective owners.
Comments are owned by the individual posters.

PJ's articles are licensed under a Creative Commons License. ( Details )