pgengler.net
spiraling towards contentless content
Things I've done and where I've been
Posted: 2007-03-29 23:05
No comment(s)
Author: Phil Gengler
Section: Stuff

"I really ought to start writing again." It's something I've said many a time, and each time nothing ever really seems to come of it. For whatever reason, I seem to have mostly lost my motivation to write.

I think there are a couple of reasons for this. First, now that I'm out of college, I'm no longer having active conversations about politics or anything like that. I still follow the news, and I'll occasionally make a comment or two, but for the most part I've become passive on that front, and those conversations seem to have been the source of most of my writing. Another factor, also related to being out of college, is that I don't have to write anything. There are no essays to be done, no columns to write for The Stute, so writing is no longer a natural part of my routine.

I can't say I'm really too happy with that, though. I like to write, and there have been plenty of times that I've sat down with time allocated for nothing but writing and come away with nothing. It's more than a little disappointing to read back through some of my older entries, especially things that took about five minutes to write, with no prompting, and find myself unable to get back into that same mode.

The problem has been more general than just writing, though; for quite some time, I hadn't really done any coding either, which is another thing I like to do. I'd have a day or two where I'd take a little bit of time and fix a couple of bugs in some of my projects, but it was rare for me to spend any time on new development and rarer still for me to show any progress in it. Lately, though, I've been a lot more productive. I finished a major upgrade to my bookmarks script (which also provided my first exposure to developing something to use AJAX) and I'm just now finishing up an entirely new image gallery to replace the current poorly-designed and written solution. I've got a couple of other things in the pipeline, like a redesign (or perhaps a first design) for this site. I'm hesitant to schedule anything, though, because I seem to get most of my coding done while at work, and it's unpredictable how much free time I'll have at any point in the future.

As far as writing goes, I'm definitely going to lay off politics for a while. I've got an acute case of "outrage fatigue," and really, there aren't any points I can make that thousands of other people aren't also making, with about the same significance. Considering that my life is fairly boring, I'm not sure I'll be able to provide anything of interest from that, so I'm not exactly sure what future content will look like. I am hoping to start writing about photography (which some might say is like "dancing about architecture"), primarily because that's what I'm most captivated by at the moment, and I think that being openly critical of myself (as opposed to just deleting the not-good-enough photos without much reflection) will do some good.


The death of the English language?
Posted: 2005-07-07 20:26
No comment(s)
Author: Phil Gengler
Section: Stuff

Last week, there was an "Ask Slashdot" submission in which the author noted that he "noticed that a surprisingly large number of native English speakers, who are otherwise very technically competent, seem to lack strong English skills." As someone who served as an editor for a newspaper at a tech school, I certainly have an opinion on the matter. Unfortunately, I came across the submission only the next day, and given that Slashdot stories tend to be rather short-lived, I figured I'd give the issue a more thorough treatment and post it here.

The author comments that "It baffles me that a culture so obsessed with technical knowledge and accuracy can demonstrate such little attention to detail when it comes to communicating that knowledge with others." First, this is a blatant generalization, as there are plenty of technically savvy people who make an effort to follow the "rules" of the language; I like to think I'm part of such a group. On that note, my observation has been that such people are outnumbered by those who take the less structured approach, instead opting to use other means to get their ideas across.

By "other means," I mean any number of shortcuts that have been adopted and accepted by many people. For example, checking one's spelling is not often done for 'quick and dirty' forms of communication, such as an instant message, e-mail, or website comment. A number of mistakes arise from this, leading to common misspellings such as "teh" in place of "the." As real-time, text-based chat, be it IRC, IM, or something else, became more popular, shorthand was adopted for some of the more commonly used phrases. Otherwise meaningless arrangements of letters such as "lol," "bbl," and "afk" (to name a few) became part of the language used when communicating through the Internet.

One apparent result of the ability to communicate via the Internet has been an increase in the amount of "written" communication that takes place between two or more people. Many years ago, writing letters was effectively the only way to communicate with someone not in the immediate area. When the phone system grew, telephone calls became the more common way of having an informal (and in many cases, formal) conversation with someone. Writing a letter was reserved for more formal communications, and as a result, was subject to a higher attention to detail and accuracy. Having a voice conversation, as the telephone made possible, has never been subject to the same standards of language use; I believe this is because once you have said something to someone, it cannot be taken back. With written communication, you have the chance to look it over as you compose it, as well as after, all before the content reaches the recipient. With this, it was expected that you would look it over and ensure its syntactic and semantic correctness.

With the Internet, and the ability to communicate quickly and cheaply through a primarily text-based medium, informal communication became written again. Ultimately, the fact that such communication is quick and cheap is, in my opinion, the reason that language standards tend not to be applied. Most electronic communication is fleeting; instant messages are rarely kept around any longer than a single chat session, in-game communications are not logged, and most e-mail messages are not saved. This is in contrast with letters, which were often saved. (I do acknowledge that, in total, electronic communications are in fact stored more often than physical correspondence, through caches and server logging facilities; most of the time, though, the sender and the recipient never deal with any of these intermediaries.) As a result, mistakes are more easily forgotten, and corrections more easily made. Rather than thoughtfully compose a lengthy e-mail message, most people will send short messages with little or no thought, sending another message when something else crosses their mind.

I believe that I have digressed from the original point. The tendency toward short but frequent electronic communication is not limited solely to technical people; it afflicts nearly everyone who communicates via the Internet. In that, I disagree with the original author's view that such problems exist only in, or to a larger degree in, the technical community. I do find it more surprising that such an attitude is fostered among programmers, though, since programming languages are very strict in the syntax they allow. One of the comments responding to the original article touches on this point; the comment's author makes the distinction between a compiler, which can only understand things written in its syntax, and a human, who is capable of extracting mostly-correct meaning from sentences that don't conform to the rules of the language.

One of the comments replying to this makes a point of noting that "communicating effectively" with a human does not require rigid adherence to a set of rules. This argument is echoed in a number of other comments to the article; the idea is that, if the recipient is able to understand the meaning of what the originator was trying to communicate, then it does not matter how strictly the communication followed the "rules." In various threads up and down the discussion, proponents of this idea disagree with those who believe that there is more to communication than just understanding the idea.

I have mixed feelings about both ideas. On the one hand, I tend to feel that when something can be done just as well without following a set of arbitrary rules, then the rules should probably go. On the other, and as someone who tends to be pedantic about language use, I do agree that the way something is communicated conveys information above and beyond that which was intended. Someone who tends to be very lazy in their writing, who frequently misspells words and misuses grammar, can come across as someone who does not much care for what they are trying to say. This certainly depends on what it is that is being communicated; for someone trying to make an effective point, I tend to believe that the presentation is important, though not as important as the content itself.

In other cases, it is hard to speak generally. When someone is writing something ephemeral, perhaps a not-too-funny joke, I tend to be much more tolerant of spelling and grammatical errors. When reading something that is meant to be "professional," such as a serious entry on a website or some sort of proposal, I expect that the author will have expended the time and effort to proofread. When I see writing that has misspellings, especially of common words, or includes sentences that take far too much effort to parse correctly, or has missing or extraneous punctuation, it diminishes the quality of the entire work.

Now I am certainly no less guilty of this than anyone else; for quite a while, I used to spell the word "sentence" as "sentance," and there have been several instances when I have failed to perform my "due diligence" on something I have posted here, only finding out about the mistake when I noticed a hit from Google on a misspelled word. Fortunately, this doesn't happen very often. I also admit that I tend to be very pedantic about language usage. One of my pet peeves is when someone uses the phrase "begs the question" in a place where "raises the question" would be correct. The debate goes on about whether the common misuse of this phrase means that it has become part of the English language (for which there is, in effect, no governing body).

At the heart of all this, however, there is what I believe to be a root cause: a lack of proper education in the use of the language. While I imagine that a basic grasp of the English language is taught in all schools (I can't say for sure, and don't want to generalize here), continuing education and reinforcement of what has already been taught is lacking. From my discussions with others, it appears that my school district's curriculum was an exception, with an English course required through the eighth grade. Even there, however, very few people in the class took it at all seriously; there was a prevailing attitude among the students that they were being 'babied.' People seemed to feel that they already knew the language, and being made to learn more of it by then was something of an insult. Even in other classes, where writing assignments were given, very little was done to observe and correct most mistakes. Certainly teachers would correct the basic errors in a piece of writing, but as my experience as an editor showed, these corrections were often superficial. Poor sentence formation was frequently left uncorrected; you would be sure to find a red circle around the use of "it's" as a possessive, and run-on sentences and sentence fragments would be marked, but you would rarely find suggestions on how to rearrange a sentence or a paragraph to make it read more sensibly.

Once more students started chatting online, I think many of them found it easy to be lazy; certainly, that is the culture that exists around the Internet. Unlike a school assignment, there usually isn't anyone who is going to correct the spelling or grammar of a forum post or instant message, and since similar mistakes are happening all around us without any major complaint, it is easy to get by without ensuring accuracy. The pseudonyms we can often hide behind make this even simpler; after all, it would generally take a lot of effort to connect something written on the Internet to a real person. This can cause even people who usually strive for correctness to slip up.

Unfortunately, I don't really see any way to correct this situation. With the Internet becoming a larger part of more students' lives at a younger age, seeing "Internet English" in so many places leads to them believing it is the right way (which can be seen when chat shorthand makes its way into school assignments), and so they keep using it, other kids see it and start using it, and the cycle continues. Whether this is ultimately good or bad for communication remains to be seen.


Mixing things up
Posted: 2005-02-24 20:08
No comment(s)
Author: Phil Gengler
Section: Stuff

As you may have noticed, the site looks (and acts, if you look closely) different than it used to. After months of sporadic coding, I finally decided that the new backend was "done enough" to replace the old one.

Why? Well, why not? Especially since the old backend suffered from the problem of no design whatsoever; it was largely just a mess of code that I wrote whenever I wanted to add something new. I think I've done a little better this time, even though I've still got a lot of work to do.

Of course, what's the point of a shiny new backend if there's nothing to display? Well, for now, the images section contains a whole lot of photos I've taken and uploaded in the past year or so (with some even older ones). I've also been meaning to get around to writing more (though this is a promise I have made before). I'm going to keep uploading the stuff I write for The Stute, and I'm going to try to add something else at least once a week from now on.


Ralph Nader for candidate
Posted: 2004-09-17 22:20
2 comment(s)
Author: Phil Gengler
Section: Stuff

This election season, both the Democrats and the Republicans are playing dirty politics, and one man is caught in the middle—Ralph Nader.

Perhaps best known for his supposed role in helping George Bush win in 2000 by taking votes away from Al Gore, Nader is running again this year as an independent candidate. He is facing challenges in many states from the Democratic Party, and in some cases is getting help from the Republicans.

The most recent example is in Florida. Democrats challenged Nader's application, and a Florida court issued an injunction preventing Nader from appearing on overseas absentee ballots, which must be sent out by September 18. The Florida Division of Elections filed an appeal, lifting the injunction pending a hearing. Shortly thereafter, Division of Elections director Dawn Roberts issued an order that Nader be included on the ballot. Republicans say quick action is necessary with Hurricane Ivan imminent; the Florida Democratic party called the move "blatant partisan maneuvering."

Florida is not the only state where Nader is facing challenges. There are eight states in which Nader will not appear on the ballot, and a dozen others where challenges, filed by Democrats, are still pending.

The seemingly obvious explanation for this is that Democrats do not want Nader to "steal" votes that would otherwise be cast for John Kerry. While some Democrats have openly said that their efforts to keep Nader off ballots are an attempt to win the election, others offer "this is an issue of fairness" as their rationale. It is worth mentioning that the Democrats are not challenging the Republicans' application in Florida, which was filed after the deadline had passed.

It is more than just Democrats taking action with regard to Nader's campaign. In Michigan, Republican volunteers collected 43,000 signatures on a petition to put Ralph Nader on the ballot there, and in battleground states across the country, there are reports of Republicans pushing to get him on the ballot.

These actions, by both sides, are "blatant partisan maneuvering." Both parties are trying to use the strongest third-party candidate to improve their chances in the election by exploiting Ralph Nader's campaign. The Democrats are clearly trying to maintain the two-party system, while the Republicans are using a political party with radically different ideals to try and achieve victory.

Is this the future for third-party candidates? It seems likely that if any third party candidate poses a threat to either of the major parties, that party will be fought every step of the way. The American political system should not be one where the two major parties determine who can run; such behavior should not be tolerated in a democratic system.

(This was written before the Florida Supreme Court ruled that Ralph Nader could appear on the Florida ballot.)


Grokster Wins
Posted: 2004-08-19 21:35
2 comment(s)
Author: Phil Gengler
Section: Stuff

Earlier this year, MGM et al. appealed the ruling of a California District Court in favor of peer-to-peer software companies Grokster, StreamCast and Sharman (for simplicity's sake, I will refer to the defendants collectively as 'Grokster,' although the court's findings apply equally to all). The appellants consist largely of MPAA and RIAA members, who clearly have no love for peer-to-peer networks. In the original ruling, the District Court held that Grokster was not liable for contributory or vicarious copyright infringement. As one would expect, the case was appealed to the 9th Circuit, which issued its ruling today.

The ruling of the 9th Circuit upholds that of the District Court in finding Grokster not liable for the content being shared by its users. The 9th Circuit also laid out the requirements for finding Grokster to be infringing. The test requires three things to establish contributory infringement: direct infringement by a user (which went undisputed), knowledge of the infringement, and material contribution to the infringement.

As noted, it went undisputed that direct infringement by third parties takes place over the network. With regard to "knowledge of the infringement," the court found that the software is capable of non-infringing uses, and had on record many examples of such uses. Following the precedent set in the original Napster case, the court held that "if a defendant could show that its product was capable of substantial or commercially significant noninfringing uses, then constructive knowledge of the infringement could not be imputed." The court found that the uses presented to it were sufficient to be considered "substantial" and "commercially significant." This finding came over the arguments of the "Copyright Owners," as the court refers to the appellants in the case, that the primary use of the software was copyright infringement. "... the Copyright Owners argue that the evidence establishes that the vast majority of the software use is for copyright infringement. This argument misapprehends the Sony standard as construed in Napster I, which emphasized that in order for limitations imposed by Sony to apply, a product need only be capable of substantial noninfringing uses" (emphasis in the original). "Napster I" refers to the first Napster case, where Napster was required to block filenames as provided by copyright owners. In a footnote, the court writes that "Indeed, even at a 10% level of legitimate use, as contended by the Copyright Owners, the volume of use would indicate a minimum of hundreds of thousands of legitimate file exchanges."

With constructive knowledge ruled out, the court looked at the question of whether Grokster had "reasonable knowledge of specific infringement." Here, the 9th Circuit agreed with the District Court in finding that the timing of notification of infringement was a factor. It is required that the alleged infringer have had knowledge of the infringement when it occurred and failed to act to stop it. As the "Copyright Owners" would simply provide Grokster with lists of infringements after the fact, there was nothing that could be done to stop the direct infringement. The court also noted that the "quasi-decentralized, supernode" architecture of the Grokster network meant that infringement would still be able to take place even if Grokster were to "close their doors and [deactivate] all computers within their control."

The court then considered the issue of material contribution. In contrast to the Napster case, where centralized index servers were operated by Napster, Grokster has no such servers, and thus no way to accurately view a large portion of the searches and transfers being conducted across its network. Since the network and its indices are created by the use of the network and not by Grokster, what happens as a result of accessing such an index is not any fault of Grokster. Furthermore, the court found that although Grokster and Sharman operate root nodes to facilitate supernodes and provide a brief XML file with some parameters, these activities are "too incidental to any direct copyright infringement to constitute material contribution."

One of the best quotes from the decision is that "the peer-to-peer file-sharing technology at issue is not simply a tool engineered to get around the holdings of" the decisions in the Napster cases. With all this established, the court does not "expand contributory copyright liability in the manner that the Copyright Owners request."

The court next looks at the claims of vicarious copyright infringement. As with contributory infringement, the court lists a three-part test for determining vicarious infringement: that there be direct infringement by a third party, that the alleged vicarious infringer receive a financial benefit, and that the alleged infringer have "the right and ability to supervise the users." The first two parts, direct infringement and financial benefit, are undisputed; the financial benefit comes from the advertising revenue from ads displayed in the software.

This leaves the question of whether Grokster had the "right and ability" to supervise the users of its software and network. Again, the decentralized nature of the network means that Grokster did not have the ability to terminate or supervise accounts, though its software's license agreement retained for it the right to terminate accounts. StreamCast did not even have that right, as there was no license agreement for its software. It is interesting to note that the court found blocking access by IP address to be ineffective, as large numbers of users do not have static IP addresses.

The court also writes that while there are certain measures that could have been taken to control access, they would have resulted in a universal ban on access, not control over specific accounts. One of the arguments put forth by the "Copyright Owners" was that Grokster was turning a "blind eye" to the copyright infringement taking place on its network via its software.

From these findings, the court agreed with the District Court that the companies did not have the ability to control their users and, therefore, were not liable for vicarious copyright infringement.

Many of the findings apply only to more recent versions of the software offered by Grokster. The appellants were also basing their claims on older versions, which operated differently from the newer ones. The District Court declined to rule on those versions, and the 9th Circuit likewise "express[ed] no opinion as to those issues."

The circuit court notes that "The Copyright Owners urge a re-examination of the law in the light of what they believe to be proper public policy, expanding exponentially the reach of the doctrines of contributory and vicarious copyright infringement" but believes that "such a renovation [would] conflict with binding precedent ... it would be unwise."

"Doubtless, taking that step would satisfy the Copyright Owners' immediate economic aims. However, it would also alter general copyright law in profound ways with unknown ultimate consequences outside the present context."

The circuit court believes that it is not the place of the courts to make decisions for the marketplace. "[H]istory has shown that time and market forces often provide equilibrium in balancing interests ... it is prudent for courts to exercise caution before restructuring liability theories for the purpose of addressing specific market abuses." The Betamax decision is cited, mainly that it is the place of Congress to apply copyright law to new technologies.

While the first 25 pages of the 26-page decision are good news, I find this last point quite disturbing. Given the push by the "Copyright Owners" for laws such as the INDUCE Act, it is conceivable that Congress may find that it is its place to regulate technological innovation in the interest of 'protecting copyright.' While the court is clearly following the precedent set above it, the fact that this reasoning appeared in such a high-profile decision is sure to increase the lobbying efforts of the "Copyright Owners" for the INDUCE Act. It is quite possible that this decision will be pointed to with the claim that if the courts will not solve the "rampant" problem of copyright infringement on the Internet, Congress must step in; indeed, the courts have said that Congress has the right, and the responsibility, to do just that. Hopefully, cooler heads will prevail and this decision will be seen as the right one, but I certainly would not bet against the "Copyright Owners" pushing for, and receiving, legislation along the lines of the INDUCE Act in the not-too-distant future.

So while the decision comes as a victory, I imagine it will only serve to further muddy the waters surrounding the push by the "Copyright Owners" for protective legislation in Congress.



321 ... 0
Posted: 2004-08-14 00:41
2 comment(s)
Author: Phil Gengler
Section: Stuff

The biggest news of late (at least, news that's applicable to this site) is the announcement that 321 Studios has decided to shut down. 321 Studios has been at the center of the MPAA's legal crusade against DVD copying. 321's flagship product, DVD X Copy, which allowed users to make backup copies of DVDs stripped of CSS encryption, was denounced by the MPAA as a piracy tool. The legal struggle started back in 2002, when 321 preemptively sued several movie studios, seeking to have a court declare its product legal and head off any lawsuits down the line. The studios countersued, and the matter has been tied up since. 321's failure to win a DMCA exemption in 2003 also came as a blow.

The end result of the whole mess has been to bankrupt the company. The MPAA's countersuit resulted in injunctions being issued against 321 Studios' software. Without a revenue source, and faced with mounting legal costs, it was obvious that the company could not put up the fight forever.

The downfall of 321 Studios comes as a blow to those of us who hoped that the case would prove to be the catalyst for scaling back the DMCA, but the company was put into an untenable position through what Ars Technica described as a "paradox ... She [Judge Susan Illston] asserted that customer's(sic) appear to have a legal right to make backup copies of movies, but pointed out the DMCA made it illegal for customers to buy software that allows them to." This is not a paradox created by her ruling; the DMCA creates it, and Judge Illston read the law as it is written. The way §1201(a)(2)(A) is written does not leave any room for interpretation in the matter. DVD X Copy does circumvent the CSS encryption that "protects" the underlying work, and the law plainly states that:

No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology, product, service, device, component, or part thereof, that —

(A) is primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work protected under this title

It is conceivable that the argument could be made that the software was not designed specifically to circumvent CSS, but since the fact that it does so is integral to the software, it is not likely this argument would stand up in court.

It was also announced that 321 Studios will be settling with the studios for an undisclosed but "substantial" sum of money, and the terms of the settlement bar 321 from selling DVD X Copy. In a statement issued before his retirement (which I will be getting to in a moment), Jack Valenti said that "321 Studios built its business on the flawed premise that it could profit from violating the motion picture studios' copyrights."

Now, it should be fairly obvious that 321 Studios did not violate any of the MPAA's copyrights. §1201 of the copyright law covers circumvention of access controls, not infringement. It may be the case (and I say "may" because this was never finally decided) that 321 Studios was in violation of §1201, but that does not make them copyright violators.

Changing subjects slightly, I mentioned above that Jack Valenti has retired as the head of the MPAA, as noted by Tim Wu over at Lessig's blog. There are also some choice quotes of Valenti's from his years as the MPAA head, all of which are worth reading (for the amusement), but one of them deserves to be copied everywhere:

We are facing a very new and a very troubling assault ... and we are facing it from a thing called the video cassette recorder and its necessary companion called the blank tape.
We are going to bleed and bleed and hemorrhage, unless this Congress at least protects one industry ... whose total future depends on its protection from the savagery and the ravages of this machine [the VCR].
...
[Some say] that the VCR is the greatest friend that the American film producer ever had. I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone.

Since the Supreme Court's Betamax decision in 1984 (which ruled VCRs legal), video sales have been Hollywood's biggest source of profit. To hear the same companies make the same accusations about new technologies would be almost amusing, if it were not for the fact that they stand poised to put a stop to nearly all future technological development.

Yes, it is the Inducing Infringement of Copyright Act (S.2560), formerly the INDUCE Act. I am not going to go into detail about it, as many others have already done a better job than I ever will, but if the bill becomes law, any new technology will effectively require the approval of Congress (which already seems to lean heavily toward supporting the MPAA) if there is the slightest chance that it could ever be used to infringe copyright. (For those interested, video of the hearing on the act is available directly or via BitTorrent.)

Even without the INDUCE Act, we are already seeing some of the effects of this mindset. Recently, the FCC approved TiVo's plan to allow users to share recorded programs with up to nine other users over the Internet. I think it is a shame that we have come to a point where we need to ask permission to develop something legal.

There is a good deal of other news to read up on, but for me to summarize any more of it here would be a waste of time, as Copyfight has been doing an excellent job of that lately. I hope to start writing of some more recent news as it comes up from now on.


vanishing act
Posted: 2004-07-25 01:44
1 comment(s)
Author: Phil Gengler
Section: Stuff

It doesn't take a rocket scientist to notice that my presence here has been (virtually) nonexistent over the last few months. Just like nearly every other project of mine, this site has fallen by the wayside in favor of something else, or in some cases, absolutely nothing. This summer has been a re-enactment of that on a smaller scale. With a full-time job, and the random stuff that needs to be done while at home, the amount of time I have for personal pursuits is somewhat limited. Coupled with the fact that my job is mindless and dull, I frequently return home with no desire to do anything except watch TV for a few hours. On several occasions, that's just what I've done. A large number of other nights have been spent being equally unproductive, with the exception that I might be at the computer. When I finally get around to working, it's usually some simple work on one of a number of projects I decided to undertake at some point or another.

I have a serious problem getting things finished. I have a project I started back in 2002 that, had I focused on it for a few weeks, a month or two tops, would have been done by now. But I haven't been able to do that, and so it remains uncompleted. It doesn't help that I keep adding and changing requirements for it without actually working on it, so the scope of the project has grown daunting, which just makes me more unwilling to work on it.

This website is a lot like that. It was a project that started off with no defined goals (clear or otherwise), and for a while, I was motivated to work on it. The quality of the code for this site is a reflection of that. It's horrible code, because it started off as one thing and then was changed into something else. When neither the original nor any of the changes to it had a plan or design of any sort, the end result is a mess. Just a few days ago I needed to fix a bug I found in some of the code for the site. This was the first time in several months I'd even looked at the code, and I was appalled. I couldn't believe that I had put something like that together, or that it had worked.

But there's still the content, and there was nothing wrong with my capability to add more of it. The problem there is one of attitude and not of technology. The first time that I started going weeks between updates was largely due to a feeling that what I was writing had no purpose, that the things I was covering were already being covered somewhere else, and that I wasn't adding anything new to it. This was true a lot at that point, where I was basically taking an event, or a case, or something, and just describing it. There wasn't anything of my own in there, there wasn't any opinion or personal feeling in it. With that came the self-doubt, and so I backed down from writing stuff for a little bit.

But then came the pressure to write, to keep the site updated, so I turned for a bit to writing about my life and my personal experiences. It was something I had tried to stay away from when I started the site off, and so I wasn't too happy about doing it, but it kept the site active, and so I kept doing it for a while.

Then came more self-doubt. There came the feeling that no one cared about me and what I did or what I felt. This was largely the result of reading Amit's site, when I one day realized that I just didn't give a shit how many times he vomited that day or how often he dreamed of pichu. I started to wonder why I had been reading stuff like that, and I stopped reading his site, among others.

From there, updates on my own site became spotty. I had given up on personal writing, and I had sort of fallen out from my intense interest in, well, anything. So there weren't any new updates ranting about copyright-this or copyright-that, partly because the world had reached a slow point in that regard, and partly because I was falling into a nonproductive zone. Writing stopped, coding stopped; basically, accomplishing anything stopped for a few months. I would spend hours at my computer each day, and when I would go to sleep I would realize that I hadn't actually DONE anything that particular day. This continued for far longer than I would like to admit.

Finally, I started working on things again. I did the simple updates to old pieces of code to make them work right or give them new features, and had some sense of accomplishment again. But I still wasn't writing. This was largely the result of a lack of self-discipline. I find myself working if and when I want to, on whatever I feel like working on at the time.

Then came summer. I left Hoboken to go home, not having a job, not having any plan, but I was trying to tell myself that I would be productive, that I would have lots of free time and I could finish all kinds of things. For the first week or so, I didn't do a damn thing. Hours were spent in front of my computer, and nothing got done. Then I started using my laptop most of the time. Before we got a wireless router, I worked on bits of code I had stored locally, with Apache and MySQL set up to test them. I'd also started a full-time, 8:30 to 4:30, job at that point, so the time I had to myself was basically from 6pm until midnight.

I was addicted to Law & Order for a while. TNT shows it for three hours a night, and after that there's usually an episode of SVU on USA. So I would be lying on my bed with my laptop, tapping away during the commercial breaks, watching TV most of the time.

Eventually I decided to purchase a wireless router so that I didn't have to fuck around with stuff to get changed code out to the appropriate sites. My ability to get stuff done dropped. Now I could spend the evening, like I had spent so many days before, wasting hours of time with absolutely no memory of what I did, and nothing to show for it. I managed to get myself to start working on code again, and in the end, going wireless was the right thing to do, because I can ultimately be more productive with it.

Of course, none of this explains why I haven't updated the site. It does, however, provide a clear example of what I was talking about - I get sidetracked easily. That was one of the major reasons for not updating; the other was that I just didn't feel like writing. That statement is part right and part wrong. I felt like writing, I had in my head all sorts of ideas of stuff to write about, I just didn't actually feel like doing the writing. This is probably because I spend large parts of my day at work at a computer transcribing stuff, and when I get home, I don't really feel like spending even more time typing. So a lot of good topics have come and gone from my mind, and a lot of stuff has gone uncovered and unreported.

I've been doing a lot of reading lately about the INDUCE Act. For all that reading, I haven't looked at it in-depth, so I won't be talking much about it, but it's worth mentioning. The idea of it is that it would be possible to prosecute or sue people and groups who induced infringement of copyright. That is, if something you did facilitated someone else's infringement of copyright, you could be held responsible. The idea of the act was to make peer-to-peer companies liable for what their users did, but as many have pointed out, it would also effectively undo the Betamax decision (which, for those who don't know, said that devices with "substantial non-infringing" uses were legal).

This seems like a good place to end this, because I'm running out of things to say that relate to this topic (even in tangential ways). What I hope to do in the future, update-wise, is to get myself to flesh out a lot of the ideas that come into my head. If I can do that, the possibilities for content are virtually limitless. I also intend to get back up to speed on what's been happening on the copyright front (as well as a couple of other areas I have interest in) and start talking about those. For whatever reason, though, I'm more able to do that back at school, so it might be a month or so before that comes about.


is the internet in danger?
Posted: 2004-03-25 22:13
No comment(s)
Author: Phil Gengler
Section: Stuff

Every day, millions of people use the Internet in some way. For some, it is just to check their email or chat with friends via an instant messenger. For others, the World Wide Web is their playground, a place where all sorts of information, games, and entertainment can be found. Still others are the ones who create and maintain all the content that people enjoy and use.

But there is a darker side to the Internet. Spammers and virus writers are the most well-known and widely reviled, but there are also people who write automated tools (bots) to undermine the freedom provided by the Internet. As soon as a domain name expires, bots are waiting to quickly snatch it up and replace it with a search engine page. Across the Web, sites are finding themselves increasingly the victims of bots that can ruin the experience for regular users. There is software that will continually watch eBay auctions and then outbid the leading bidder with less than 30 seconds remaining. More and more sites are requiring users to type back text displayed inside an image, so that bots cannot go any further. Discussion sites are being inundated with trolls and crapflooders (those who post large numbers of messages with no actual content or purpose).

A number of solutions are being passed around for the problem of spam. These range from requiring domains to publish a list of which addresses are allowed to send mail for them (known as Sender Permitted From, or SPF), to recipient-side whitelisting (only messages from senders the recipient has chosen to allow get through), to proposals for a small fee on every email sent. Each of these changes would fundamentally alter the way email is used, and each faces varying levels of opposition.
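Of those ideas, recipient-side whitelisting is the simplest to sketch in code. This is a hypothetical toy filter, not any real mail product; the addresses and the message format are invented for illustration:

```python
# A minimal sketch of recipient-side whitelisting: only mail from
# explicitly allowed senders is kept; everything else is set aside.
# The addresses here are made up for illustration.

ALLOWED_SENDERS = {"alice@example.org", "bob@example.net"}

def filter_inbox(messages):
    """Split (sender, subject) pairs into kept and held messages."""
    kept, held = [], []
    for sender, subject in messages:
        if sender.lower() in ALLOWED_SENDERS:
            kept.append((sender, subject))
        else:
            held.append((sender, subject))
    return kept, held

inbox = [
    ("alice@example.org", "lunch?"),
    ("spammer@example.com", "CHEAP MEDS"),
]
kept, held = filter_inbox(inbox)
print(len(kept), len(held))  # one message lands in each list
```

The catch, of course, is visible even in the sketch: anything from an unknown sender, legitimate or not, gets held, which is exactly the change in email's character that people object to.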

Email, like the other protocols that make up the Internet (FTP, HTTP, SSH, and so on), was developed at a time when the Internet was orders of magnitude smaller than it is today. Computers were not as powerful, and bandwidth was more limited, so simple protocols were favored. Email is about as simple as they come: plain-text headers followed by the message content. No concept of authentication was built in, because at the time, none was needed.
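That simplicity is easy to see firsthand: a raw message is just headers, a blank line, and the body, and nothing in it proves the sender is who they claim to be. A quick sketch using Python's standard `email` module (the message itself is invented):

```python
from email import message_from_string

# A raw RFC 822-style message: plain-text headers, a blank line, the body.
# Note that the From: header is just text -- nothing verifies it.
raw = """From: anyone@example.com
To: you@example.org
Subject: hello

Anyone can claim to be anyone here.
"""

msg = message_from_string(raw)
print(msg["From"])     # taken entirely on faith
print(msg["Subject"])
print(msg.get_payload().strip())
```

The `From:` line is accepted exactly as written, which is why forged senders are trivial and why spammers and worms exploit email so freely.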

Through no fault of its own, email has become the carrier of most viruses and worms being spread around the Internet. As the Internet has grown, so has its userbase, and with that growth come those who are unable or unwilling to invest time in learning what we might consider the basics of computers and the Internet. With each new email worm that makes the rounds, the more tech-savvy repeat a simple instruction to users: "Do not open attachments that you are not expecting or that come from someone you do not trust." Yet we continue to hear warnings about the newest virus and its expected damage (in terms of both the harm it causes to a target and the cost of removing it from infected machines). Some of the steps being taken against this problem are filtering messages on email servers based on their attachments (which can help with worms like Beagle/Bagle) and moving computers behind firewalls (which helps with worms like CodeRed).
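Server-side attachment filtering of that kind often amounts to little more than checking filenames against a blocklist of extensions. A hypothetical sketch; the particular extension list is illustrative, not taken from any real server's configuration:

```python
import os

# Extensions of the sort commonly blocked by mail servers; this list
# is invented for illustration, not copied from any real setup.
BLOCKED_EXTENSIONS = {".exe", ".scr", ".pif", ".vbs", ".bat"}

def is_blocked(filename):
    """Return True if the attachment's extension is on the blocklist."""
    _, ext = os.path.splitext(filename.lower())
    return ext in BLOCKED_EXTENSIONS

print(is_blocked("photos.zip"))    # False
print(is_blocked("document.pif"))  # True -- a favorite disguise of email worms
```

It is a blunt instrument, which is part of why worms keep finding ways around it (renamed files, archives, and so on).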

While spam and viruses may be the two largest and most visible problems with the Internet today, nearly every commonly-used protocol is being exploited in some way. Usenet, which was designed to provide a worldwide message board where anyone could post messages and anyone could read them, has been all but abandoned to crapflooding. It is virtually impossible to read a discussion group without seeing a fair number of spam messages posted. Usenet, like email, does not provide any means of authentication, and so allows messages to be posted by anyone, using any name and email address, valid or otherwise.

Web sites that allow users to post and submit content are also being hit by the 'dark side' of the Internet. I have already mentioned eBay, where automated scripts are outbidding human bidders. Discussion sites are probably the single greatest example of how the Web is being abused. Let us take Slashdot as an example. Slashdot allows comments to be posted on the stories it lists, and it allows people to post anonymously, with or without a registered account. This has attracted a great many trolls: posters who post simply to annoy, whether with obvious spam or with replies designed solely to inflame the original commenter. Another site, Kuro5hin, has recently been dealing with similar problems.

Most site administrators have come to the realization that unfettered anonymous commenting can be harmful. The most common approach for large sites is to let users moderate other users' comments, so that the community can establish what is and is not worth viewing. It usually works well, and it also reveals just how extensive the problem is. In a recent Slashdot discussion, 1043 comments were made in total. Of these, only 747 were rated between 1 and 5, the range at which most people read. Nearly 300 comments were trolls or otherwise not at all constructive or useful. Unfortunately, even a moderation system can be, and is, abused. There are bots which register multiple accounts and then use some of those accounts to moderate up posts made by the others. It has become enough of a problem at Kuro5hin that new member registrations have been disabled.
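The reading side of such a system is simple: each reader picks a score threshold and everything below it disappears. A sketch with Slashdot-style scores from -1 to 5 (the comments themselves are invented):

```python
def visible_comments(comments, threshold=1):
    """Keep only comments scored at or above the reader's threshold."""
    return [c for c in comments if c["score"] >= threshold]

comments = [
    {"score": 5, "text": "Insightful analysis"},
    {"score": 1, "text": "Ordinary comment"},
    {"score": -1, "text": "Obvious troll"},
]

shown = visible_comments(comments, threshold=1)
print(len(shown))  # 2 -- the troll falls below the threshold
```

The hard part is not the filtering but the scoring: the whole scheme depends on moderators being honest, which is exactly what the bot-run sockpuppet accounts undermine.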

The Internet was built to be open and free, and it is this fact that has allowed it to evolve to its current state. The ability to remain anonymous (or at least pseudonymous) has allowed a great deal of controversial material to come out in situations where it otherwise would not have. But it is this same freedom and anonymity that are causing many of the problems I have just described.

What sort of solutions are there? Simply solving the problems is, in theory, a simple enough task. There are proposals for systems to replace email, and eliminating anonymous posting could help with Usenet and Web discussion forums. This opens up two other problems, though: implementing them, and the loss of the openness the Internet has enjoyed thus far. On more than one occasion, bills have been proposed in Congress to help solve some of the problems, and some have passed. The recent CAN-SPAM Act is designed to lessen the amount of spam email being sent. The problem with approaches like this? They introduce legislative control where none existed before, and they attempt to change the way the whole world works from the U.S. Congress.

Is the Internet in danger? Perhaps. Some people are beginning to lose faith in email, and some long-timers are moving away from the larger discussion sites. Perhaps the largest problem is the solution to the problems. Is the best way to save the Internet to change its nature? Is it a case of "in order to save the village, we had to destroy the village?" I think the biggest danger to the Internet is those who seek to change it, and in that, the future of the Internet is in danger.


second chances
Posted: 2004-03-25 05:05
No comment(s)
Author: Phil Gengler
Section: Stuff

For a wide variety of reasons, the site has fallen into a state of disrepair and general neglect. That isn't entirely true; at some point between the last update and now, I made some changes to the user-facing side of the code and didn't update the admin backend, so I had to spend about a half hour applying changes I don't remember making, just to get posting to work again.

But that's not the point. What is the point is that I'm going to once again try and maintain the site like I used to, back when updates were frequent, the links flowed like water, and I actually had something to say. I moved the last two updates to the newly-named weblog section, and just as I intend to keep the main part of the site looking good, I should toss some 'content' (I use the term lightly) in there too, if you're into that sort of thing.

I intend to toss up a small archive of stuff I've written for The Stute lately, and when I do I'll post something about it. I'd add the stuff as regular site content, but writing for a print newspaper and writing for the web are different enough that I think the two shouldn't be mixed freely.

Look for the first of the 'new updates' sometime tomorrow.


car 54, where are you?
Posted: 2004-01-04 01:10
No comment(s)
Author: Phil Gengler
Section: Stuff

So I've been gone from here nearly 2 months, and here was gone for nearly a month of its own. Why? Because stuff happens. Just what it was doesn't matter; the important thing is that both I and the site have returned.

And yet I must ask why. Despite the long absence from any sort of updates here, I haven't felt any sense of loss. It didn't bother me that I didn't say anything, mostly because there wasn't anything to be said. And to a large extent, that's still the case. I'm going to try and update more often, but I'm not going to make any promises. I write when I feel like it, and it's not something that's been near the top of my list lately.

I've reached the point where I've basically turned to just stating facts here, and as has been pointed out, that gets boring after a while. With that in mind, and no idea of how to make things more interesting, writing took a sharp dive on my list of priorities. I'll probably end up playing around with the site in the near future, with a new layout virtually assured and a change in format likely to accompany it. Just what any of that means is something everyone (myself included) will find out in due time.

In the meantime, though, I'll try and update this thing every once in a while (more often than I have been of late), but I'm not making any promises about that.


modern ART
Posted: 2003-11-17 00:55
No comment(s)
Author: Phil Gengler
Section: Stuff

On November 13, Senator Dianne Feinstein (D, CA), in conjunction with Senators John Cornyn (R, TX), Orrin Hatch (R, UT) and Bob Graham (D, FL), announced the Artist's Rights and Theft Prevention (ART) Act. This act would make recording a movie in a theater a felony carrying up to a 5-year sentence, and would also make it a felony to make available any unreleased movie or other such work.

The anti-recording provision is similar to one found in ACCOPS, a House bill introduced a few months back. It's the other provision I take serious issue with: that simply making a copy of an unreleased work available is treated as equivalent to 10 counts of copyright infringement, carrying damages of $2,500 each.

As I addressed in letters to Senators Corzine and Lautenberg, our country has generally operated on the principle of 'innocent until proven guilty': until you are found guilty of a crime, you are to be treated as innocent. This bill would automatically assume that anyone guilty of making a copy of an unreleased work available is also engaging in at least 10 counts of copyright infringement, which I find absurd.

Another effect of the bill is to dramatically lower the burden of proof for establishing violations. As the law stands now, a copyright holder must establish at least 10 instances of copyright infringement, or $2,500 worth of damages, for such a crime to be a felony. This requires proof not only that the material was made available, but that it was illegally downloaded at least 10 times. If ART were to become law, a copyright holder would then need only establish that the material was made available, which, as the RIAA has shown, cannot always be accurately determined.

If a file is simply made available, no harm is done to anyone. It's when the file is downloaded that you can begin to establish harm. And we already have plenty of laws that cover actual copyright infringement, and plenty of cases pending that pertain to infringement via sharing (but they all involve downloading the material in question).

In the wake of a recent study finding that 77% of all movies being traded on the Internet were initially released by insiders in the movie business, is making sharing these movies a felony the right thing to do? Rather than attack the problem at its source (those leaking the films in the first place), the MPAA seems to want to attack the edges (those sharing the films), which is the wrong approach. If one person leaks a movie, 10 people download and begin sharing it, and the leaker then stops sharing, that leaker will almost certainly never be discovered or prosecuted, and faces virtually no risk in continuing that behavior; it's the people down the line who'll get prosecuted. While that may deter some people, there are those outside the US who already share these movies and will continue to do so, and they will likely continue to have a source.

So why criminalize a harmless act?


is this the good fight?
Posted: 2003-11-12 09:14
2 comment(s)
Author: Phil Gengler
Section: Stuff

November 7 was probably just like any other day for the students of Stratford High School in Goose Creek, South Carolina. At least, until more than a dozen police officers stormed in, guns drawn, and started ordering students to get on the floor. Why the need for such force? Evidence of "drug activity" obtained from the school's closed circuit TV system, which the school's principal had been monitoring regularly. Police claim to have "observed consistent, organized drug activity" in the school hallways.

As the 107 students in the hallway at the time were kept sitting up against the walls, a canine unit was called in to sniff students' bags for drugs. Twelve bags were singled out by the dogs, and were then searched by members of the school administration. Their findings? Nothing. One hundred seven students were forced up against the walls of a school hallway by police with their guns drawn, for no good reason.

Is it worth putting high school students at such risk to stop the (potential) problem of a little marijuana? Considering that the officers found nothing, I would say no. What if one of the students were shot during the course of the search by a trigger-happy police officer? If you said it was worth the risk before, would you still find it acceptable to have an innocent student shot during a search that found nothing?

I had a summer internship a few years back, working in the Education Department at the Garden State Youth Correctional Facility. The inmates there were around my age, and nearly all those I spoke to were in on some drug-related charge. These were otherwise good people, who hadn't hurt anyone, who were spending three years of their lives in prison.

At what point did drugs become such a problem to society that anyone caught using them needed to be arrested? Nearly one-quarter of America's prison population consists of inmates convicted of a drug-related offense. Drug-related charges are the most rapidly increasing class of crime in the country, putting an extra and unsustainable burden on both the court and prison systems and costing taxpayers close to $3 billion per year.

When did it become wrong to make a choice that only affects yourself? A person chooses to use drugs, and that only affects them. If they should then commit some crime, they're guilty of that crime, but should drugs automatically be cited as the cause? Should they be outlawed as a result? Anger can be a cause of crimes, but there isn't a clamor to make it illegal for someone to be angry. Being drunk can lead to committing other crimes, so why isn't it illegal to possess alcohol, or to consume it? Just what makes drugs like marijuana different enough that they need to be illegal?

Note: This article was originally written for The Stute.


all for naught
Posted: 2003-10-28 17:08
No comment(s)
Author: Phil Gengler
Section: Stuff

The Library of Congress has just published 4 new exemptions to the DMCA's Section 1201 anti-circumvention provisions.

For those of you who have no idea why I'm talking about this, these determinations mark the end of the comment/reply/hearing periods, during which I testified to the LOC in support of an exemption for viewing CSS-encrypted DVDs on alternative operating systems (ones without licensed DVD playback software). This class of work was not exempted, as the LOC believes the piracy risk outweighs a consumer's right to watch a DVD he/she purchased.

While most of the proposed classes were denied exemptions, four were granted. Censorware lists are probably the most notable, as a few months back there was the case of Edelman v. N2H2, where Ben Edelman sued N2H2 to prevent them from suing him for compiling a list of sites blocked by N2H2's software. He didn't win the case, but he has won this exemption, and can proceed with the law on his side.

It's a shame that a list of URLs is copyrightable in the first place, considering that copyright is (or was) designed to protect creative works (music, art, books, etc.), not a list of facts, which would seem to fall completely outside the realm of copyright.

The other exemptions that were granted cover software protected by a dongle, when the dongle is obsolete, damaged, or malfunctioning; ebooks, when all electronic versions of the book are protected in ways that prohibit read-aloud or viewing in a specialized format; and old video games and computer software only available on obsolete hardware or media. This last one might just pave the way for ROMs to gain legal acceptance, at least for old systems you can't find anymore.

If you feel so inclined, I suggest reading the actual ruling [PDF] and the recommendations of Marybeth Peters (the Register of Copyrights) regarding the exemptions.


sc[um]
Posted: 2003-08-13 01:33
2 comment(s)
Author: Phil Gengler
Section: Stuff

Things have been mostly quiet on the copyright front since my last update (by which I mean really quiet, since it's been quite some time since I last wrote something). The most newsworthy topic (for here) has been SCO's continuing saga, something I haven't touched on here but which I feel merits some discussion.

Back in March of this year, SCO Group, formerly Caldera, sued IBM, alleging that IBM had breached a contract and placed code from a joint Caldera/IBM project into the Linux kernel. Now far from a mere contract dispute, SCO Group is claiming that some of the code in the Linux kernel is their 'intellectual property', and is so deeply intertwined that it cannot be removed. As a result, according to SCO, they then have exclusive distribution rights to the entire kernel, and have announced a licensing program so that organizations can continue to use a (binary-only) kernel distribution.

For a single CPU system, SCO Group is offering a $699 license until Oct. 15; from then on, $1399. They've also announced pricing for multiple CPU systems.

Seems like the death knell for Linux, except for one small problem: SCO's claims are completely baseless, and their claim to exclusive distribution is absurd. First, all the code in the Linux kernel has been contributed under the General Public License (GPL), which, among the freedoms it grants, requires that any released code (or product) built from GPL-released code be released under the GPL as well. How does this relate to the SCO issue? Well, SCO (as Caldera) has been behind a Linux distribution, which was available even after the filing of their lawsuit against IBM. Caldera was making the Linux kernel available under the terms of the GPL, which would seemingly make any code of theirs in it available under the GPL. Even if we assume that any SCO/Caldera code in the kernel (assuming there is any, which is an issue I'll get to later) was placed there inadvertently, or without the knowledge and consent of Caldera/SCO, the fact remains that even after 'realizing' that some of their code was 'illegally' in the kernel, and even after filing a lawsuit alleging the same, the Linux kernel sources were still available for download from SCO's site under the GPL (in fact, they still are). This means that Caldera/SCO has knowingly and willingly made this code available under the GPL, as the GPL requires.

Secondly, it is a complete absurdity for SCO to believe they have exclusive distribution rights over the whole kernel merely because some of their code allegedly appears in it. Contributions to the Linux kernel have come from literally thousands of individuals and groups, each of whom owns the copyright to their respective contributions, which they have made available under the terms of the GPL. Their (copyrighted) code appears in the Linux kernel, and a judge who granted SCO exclusive distribution over the kernel would effectively be granting SCO ownership of the work of others, completely without their consent. While the fact that these contributions have been released under the GPL may well nullify any ability of the contributors to successfully sue SCO for copyright infringement, there is a very obvious case to be made for SCO's violation of the license the code is under.

The story isn't only about SCO suing IBM, though. Red Hat has filed a suit against SCO, and IBM has countersued. Red Hat's filing alleges that SCO is deliberately making false statements against Linux in an effort to hurt adoption of Linux. IBM's countersuit says SCO cannot make its claims since they have released any questionable code under the GPL, as well as asserting that SCO's UnixWare product is in violation of 4 IBM patents and that SCO's attempt to revoke IBM's license to AIX has hurt IBM's business.

SCO CEO Darl McBride said he was "disappointed" by Red Hat's decision to sue, and that Red Hat might be facing legal action for copyright infringement and conspiracy; SCO's response to IBM was that IBM should indemnify its customers and move away from the GPL.

No respectable software company indemnifies its customers against legal action; not IBM, not Red Hat, not even Microsoft. For SCO to insist that IBM should do so runs completely against the grain of today's software world. If indemnification were something companies actually did, it would expose them to much greater risk, and software development (and hence the development of most of the tech sector) would likely be nowhere near the level it is today, given companies' resulting unwillingness to release or even develop new applications.

The GPL bit seems like an indication that SCO intends to fight the GPL itself, perhaps arguing that it isn't a valid license. I fail to see what they hope to accomplish with such an attack, though, for even if they were to succeed in having the GPL ruled invalid, they would open themselves up to thousands of claims of copyright infringement from every other developer with code in the 2.4 and 2.5 series Linux kernels.

At this point, one may be wondering what possible motive SCO could have for doing this; theories abound. There are some very plausible theories out there, and I don't have one of my own; the evidence that exists either doesn't completely fit any of them, or it points to more than one possibility.

Another thing you may be wondering is why I bring this up here. Beyond the fact that I'm a Linux user and a supporter of the open-source movement in general, this case shows what happens when a company tries to take advantage of the generosity of others and misuses and slanders their work. But this is not an anti-corporate tirade; while I have my feelings about that whole can of worms, my objection is not that SCO is a corporation, but that there are forces at work seeking to stifle the open-source movement. To attack open source is to attack freedom of speech and of expression, and that I will not stand for.


and then there was content
Posted: 2003-07-29 14:29
No comment(s)
Author: Phil Gengler
Section: Stuff

Apologies for the long gaps between updates; I've been alternately busy and lazy, and in neither state did anything get written for the site. Since my last update, no doubt a number of events have transpired, and no doubt you've read about them if you're interested. What I'm going to do now, and hopefully continue doing, is take one or two things and write something about them, so that the site doesn't just turn into link propagation.

The subject today is H.R. 2885, or the "Protecting Children from Peer-to-Peer Pornography Act of 2003." The title, as expected, is bullshit, and the bill very plainly says it intends "[t]o prohibit the distribution of peer-to-peer file trading software in interstate commerce." So, the bill is designed to kill P2P.

This bill has been the subject of a lot of discussion on the pho mailing list, mostly because of the severe inconsistency between the bill's title and its purpose (both of which have been amended since its initial introduction). Looking at the title of the bill, we see something not new for Congress: a bill designed to limit the potential for children to be exposed to pornography online. Looking just a little deeper, though, the bill specifically says it's supposed to put an end to P2P programs in the US.

It calls for a technological measure that could prevent the installation of a P2P program: a 'do-not-install beacon' that a P2P program would be expected to look for. This 'beacon' is designed to be installed on a machine by a juvenile's parents, and any and all P2P programs are supposed to check for it and refuse to install if it is found. Hardly anything needs to be said about how asinine this is, or how simple it would be for a program to ignore any 'do-not-install beacon' and install anyway.

Beyond the 'beacon', the bill creates restrictions on P2P software: it must notify the user of the potential for finding porn, confirm with the user that they're over 13, comply with COPPA, and not work around any security software (such as a firewall). Most unenforceable of all, if the creator or distributor of a P2P program is outside the US, they must appoint a US resident to register with the Commission the act creates to oversee compliance.

I was always under the impression that US law applies only to US citizens, so if someone in Germany (for example) makes a P2P program available for download to the whole world, including the US, then German law governs them, not US law, since they are neither a US citizen nor in the US. Without being able to enforce this law worldwide (which, last I checked, we still can't, fortunately), any stipulation aimed at such a person is without force; unless they are actively targeting the US (which we'll assume they're not, and which, absent a marketing campaign, is unlikely), they're just making something available to the whole world.

It is not the role of any country to attempt to rule the Internet. The 'net is completely different from any other 'communications system' ever created, but governments around the world fail to see this and keep trying to govern it the same way they govern other systems. Doing so fails to realize the potential of the Internet, and drives those affected to seek hosting and services in places where these behaviors are still legal. It's like the Star Wars quote: "The more you tighten your grip, the more star systems slip through your fingers." The more one country tries to regulate and restrict Internet behavior, the more those affected will move their operations to places where they're not subject to these rules, and where there is even less control.