
Nokkie's blog: "FreeThinking"

created on 08/12/2008  |  http://fubar.com/freethinking/b238370
CNN Headline News host Glenn Beck viciously attacked the 9/11 truth movement last night on his Headline Prime show, describing the whole movement as “insane” and branding 9/11 activists as “dangerous anarchists”. Beck singled out 9/11 truthers in a segment responding to the infiltration of Real Time with Bill Maher by We Are Change protesters last week. In a piece that we would normally associate with the “fair and balanced” Fox News, Beck featured two guests who BOTH argued against 9/11 truth, as well as throwing in his own two cents. Full article here - http://infowars.net/articles/october2007/231007Beck_attack.htm
WASHINGTON — The Lakota Indians, who gave the world legendary warriors Sitting Bull and Crazy Horse, have withdrawn from treaties with the United States. “We are no longer citizens of the United States of America and all those who live in the five-state area that encompasses our country are free to join us,” long-time Indian rights activist Russell Means said.

A delegation of Lakota leaders has delivered a message to the State Department, and said they were unilaterally withdrawing from treaties they signed with the federal government of the U.S., some of them more than 150 years old. The group also visited the Bolivian, Chilean, South African and Venezuelan embassies, and would continue on their diplomatic mission and take it overseas in the coming weeks and months.

Lakota country includes parts of the states of Nebraska, South Dakota, North Dakota, Montana and Wyoming. The new country would issue its own passports and driving licences, and living there would be tax-free - provided residents renounce their U.S. citizenship, Mr Means said.

The treaties signed with the U.S. were merely “worthless words on worthless paper,” the Lakota freedom activists said. Withdrawing from the treaties was entirely legal, Means said. “This is according to the laws of the United States, specifically article six of the constitution,” which states that treaties are the supreme law of the land, he said. “It is also within the laws on treaties passed at the Vienna Convention and put into effect by the US and the rest of the international community in 1980. We are legally within our rights to be free and independent,” said Means.

The Lakota relaunched their journey to freedom in 1974, when they drafted a declaration of continuing independence — an overt play on the title of the United States’ Declaration of Independence from England.
Thirty-three years have elapsed since then because “it takes critical mass to combat colonialism and we wanted to make sure that all our ducks were in a row,” Means said. One duck moved into place in September, when the United Nations adopted a non-binding declaration on the rights of indigenous peoples — despite opposition from the United States, which said it clashed with its own laws.

“We have 33 treaties with the United States that they have not lived by. They continue to take our land, our water, our children,” Phyllis Young, who helped organize the first international conference on indigenous rights in Geneva in 1977, told the news conference.

The U.S. “annexation” of native American land has resulted in once proud tribes such as the Lakota becoming mere “facsimiles of white people,” said Means. Oppression at the hands of the U.S. government has taken its toll on the Lakota, whose men have one of the shortest life expectancies - less than 44 years - in the world. Lakota teen suicides are 150 per cent above the norm for the U.S.; infant mortality is five times higher than the U.S. average; and unemployment is rife, according to the Lakota freedom movement’s website.

http://www.lakotafreedom.com/about.html
http://nativetimes.com/index.asp?action=displayarticle&article_id=9195
http://www.shortnews.com/start.cfm?id=67256
http://blogs.usatoday.com/ondeadline/2007/12/lakota-withdraw.html
http://www.rapidcityjournal.com/articles/2007/12/21/news/local/doc476a99630633e335271152.txt
http://www.commondreams.org/news2007/1220-02.htm
March 9, 2008: The U.S. Air Force is buying 300 PlayStation 3 game consoles. Not to play games, but because it’s the cheapest way to get the powerful processors that create the photorealistic graphics for PlayStation games. Air Force researchers want to use these processors (similar to the ones found in high-end video cards) to build faster computers for military use. The CPU manufacturer was not willing to sell the PlayStation processor separately, at least not for a reasonable price, so it was easier to just buy PlayStation 3s. This use of video game electronics for other purposes is nothing new. Military researchers began doing this sort of thing in the late 1990s with graphics processors. This led to the introduction last year of modified graphics cards, which produce supercomputer-type results, but at a very low cost. These were basically Nvidia 8800 graphics cards tweaked to just crunch numbers (one card equals half a teraflop of computing power). Each of these PCI cards costs about $1,500. For under $20,000 you have yourself a four-teraflop supercomputer, and it looks like just another PC. By building this kind of computing power into weapons systems (like sonars and radars), you can improve their performance (speed and accuracy) enormously. This kind of computing power also makes UAVs and other robotic systems much smarter, even when they are under the control of a human operator.
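The article's price-per-teraflop figures check out with simple arithmetic. A minimal sketch, using only the numbers quoted above (half a teraflop and roughly $1,500 per card; these are the article's claims, not official specs):

```python
import math

# Figures as quoted in the article for the tweaked Nvidia 8800 PCI cards.
CARD_TFLOPS = 0.5   # "one card equals half a teraflop of computing power"
CARD_COST = 1_500   # dollars per card, approximate

def cards_needed(target_tflops: float) -> int:
    """Smallest number of cards that reaches the target throughput."""
    return math.ceil(target_tflops / CARD_TFLOPS)

n = cards_needed(4.0)        # the article's four-teraflop machine
total = n * CARD_COST
print(n, total)              # 8 cards, $12,000 in cards alone
```

Eight cards at about $1,500 each come to roughly $12,000, which leaves comfortable room for the host PC inside the article's "under $20,000" figure.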
Microsoft’s TechFest internal science fair wasn’t just about social networking and telescopes. The company also discussed new technology closer to its roots: an operating system kernel concept called “Singularity” intended as a showcase for some cutting-edge computer science. The software isn’t the next version of Windows or a reheated DOS. It’s a prototype of an operating system intended for computer science research that Microsoft said demonstrates the possibilities for software that is more dependable and secure than contemporary OSes (yes, that includes Windows). “Singularity is not the next Windows,” Rick Rashid, senior vice president of Microsoft Research, said in a statement. “Think of it like a concept car. It is a prototype operating system designed from the ground up to test-drive a new paradigm for how operating systems and applications interact with one another. We are making it available to the community in the hope that it will enable researchers to try out new ideas quickly.”
(PhysOrg.com) — A crucial step in developing minuscule structures with application potential in sophisticated sensors, catalysis, and nanoelectronics has been taken by Scottish researchers. Dr Manfred Buck and his team at the University of St Andrews have accomplished one of the big quests in nanotechnology, opening up an exciting new development in tiny technology. The St Andrews researchers have developed a way of forming an easily modified network of molecules over a large area - the chemical technique provides an advantageous alternative to traditional methods, which become increasingly cumbersome at the ultrasmall length scale. The key to the development lies in the creation of a robust and versatile surface - self-assembling structures just one molecule thick which can be exploited for further control and manipulation of nanostructures. Dr Manfred Buck, of the University's School of Chemistry, explained, “One of the central issues in nanotechnology is the development of simple and reliable methods to precisely arrange molecules and other nanoscopic objects. One promising route intensively investigated by scientists around the world involves the ability of molecules to spontaneously assemble onto a surface. What we have done is successfully combined two strategies which are complementary but, so far, have been explored independently, and it is this combination which opens up unprecedented opportunities for accessing the ultrasmall length scale.” “The potential of this approach lies in its flexibility on a scale about 1/10000 of the diameter of a human hair. Using molecules as building units, the features of our structures are less than 5 nanometres in size, which enables us to control structures and materials at dimensions where new properties emerge.” One of the advantages of the technique is that it works under ambient conditions.
Since no sophisticated equipment or special environment - such as a high vacuum - is required, it is easily accessible and adaptable for a wide range of applications. The chemical method provides an alternative route to nanostructures created by conventional lithography, which inscribes patterns into surfaces but struggles to be precise on a scale of a few nanometres. Dr Buck’s solution-based chemistry works by assembling molecules into tiny dimples, themselves created when molecules self-assemble into a honeycomb-shaped network on a gold surface. Such a so-called supramolecular network is held together by hydrogen bonds - a type of bonding also essential for DNA - and acts as a template to control the arrangement of other molecules. He continued, “We are just at the beginning of the exploration of a very exciting new area. Ongoing and future work will investigate changes in the dimensions and geometry of the network, where the aim is to get exact control over the arrangement of molecules, ultimately at the level of single molecules.” “In the short term, this development provides us with an easily accessible platform for fundamental studies of phenomena on the ultrasmall scale,” Dr Buck explained. “In the future, we might be able to use this technology for the assembly of ‘nanomachines’, molecular devices used to transport and manipulate molecules and nanometre-sized objects,” he concluded. The research is published in the journal Nature. Provided by University of St Andrews
SAN FRANCISCO — The era of the American Internet is ending. Invented by American computer scientists during the 1970s, the Internet has been embraced around the globe. During the network’s first three decades, most Internet traffic flowed through the United States. In many cases, data sent between two locations within a given country also passed through the United States. Engineers who help run the Internet said that it would have been impossible for the United States to maintain its hegemony over the long run because of the very nature of the Internet; it has no central point of control. And now, the balance of power is shifting. Data is increasingly flowing around the United States, which may have intelligence — and conceivably military — consequences. American intelligence officials have warned about this shift. “Because of the nature of global telecommunications, we are playing with a tremendous home-field advantage, and we need to exploit that edge,” Michael V. Hayden, the director of the Central Intelligence Agency, testified before the Senate Judiciary Committee in 2006. “We also need to protect that edge, and we need to protect those who provide it to us.” Indeed, Internet industry executives and government officials have acknowledged that Internet traffic passing through the switching equipment of companies based in the United States has proved a distinct advantage for American intelligence agencies. In December 2005, The New York Times reported that the National Security Agency had established a program with the cooperation of American telecommunications firms that included the interception of foreign Internet communications. Some Internet technologists and privacy advocates say those actions and other government policies may be hastening the shift in Canadian and European traffic away from the United States. 
“Since passage of the Patriot Act, many companies based outside of the United States have been reluctant to store client information in the U.S.,” said Marc Rotenberg, executive director of the Electronic Privacy Information Center in Washington. “There is an ongoing concern that U.S. intelligence agencies will gather this information without legal process. There is particular sensitivity about access to financial information as well as communications and Internet traffic that goes through U.S. switches.” But economics also plays a role. Almost all nations see data networks as essential to economic development. “It’s no different than any other infrastructure that a country needs,” said K C Claffy, a research scientist at the Cooperative Association for Internet Data Analysis in San Diego. “You wouldn’t want someone owning your roads either.” Indeed, more countries are becoming aware of how their dependence on other countries for their Internet traffic makes them vulnerable. Because of tariffs, pricing anomalies and even corporate cultures, Internet providers will often not exchange data with their local competitors. They prefer instead to send and receive traffic with larger international Internet service providers. This leads to odd routing arrangements, referred to as tromboning, in which traffic between two cities in one country will flow through other nations. In January, when a cable was cut in the Mediterranean, Egyptian Internet traffic was nearly paralyzed because it was not being shared by local I.S.P.’s but instead was routed through European operators. The issue was driven home this month when hackers attacked and immobilized several Georgian government Web sites during the country’s fighting with Russia. Most of Georgia’s access to the global network flowed through Russia and Turkey. A third route through an undersea cable linking Georgia to Bulgaria is scheduled for completion in September. Ms.
Claffy said that the shift away from the United States was not limited to developing countries. The Japanese “are on a rampage to build out across India and China so they have alternative routes and so they don’t have to route through the U.S.” Andrew M. Odlyzko, a professor at the University of Minnesota who tracks the growth of the global Internet, added, “We discovered the Internet, but we couldn’t keep it a secret.” While the United States carried 70 percent of the world’s Internet traffic a decade ago, he estimates that portion has fallen to about 25 percent. Internet technologists say that the global data network that was once a competitive advantage for the United States is now increasingly outside the control of American companies. They decided not to invest in lower-cost optical fiber lines, which have rapidly become a commodity business. That lack of investment mirrors a pattern that has taken place elsewhere in the high-technology industry, from semiconductors to personal computers. The risk, Internet technologists say, is that upstarts like China and India are making larger investments in next-generation Internet technology that is likely to be crucial in determining the future of the network, with investment, innovation and profits going first to overseas companies. “Whether it’s a good or a bad thing depends on where you stand,” said Vint Cerf, a computer scientist who is Google’s Internet evangelist and who, with Robert Kahn, devised the original Internet routing protocols in the early 1970s. “Suppose the Internet was entirely confined to the U.S., which it once was? That wasn’t helpful.” International networks that carry data into and out of the United States are still being expanded at a sharp rate, but the Internet infrastructure in many other regions of the world is growing even more quickly. 
While there has been some concern over a looming Internet traffic jam because of the rise in Internet use worldwide, the congestion is generally not on the Internet’s main trunk lines, but on neighborhood switches, routers and the wires into a house. As Internet traffic moves offshore, it may complicate the task of American intelligence-gathering agencies, but would not make Internet surveillance impossible. “We’re probably in one of those situations where things get a little bit harder,” said John Arquilla, a professor at the Naval Postgraduate School in Monterey, Calif., who said the United States had invested far too little in collecting intelligence via the Internet. “We’ve given terrorists a free ride in cyberspace,” he said. Others say the eclipse of the United States as the central point in cyberspace is one of many indicators that the world is becoming a more level playing field both economically and politically. “This is one of many dimensions on which we’ll have to adjust to a reduction in American ability to dictate terms of core interests of ours,” said Yochai Benkler, co-director of the Berkman Center for Internet and Society at Harvard. “We are, by comparison, militarily weaker, economically poorer and technologically less unique than we were then. We are still a very big player, but not in control.” China, for instance, surpassed the United States in the number of Internet users in June. Overall, Asia now has 578.5 million, or 39.5 percent, of the world’s Internet users, although only 15.3 percent of the Asian population is connected to the Internet, according to Internet World Stats, a market research organization. By contrast, there were about 237 million Internet users in North America and the growth has nearly peaked; penetration of the Internet in the region has reached about 71 percent.
The increasing role of new competitors has shown up in data collected annually by Renesys, a firm in Manchester, N.H., that monitors the connections between Internet providers. The Renesys rankings of Internet connections, an indirect measure of growth, show that the big winners in the last three years have been the Italian Internet provider Tiscali, China Telecom and the Japanese telecommunications operator KDDI. Firms that have slipped in the rankings have all been American: Verizon, Savvis, AT&T, Qwest, Cogent and AboveNet. “The U.S. telecommunications firms haven’t invested,” said Earl Zmijewski, vice president and general manager for Internet data services at Renesys. “The rest of the world has caught up. I don’t see the AT&T’s and Sprints making the investments because they see Internet service as a commodity.”
The US is negotiating with Georgia and Turkey to establish a naval base at one of the two key Georgian ports of Batumi or Poti, reports say. Turkey, in an attempt to avoid political tension with Russia, has not officially revealed its position regarding the plan, said Gruzya Online, a Russian-language internet site. Russia had previously announced its intention to station its own special forces at the Georgian ports. One of the responsibilities of US Special Forces in the region is to ensure the security of an oil pipeline passing through Georgia.
As long as people have engaged in private conversations, eavesdroppers have tried to listen in. When important matters were discussed in parlors, people slipped in under the eaves—literally within the “eavesdrop”—to hear what was being said. When conversations moved to telephones, the wires were tapped. And now that so much human activity takes place in cyberspace, spies have infiltrated that realm as well. Unlike earlier, physical frontiers, cyberspace is a human construct. The rules, designs and investments we make in cyberspace will shape the ways espionage, privacy and security will interact. Today there is a clear movement to give intelligence activities a privileged position, building in the capacity of authorities to intercept cyberspace communications. The advantages of this trend for fighting crime and terrorism are obvious. The drawbacks may be less obvious. For one thing, adding such intercept infrastructure would undermine the nimble, bottom-up structure of the Internet that has been so congenial to business innovation: its costs would drive many small U.S. Internet service providers (ISPs) out of business, and the top-down control it would require would threaten the nation’s role as a leader and innovator in communications. Furthermore, by putting too much emphasis on the capacity to intercept Internet communications, we may be undermining civil liberties. We may also damage the security of cyberspace and ultimately the security of the nation. If the U.S. builds extensive wiretapping into our communications system, how do we guarantee that the facilities we build will not be misused? Our police and intelligence agencies, through corruption or merely excessive zeal, may use them to spy on Americans in violation of the U.S. Constitution. And, with any intercept capability, there is a risk that it could fall into the wrong hands.
Criminals, terrorists and foreign intelligence services may gain access to our surveillance facilities and use them against us. The architectures needed to protect against these two threats are different. Such issues are important enough to merit a broad national debate. Unfortunately, though, the public’s ability to participate in the discussion is impeded by the fog of secrecy that surrounds all intelligence, particularly message interception (“signals intelligence”).

A Brief History of Wiretapping

To understand the current controversy over wiretapping, one must understand the history of communications technology. From the development of the telephone in the 19th century until the past decade or two, remote voice communications were carried almost exclusively by circuit-switched systems. When one person picked up the phone to call another, one or more telephone switches along the way would connect their wires so that a continuous circuit would be formed. This circuit would persist for the duration of the call, after which the switches would disconnect the wires, freeing resources to handle other calls. Call switching was essentially the only thing that telephone switches did. Other services associated with the telephone—call forwarding and message taking, for example—were handled by human operators. Wiretapping has had an on-and-off legal history in the U.S. The earliest wiretaps were simply extra wires—connected to the line between the telephone company’s central office and the subscriber—that carried the signal to a pair of earphones and a recorder. Later on, wiretaps were installed at the central office on the frames that held the incoming wires. At first, the courts held that a wiretap does not constitute a search when it involves no trespass, but over time that viewpoint changed. In 1967 the U.S. Supreme Court decided in the case of Katz v. United States that the interception of communications is indeed a search and that a warrant is required.
This decision prompted Congress in 1968 to pass a law providing for wiretap warrants in criminal investigations. But Congress’s action left the use of wiretapping for foreign intelligence in legal limbo. Congressional investigations that followed the 1972 Watergate break-in uncovered a history of presidential operations that had employed and, as it turned out, abused the practice, spying on peaceful, domestic political organizations as well as hostile, foreign ones. So, in 1978, Congress passed the Foreign Intelligence Surveillance Act (FISA), which took the controversial step of creating a secret federal court for issuing wiretap warrants. Most of the surveillance of communications for foreign intelligence purposes lay outside the scope of the wiretapping law, because this activity had primarily involved the interception of radio signals rather than physical intrusions into phone systems. (When operating in other countries, American intelligence services could not place wiretaps on phone lines as easily as they could in the U.S.) Another important distinction between domestic and foreign communications surveillance is scale: inside the U.S., wiretapping has traditionally been regarded as an extreme investigative technique, something to be applied only to very serious crimes. Outside the country, though, the interception of communications is big business. The National Security Agency (NSA) spends billions of dollars every year intercepting foreign communications from ground bases, ships, airplanes and satellites. But the most important differences are procedural. Within the U.S. 
the Fourth Amendment to the Constitution guarantees the right of the people to be free from “unreasonable searches and seizures.” The logic of a “reasonable” search is that law-enforcement officers must make an unprivileged observation (that is, one that does not invade the suspect’s privacy) whose results give them “probable cause” with which they can approach the courts for a search warrant. What they are not permitted to do, in either physical searches or wiretaps, is to search first and then use what they find as evidence that the search was legitimate. This procedure, however, is exactly what intelligence agents do, except that they usually do not employ their results to prosecute criminals. An intelligence officer relies on professional judgment and available information to make the decision to spy on a foreign target; the operation will then be judged as a success or failure depending on what intelligence was obtained and what resources were expended. The rules established in FISA make three fundamental distinctions: between “U.S. persons” (citizens, legal residents and American corporations) and foreigners; between communications inside and outside the U.S.; and between wired and wireless communications. Briefly, wired communications entirely within the U.S. are protected—intercepting them requires a warrant. But radio communications that include people outside the country are protected only if the signal is intercepted in the U.S. and the government’s target is a particular, known U.S. person who is in the country at the time. Until recently, whenever the FISA rules applied, they imposed a burden similar to that imposed by ordinary criminal law. To seek a warrant, an intelligence agency had to specify a particular location, telecommunications channel or person and explain why the target should be subject to surveillance. 
Operating “foreign intelligence–style,” intercepting communications and then using the recorded conversations to justify the interception, was not permitted. Almost accidentally, the rules set by FISA included an important loophole that Congress had intended to be only temporary: radio communications involving parties who were not U.S. persons could be intercepted from inside the U.S. without warrants. At the time FISA was passed and for many years thereafter, the radio exemption was a great boon to the intelligence community. Satellite radio relays had revolutionized international communications in the 1960s and 1970s and carried most of the phone calls entering and leaving the country. Radio communications that were partly or completely among parties outside the U.S. were legally and physically vulnerable to interception by NSA antennas at places such as Yakima, Wash., and Vint Hill Farms in Virginia. In the 1970s a new transmission medium emerged as an alternative for long-haul communications. Optical fibers—long, thin strands of glass that carry signals via laser light—offered great advantages in communicating between fixed locations. Fiber lines have tremendous capacity; they are not plagued by the quarter-second delay that slows satellite relays; they are intrinsically more secure than radio; and, for a combination of technical and business reasons, they have become very cheap. From the 1990s onward, the vast majority of communications from one fixed location to another have moved by fiber. Because fiber communications are “wired,” U.S. law gives them greater protection. The intelligence community could not intercept these communications as freely as they could radio traffic, and the FISA rules began to chafe. A particularly sensitive issue for intelligence agencies was the so-called transit traffic. Some 20 percent of the communications carried on U.S. networks originate and terminate outside the country, moving between Europe, Asia and Latin America.
Transit traffic is not a new phenomenon; it was already present in the satellite era. But under FISA rules, the interception of fiber communications at points inside the U.S. required a warrant. This requirement upset the standard processes of intelligence agents, who were unaccustomed to seeking probable cause before initiating surveillance. At about the same time, computer-based switching systems began to replace the traditional electromechanical switches in U.S. telephone networks. This computerization paved the way for services such as automated call forwarding and answering systems, which unintentionally but effectively bypassed standard wiretapping techniques. Suppose that a caller to a wiretapped phone left a message with an answering service provided by the telephone company. If the target of the investigation checked his messages from a phone other than his own, the communication would never travel over the tapped line and thus would not be intercepted. Congress responded in 1994 with the Communications Assistance for Law Enforcement Act (CALEA), which requires telecommunications companies to make it possible for the government to tap all the communications of a targeted subscriber no matter what automated services the subscriber uses. In addition to mandating an improvement in the quality of information that can be obtained from wiretaps, CALEA obliged telecommunications carriers to be able to execute far more simultaneous wiretaps than had previously been possible.

Tapping the Net

CALEA was passed just as large numbers of people began using the Internet, which employs a communications method that is entirely different from circuit-switched telephony. Internet users send information in small packets, each of which carries a destination address and a return address, just like a letter in the postal system. With circuit switching, a brief telephone call incurs the same setup costs as a long one, so making a call to send only a few words is uneconomical.
But on a packet-switched network, short messages are cheap and shorter messages are cheaper. Web browsing is possible because Internet connections can be used briefly and discarded. Each time you click on a Web link, you establish a new connection. In the era of circuit-switched communications, wiretapping worked because telephone instruments, numbers and users were bound closely together. A telephone was hard to move, and a new telephone number was hard to get. An organization’s messages moved on the same channels for long periods, so it was easy to intercept them repeatedly. Computerized switching and the Internet have made surveillance much more challenging. Today people can easily get new telephone numbers as well as e-mail addresses, instant messaging handles and other identifiers. And the advent of voice-over-Internet protocol (VoIP), the standard that allows the transmission of voice communications over packet-switched networks, has further decentralized control of the communications infrastructure. In a VoIP system such as the popular Skype service, for example, the setting up of phone calls and the transmission of traffic are entirely separate. If CALEA, as interpreted literally, were applied to decentralized VoIP services, the provider would be required to intercept targeted customers’ phone calls and relay them to the government but might be totally incapable of complying with such a demand. Consider a typical VoIP call running between the laptop computers of two people, both of whom are traveling. Alice initiates the call from a lounge at O’Hare airport in Chicago, and Bob receives it at a hotel bar in San Francisco. The VoIP provider’s role in the process is limited: it discovers the Internet protocol (IP) addresses through which Alice and Bob are connected and communicates each person’s address to the other’s computer. After the setup is completed, the VoIP provider plays no further role. 
Instead the actual voice conversation is carried by the Internet service providers (ISPs) through which Alice and Bob access the Internet, together with other Internet carriers to which those ISPs are connected. In this environment a government agency might have to serve wiretap warrants on many telecommunications carriers just to monitor a single target. Suppose we imagine a CALEA-style intercept regime that could capture a VoIP call. It must begin with an order to the VoIP provider targeting either Alice or Bob. When law-enforcement agents receive word from the provider that the target is engaged in a call, they must consider the IP addresses of Alice and Bob and send an intercept warrant to one or more ISPs at which the call can be intercepted. The ISPs must be prepared to accept, authenticate and implement the warrant in real time. One problem with this scenario is that only ISPs in the U.S. (and possibly some in cooperating countries) would be required to honor the warrant. A more serious difficulty is the massive security problem that such an arrangement would present. Anyone who could penetrate an ISP’s wiretap function would be able to spy on its subscribers at will. CALEA recognized the difference between traditional telephony and the Internet and exempted the Internet, referred to as “information services,” from the provisions of the new law. Yet in 2004, despite that distinction, the U.S. Department of Justice, the Federal Bureau of Investigation and the U.S. Drug Enforcement Administration responded to the challenge of monitoring Internet communications by proposing that providers of broadband Internet access be required to comply with the CALEA requirements. The Federal Communications Commission and the courts have so far supported law enforcement in extending CALEA to “interconnected VoIP” (the form most like traditional telephony), relying on a provision of CALEA that refers to services that are a “substantial” replacement for the telephone system. 
This proposal, if adopted, would be the first step on a road leading to dangers not present in conventional wiretapping. In particular, the government’s actions threaten the continued growth of the Internet, which has become a hotbed of innovation as a consequence of its distributed control and loose connectivity. Unlike a telephone carrier’s network, the Internet is not centrally planned and managed. The addition of a new service, such as call forwarding, in the telephone system typically takes years of planning and development. But an Internet entrepreneur can start a new business in a garage or dorm room, using nothing but a home computer and a broadband connection. If law enforcement succeeds in mandating interception facilities for every Internet carrier, the industry as a whole could be pushed back into the Procrustean bed of conventional telecommunications. To incorporate extensive surveillance capabilities, new Internet services would have to be developed in long cycles dependent on federal approval. In a century in which the great opportunities lie in information-based businesses, Americans must do everything possible to foster innovation rather than stifling it. If we do not, we may fall behind countries that follow a different course. Such an outcome would represent a long-term threat to national security. Another threat is more immediate. Since the collapse of the Soviet Union, no opponent has had the ability to spy on U.S. communications with anything approximating comprehensive coverage. The Soviets had fleets of trawlers patrolling both coasts of the U.S., diplomatic facilities in major American cities, satellites overhead and ground bases such as the Lourdes facility near Havana. Their capabilities in signals intelligence were second to none. In comparison, the current opponents we most fear, such as al Qaeda, and even major nations such as China have no such ability.
They are, however, trying to achieve it, and building wiretapping into the Internet might give it to them. Computers would control the intercept devices, and those computers themselves would be controlled remotely. Such systems could be just as much subject to capture as Web sites and personal computers. The government’s proposed interception policies must be judged in the light of such vast and uncertain dangers.

Cyberwars

The administration of President George W. Bush recently relaxed some of the 30-year-old restrictions on communications surveillance mandated by FISA. In 2007 Congress, under intense pressure from the White House, passed the Protect America Act (PAA), which amended FISA by expanding the radio exemption to cover all communications. The law provided that any communication reasonably believed to have a participant outside the U.S. could be intercepted without a warrant. Given the degree to which business services in the U.S. are being outsourced to overseas providers, the new law made a large fraction of American commercial and personal telecommunications activity subject to monitoring. Congress was sufficiently nervous about this course of action that it provided for PAA to expire in 2008. This past July, after months of controversy, Congress passed a bill fundamentally expanding the executive branch’s wiretapping authority and reducing the FISA court’s role in international cases to reviewing the general procedures of a proposed wiretap rather than the specifics of a case. Political debate over the bill, however, did not center on wiretapping authority, as one might expect for a sweeping change. Most attention focused instead on giving retroactive immunity for past illegal wiretapping. In early 2008 the administration offered a new rationale for expanding communications surveillance: securing the Internet. The current state of Internet security is indeed abysmal.
Most computers cannot protect themselves from penetration by malware—software designed to infiltrate and damage computer systems—and a substantial fraction of the computers linked to the Internet are under the control of parties other than their owners. These machines have been surreptitiously captured and organized into “botnets,” whose computing power is then resold in a kind of electronic slave trade. In response to the failure of traditional defensive approaches, President Bush signed a national security directive in January authorizing a Cyber Initiative. Most of the initiative is secret, but its initial move—extensive surveillance of the substantial amount of Internet traffic that moves in and out of the U.S. government—is too sweeping to be concealed. To facilitate the surveillance, the administration plans to reduce the number of connections between government agencies and the Internet from thousands to fewer than a hundred, and that requires changing or retiring thousands of IP addresses. The Cyber Initiative captures the dilemma of signals intelligence perfectly. A system that monitors federal communications for signs of foreign intrusion will also capture all the legitimate communications that Americans have with their government. The administration is seeking the power to intercept American communications using the same tactics long employed in foreign intelligence gathering—that is, without having to go to the courts for warrants and describe in advance whose communications it intends to intercept. The advocates of expanded surveillance have valid concerns: not only do we face opponents who are not tied to particular nations and can move freely in and out of the U.S., we also have a critical cybersecurity problem. The Internet is swiftly becoming the primary medium for both commercial and government business, as well as the preferred communications method for many individuals.
Its security problems are analogous to having the roads overrun with bandits or the sea-lanes controlled by pirates. Under these circumstances, it is not surprising to find the government seeking to patrol the Internet, just as the nation’s police and armed services have patrolled the roads or the high seas in the past. But policing the Internet, as opposed to securing the computers that populate it, may be a treacherous remedy. Will the government’s monitoring tools be any more secure than the network they are trying to protect? If not, we run the risk that the surveillance facilities will be subverted or actually used against the U.S. The security problems that plague the Internet may beset the computers that will do the policing as much as the computers being policed. If the government expands spying on the Internet without solving the underlying computer security problems, we are courting disaster. The inherent dangers are made worse by the secrecy surrounding the government’s initiatives. One casualty of recent approaches to communications interception has been what might be called the two-organization rule. The security of many crucial systems, such as those controlling nuclear weapons, relies on the requirement that critical actions be taken by two people simultaneously. Until recently, federal law mandated a similar approach to wiretapping, allowing the government to issue wiretap orders but requiring the phone companies to install the taps. Under this arrangement, a phone company would be reluctant to act on a wiretap order it suspected was illegal, because its compliance would make it vulnerable to both prosecution and civil liability. Eliminating the role of the phone companies removes an important safeguard. If we follow this course, we may create a regime entirely out of view of Congress, the courts and the press—and perhaps entirely out of control.
The distance our world has moved into cyberspace in the past century is minuscule compared with the distance it will move in the next. We are in the process of building the world in which future humans will live, as surely as the first city dwellers did 5,000 years ago. Communication is fundamental to our species; private communication is fundamental to both our national security and our democracy. Our challenge is to maintain this privacy in the midst of new communications technologies and serious national security threats. But it is critical to make choices that preserve privacy, communications security and the ability to innovate. Otherwise, all hope of having a free society will vanish. Note: This article was originally published with the title, “Brave New World of Wiretapping”.
London Evening Standard Monday, Sept 8, 2008 Secret advice from a foreign power, thought to be America, helped to shape the dossier that said Saddam Hussein could attack within 45 minutes and set out the case for war in Iraq. MI6 chief John Scarlett, then chairman of the Government’s Joint Intelligence Committee (JIC), turned to the foreign country as final touches were put to the now discredited dossier, it has emerged. The document, which the Government is accused of ‘sexing up’ in the weeks before it was made public, contained a string of claims that later proved false. These included the warnings that Saddam could launch weapons of mass destruction ‘within 45 minutes’ and that it was ‘beyond doubt’ that he was developing nuclear weapons. Both claims were the key to convincing the public and Parliament of the threat posed by Iraq and were essential to putting together the legal case for war. Now it has been revealed that Mr Scarlett canvassed foreign help – which sources claim came from America’s CIA – in the days before the dossier was published.
Web 2.0 Won’t Eat Your Mouse
Law firms must learn to utilize new Internet technologies or risk losing market share
By Susan L. Ward, New Jersey Law Journal, August 20, 2007

“All our steps in creating or absorbing material of the record proceed through one of the senses — the tactile when we touch keys, the oral when we speak or listen, the visual when we read. Is it not possible that some day the path may be established more directly?” Dr. Vannevar Bush, The Atlantic Monthly (1945).

Most lawyers pride themselves on their ability to think, analyze and craft solutions to problems. The technological jargon and buzzwords surrounding computers and the Internet shouldn’t deter you from exploring or mastering the applications available today. These technologies have been in development since 1945, when scientist Vannevar Bush first envisioned the device he called “the memex”: not unlike today’s computer, it had a translucent screen, a keyboard and buttons that could be operated at a distance, and it could store and retrieve data and communications with great speed. Bush envisioned lawyers with a bank of authorities, opinions and decisions at their fingertips; he even described a system whereby patent attorneys could bring up any issued patent, including links to those that met their client’s interests. Twenty years later, another pioneer of Internet technology, Ted Nelson, coined the term hypertext. Nelson believed that all documents could be connected via computers in a structure he called the docuverse. Today the World Wide Web has more than 10 billion pages in an ever-morphing universe of connectivity. In 2002, Google gave us the speed and relevance that Vannevar Bush could only imagine, and page rank became the leverage of competitive intelligence.
When asked what the perfect search engine would be, Sergey Brin, co-founder of Google, said, “It would be like the mind of God.” Human intelligence is now leveraged with artificial intelligence, and the concept of the “Semantic Web” is within reach. The evolution from thinker to linker is under way. The Internet revolution itself is not about hypertext but about the culture of participation and the sharing of knowledge. Today, successful law firms recognize the need for expedient knowledge management. For example, Morrison & Foerster developed AnswerBase to leverage the firm’s resources and collective knowledge. The firm’s 1,000 lawyers can do more than simply access relevant documents; they can retrieve information about the context in which documents were created, identifying the matter, the client, the practice area, the office that handled the matter and the attorneys who worked on it. They can even collaborate. The technologies that make this possible are integral to Web 2.0 applications. Tim O’Reilly, founder of O’Reilly Media, coined the term Web 2.0 to describe a new platform of applications that harness collective intelligence. Web 2.0 is not about software but about services that improve the more people use them. Hyperlinking is the foundation of the Web. As users add new content and new sites, other users discover the content and link to it, binding it to the structure of the Web. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the Web of connections grows organically as an output of the collective activity of all Web users. We are able to execute these associations through a group of technologies called Asynchronous JavaScript and XML (AJAX), which allows Web pages to exchange data with a server in small amounts, thereby increasing speed and responsiveness. Aggregation brings multiple sources of content into one application. Really Simple Syndication (RSS) allows users to receive updates from their chosen sources automatically.
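The automatic-update idea behind RSS can be illustrated with a short sketch of the consuming side. The feed below is a made-up example, and a real aggregator such as the Bloglines service mentioned here would fetch published feeds over HTTP on a schedule; the parsing step, however, looks essentially like this, using only the Python standard library:

```python
# Parse a (hypothetical) RSS 2.0 feed and extract each item's title and
# link -- the core of what any aggregator does after fetching a feed.
import xml.etree.ElementTree as ET

feed_xml = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Law Blog</title>
    <item>
      <title>New e-discovery rules take effect</title>
      <link>http://example.com/posts/1</link>
    </item>
    <item>
      <title>Podcast: oral argument roundup</title>
      <link>http://example.com/posts/2</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed_xml)
# Collect (title, link) pairs from every <item> element in the feed.
items = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
for title, link in items:
    print(f"{title} -> {link}")
```

An aggregator simply repeats this for each subscribed feed, compares the items against what it has already shown, and surfaces anything new, which is why updates arrive without the reader revisiting each site.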
Folksonomy and tagging refer to categories of information created by users. The key to Web 2.0 is interactivity, social networking and collaboration. These applications are available in a free information market. As we thin-slice our way through the Internet on fast forward, creating our own content, the world shrinks through decentralization and openness. Blogging, video uploads and social networks have emerged to make the entire World Wide Web convenient to everyone and driven by anyone. Blogs. Blogs, or Weblogs, evolved from online journals and newsgroup forums. Jorn Barger coined the term weblog to describe the process of “logging the Web.” Blogs can target a niche clientele, share case law and legislative commentary with colleagues and demonstrate expertise in a given practice area. Popular blog software platforms include Movable Type by Six Apart and WordPress. There are even cases that cite legal blogs; a frequently cited source is Sentencing Law and Policy. RSS. Aggregation and recombination tools keep blog and news headlines current in real time, delivering multiple content sources on a specific topic in a single application. For example, Bloglines allows you to subscribe to content and access it as it is posted to the Web. Feeds are grouped together and readily accessible. For a complete list of aggregators, see RSS Compendium — RSS Readers. Wikis. The first wiki was created by Ward Cunningham in 1995; the application’s name was derived from the Hawaiian word “wiki-wiki,” meaning quick. While a wiki is similar to a blog in structure, wikis allow anyone who has access to the site to edit, delete or otherwise modify content and to collaborate on research, projects and group presentations. While Wikipedia, the free online encyclopedia, is a public platform that anyone can edit, wikis can also be private, invitation-only and password-protected. Wikis are Web-based, and firms can utilize them as either intranets or extranets. Podcasts.
Digital media files, typically MP3 audio, can be downloaded to desktop computers or to portable devices such as iPods. With the new iPhone, these audio webcasts will rival radio programs. Aggregators make it possible to subscribe to legal podcasts automatically via RSS. The Oyez Supreme Court Podcast posts the oral arguments of the U.S. Supreme Court. The American Bar Association offers CLE podcasts. Video-casts and Screencasts. Streaming and now downloadable online video have made it possible to share video content. A search for “law firm” on YouTube yields more than 800 video files. For those of you who still think the fax machine ruined the legal business, watch the five-minute clip in which senior partners and the office manager at a small Texas firm, Davis & Wilkerson, discuss their decisions on standardizing their computer platform, increasing mobility via laptops and handling IT support. Beyond the 15 minutes of fame, video advertising is a growing area in Web 2.0. ScanScout provides contextual advertising for streaming video content. Vidavee allows clients to upload content from cell phones, PDAs, PCs and digital camcorders and to manipulate it within a branded environment. Professional networking. Online communities such as MySpace and Facebook are examples of social networks. While MySpace dominates in market share, LinkedIn, a professional network, is growing rapidly. Founded in 2003, LinkedIn is a business tool directed at building a network without a huge time investment. Three degrees of separation allow you to introduce yourself, directly or through an intermediary, to contacts you want to meet. Remote backup. Disaster planning notwithstanding, backing up your computer and server files is made simple with Mozy Online Backup. Mozy Unlimited or Carbonite beats backing up your files on tape or CD and storing them in your car or at an undisclosed location. Mozy requires that you download and install software on your computer, then select the files and drives you wish to back up.
It provides encryption, automatic or scheduled backups, and detection of new and changed files. Compliance. Postini, Google’s latest acquisition, provides services including message security, archiving, encryption and policy enforcement to protect e-mail, instant messaging and Web-based communications. On the compliance side, you can archive employee e-mail to facilitate search, retrieval and management of e-discovery requests. In addition to the technology currently available, several services are soon to launch. Documents. Docstoc, currently in private beta, plans to offer business, legal and financial templates that attorneys, business professionals and the public can revise for their own purposes. When content is uploaded, users will categorize, tag and describe their documents so that they can easily be found by other users. Users will be able to drag and drop files or folders, or e-mail documents into Docstoc. Powerset. Powerset Natural Language Search hopes to compete with Google by utilizing paraphrasing, compound nouns and hypotheses about the relationships between words. You can sign up now to try it before its release. As mass communication reshapes business and the Internet, it changes our ability to serve markets. Law firms must learn to utilize Web 2.0 technologies or lose market share, valuable time and efficiency. As clients demand instant access to their information and data, law firms will have to provide greater service. It is often said that lawyers tend to ignore technology until the courts require its use (electronic filing in the federal courts, for example). The critical issue for law firms is understanding how Web 2.0 services can be implemented and made revenue-producing.
The simplicity and user-friendliness of many of the new tools give small firms and solo practitioners an opportunity to catapult themselves into the forefront of the Internet Olympics, working smarter, faster and, yes, stronger.