
EnlightenedOsote's blog: "TECH."

created on 07/01/2007  |  http://fubar.com/tech/b97754

My 122108 Kiss Fortune

A thorn defends the rose, harming only those who would steal the blossom. --- Keep your thorns - but be sure to put them to use ONLY in the right situation - be gentle with your love - fiercely defend what you believe is right but withhold the thorns from your love.
Tuesday, December 16, 2008 5:40 PM
Posted by Jeremie Lenfant-Engelmann, Software Engineer

More than once, I've had a conversation over email and later realized that the information contained in the messages would make a great starting point for a document. So I built an experimental feature for Gmail Labs that does just that: with one simple click, "Create a document" converts an email into a Google Docs document. No more copying and pasting the text from your email -- just open the message you wish to convert, click the "Create a document" link on the right side of the page, and voila, you have a brand new document which you can then modify and share!

Even if you're not interested in converting any of your current messages into documents, you can easily open up a blank doc by hitting g and then w (just make sure you have keyboard shortcuts on). To turn on this feature, go to the Gmail Labs tab under Settings, select "Enable" next to "Create a document" and hit "Save Changes" at the bottom. Though we're temporarily missing the "Send feedback" link for this feature on the Labs page (oops!), we're still anxious to hear what you think.

Becoming Screen Literate

By KEVIN KELLY
Published: November 21, 2008

Everywhere we look, we see screens. The other day I watched clips from a movie as I pumped gas into my car. The other night I saw a movie on the backseat of a plane. We will watch anywhere. Screens playing video pop up in the most unexpected places — like A.T.M. machines and supermarket checkout lines and tiny phones; some movie fans watch entire films in between calls. These ever-present screens have created an audience for very short moving pictures, as brief as three minutes, while cheap digital creation tools have empowered a new generation of filmmakers, who are rapidly filling up those screens. We are headed toward screen ubiquity.

[Caption: TimeTube, on the Web, gives a genealogy of the most popular videos and their descendants, and charts their popularity in time-line form.]

When technology shifts, it bends the culture. Once, long ago, culture revolved around the spoken word. The oral skills of memorization, recitation and rhetoric instilled in societies a reverence for the past, the ambiguous, the ornate and the subjective. Then, about 500 years ago, orality was overthrown by technology. Gutenberg's invention of metallic movable type elevated writing into a central position in the culture. By the means of cheap and perfect copies, text became the engine of change and the foundation of stability. From printing came journalism, science and the mathematics of libraries and law. The distribution-and-display device that we call printing instilled in society a reverence for precision (of black ink on white paper), an appreciation for linear logic (in a sentence), a passion for objectivity (of printed fact) and an allegiance to authority (via authors), whose truth was as fixed and final as a book. In the West, we became people of the book.

Now invention is again overthrowing the dominant media. A new distribution-and-display technology is nudging the book aside and catapulting images, and especially moving images, to the center of the culture. We are becoming people of the screen. The fluid and fleeting symbols on a screen pull us away from the classical notions of monumental authors and authority. On the screen, the subjective again trumps the objective. The past is a rush of data streams cut and rearranged into a new mashup, while truth is something you assemble yourself on your own screen as you jump from link to link. We are now in the middle of a second Gutenberg shift — from book fluency to screen fluency, from literacy to visuality.

The overthrow of the book would have happened long ago but for the great user asymmetry inherent in all media. It is easier to read a book than to write one; easier to listen to a song than to compose one; easier to attend a play than to produce one. But movies in particular suffer from this user asymmetry. The intensely collaborative work needed to coddle chemically treated film and paste together its strips into movies meant that it was vastly easier to watch a movie than to make one. A Hollywood blockbuster can take a million person-hours to produce and only two hours to consume. But now, cheap and universal tools of creation (megapixel phone cameras, Photoshop, iMovie) are quickly reducing the effort needed to create moving images.
To the utter bafflement of the experts who confidently claimed that viewers would never rise from their reclining passivity, tens of millions of people have in recent years spent uncountable hours making movies of their own design. Having a ready and reachable audience of potential millions helps, as does the choice of multiple modes in which to create. Because of new consumer gadgets, community training, peer encouragement and fiendishly clever software, the ease of making video now approaches the ease of writing.

This is not how Hollywood makes films, of course. A blockbuster film is a gigantic creature custom-built by hand. Like a Siberian tiger, it demands our attention — but it is also very rare. In 2007, 600 feature films were released in the United States, or about 1,200 hours of moving images. As a percentage of the hundreds of millions of hours of moving images produced annually today, 1,200 hours is tiny. It is a rounding error. We tend to think the tiger represents the animal kingdom, but in truth, a grasshopper is a truer statistical example of an animal. The handcrafted Hollywood film won't go away, but if we want to see the future of motion pictures, we need to study the swarming food chain below — YouTube, indie films, TV serials and insect-scale lip-sync mashups — and not just the tiny apex of tigers. The bottom is where the action is, and where screen literacy originates.

An emerging set of cheap tools is now making it easy to create digital video. There were more than 10 billion views of video on YouTube in September. The most popular videos were watched as many times as any blockbuster movie. Many are mashups of existing video material. Most vernacular video makers start with the tools of Movie Maker or iMovie, or with Web-based video editing software like Jumpcut. They take soundtracks found online, or recorded in their bedrooms, cut and reorder scenes, enter text and then layer in a new story or novel point of view. Remixing commercials is rampant. A typical creation might artfully combine the audio of a Budweiser "Wassup" commercial with visuals from "The Simpsons" (or the Teletubbies or "Lord of the Rings"). Recutting movie trailers allows unknown auteurs to turn a comedy into a horror flick, or vice versa.

Rewriting video can even become a kind of collective sport. Hundreds of thousands of passionate anime fans around the world (meeting online, of course) remix Japanese animated cartoons. They clip the cartoons into tiny pieces, some only a few frames long, then rearrange them with video editing software and give them new soundtracks and music, often with English dialogue. This probably involves far more work than was required to edit the original cartoon but far less work than editing a clip a decade ago. The new videos, called Anime Music Videos, tell completely new stories. The real achievement in this subculture is to win the Iron Editor challenge. Just as in the TV cookoff contest "Iron Chef," the Iron Editor must remix videos in real time in front of an audience while competing with other editors to demonstrate superior visual literacy. The best editors can remix video as fast as you might type.

In fact, the habits of the mashup are borrowed from textual literacy. You cut and paste words on a page. You quote verbatim from an expert. You paraphrase a lovely expression. You add a layer of detail found elsewhere. You borrow the structure from one work to use as your own. You move frames around as if they were phrases.
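The cut, reorder, and re-score loop those fan editors follow maps directly onto everyday command-line tools. Below is a minimal sketch of that workflow using ffmpeg driven from Python; every file name, timestamp, and clip order is a made-up placeholder for illustration, not anything taken from the article.

```python
# A minimal sketch of the clip / reorder / re-soundtrack workflow described
# above, using the ffmpeg command-line tool. File names and timestamps are
# hypothetical placeholders.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# 1. Cut short clips out of a source video (start time, duration in seconds).
clips = [("source.mp4", "00:01:05", "4"), ("source.mp4", "00:07:30", "3")]
for i, (src, start, dur) in enumerate(clips):
    run(["ffmpeg", "-y", "-ss", start, "-t", dur, "-i", src,
         "-c", "copy", f"clip{i}.mp4"])

# 2. Reorder the clips by listing them in a concat file, then join them.
with open("playlist.txt", "w") as f:
    for i in reversed(range(len(clips))):   # the "new order" is just reversed here
        f.write(f"file 'clip{i}.mp4'\n")
run(["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "playlist.txt",
     "-c", "copy", "rough_cut.mp4"])

# 3. Lay a new soundtrack over the picture.
run(["ffmpeg", "-y", "-i", "rough_cut.mp4", "-i", "new_song.mp3",
     "-map", "0:v", "-map", "1:a", "-c:v", "copy", "-shortest", "remix.mp4"])
```

Real anime-music-video editors work in graphical editors, of course; the point is only that the operations Kelly describes are now a handful of commands.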
Digital technology gives the professional a new language as well. An image stored on a memory disc instead of celluloid film has a plasticity that allows it to be manipulated as if the picture were words rather than a photo. Hollywood mavericks like George Lucas have embraced digital technology and pioneered a more fluent way of filmmaking. In his "Star Wars" films, Lucas devised a method of moviemaking that has more in common with the way books and paintings are made than with traditional cinematography.

In classic cinematography, a film is planned out in scenes; the scenes are filmed (usually more than once); and from a surfeit of these captured scenes, a movie is assembled. Sometimes a director must go back for "pickup" shots if the final story cannot be told with the available film. With the new screen fluency enabled by digital technology, however, a movie scene is something more flexible: it is like a writer's paragraph, constantly being revised. Scenes are not captured (as in a photo) but built up incrementally. Layers of visual and audio refinement are added over a crude outline of the motion, the mix constantly in flux, always changeable. George Lucas's last "Star Wars" movie was layered up in this writerly way. He took the action "Jedis clashing swords — no background" and laid it over a synthetic scene of a bustling marketplace, itself blended from many tiny visual parts. Light sabers and other effects were digitally painted in later, layer by layer. In this way, convincing rain, fire and clouds can be added in additional layers with nearly the same kind of freedom with which Lucas might add "it was a dark and stormy night" while writing the script. Not a single frame of the final movie was left untouched by manipulation. In essence, a digital film is written pixel by pixel.

The recent live-action feature movie "Speed Racer," while not a box-office hit, took this style of filmmaking even further. The spectacle of an alternative suburbia was created by borrowing from a database of existing visual items and assembling them into background, midground and foreground. Pink flowers came from one photo source, a bicycle from another archive, a generic house roof from yet another. Computers do the hard work of keeping these pieces, no matter how tiny and partial they are, in correct perspective and alignment, even as they move. The result is a film assembled from a million individual existing images. In most films, these pieces are handmade, but increasingly, as in "Speed Racer," they can be found elsewhere.

In the great hive-mind of image creation, something similar is already happening with still photographs. Every minute, thousands of photographers are uploading their latest photos on the Web site Flickr. The more than three billion photos posted to the site so far cover any subject you can imagine; I have not yet been able to stump the site with a request. Flickr offers more than 200,000 images of the Golden Gate Bridge alone. Every conceivable angle, lighting condition and point of view of the Golden Gate Bridge has been photographed and posted. If you want to use an image of the bridge in your video or movie, there is really no reason to take a new picture of this bridge. It's been done. All you need is a really easy way to find it. Similar advances have taken place with 3D models. On Google SketchUp's 3D Warehouse, you can find insanely detailed three-dimensional virtual models of most major building structures of the world. Need a street in San Francisco?
Here's a filmable virtual set. With powerful search and specification tools, high-resolution clips of any bridge in the world can be circulated into the common visual dictionary for reuse. Out of these ready-made "words," a film can be assembled, mashed up from readily available parts. The rich databases of component images form a new grammar for moving images.

After all, this is how authors work. We dip into a finite set of established words, called a dictionary, and reassemble these found words into articles, novels and poems that no one has ever seen before. The joy is recombining them. Indeed it is a rare author who is forced to invent new words. Even the greatest writers do their magic primarily by rearranging formerly used, commonly shared ones. What we do now with words, we'll soon do with images.

For directors who speak this new cinematographic language, even the most photo-realistic scenes are tweaked, remade and written over frame by frame. Filmmaking is thus liberated from the stranglehold of photography. Gone is the frustrating method of trying to capture reality with one or two takes of expensive film and then creating your fantasy from whatever you get. Here reality, or fantasy, is built up one pixel at a time as an author would build a novel one word at a time. Photography champions the world as it is, whereas this new screen mode, like writing and painting, is engineered to explore the world as it might be.

But merely producing movies with ease is not enough for screen fluency, just as producing books with ease on Gutenberg's press did not fully unleash text. Literacy also required a long list of innovations and techniques that permit ordinary readers and writers to manipulate text in ways that make it useful. For instance, quotation symbols make it simple to indicate where one has borrowed text from another writer. Once you have a large document, you need a table of contents to find your way through it. That requires page numbers. Somebody invented them (in the 13th century). Longer texts require an alphabetic index, devised by the Greeks and later developed for libraries of books. Footnotes, invented in about the 12th century, allow tangential information to be displayed outside the linear argument of the main text. And bibliographic citations (invented in the mid-1500s) enable scholars and skeptics to systematically consult sources. These days, of course, we have hyperlinks, which connect one piece of text to another, and tags, which categorize a selected word or phrase for later sorting.

All these inventions (and more) permit any literate person to cut and paste ideas, annotate them with her own thoughts, link them to related ideas, search through vast libraries of work, browse subjects quickly, resequence texts, refind material, quote experts and sample bits of beloved artists. These tools, more than just reading, are the foundations of literacy. If text literacy meant being able to parse and manipulate texts, then the new screen fluency means being able to parse and manipulate moving images with the same ease. But so far, these "reader" tools of visuality have not made their way to the masses. For example, if I wanted to visually compare the recent spate of bank failures with similar events by referring you to the bank run in the classic movie "It's a Wonderful Life," there is no easy way to point to that scene with precision. (Which of several sequences did I mean, and which part of them?) I can do what I just did and mention the movie title.
But even online I cannot link from this sentence to those "passages" in an online movie. We don't have the equivalent of a hyperlink for film yet. With true screen fluency, I'd be able to cite specific frames of a film, or specific items in a frame. Perhaps I am a historian interested in oriental dress, and I want to refer to a fez worn by someone in the movie "Casablanca." I should be able to refer to the fez itself (and not the head it is on) by linking to its image as it "moves" across many frames, just as I can easily link to a printed reference of the fez in text. Or even better, I'd like to annotate the fez in the film with other film clips of fezzes as references. With full-blown visuality, I should be able to annotate any object, frame or scene in a motion picture with any other object, frame or motion-picture clip. I should be able to search the visual index of a film, or peruse a visual table of contents, or scan a visual abstract of its full length. But how do you do all these things? How can we browse a film the way we browse a book?

It took several hundred years for the consumer tools of text literacy to crystallize after the invention of printing, but the first visual-literacy tools are already emerging in research labs and on the margins of digital culture. Take, for example, the problem of browsing a feature-length movie. One way to scan a movie would be to super-fast-forward through the two hours in a few minutes. Another way would be to digest it into an abbreviated version in the way a theatrical-movie trailer might. Both these methods can compress the time from hours to minutes. But is there a way to reduce the contents of a movie into imagery that could be grasped quickly, as we might see in a table of contents for a book? Academic research has produced a few interesting prototypes of video summaries but nothing that works for entire movies. Some popular Web sites with huge selections of movies (like porn sites) have devised a way for users to scan through the content of full movies quickly in a few seconds. When a user clicks the title frame of a movie, the window skips from one key frame to the next, making a rapid slide show, like a flip book of the movie. The abbreviated slide show visually summarizes a few-hour film in a few seconds. Expert software can be used to identify the key frames in a film in order to maximize the effectiveness of the summary.

The holy grail of visuality is to search the library of all movies the way Google can search the Web. Everyone is waiting for a tool that would allow them to type key terms, say "bicycle + dog," which would retrieve scenes in any film featuring a dog and a bicycle. In an instant you could locate the moment in "The Wizard of Oz" when the witchy Miss Gulch rides off with Toto. Google can instantly pinpoint desirable documents out of billions on the Web because computers can read text, but computers are only starting to learn how to read images.

It is a formidable task, but in the past decade computers have gotten much better at recognizing objects in a picture than most people realize. Researchers have started training computers to recognize a human face. Specialized software can rapidly inspect a photograph's pixels searching for the signature of a face: circular eyeballs within a larger oval, shadows that verify it is spherical. Once an algorithm has identified a face, the computer could do many things with this knowledge: search for the same face elsewhere, find similar-looking faces or substitute a happier version.
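As a concrete illustration of the face-finding step described above, here is a minimal sketch using OpenCV's stock Haar-cascade detector, which scans a photo for characteristic light-and-dark patterns in much the way the article describes. It is a generic example, not the specific research software Kelly refers to, and the image path is a placeholder.

```python
# Minimal face-detection sketch using OpenCV's bundled Haar-cascade model.
# Illustrative only: a generic example of the technique, not any particular
# research system mentioned in the article.
import cv2

# Load the pre-trained frontal-face cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def find_faces(image_path):
    """Return bounding boxes (x, y, w, h) for faces found in an image file."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors trade off speed against false positives.
    return face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if __name__ == "__main__":
    for (x, y, w, h) in find_faces("photo.jpg"):   # "photo.jpg" is a placeholder
        print(f"face at x={x}, y={y}, size {w}x{h}")
```

Once a detector like this returns a box, the follow-on steps Kelly lists (finding the same face elsewhere, or similar faces) become search problems over those cropped regions.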
Of course, the world is more than faces; it is full of a million other things that we'd like to have in our screen vocabulary. Currently, the smartest object-recognition software can detect and categorize a few dozen common visual forms. It can search through Flickr photos and highlight the images that contain a dog, a cat, a bicycle, a bottle, an airplane, etc. It can distinguish between a chair and sofa, and it doesn't identify a bus as a car. But each additional new object to be recognized means the software has to be trained with hundreds of samples of that image. Still, at current rates of improvement, a rudimentary visual search for images is probably only a few years away.

What can be done for one image can also be done for moving images. Viewdle is an experimental Web site that can automatically identify select celebrity faces in video. Hollywood postproduction companies routinely "read" sequences of frames, then "rewrite" their content. Their custom software permits human operators to eradicate wires, backgrounds, unwanted people and even parts of objects as these bits move in time simply by identifying in the first frame the targets to be removed and then letting the machine smartly replicate the operation across many frames.

The collective intelligence of humans can also be used to make a film more accessible. Avid fans dissect popular movies scene by scene. With maniacal attention to detail, movie enthusiasts will extract bits of dialogue, catalog breaks in continuity, tag appearances of actors and track a thousand other traits. To date most fan responses appear in text form, on sites like the Internet Movie Database. But increasingly fans respond to video with video. The Web site Seesmic encourages "video conversations" by enabling users to reply to one video clip with their own video clip. The site organizes the sprawling threads of these visual chats so that they can be read like a paragraph of dialogue.

The sheer number of user-created videos demands screen fluency. The most popular viral videos on the Web can reach millions of downloads. Success garners parodies, mashups or rebuttals — all in video form as well. Some of these offspring videos will earn hundreds of thousands of downloads themselves. And the best parodies spawn more parodies. One site, TimeTube, offers a genealogical view of the most popular videos and their descendants. You can browse a time line of all the videos that refer to an original video on a scale that measures both time and popularity. TimeTube is the visual equivalent of a citation index; instead of tracking which scholarly papers cite other papers, it tracks which videos cite other videos.

All of these small innovations enable a literacy of the screen. As moving images become easier to create, easier to store, easier to annotate and easier to combine into complex narratives, they also become easier to be remanipulated by the audience. This gives images a liquidity similar to words. Fluid images made up of bits flow rapidly onto new screens and can be put to almost any use. Flexible images migrate into new media and seep into the old. Like alphabetic bits, they can be squeezed into links or stretched to fit search engines, indexes and databases. They invite the same satisfying participation in both creation and consumption that the world of text does.

We are people of the screen now. Last year, digital-display manufacturers cranked out four billion new screens, and they expect to produce billions more in the coming years.
That's one new screen each year for every human on earth. With the advent of electronic ink, we will start putting watchable screens on any flat surface. The tools for screen fluency will be built directly into these ubiquitous screens. With our fingers we will drag objects out of films and cast them in our own movies. A click of our phone camera will capture a landscape, then display its history, which we can use to annotate the image. Text, sound, motion will continue to merge into a single intermedia as they flow through the always-on network. With the assistance of screen fluency tools we might even be able to summon up realistic fantasies spontaneously. Standing before a screen, we could create the visual image of a turquoise rose, glistening with dew, poised in a trim ruby vase, as fast as we could write these words. If we were truly screen literate, maybe even faster. And that is just the opening scene.

Kevin Kelly is senior maverick at Wired and the author of "Out of Control" and a coming book on what technology wants.
Stepping down
Posted November 18th, 2008 at 9:09 am by Jerry Yang, CEO & Chief Yahoo

As you've no doubt already read, I've decided that I will step down from my role as Chief Executive Officer after my successor has been selected. Ever since founding Yahoo! with David Filo 13 years ago, I've been passionate about this company, its brand, its employees, and the millions of people around the world who consider it their online home. That's why I accepted the Board's request to become CEO in June 2007, taking on the challenge of transforming Yahoo! at a time when the industry was evolving quickly and we needed to rethink and restructure our business.

And despite the tough external environment that we face, I truly believe we've made tangible progress in bringing our strategic vision to life. Most significantly, we've rewired our entire network to create a Yahoo! that has opened its doors to outside publishers and developers. We've launched an advertising platform that we think will transform how ads are bought and sold online. And we've continued to grow our audience – standing first or second in more than 20 product categories and demonstrating that Yahoo! is the place users turn for major events like the Olympics and the Elections.

And now I believe the time is right for us to bring in a new leader – someone who will build on the important pillars we've put in place and who will take the reins on the critical decisions our company faces. As for me, I'll be returning to my role as Chief Yahoo and board member once my successor is named. I'll go back to focusing on our global strategy, product excellence, technology innovation, and working with the Board and our executive team to help Yahoo! realize its full potential.

It's been an extraordinary year here at Yahoo! – for all of us. I'm really proud of the determination and resilience of Yahoos around the world who are so committed to giving you the best Internet experience possible. It is for them, and for you, that I will always bleed purple.

Jerry Yang
Chief Yahoo and CEO
17:25 12 November 2008 by Colin Barras

Thanks to a new technique, DNA strands can be easily converted into tiny fibre optic cables that guide light along their length. Optical fibres made this way could be important in optical computers, which use light rather than electricity to perform calculations, or in artificial photosynthesis systems that may replace today's solar panels. Both kinds of device need small-scale light-carrying "wires" that pipe photons to where they are needed.

Now Bo Albinsson and his colleagues at Chalmers University of Technology in Gothenburg, Sweden, have worked out how to make them. The wires build themselves from a mixture of DNA and molecules called chromophores that can absorb and pass on light. The result is similar to natural photonic wires found inside organisms like algae, where they are used to transport photons to parts of a cell where their energy can be tapped. In these wires, chromophores are lined up in chains to channel photons.

Light wire

Albinsson's team used a single type of chromophore called YO as their energy mediator. It has a strong affinity for DNA molecules and readily wedges itself between the "rungs" of bases that make up a DNA strand. The result is strands of DNA with YO chromophores along their length, transforming the strands into photonic wires just a few nanometres in diameter and 20 nanometres long. That's the right scale to function as interconnects in microchips, says Albinsson.

To prove this was happening, the team made DNA strands with an "input" molecule on one end to absorb light, and on the other end a molecule that emits light when it receives it from a neighbouring molecule. When the team shone UV light on a collection of the DNA strands after they had been treated with YO, the finished wires transmitted around 30% of the light received by the input molecule along to the emitting molecule.

Physicists have corralled chromophores for their own purposes in the past, but had to use a "tedious" and complex technique that chemically attaches them to a DNA scaffold, says Niek van Hulst, at the Institute of Photonic Sciences in Barcelona, Spain, who was not involved in the work. The Gothenburg group's ready-mix approach gets comparable results, says Albinsson. Because his wires assemble themselves, he says they are better than wires made by the previous chemical method as they can self-repair: if a chromophore is damaged and falls free of the DNA strand, another will readily take its place. It should be possible to transfer information along the strands encoded in pulses of light, he told New Scientist.

Variable results

Philip Tinnefeld at the Ludwig Maximilian University of Munich in Germany says a price has been paid for the added simplicity. Because the wire is self-assembled, he says, it's not clear exactly where the chromophores lie along the DNA strand. They are unlikely to be spread out evenly and the variation between strands could be large. Van Hulst agrees and is investigating whether synthetic molecules made from scratch can be more efficient than modified DNA. But both researchers think that with improvements, "molecular photonics" could have a wide range of applications, from photonic circuitry in molecular computers to light harvesting in artificial photosynthetic systems.

Journal reference: Journal of the American Chemical Society (DOI: 10.1021/ja803407t)
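A rough way to read the reported 30% end-to-end figure: if the absorbed energy has to hop from chromophore to chromophore down the strand, the wire's overall efficiency is roughly the per-hop efficiency compounded over the number of hops. The sketch below is a toy model only; the hop count is an assumed illustrative number, not a figure from the paper.

```python
# Toy back-of-the-envelope model (not the paper's analysis): treat the wire as
# a chain of identical hops, so end-to-end efficiency = per-hop efficiency ** hops.
end_to_end = 0.30   # ~30% transmission reported for the finished wires
hops = 10           # hypothetical number of chromophore-to-chromophore hops

per_hop = end_to_end ** (1 / hops)
print(f"implied per-hop efficiency: {per_hop:.2%}")        # roughly 89%

# The same model shows why longer wires get hard fast: doubling the hop count
# with the same per-hop efficiency squares the overall loss.
print(f"projected efficiency at {2 * hops} hops: {per_hop ** (2 * hops):.2%}")  # ~9%
```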
By Priya Ganapati
November 12, 2008 | 1:02:46 AM
Categories: Chips

AMD is hitting new heights of achievement but that's still not enough to keep it from getting smoked by a much faster rival. The company is set to launch on Thursday its much-awaited 45-nanometer quad-core processor for servers, though the release comes months after its rival Intel put out a comparable product. Codenamed Shanghai, this is AMD's first processor to use the smaller, faster 45-nm technology instead of the older 65-nm technology.

Meanwhile, Intel is planning to release its latest 45-nm chips for the desktop on Monday, codenamed Nehalem and to be known officially as Core i7. AMD says it won't have a comparable desktop chip until next year. In the carefully orchestrated roadmaps used by semiconductor companies, chips intended for use in servers typically precede desktop and notebook processors by several months.

"I think of Shanghai as the last dance of the company," says Patrick Wang, an analyst with brokerage and research firm Wedbush Morgan. "Shanghai is significant because AMD needs it to get back into the game."

For now, all eyes are on the launch of the Shanghai chips from AMD. The chips mark AMD's debut in the 45-nanometer process technology and are seen as a bid to move forward after the disastrous performance of its previous Barcelona chips, which were 65-nm quad-core processors. Barcelona was widely faulted for its technical glitches that led to multiple delays in its launch and its high pricing. The combination, some say, helped Intel gain market share at AMD's expense.

The latest 45-nm quad-core Opteron processor will have increased power efficiency, fit easily into the same socket as Barcelona (allowing for "non-disruptive" upgrades) and is priced competitively, says Brent Kerby, senior product marketing manager for servers and workstations at AMD. "Shanghai is looking really good and we delivered it three months ahead of our planned schedule," he says.

AMD's new chip seems impressive, say analysts, and would be groundbreaking except for the fact that Intel has had similar chips in the market for months. Intel's quad-core 45-nm server processor, called Harpertown (and officially known as Xeon), was available around the same time as AMD launched its 65-nm processor Barcelona. "Barcelona was completely botched in terms of execution and was a failure on many fronts -- technology, pricing and market share," says Wang. "The reason that AMD is in such a dire financial situation is because of Barcelona."

Now Shanghai, AMD hopes, will change all that. "With Barcelona we had a completely new redesign," says AMD's Kerby. "We have taken on the learnings and capabilities from Barcelona and improved on it for Shanghai."

AMD is also at least six months behind Intel when it comes to six-core processors, says PC analyst Shane Rau with research firm IDC. AMD plans to introduce a six-core processor called Istanbul in mid-2009, but Intel has already had its six-core chip, called Dunnington, available for the last few weeks. Still, AMD has some breathing space there. Just about 5% of Intel's shipments in the third quarter were Dunnington, giving AMD some time to catch up.

Shanghai may have helped AMD move closer to Intel in terms of comparable technology for server processors. But on the desktop side, the company still has an uphill climb. AMD's 45-nm desktop chip, codenamed Deneb, is likely to launch early next year.
That means Intel's Core i7 processors will have a comfortable lead over its rival. "Intel's going to be the only game in town for a while for the latest in desktop processors," says Wang. With AMD and Intel locked in yet another fierce battle, here's a breakdown of how the two companies' latest releases stack up.

AMD Learns Its Lessons With Shanghai

AMD says it did the "heavy lifting" for Barcelona and has since streamlined its processes to put out a next-generation processor faster. Its latest 45-nm quad-core processors offer significantly higher CPU clock frequencies with the same power consumption as earlier generations. "What these specs mean is it will be a higher performing processor and offer better price performance per watt," says Rau.

Shanghai's compatibility with sockets designed for Barcelona means OEMs can buy it and drop it into their existing designs for servers and motherboards. That helps reduce costs for them and makes it easier to upgrade, says Rau. The chips also increase the size of the Level 3 cache by 200%, to 6 MB, which helps speed memory-intensive applications like virtualization, databases and Java apps, says AMD. The processors also draw up to 35% less power at idle compared to the previous generation while delivering up to 35% more performance, says the company. "AMD is going to be successful in applications that are memory and floating point intensive, which means in databases and scientific applications," says Wang.

Intel Races Ahead to Desktops

AT A GLANCE: Intel Core i7
- Faster processor: almost four to six times faster than Intel's current platform.
- Greater power efficiency: allows the processor to switch off power to an idle or unused core.
- Integrated memory controller: increases bandwidth directly available to the processor, reducing lag time before a CPU can begin executing the next instruction.
- Simultaneous multi-threading: used in some Pentium and Xeon processors, it makes a comeback, allowing double the number of threads to be run simultaneously by each processor and boosting performance.

The first three Core i7 chips will be quad-core, with clock speeds of 2.66GHz, 2.93GHz and 3.20GHz, and an integrated memory controller. Codenamed Bloomfield and officially named Core i7, Intel's 45-nm desktop processors are targeted largely at gaming PCs, but Intel plans to have versions ready for business users in the next few weeks.

The 65-nm vs. 45-nm difference is important because on a macro level it is one of the factors that affects pricing, say analysts. "When Intel can manufacture in 45-nm earlier than AMD it can possibly have a cost advantage, which can be passed on to users," says Rau. "A 65-nm die is more expensive to cast than a 45-nm one." For Intel, that means more than just being a generation ahead of AMD: it means that Intel will be enjoying fatter margins while AMD is still struggling to catch up. In the end, that could translate into enough market share to cripple AMD for good.
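Rau's point about the 65-nm versus 45-nm cost gap comes down to simple geometry: to a first approximation, the same design shrunk to a smaller node occupies an area that scales with the square of the feature-size ratio, so more dies fit on a wafer of roughly the same cost. The sketch below is an idealized back-of-the-envelope illustration only; it ignores yield, added transistors, and wafer pricing.

```python
# Idealized die-shrink arithmetic: area scales roughly with the square of the
# feature-size ratio. Illustrative numbers only, not actual AMD or Intel data.
old_node_nm = 65
new_node_nm = 45

area_ratio = (new_node_nm / old_node_nm) ** 2   # shrunk die area vs. original
dies_per_wafer_gain = 1 / area_ratio            # more candidate dies per wafer

print(f"shrunk die area: {area_ratio:.0%} of the 65-nm die")       # ~48%
print(f"candidate dies per wafer: ~{dies_per_wafer_gain:.1f}x")    # ~2.1x
```

Roughly twice the dies per wafer is the "fatter margins" the article describes, before yield and design growth eat into it.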
One of the first things that caught my attention when pictures of the upcoming Samsung NC10 netbook began to surface was its keyboard. While many netbooks attempt to save space by consolidating some keys and shrinking others, the NC10's 84-key keyboard doesn't seem to make many compromises. Unlike the Asus Eee PC, the NC10 positions the right shift key above the arrow keys, not to the right of them, making it easy for touch-typists to find. And by dropping the arrow keys a bit below the rest of the keyboard, there's even room for dedicated page up and down buttons. Many other netbooks require you to hold down the Fn key while hitting the arrow buttons to access the page up and down features.

French site Blogeee scored some new high-resolution images of the Samsung NC10, and I have to say it's looking good. Blogeee also uncovered a few new details about the netbook, like the fact that the computer will support an external display at resolutions up to 2048 x 1536 at 85Hz, and that it has HD audio and a 1.3MP camera. We've also now got some dimensions to go with the Samsung NC10: 261mm x 185mm x 30mm, or about 10.3″ x 7.3″ x 1.2″. The netbook weighs 1.33kg, or about 2.9 pounds.
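The metric and imperial figures quoted for the NC10 are straightforward conversions; here is a quick sketch verifying them (the constants are standard conversion factors, and the rounding matches the post).

```python
# Verify the NC10 spec conversions quoted above.
MM_PER_INCH = 25.4
LB_PER_KG = 2.20462

dims_mm = (261, 185, 30)                                  # width, depth, height
dims_in = tuple(round(d / MM_PER_INCH, 1) for d in dims_mm)
print(dims_in)                                            # (10.3, 7.3, 1.2) inches

print(round(1.33 * LB_PER_KG, 1))                         # 2.9 pounds
```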
By Justin Berka | Published: October 07, 2008 - 09:17AM CT

The Pandora personalized radio streaming service was already a popular destination before Apple's App Store launched over the summer, but the launch of the Pandora iPhone application has apparently caused the site to really take off. Founder Tim Westergren spoke about the iPhone's effect on Pandora and a variety of other topics during his keynote at the Digital Music Forum West conference and, according to paidContent, revealed that the device has caused a substantial increase in the number of new Pandora users.

Sure, Pandora may still be stuck between a rock and a hard place if it can't reach a webcasting royalty agreement and gets stuck with a large bill for streaming radio to so many listeners, but the fact that the service is growing rapidly is a good sign for the company. In the case of the iPhone, the Pandora application's availability on the App Store has doubled the number of new users that Pandora is getting every day, from 20,000 up to 40,000. In terms of the total number of iPhone users, Silicon Alley Insider says that there are just shy of 1.5 million iPhone owners using Pandora, representing a bit less than ten percent of the total number of listeners.

Westergren also spoke a bit about iTunes, saying that he listens to music via iTunes more than he listens to Pandora. He used this to illustrate his point that people won't switch to Pandora and completely stop buying music, since many people don't want to have to fool with Pandora playlists and the like just to listen to some tunes. In fact, the iTunes Store may even benefit from things like the iPhone Pandora client, since the software exposes people to new music that they may later buy from Apple, so it seems to be a mutually beneficial relationship for the two companies.