
Monday, 7 January 2013

Internet emits CO2 - will you believe it?




Clicking on all those viral videos, chain emails, celebrity tweets and paparazzi photos online adds up to enough energy to rank the Internet, if it were a country, fifth in the world for electricity use.

That’s more power than Russia uses, according to a new report about cloud computing from Greenpeace.

Computer servers in data centers account for about 2% of global energy demand, growing about 12% a year, according to the group.  The servers, Greenpeace said, can suck up as much power as 50,000 average U.S. homes.

But most of what powers the cloud comes from coal and nuclear energy rather than renewable sources such as wind and solar, according to Greenpeace. Clusters of data centers are emerging in places like the Midwest, where coal-powered electricity is cheap and plentiful, the group said.

In its report, the organization zeroed in on 10 major tech companies, including Apple, Twitter and Amazon. Recently, the group has waged a feisty fight against Facebook, which relies on coal for 53.2% of its electricity, according to Greenpeace.

Many companies, the organization said, tightly guard data about the environmental impact and energy consumption of their IT operations. They also focus more on using energy efficiently than on sourcing it cleanly, Greenpeace said.

Yahoo landed bonus points for siting facilities near clean energy hot spots and using coal-based power for just 18.3% of its portfolio. Google got love for its extensive support of wind and solar projects and for creating a subsidiary, Google Energy, that can buy electricity directly from independent renewable power producers.

In 2005, the U.S. had 10.3 million data centers gobbling up enough energy to power all of Britain for two months, according to Internet marketing company WordStream.

Each month, electricity used to power searches on Google produces 260,000 kilograms of carbon dioxide and is enough to power a freezer for 5,400 years, according to WordStream. The searches use up 3.9 million kilowatt-hours -– the equivalent of 5 million loads of laundry.

A single spam email of the 62 trillion sent each year creates 0.3 grams of carbon dioxide. A Google search for “Soylent Green” spawns the same amount as driving a car three inches.
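
Those equivalences are easy to sanity-check with a little arithmetic. The snippet below (a purely illustrative Python sketch) just divides and multiplies the WordStream figures quoted above back out; no new data is introduced.

```python
# Back-of-envelope checks on the figures quoted above. All inputs are the
# article's own numbers; only the arithmetic is added here.

monthly_search_kwh = 3.9e6      # kWh used by Google searches per month
laundry_loads      = 5e6        # equivalent loads of laundry
freezer_years      = 5_400      # years a freezer could run on that energy

spam_per_year      = 62e12      # spam emails sent each year
co2_per_spam_g     = 0.3        # grams of CO2 per spam email

kwh_per_load         = monthly_search_kwh / laundry_loads    # ~0.78 kWh per load
kwh_per_freezer_year = monthly_search_kwh / freezer_years    # ~722 kWh per year
spam_co2_tonnes      = spam_per_year * co2_per_spam_g / 1e6  # grams -> tonnes

print(f"Implied energy per laundry load: {kwh_per_load:.2f} kWh")
print(f"Implied freezer use per year:    {kwh_per_freezer_year:.0f} kWh")
print(f"Total CO2 from spam each year:   {spam_co2_tonnes:,.0f} tonnes")
```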
The Internet and other components of the information and communications technology (ICT) industry produce more than 830 million tonnes of carbon dioxide (CO2), the main greenhouse gas, every year, and that figure is expected to double by 2020, a new study has found.

Researchers from the Centre for Energy-Efficient Telecommunications (CEET) and Bell Labs explain that the ICT industry, which delivers Internet, video, voice and other cloud services, produces about 2% of global CO2 emissions, the same proportion as the aviation industry.

In the report, published in the journal Environmental Science & Technology, the researchers said their projections suggest the ICT sector's share of greenhouse gas emissions will double by 2020.

They have also developed new models of emissions and energy consumption that could help the industry reduce its carbon footprint.

The study said that controlling those emissions requires more accurate but still feasible models that take into account data traffic, energy use and CO2 production in networks and other elements of the ICT industry.

Existing assessment models are inaccurate, so the researchers set out to develop new approaches that better account for variations in equipment and other factors in the ICT industry.

They describe the development and testing of two new models that better estimate the energy consumption and CO2 emissions of Internet and telecommunications services.

Based on their models, the researchers suggest that three keys to reducing ICT emissions of CO2 are more efficient power use in facilities, wider deployment of energy-efficient equipment, and greater reliance on renewable energy sources.
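
The paper's own models aren't reproduced here, but the kind of traffic-based accounting it calls for can be illustrated with a rough sketch. Every numeric constant below is a placeholder assumption chosen for the example, not a figure from the CEET/Bell Labs study.

```python
# Illustrative sketch of a traffic-based ICT emissions estimate.
# All numeric constants are placeholder assumptions for demonstration,
# not values from the study discussed above.

# Assumed energy intensity of each network segment, in kWh per GB carried.
ENERGY_PER_GB_KWH = {
    "access_network": 0.10,   # assumption
    "metro_and_core": 0.05,   # assumption
    "data_centre":    0.15,   # assumption
}

GRID_CO2_KG_PER_KWH = 0.5     # assumed average grid carbon intensity


def annual_emissions_tonnes(annual_traffic_gb: float) -> float:
    """Estimate annual CO2 emissions (tonnes) for a given traffic volume."""
    kwh_per_gb = sum(ENERGY_PER_GB_KWH.values())
    total_kwh = annual_traffic_gb * kwh_per_gb
    return total_kwh * GRID_CO2_KG_PER_KWH / 1000.0  # kg -> tonnes


if __name__ == "__main__":
    # Example: 100 million subscribers each moving 50 GB a month.
    traffic = 100e6 * 50 * 12
    print(f"Estimated emissions: {annual_emissions_tonnes(traffic):,.0f} tonnes CO2/year")
```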



FINGERPRINT GUN - A SOLUTION FOR GUN MISUSE









  • The illegal use of guns has become one of our major problems these days and threatens public security worldwide. The Fingerprint Gun integrates a user-friendly fingerprint identification system with an embedded memory chip to ensure the user is its legal owner.
  • The memory chip is located at the upper end of the gun barrel. The indicator on the gun’s front sight attachment turns green when the fingerprint is verified as correct, allowing the user to shoot.
  • A wrong fingerprint turns the indicator red and the gun can’t be used. The Fingerprint Gun has been designed to control the use of private guns for better public security.


HOW IT WORKS:


Handguns are a danger in any industrialized country, and despite active gun registries and police crackdowns, their use only seems to be increasing. Now, designer Sun Xiaotian has come up with a novel way of protecting the peace and the "piece" - the Fingerprint Gun.

    
1. The idea here is that a safety-conscious citizen would buy and register the Fingerprint Gun and have their prints encoded in the memory chip.

2. If they wanted to shoot something, they would need to place their thumb on top of the fingerprint scanner at the back of the gun to have it read.

3. Once a correct match had been confirmed, a small indicator light would turn green at the barrel of the gun.

4. If a finger not belonging to the owner was scanned, or if no fingerprint was present, the gun simply wouldn't work; a rough sketch of this check appears below.
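
No firmware for the concept has been published, so the sketch below simply translates the four steps above into code. The class, method and field names are hypothetical, and an exact comparison stands in for real biometric matching.

```python
# Hypothetical sketch of the Fingerprint Gun's unlock logic, based only on the
# description above. Names and the matching routine are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class FingerprintGun:
    enrolled_print: bytes       # owner's print, stored in the embedded memory chip
    indicator: str = "off"      # "green" = firing enabled, "red" = locked
    unlocked: bool = False

    def scan_and_verify(self, scanned_print: Optional[bytes]) -> bool:
        """Compare a scanned print with the enrolled one and set the indicator."""
        if scanned_print is not None and self._matches(scanned_print):
            self.indicator = "green"
            self.unlocked = True
        else:
            self.indicator = "red"
            self.unlocked = False
        return self.unlocked

    def _matches(self, scanned_print: bytes) -> bool:
        # Real hardware would use a fuzzy biometric match; an exact comparison
        # stands in for that here.
        return scanned_print == self.enrolled_print

    def trigger_pulled(self) -> str:
        return "fired" if self.unlocked else "blocked"


# Usage: the owner's print unlocks the gun; an unknown print keeps it blocked.
gun = FingerprintGun(enrolled_print=b"owner-print")
gun.scan_and_verify(b"owner-print")     # indicator turns green
print(gun.trigger_pulled())             # fired
gun.scan_and_verify(b"someone-else")    # indicator turns red
print(gun.trigger_pulled())             # blocked
```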

Internet - 30 years of a glorious journey



The start of 2013 marks a significant milestone for the internet: it was 30 years ago that Arpanet, the precursor of the modern-day internet, was switched to running on the TCP/IP protocol stack, a move that paved the way for the global internet as we know it today.

The deadline was 1 January 1983: after this, any of the Advanced Research Projects Agency Network's (ARPANET) 400 hosts that were still clinging to the existing host-to-host Network Control Protocol (NCP) were to be cut off.

The move to TCP/IP had been planned and co-ordinated with the community in the years before 1983. More than 15 government and university institutions, from NASA Ames to Harvard University, used NCP on ARPANET.

With so many users, though, there was plenty of disagreement. The deadline was ultimately set because not everybody using ARPANET was convinced of the need for wholesale change.

TCP/IP was the co-creation of Vint Cerf and Robert Kahn, who published their paper, "A Protocol for Packet Network Intercommunication", in 1974.

ARPANET was the wide-area network sponsored by the US Defense Advanced Research Projects Agency (DARPA) that went live in 1969; Cerf had been an ARPANET scientist at Stanford University. The military had become interested in a common protocol as different networks and systems using different protocols began to hook up to ARPANET and found they couldn’t easily talk to each other.

Cerf, who today is vice-president and "chief internet evangelist" at Google, announced the 30th anniversary of the TCP/IP switchover in an official Google blog post titled "Marking the birth of the modern-day Internet".

The 1983 deadline’s passing was anticlimactic, Cerf recalls, considering how important TCP/IP became as an enabler for the internet. Cerf writes:

When the day came, it’s fair to say the main emotion was relief, especially amongst those system administrators racing against the clock. There were no grand celebrations—I can’t even find a photograph. The only visible mementos were the “I survived the TCP/IP switchover” pins proudly worn by those who went through the ordeal!
Yet, with hindsight, it’s obvious it was a momentous occasion. On that day, the operational Internet was born. TCP/IP went on to be embraced as an international standard, and now underpins the entire Internet.

It was a significant moment, and without TCP/IP we wouldn’t have the internet as we know it.

But that wasn’t the end of the story, and three years later TCP/IP was in trouble as it suffered from severe congestion to the point of collapse.

TCP/IP had been adopted by the US military in 1980 following successful tests across three separate networks, and when it went live ARPANET was managing 400 nodes.

After the January 1983 switchover, though, so many computer users were starting to connect to ARPANET - and across ARPANET to other networks - that traffic had started to hit bottlenecks. By 1986 there were 28,000 nodes chattering across ARPANET, causing congestion with speeds dropping from 32Kbps to 40bps across relatively small distances.

It fell to TCP/IP contributor Van Jacobson, who’d spotted the slowdown between his lab at Lawrence Berkeley National Laboratory and the University of California at Berkeley (just 400 yards and two IMP hops apart), to save TCP/IP and the operational internet.

Jacobson devised a congestion-avoidance algorithm to lower a computer's network data transfer speed and settle on a stable but slower connection rather than blindly flooding the network with packets.

The algorithm allowed TCP/IP systems to process lots of requests in a more conservative fashion. The fix was first applied as a client-side patch to PCs by sysadmins and then incorporated into the TCP/IP stack. Jacobson went on to author the Congestion Avoidance and Control paper (SIGCOMM 88), while the internet marched on to about one billion nodes.
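
Jacobson's fix lives in real TCP stacks, but the additive-increase/multiplicative-decrease behaviour at its heart can be illustrated with a toy simulation. The window sizes and loss pattern below are invented for the example, not taken from his paper.

```python
# Minimal sketch of the additive-increase / multiplicative-decrease (AIMD)
# idea behind congestion avoidance. Values here are invented for illustration.

def simulate_aimd(rounds, loss_rounds, initial_cwnd=1.0):
    """Return the congestion window after each round trip."""
    cwnd = initial_cwnd
    history = []
    for rtt in range(rounds):
        if rtt in loss_rounds:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on packet loss
        else:
            cwnd += 1.0                 # additive increase while the path is clear
        history.append(cwnd)
    return history


# The sender backs off sharply when the network signals congestion (rounds 5
# and 9) instead of blindly flooding it with packets, then probes upward again.
print(simulate_aimd(rounds=12, loss_rounds={5, 9}))
```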

And even this is not the end of the story. Years later, in an interview with The Reg, Jacobson reckoned TCP/IP faces another crisis - and, again, it's scalability.