Sunday, January 6, 2008
Personal Air Vehicle
The term Personal Air Vehicle (PAV) is now widely used by the U.S. aviation community to describe a class of light general aviation aircraft that meet a specialized set of design and performance goals. NASA refined the definition of a PAV in 2005 when describing its fifth Centennial Challenge initiative. PAVs are an emerging field of technology exploration. The fundamental premise of this frontier technology is to make the capability of flight convenient for an individual, with a reduction in the specialized skills required to operate an aircraft. The final goal is a practical “highway in the sky” scenario in which an individual can fly from point to point with the ease of driving an automobile. Gridlocked highways increasingly burden our society. Currently, the doorstep-to-doorstep average speed for cars is 35 mph. In the greater Los Angeles area, this speed is predicted to degrade to just 22 mph by 2020. The U.S. Department of Transportation (DOT) states that 6.7 billion gallons of gasoline are wasted in traffic jams each year, more than 20 times the gasoline consumed by today's entire general aviation fleet.
A future system of travel by PAVs expressly avoids air traffic jams and can substantially help to relieve those on our highways. A pure Synthetic Vision System infrastructure does not currently exist for general aviation aircraft. "Glass cockpit" implementations are now being adopted by general aviation manufacturers such as Cirrus Aircraft, Piper, Cessna, and Beechcraft.
The Federal Aviation Administration's (FAA) current infrastructure is not capable of handling the sizable increase in air traffic that would be generated by PAVs. The FAA is planning the Next Generation Air Transportation System, targeted for 2025, to expand and completely transform the current, aged system (see FAA NGATS). Modeling by NASA and others has shown that PAVs using new, smaller community airports would reduce traffic into the larger airports serving the commercial fleet.
Of the two methods proposed for providing “door-to-door” capabilities, only the roadable option can be achieved utilizing existing airport facilities and ordinary roads. Currently, the only vehicles able to legally take off and land from a residential street are life-flight helicopters, via special permission granted by the FAA on a case-by-case basis. In order to meet the goals set by NASA, thousands of small residential airports would need to be built.
Community noise generated by aircraft is a serious consideration for residential PAV take-off and landing operations. Without lower noise levels enabling residential landing capabilities, any PAV must still take off and land at an FAA-controlled airport or private airfield, where the higher sound levels of operating aircraft have been approved.
Flying car/roadable aircraft
A flying car or roadable aircraft is an automobile that can legally travel on roads and can also take off, fly, and land as an aircraft. In practice, the vehicle is usually a conventional aeroplane adapted to meet road-worthiness requirements.
In science fiction, the vision of a flying car is usually a practical aircraft that the average person can fly directly from any point to another (e.g. from home to work or to the supermarket) without the need for roads, runways or other specially prepared operating areas, often taking off and landing in a garage or on a parking lot. In addition, the science-fiction version of the flying car typically resembles a conventional car with no visible means of propulsion, rather than an aeroplane. For more information on the science-fiction stereotype, see hovercar. Today, there is an active movement in the search for a practical flying car. Several conventions are held yearly to discuss and review current flying car projects. Two notable events are the Flying Car forum held at the world-famous EAA AirVenture at Oshkosh, Wisconsin, and the Society of Automotive Engineers (SAE) conventions held in various cities.
Flying cars fall into one of two styles: integrated (all the flight components can be carried in the vehicle) or modular (the flight components are left at the airport when the vehicle is driven).
Thursday, January 3, 2008
Portland cement
Portland cement is the most common type of cement in general use in many parts of the world, as it is a fundamental ingredient of concrete, mortar, stucco and most non-specialty grout. It is a fine powder made by grinding Portland cement clinker (more than 90%), a limited amount of calcium sulfate (which controls the set time), and up to 5% minor constituents (as allowed by various standards). As defined by the European Standard EN 197-1, "Portland cement clinker is a hydraulic material which shall consist of at least two-thirds by mass of calcium silicates (3CaO.SiO2 and 2CaO.SiO2), the remainder consisting of aluminium- and iron-containing clinker phases and other compounds. The ratio of CaO to SiO2 shall not be less than 2.0. The magnesium content (MgO) shall not exceed 5.0% by mass." The clinker is formed by heating the raw materials to a sintering temperature, which is about 1450 °C for modern cements. The aluminium oxide and iron oxide are present as a flux and contribute little to the strength. For special cements, such as Low Heat (LH) and Sulfate Resistant (SR) types, it is necessary to limit the amount of tricalcium aluminate (3CaO.Al2O3) formed. The major raw material for clinker-making is usually limestone (CaCO3); normally an impure limestone containing SiO2 is used, and the CaCO3 content of these limestones can be as low as 80%. Secondary raw materials (materials in the rawmix other than limestone) depend on the purity of the limestone; some of those used are clay, shale, sand, iron ore, bauxite, fly ash and slag. When a cement kiln is fired by coal, the ash of the coal acts as a secondary raw material.

Portland cement was developed from cements (or, more correctly, hydraulic limes) made in Britain in the early part of the nineteenth century, and its name is derived from its similarity to Portland stone, a type of building stone quarried on the Isle of Portland in Dorset, England. Joseph Aspdin, a British bricklayer, was granted a patent in 1824 for a process of making a cement which he called Portland cement. His cement was an artificial hydraulic lime, similar in properties to the material known as "Roman cement" (patented in 1796 by James Parker), and his process was similar to one patented in 1822 and used since 1811 by James Frost, who called his cement "British cement". The name "Portland cement" is also recorded in a directory published in 1823, associated with a William Lockwood and possibly others. Aspdin's son William made an improved version of this cement in 1843, which he initially called "Patent Portland cement" although he had no patent. In 1848 William Aspdin further improved his cement, and in 1853 he moved to Germany, where he was involved in cement making. Many people have claimed to have made the first Portland cement in the modern sense, but it is generally accepted that it was first manufactured by William Aspdin at Northfleet, England, in about 1842. The German Government issued a standard on Portland cement in 1878.

The most common use for Portland cement is in the production of concrete. Concrete is a composite material consisting of aggregate (gravel and sand), cement, and water. As a construction material, concrete can be cast in almost any shape desired and, once hardened, can become a structural (load-bearing) element. Users may be involved in the factory production of pre-cast units, such as panels, beams and road furniture, or may place cast-in-situ concrete such as building superstructures, roads and dams.
These may be supplied with concrete mixed on site, or may be provided with "ready-mixed" concrete made at permanent mixing sites. Portland cement is also used in mortars (with sand and water only) for plasters and screeds, and in grouts (cement/water mixes squeezed into gaps to consolidate foundations, road-beds, etc).
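To make the EN 197-1 limits quoted above easier to apply, here is a minimal sketch (in Python, with made-up oxide percentages) that checks a hypothetical clinker analysis against those limits: at least two-thirds calcium silicates by mass, a CaO/SiO2 ratio of at least 2.0, and no more than 5.0% MgO.

```python
# Hypothetical clinker analysis (percent by mass) -- illustrative values only.
analysis = {
    "C3S": 55.0,   # 3CaO.SiO2 (alite)
    "C2S": 20.0,   # 2CaO.SiO2 (belite)
    "CaO": 65.0,   # total calcium oxide
    "SiO2": 21.0,  # total silica
    "MgO": 2.5,    # magnesia
}

def meets_en197_1(a):
    """Check the three EN 197-1 clinker criteria quoted in the text."""
    silicates_ok = (a["C3S"] + a["C2S"]) >= 2.0 / 3.0 * 100.0  # at least two-thirds calcium silicates
    ratio_ok = (a["CaO"] / a["SiO2"]) >= 2.0                   # CaO/SiO2 not less than 2.0
    mgo_ok = a["MgO"] <= 5.0                                   # MgO not more than 5.0 %
    return silicates_ok and ratio_ok and mgo_ok

print(meets_en197_1(analysis))  # True for these illustrative numbers
```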
Tidal energy
Let us thank the Moon for its kind gestures. Tidal energy is the one and only form of energy whose source is the Moon. Some other energy sources, such as nuclear power and geothermal energy, have the Earth as their source. The remainder, fossil fuels, wind energy, biofuels, solar energy and so on, have the Sun as their source, directly or indirectly. Tidal power is produced by the gravitational pull of the Moon on water; these gravitational forces cause the water level to follow a periodic cycle of high and low tides. The height of the tide at a given location is the result of the changing positions of the Moon and Sun relative to the Earth, coupled with the effects of the Earth's rotation and the local shape of the sea floor. A tidal energy generator uses this phenomenon to generate electricity; the higher the tide, the more promising it is to harness tidal energy. Tidal power, sometimes called tidal energy, is a form of hydropower that exploits the movement of water caused by tidal currents or the rise and fall in sea levels due to the tides. Although not yet widely used, tidal power has potential for future electricity generation and is more predictable than wind energy and solar power. In Europe, tide mills have been used for over a thousand years, mainly for grinding grain.
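As a rough illustration of why tidal height matters so much, the energy stored behind a tidal barrage is often estimated as E = ½ρgAh², where A is the basin area, h the tidal range, ρ the density of seawater and g gravitational acceleration. The sketch below (Python, with an invented basin size) shows how the yield grows with the square of the tide height; it is a back-of-the-envelope model, not a design calculation.

```python
# Back-of-the-envelope tidal barrage energy estimate: E = 0.5 * rho * g * A * h^2
RHO = 1025.0   # density of seawater, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def barrage_energy_joules(area_m2, tidal_range_m):
    """Potential energy released by emptying a basin of the given area over one tide."""
    return 0.5 * RHO * G * area_m2 * tidal_range_m ** 2

area = 10e6  # hypothetical 10 km^2 basin
for h in (2.0, 4.0, 8.0):  # doubling the tidal range quadruples the energy
    e_mwh = barrage_energy_joules(area, h) / 3.6e9  # joules -> megawatt-hours
    print(f"tidal range {h} m -> about {e_mwh:,.0f} MWh per tide")
```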
Wind power
Wind power is the conversion of wind energy into electricity using wind turbines; in windmills, wind energy is employed directly to grind grain or pump water. In 2006, worldwide capacity of wind-powered generators was 73.9 gigawatts. Although wind currently produces just over 1% of worldwide electricity use, it accounts for approximately 20% of electricity production in Denmark, 9% in Spain, and 7% in Germany. Globally, wind power generation more than quadrupled between 2000 and 2006.

Wind power is produced in large-scale wind farms connected to electrical grids, and by individual turbines providing electricity to remote, isolated locations. Wind energy is plentiful, renewable, widely distributed and clean, and it reduces greenhouse gas emissions when it displaces fossil-fuel-derived electricity. The intermittency of wind seldom creates insurmountable problems when wind power supplies up to roughly 10% of total electrical demand (low to moderate penetration), but it presents challenges that are not yet fully solved when wind is to supply a larger fraction of demand. There are many thousands of wind turbines operating, with a total capacity of 73,904 MW, of which wind power in Europe accounts for 65% (2006). The average output of one megawatt of wind power is equivalent to the average electricity consumption of about 250 American households. Wind power was the most rapidly growing means of alternative electricity generation at the turn of the 21st century, and world wind generation capacity more than quadrupled between 2000 and 2006. In some countries (Spain and Denmark) wind supplies 10% or more of the nation's electricity. 81% of wind power installations are in the US and Europe, but the share of the top five countries in terms of new installations fell from 71% in 2004 to 55% in 2005.

By 2010, the World Wind Energy Association expects 160 GW of capacity to be installed worldwide, up from 73.9 GW at the end of 2006, implying an anticipated net growth rate of more than 21% per year. India ranks 4th in the world, with a total wind power capacity of 6,270 MW in 2006, or 3% of all electricity produced in India. The World Wind Energy Conference in New Delhi in November 2006 gave additional impetus to the Indian wind industry. The wind farm near Muppandal, Tamil Nadu, India, provides an impoverished village with energy for work. India-based Suzlon Energy is one of the world's largest wind turbine manufacturers.
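The "more than 21% per year" figure follows directly from the capacity numbers quoted: growing from 73.9 GW at the end of 2006 to 160 GW by 2010 implies a compound annual growth rate of roughly (160/73.9)^(1/4) - 1. A quick check, with the household equivalence figure thrown in for comparison:

```python
# Sanity-check the growth-rate and household figures quoted in the article.
start_gw, end_gw, years = 73.9, 160.0, 4  # end of 2006 -> 2010

cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"implied compound annual growth rate: {cagr:.1%}")  # ~21.3%, i.e. "more than 21% per year"

# Article's rule of thumb: 1 MW of wind capacity ~ average consumption of ~250 US households.
households_per_mw = 250
print(f"73,904 MW ~ {73_904 * households_per_mw:,} households")  # roughly 18.5 million households
```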
Friday, December 28, 2007
Computed tomography
Computed tomography (CT) was originally known as the "EMI scan", as it was developed as an exclusive research branch of EMI, a company best known at present for its music and recording business. It is a medical imaging method utilizing tomography, in which digital geometry processing is employed to produce a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images taken around a single axis of rotation. The word "tomography" is derived from the Greek tomos (slice) and graphein (to write). CT produces a volume of data which can be manipulated, through a step known as windowing, in order to demonstrate different structures based on their ability to block the X-ray beam. Although historically the images created were in the axial or transverse plane, modern scanners allow this volume of data to be reformatted in various planes or even as volumetric (3D) representations of structures.

CT scanning has become commonplace in healthcare and is also used in other fields, for example nondestructive materials testing. Another example is the DigiMorph project at the University of Texas at Austin, which uses a CT scanner to study biological and paleontological specimens. Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement X-rays and medical ultrasonography, although it remains relatively expensive; today it is the gold standard in the diagnosis of many diseases. Recently, CT has also been used for preventive medicine or screening, for example CT colonography for patients with a high risk of colon cancer. Although a number of institutions now offer full-body scans for the general population, this practice remains controversial due to its lack of proven benefit, cost, radiation exposure, and the risk of finding 'incidental' abnormalities that may trigger additional investigations.
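The "windowing" step mentioned above maps raw attenuation values (Hounsfield units) onto a narrow grayscale range chosen for the tissue of interest. A minimal sketch of that mapping, assuming NumPy and made-up window settings, might look like this:

```python
import numpy as np

def apply_window(hu_values, center, width):
    """Map Hounsfield units to 0-255 grayscale for the chosen window (center/width)."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(hu_values, lo, hi)               # everything outside the window saturates
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

slice_hu = np.array([-1000, -200, 0, 40, 80, 400, 1000])  # toy values: air .. soft tissue .. bone
print(apply_window(slice_hu, center=40, width=400))        # a typical soft-tissue window
print(apply_window(slice_hu, center=300, width=1500))      # a wider window for bone
```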
Artificial pacemaker
An artificial pacemaker is a sophisticated medical device which employs electrical impulses, delivered by electrodes contacting the heart muscle, to regulate the beating of the heart. The primary purpose of a pacemaker is to maintain an adequate heart rate, either because the heart's native pacemaker is not fast enough or because there is a block in the heart's electrical conduction system. Modern pacemakers are externally programmable, giving the cardiologist freedom to select the optimum pacing modes for each individual patient. Some combine a pacemaker and implantable defibrillator in a single implantable device. Others have more than one electrode stimulating different positions within the heart to improve synchronisation of the lower chambers of the heart. There are several methods of pacing: percussive pacing, transcutaneous pacing, transvenous pacing (temporary) and permanent pacing. Much research has gone into improving the control of the pacemaker after implantation, and many of these enhancements have been made possible by microprocessor-controlled pacemakers. Pacemakers that control both the atria and the ventricles, called dual-chamber pacemakers, are now common. Although these dual-chamber models are more expensive, timing the contractions of the atria to precede those of the ventricles improves the pumping efficiency of the heart and can be useful in congestive heart failure.
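To illustrate the basic "demand" behaviour described above (pace only when the heart's own rate is too slow), here is a highly simplified sketch of single-chamber demand pacing logic. It is a toy model for illustration only, with an invented escape interval, not how any real device is programmed.

```python
# Toy model of demand pacing: if no intrinsic beat is sensed within the escape
# interval, deliver a pacing pulse; any sensed beat resets the timer.
LOWER_RATE_BPM = 60
ESCAPE_INTERVAL_S = 60.0 / LOWER_RATE_BPM  # 1.0 s between beats at the lower rate limit

def pace_decisions(sensed_beat_times, duration_s):
    """Return the times at which the (simulated) pacemaker would deliver a pulse."""
    paces, last_beat = [], 0.0
    beats = sorted(sensed_beat_times)
    while True:
        deadline = last_beat + ESCAPE_INTERVAL_S
        if deadline > duration_s:
            break
        intrinsic = [b for b in beats if last_beat < b <= deadline]
        if intrinsic:
            last_beat = intrinsic[0]      # heart beat on its own: inhibit pacing
        else:
            paces.append(deadline)        # too slow: pace at the deadline
            last_beat = deadline
    return paces

# Intrinsic beats at 0.9 s and 1.7 s, then a long pause: the device fills the gap.
print(pace_decisions([0.9, 1.7], duration_s=5.0))  # [2.7, 3.7, 4.7]
```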
Sunday, December 23, 2007
New Computer Interfaces
Keyboards are an essential part of today’s computers, but maybe not for much longer. A group of European scientists has used acoustic sensors to turn wooden tabletops and even three-dimensional objects into a new type of computer interface. Sound vibrating a windowpane or travelling through a tabletop is something most people experience daily; sound waves pass well through most solid materials. Now, European researchers have exploited the excellent propagation of sound waves through solids to turn everyday objects, including 3D objects, into a new kind of computer interface. By fixing sensors to solid materials, they were able to precisely locate and track acoustic vibrations. Tapping on discrete areas of a whiteboard could generate musical notes on a computer. Tracking the sound of a finger scrawling words on a sheet of hardboard could translate, in real time, into handwriting on a computer screen. There is no need for overlays or intrusive devices.
Sensing vibrations in a solid and changing them to electrical pulses is the easy bit. Exactly locating the source of that vibration in a solid material is where it gets complicated. The problem is that the complex structures of solids make wave propagation difficult to model. Wood knots in a desktop, for example, will alter how acoustic vibrations scatter.
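One common way to tackle the localization problem in a simple, homogeneous case is time-difference-of-arrival: if two sensors at known positions pick up the same tap, the difference in arrival times multiplied by the wave speed tells you how much closer the tap is to one sensor than the other. The sketch below (Python) works this out in one dimension along the edge of a board; the wave speed and sensor spacing are invented, and real surfaces (knots, joints) would of course require calibration rather than a single constant speed.

```python
# 1-D time-difference-of-arrival (TDOA) estimate of a tap position along a board edge.
WAVE_SPEED = 1000.0   # assumed propagation speed in the board, m/s (illustrative)
SENSOR_A_X = 0.0      # sensor positions along the edge, metres
SENSOR_B_X = 1.2

def locate_tap(t_a, t_b):
    """Estimate the tap position from arrival times at sensors A and B (seconds)."""
    # Difference in path lengths to the two sensors (negative if the tap is closer to A):
    delta_d = (t_a - t_b) * WAVE_SPEED
    # For a tap at x between the sensors: (x - A) - (B - x) = delta_d  =>  x = (A + B + delta_d) / 2
    return (SENSOR_A_X + SENSOR_B_X + delta_d) / 2.0

# A tap 0.3 m from sensor A arrives after 0.3/1000 s at A and 0.9/1000 s at B.
print(locate_tap(t_a=0.0003, t_b=0.0009))  # ~0.3
```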
Cell Phone as Health Care Resource
The latest software technology permits cell phone and PDA users to download their medical records, making them quickly accessible in case of emergency. The new software, to be released within a year, can even display animated 3D scans. Computer scientists say the technology will also enable students to do research using their portable devices. Imagine if your medical records were missing or misplaced. It would cause more than aggravation; it could affect the care you receive. Now, imagine being able to download your own health records -- even X-rays and diagnostic scans -- right into your cell phone or PDA. The same technology that brings games to life on your cell phone can also help you and your doctor keep track of your health.
Many of the newer cell phones and PDAs have a graphics chip like the one in your PC. The chip can turn your phone into a virtual medical library, complete with stunning 3D computer graphics and medical scans. Medical notes and patient tests can now be downloaded onto a cell phone in just minutes. All the data on a phone is stored in the memory expansion slot; in these medical phones, however, instead of music and digital pictures, it could hold a virtual scan of the body and much more. The 3D mobile medical data program should be available within a year.
Monday, December 17, 2007
Medical microchips
Treatments for life-threatening conditions, including cancer and heart disease, could come from a new breed of microchips. Biological microelectromechanical devices, known as bioMEMS, could be implanted into the body to deliver doses of drugs or carry new cells to damaged tissue. The possibilities presented by the field of biomedical nanotechnology were outlined by Robert Michler, chief of cardiothoracic surgery at Ohio State University, US, at the Biomedical Nanotechnology World 2000 conference. Among the applications of nanotechnology in health would be the development of robotic surgery techniques. A recent study led by Dr Michler used robotic methods to graft arteries from the chest wall into the heart in 60 patients. Now, Dr Michler anticipates that the same techniques, allowing precision and access to tiny areas of the body, could allow microchips to be placed in tissue or blood vessels. The chips could then release drugs or even stem cells to stimulate local tissue repair. "We're ready to create the chips and use the robot to insert them into the hearts of lab animals," he said. Clinical trials on humans are possible within five years.

The emerging field of nanotechnology and health in the UK is centred at the University of Birmingham. Its researchers believe there is great potential for developing new techniques and treatments, and proposals are currently being drawn up for new areas of research. Other specialities that could benefit from the microchip technology include cancer detection and prevention. Michael Caligiuri, associate director for clinical research at Ohio's Comprehensive Cancer Centre, said there was potential to develop a chip containing a cancer vaccine which would be delivered in precise doses to specific parts of the body. "Drug delivery devices would give us much better control of dosing, thus enhancing the effectiveness of the drug while limiting its toxicity," he said. Microchips with sensors attached could also detect mutated genes or high hormone levels indicating the potential for malignancy.
Robot heart surgery
A revolutionary system which uses robotic arms to carry out keyhole surgery will shortly be used for its first UK heart operation. Surgeons in London are preparing to use the da Vinci system to carry out heart bypass surgery. This is expected to happen once a suitable patient has been found and the surgeons involved have been fully trained.
It is hoped that the technique could cut the recovery time patients need and the number of possible complications they could suffer. The surgeon uses joysticks to work the tiny robotic arms, meaning they do not have to be leaning over the patient during lengthy operations. In theory, they could be controlling the arms and watching the results from the next room, or even another hospital. The machine adjusts itself to compensate for the natural tremor in the human hand and can be set to make the tiniest movements. The machine has only been at the hospital for a few months, and has already been used in six successful bowel operations. Each year, many thousands of people in the UK require coronary artery bypass surgery. It is normally needed when blood vessels supplying the heart become clogged, cutting the amount of blood, and the oxygen it carries, reaching the muscle. This can cause severe chest pain or even a heart attack.
Normally a section of blood vessel is taken from another part of the body, such as the leg, and grafted to replace the damaged section of heart blood vessel. It is a major surgical procedure and patients often spend a significant amount of time in hospital directly afterwards. The da Vinci system, developed in the US, allows the operation to be carried out through a much smaller incision, in theory improving recovery time. This would help cut the currently lengthy waiting times and save the NHS money, despite each system's £750,000 price tag. The da Vinci system has been used successfully in the US for a variety of operations, including those to replace or repair heart valves, and delicate procedures such as the reversal of male and female sterilization.
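The tremor compensation and fine-movement settings mentioned above are typically achieved by low-pass filtering the surgeon's hand input and scaling it down before it reaches the instrument. Here is a minimal sketch of that idea, assuming a simple exponential smoothing filter and invented parameter values; it illustrates the concept, not the da Vinci system's actual control law.

```python
# Illustrative tele-surgery input conditioning: smooth out hand tremor, then scale motion down.
SMOOTHING = 0.2     # 0..1, lower = stronger low-pass filtering (invented value)
MOTION_SCALE = 0.2  # 1 cm of hand motion -> 2 mm of instrument motion (invented value)

def condition_inputs(hand_positions_mm):
    """Return instrument positions produced from raw hand positions (one axis, mm)."""
    instrument, filtered = [], hand_positions_mm[0]
    for raw in hand_positions_mm:
        filtered += SMOOTHING * (raw - filtered)   # exponential moving average damps tremor
        instrument.append(filtered * MOTION_SCALE) # scale the remaining movement down
    return instrument

# Slow 0 -> 10 mm sweep with a +/-1 mm tremor superimposed on it.
hand = [i + ((-1) ** i) for i in range(11)]
print([round(p, 2) for p in condition_inputs(hand)])
```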
Saturday, December 8, 2007
Flexible-jointed robot
If robots are going to live and work alongside humans, they will need to stand up to accidental bumps and shoves, not to mention the occasional deliberate kick.
That is why researchers in Japan have developed software that allows a life-size humanoid robot to stay on its feet no matter where on its body it is pushed. Theirs is the first full-size humanoid to show such steadiness – others of similar size inevitably topple over when nudged in the right spot. In experiments, the robot was subjected to repeated pushes. A virtual robot received much harder shoves.
Rebalancing should allow humans to interact more naturally with robots, letting them act as a physical guide, for example. If a controller tries to show other full-size humanoids how to perform a task by moving its limbs, there is a strong chance the thing will fall over.
Flexible joints
The robot, made by US firm Sarcos and further developed by researchers at the National Institute of Information and Communications Technology in Japan, suffers no such unsteadiness; it can easily rebalance when its arms are pulled into different positions.
The robot's balancing ability depends on its joints. For one thing they are never kept rigid, even when standing still, meaning they yield slightly when the robot is pushed.
Force sensors within each joint also work out the position and velocity of the robot's centre of mass as it moves around. Control software rapidly figures out what forces the robot's feet need to exert on the ground to bring it back into balance, and tells the joints how to act.
As well as keeping the robot steady as it moves itself around, the technique lets it readjust to sudden, external forces.
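A much simplified version of that force-based balancing loop can be written as a proportional-derivative controller on the centre of mass: from the sensed CoM position and velocity, compute the ground-reaction force the feet should apply, then hand it to the joint-torque layer. The sketch below (Python, with invented gains and a point-mass model) only shows the structure of that computation, not the actual controller described in the article.

```python
# Toy force-based balance controller: PD control on the centre of mass (CoM) of a point-mass model.
MASS = 50.0          # robot mass, kg (illustrative)
G = 9.81             # gravity, m/s^2
KP, KD = 40.0, 12.0  # invented PD gains

def desired_ground_force(com, com_vel, com_ref):
    """Ground-reaction force (x horizontal, z vertical) that pushes the CoM back to com_ref."""
    ax = KP * (com_ref[0] - com[0]) - KD * com_vel[0]   # horizontal correction
    az = KP * (com_ref[1] - com[1]) - KD * com_vel[1]   # vertical correction
    return (MASS * ax, MASS * (G + az))                 # feet must also support the robot's weight

# After a shove, the CoM is 5 cm forward of the reference and still moving at 0.2 m/s.
force = desired_ground_force(com=(0.05, 0.90), com_vel=(0.2, 0.0), com_ref=(0.0, 0.90))
print(force)  # the joint-torque layer would then realise this force through the legs
```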
Hip weakness
In cases where the robot's joints cannot quickly swing its centre of mass back into place, it ends up staggering, a bit like a boxer after a heavy punch. This involves several rounds of rebalancing, with each cycle shifting the centre of mass closer to its original balance point.
Some other humanoid robots rebalance themselves by measuring changes to the position of each joint. This requires very accurate knowledge of the magnitude of an applied shove, says ATR researcher Sang-Ho Hyon, which is difficult to achieve without covering the whole robot with force sensors.
Most robots lack such sensors, and so use a relatively simple trick to rebalance themselves. Honda's ASIMO, for example, shifts its hip joint in order to stay steady, which only works in some cases.
"Imagine what happens if we push ASIMO's hip," says Hyon. This would leave the diminutive humanoid unable to rebalance. "In our method, when we push the hip, the hip follows the external forces and other joints compensate for the balance," adds Hyon.
One-step strategy
"You just don't see the real good humanoids out there get pushed," says Jerry Pratt, a roboticist at the Institute for Human and Machine Cognition (IHMC) in Pensacola, Florida, US.
"This team is currently ahead of the pack in terms of having it work on a full robot," Pratt told New Scientist. "Making the robot more compliant instead of stiff is plays a big part in that," he says, and the ability to measure and control the torque force at every joint is also crucial.
Pratt and colleagues are working on their own control strategy, which involves rebalancing with a single step. "Imagine you are crossing a pond and you can only step to one rock to rebalance," he says. The software will be tested next year after the team finishes building a suitable humanoid.
Daewoo Electronics Purchases Moore Microprocessor Patent Portfolio License
Patriot Scientific Corporation (OTCBB:PTSC) announced today that Daewoo Electronics Corporation has purchased a Moore Microprocessor Patent™ (MMP) Portfolio license from The TPL Group. Daewoo Electronics, the first Korean licensee to purchase an MMP license, has operations in more than 40 countries, with regional headquarters in the U.S., Europe, Central America and the Middle East. Daewoo Electronics is the 26th licensee to date. “Daewoo will benefit significantly by their decisive action to purchase an MMP License,” said Carl Silverman, vice president, Licensing for Alliacense. “Daewoo joins a long and growing list of forward-thinking companies who have been agile enough to secure a significant jump on their competition.” While the MMP Portfolio Licensing Program has been in place for over two years, 30% of the license purchases have followed the favorable “Markman” ruling issued earlier this summer, confirming the strength of the MMP Portfolio and enhancing the momentum of the Licensing Program. The sweeping scope of applications using MMP design techniques continues to encourage manufacturers of end-user products from around the globe to become MMP licensees. Since January 2006 more than twenty companies have purchased MMP Portfolio licenses; they include: Hewlett Packard, Sony, Casio, Fujitsu, Nikon, Seiko Epson, Pentax, Olympus, Kenwood, Agilent, Lexmark, Schneider Electric, NEC Corporation, Funai Electric, SanDisk, Sharp Corporation, Nokia, Bull, LEGO, DMP Electronics, Denso Wave, Philips, TEAC and Daewoo Electronics.
Wednesday, December 5, 2007
Office Professional 2003 (Full Product)
Microsoft Office Professional Edition 2003 can help you and your organization communicate information with immediacy and impact. New and familiar programs along with improved functionality help you build powerful connections between people, information, and business processes. In addition to core Office 2003 programs - Word 2003, Excel 2003, and PowerPoint 2003 - the Professional Edition includes Access 2003, Publisher 2003, and Outlook 2003 with Business Contact Manager. Office 2003 now fully supports XML in Word, Excel, and Access. New for 2003 is an e-mail Wizard that lets you create a variety of e-mail publication types -including newsletter, letter, and event announcement - for all 45 Master Design Sets. Plus, you can now create and manage accounts, business contacts and sales opportunities in Microsoft Office Outlook 2003 with Business Contact Manager. Improved features include additional customizable preformatted business reports, more powerful spam filtering in Outlook, free access to the Microsoft Office Online Template Gallery and Design Gallery and the Microsoft Office Marketplace, and tools to build a variety of publication types such as catalogs by merging photos and text from data sources (such as Excel, Access or Outlook). Information Rights Management (IRM) functionality helps protect sensitive files and e-mail messages from unauthorized access and use.
Norton Internet Security 2008
Symantec's Norton Internet Security 2008 security suite ($70 for up to three PCs) is easy to use and comes with a host of extra security features, such as a separate Security Inspector scan that warns about unsafe browser settings and other potential security holes. It was the only suite in our testing for "All-in-One Security Suites: Tried and Tested" that didn't cry wolf by reporting at least one false positive.
It detected an above-average 91 percent of AV-Test.org's 674,589 malware samples. This result puts the Norton suite close behind the Avira Premium Security Suite, BitDefender Internet Security 2008, Checkpoint ZoneAlarm Internet Security Suite 7.1, and Kaspersky Internet Security 7.0 packages, each of which detected 95 percent or more of the malware samples. Nevertheless, a 4 percent difference in detection rates represents a difference of 26,983 undetected samples. Symantec's suite produced the second-worst showing on AV-Test's heuristic tests, catching only 10 percent of samples when required to use one-month-old signature files to detect unknown malware based on similarities to existing code.
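The 26,983-sample figure is just the quoted 4 percent gap applied to AV-Test.org's sample set, as this quick check shows:

```python
# Four percentage points of 674,589 samples is the ~26,983 undetected files quoted above.
total_samples = 674_589
gap = 0.04  # the gap between Norton (91%) and the leading suites (95%)
print(int(total_samples * gap))  # 26,983
```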
The Symantec suite did outperform the other programs at getting rid of infections. It cleaned up 80 percent of all files and Registry entries added by malware. In particular, Norton was a champ at fighting rootkits--malware designed to hide other malware. It detected every active and inactive rootkit sample, and successfully neutralized those infections. Symantec's suite was one of only two programs (Checkpoint was the other) to detect and block unidentified malware based solely on the way it behaved, but even so it caught just one sample out of five.
In our tests, on-demand (user-initiated) scans were more than 50 percent faster with Norton than with the next-fastest suite (Avira Premium), yielding an impressive data-analysis rate of 16.07 megabytes per second. And those on-demand scans look inside file archives, where crooks frequently hide malicious payloads. (On the other hand, Norton's automatic scans, which check files as your system saves them to the hard drive, won't check file archives unless you change the default settings).
Symantec's firewall is polished. It successfully blocked attempts from outside to scan a protected PC for information, and it did better than most competing suites at refraining from issuing unnecessary warnings about benign apps such as Firefox and Internet Explorer.
In addition, Norton Internet Security displayed an apt warning when it detected an unencrypted wireless connection, and it incorporates various safe-browsing features. For example, its Norton Confidential toolbar, designed for Firefox and Internet Explorer, blocks phishing sites; and its Browser Defender checks for known vulnerabilities in Internet Explorer 6 and 7. Though the suite's Identity Safe feature protects sensitive data, such as credit card numbers, from inadvertently leaving your PC, you must manually tell it what information to protect--a standard but laborious process. It has no antispam or parental controls, but those features are freely downloadable from Symantec's site.
The interface is well laid-out, and the software's pop-up detection alerts are generally understandable, though they provide little information, such as where a threat was found. Its impressive log entries simplify the task of finding out what the program has been up to--but again they lack information about where a given threat was discovered.
Our one major criticism of the suite is that when we uninstalled it, it left behind the separate LiveUpdate component. You have to know to go back and remove LiveUpdate as well.
Norton Internet Security 2008 has a good design and an appealing feature set. It could be better at blocking malware, but it's the best choice of the eight we looked at.
iPhone 8GB Smartphone
iPhone is a revolutionary new mobile phone that allows you to make a call by simply tapping a name or number in your address book, a favorites list, or a call log. It also automatically syncs all your contacts from a PC, Mac, or Internet service. And it lets you select and listen to voicemail messages in whatever order you want, just like email. The iPhone features the most revolutionary user interface since the mouse. It's an entirely new interface based on a large multi-touch display and innovative new software that lets you control everything using only your fingers. So you can glide through albums with Cover Flow, flip through photos and email them with a touch, or zoom in and out on a section of a web page, all by simply using iPhone's multi-touch display.
Thursday, November 29, 2007
Colossal Quad-HD Monitor
Gateway, the leading PC maker, is targeting gamers and graphics artists drooling over a deluxe 30-inch LCD monitor with spectacular 2,560 by 1,600-pixel resolution. But the PC maker insists the Gateway XHD3000 is not just an appealing face, but has a brain like Einstein's: a Silicon Optix Realta HQV ("Hollywood Quality Video") video processing chipset performs 1 trillion operations per second to improve clarity and detail, reduce noise, and make standard-definition content look like high definition -- and beyond, upscaling HD's finest 1080p video to an even finer 1600p.
The $1,700 flat panel can be directly linked to virtually any PC or consumer electronics video device -- or up to six of them simultaneously, thanks to HDMI, single- and dual-link DVI, VGA, component, S-Video, and composite inputs with HDCP encryption for showing protected HD content. Besides having plenty of room for multiple application windows and Web pages, it offers picture-in-picture PC-and-video viewing.
An affordable universal IR remote control can serve not only the display, but three extra devices such as a Media Center PC, set-top box, and DVD player. In addition to a 1,000:1 contrast ratio and 6-millisecond refresh rates, the XHD3000 features a high-performance DXP powered speaker system with eight low-profile neodymium transducers; distinct audio inputs for each video input with automatic audio switching; and analog and digital audio outputs.
Envision's $500 Widescreen
Its bezel is slim enough to encourage tiling or side-by-side multiple-monitor placement, but just one of Envision Peripherals's newest LCD monitors will do fine for viewing different applications side by side, thanks to 1,920 by 1,200 resolution. The 24-inch Envision G416 is designed to maximize screen space for easier navigation between multiple programs and Windows Vista's Sidebar and Aero Flip features.
Boasting a 3,000:1 dynamic contrast ratio and 5-millisecond response time, the $500 display provides five Dynamic Color Boost options for the most natural color reproduction, plus the ability to regulate power consumption as needed for Text, Internet, Game, Movie, and Sports settings.
HDCP-supported DVI input enables crystal-clear video playback from HD 720p/1080i program sources such as HD-DVD and Blu-ray discs or a cable or satellite set-top box equipped with DVI-HDCP or HDMI using a separate HDMI-DVI adapter.
Monday, November 26, 2007
Probability theory
Probability theory is the mathematical study of phenomena characterized by randomness or uncertainty. More precisely, probability is used to model situations in which an experiment, repeated under the same circumstances, produces different results (typically the throw of a die or a coin). Mathematicians and actuaries think of probabilities as numbers in the closed interval from 0 to 1 assigned to "events" whose occurrence or failure to occur is random.
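A quick way to see both ideas at once (repeating an experiment under identical conditions, and probabilities living in the interval [0, 1]) is to simulate the canonical example, the throw of a die, and watch the empirical frequencies settle near 1/6. A small Python sketch:

```python
import random
from collections import Counter

# Repeat the same experiment (one throw of a fair die) many times under identical conditions.
throws = [random.randint(1, 6) for _ in range(100_000)]
counts = Counter(throws)

for face in range(1, 7):
    freq = counts[face] / len(throws)
    print(f"P({face}) ~ {freq:.3f}")   # each estimate lies in [0, 1] and hovers around 1/6 ~ 0.167

print("total probability:", sum(counts[f] for f in range(1, 7)) / len(throws))  # exactly 1.0
```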
Monkeys Able To Fend Off AIDS-like Symptoms With Enhanced HIV Vaccine
Researchers at the University of Pennsylvania School of Medicine have discovered that using an immune system gene to enhance a vaccine used to study HIV in macaque monkeys provides the animals with greater protection against simian HIV (SHIV) than an unmodified vaccine. This multi-year study found that the addition of a molecule called Interleukin-15 (IL-15) effectively boosts the effects of a vaccine derived from the DNA of simian HIV. The study illustrates that DNA vaccine effectiveness can be improved by the inclusion of specific immune adjuvants, or helpers. "DNA vaccine technology has great promise for the development of vaccines and immune therapeutics for a variety of infectious diseases and cancers," says senior author David B. Weiner, PhD, Professor of Pathology and Laboratory Medicine at Penn. While previous studies have established that the technology can induce immune responses safely, "improving the immune potency of this platform is critical for further development in humans."
The research builds on previous work aimed at engineering a more potent immune response to the SHIV DNA vaccine. Mouse model studies previously showed that the cytokine IL-15 -- a substance that can improve the body's natural response to infection and disease -- enhances immune responses and protection, and this study mirrors those findings in a larger, non-human primate species.
In this study, the group of macaques that was injected with the vaccine containing a loop of DNA enabling them to make IL-15 developed no signs of AIDS-like symptoms when exposed to live SHIV, compared to four animals in the control group that received only the DNA vaccine. The modified vaccine appeared to help suppress viral replication among the IL-15 group.
Next, Weiner's team will study the protected macaques to determine the actual mechanism of their protection, and seek out any pockets of the virus that may be hiding in specific immune compartments. The approach will also be tested for safety and immunogenicity in humans through the HIV Vaccine Trials Network.
The findings are published in the online edition of the Proceedings of the National Academy of Sciences.
The lead author of the study is Dr. Jean Boyer, of the University of Pennsylvania. Co-authors include researchers from the National Cancer Institute, the Southern Research Institute in Frederick, MD, the National Institute of Allergy and Infectious Diseases in Bethesda, MD, Genomix (San Diego, CA) and Wyeth (Pearl River, N.Y.). The research was supported by the National Institutes of Health and the Intramural Research Program of the NIH.
Thursday, November 22, 2007
Microsoft Patches a 'Critical' Hole
Microsoft Corp. issued two security fixes in a regular monthly update Tuesday, including one that removes a dangerous bug in all versions of Windows XP and Windows Server 2003. Microsoft gave the serious security fix its most urgent "critical" rating. Hackers could exploit a vulnerability using Internet Explorer 7, and possibly other programs, and take over a user's computer for a variety of nefarious purposes, such as stealing passwords or pumping out spam. The security hole "is concerning as it's a publicly known issue that puts computer users at risk," said Ben Greenbaum, a senior research manager on antivirus software maker Symantec Corp.'s security response team. The other fix, which Microsoft gave the second-highest "important" rating, is for computers running versions of Windows 2000 Server and Windows Server 2003. Hackers could exploit the flaw in Microsoft's program to redirect Internet traffic from legitimate sites to fake ones. Windows users can visit Microsoft's security Web site to get the updates or configure their computers to automatically update each month.
Bookmark Improvements New to Firefox 3
A new version of the Firefox browser, now available for testing mainly by developers, offers improvements on finding frequently visited Web sites and tools for running Web applications without a live Internet connection. The Beta 1 version of Firefox 3 released this week still has problems, including the inability to run newer Web-mail programs from Yahoo Inc. and Microsoft Corp., and a final version for consumers isn't expected for several months. But it offers a window on what's to come. Many of its new features concern bookmarks, an area typically slow to change in the browsing world. You can now add keywords, or tags, to sort bookmarks by topic. And a new "Places" feature lets you quickly access sites you recently bookmarked or tagged and pages you visit frequently but haven't bookmarked. There's also a new star button for easily adding sites to your bookmark list - similar to what's already available on Microsoft's Internet Explorer 7 browser. Offline Web support - for example, letting you compose Web mail while offline to send after you're back online - is bound to come in handy as more software developers design programs to be run completely over the Internet, eliminating installation complexities. But Web developers must add the Firefox offline functionality to their sites, so the usefulness of this feature will be limited at first. Other new features include the ability to resume downloads midway if the connection is interrupted and an updated password manager that doesn't disrupt the log-in process. Versions for Windows, Mac and Linux computers were released Monday by Mozilla, an open-source community in which thousands of people collectively develop free products, mostly as volunteers.
Google knows your passwords
Here's a clever search trick with implications for anyone who thinks their passwords are a well-kept secret. It's described in this blog post by Steven J Murdoch, one of several computer security researchers at the University of Cambridge behind the excellent Light Blue Touchpaper blog. The blog was compromised a few weeks ago using weaknesses in the underlying publishing software, and Murdoch decided to perform a thorough forensic analysis of the event. During his investigation, he discovered an account created by the perpetrator along with an associated MD5 hash "20f1aeb7819d7858684c898d1e98c1bb" (the cryptographic code used by the system's database to identify the correct password). Murdoch then tried to guess the password. After running through several dictionaries of potential passwords, he tried simply pasting the hash into Google, which promptly revealed the password in question to be "Anthony" – no doubt the name of whoever broke into the system in the first place. Plenty has been written before about using Google to find potential software vulnerabilities, but this is a particularly nice example. As search engines become increasingly powerful, it is undoubtedly something software programmers and computer administrators will have to bear in mind.
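The last step of Murdoch's trick is easy to reproduce for any candidate password: hash it and compare the result with the value pulled from the database. A minimal Python check (whether the quoted hash really corresponds to "Anthony" is exactly what this test would confirm):

```python
import hashlib

stored_hash = "20f1aeb7819d7858684c898d1e98c1bb"  # the plain (unsalted) MD5 hash found in the database, per the post
candidate = "Anthony"                              # the password Google turned up

if hashlib.md5(candidate.encode("utf-8")).hexdigest() == stored_hash:
    print("candidate matches the stored hash")
else:
    print("no match")
```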
Computer programs that make sense of life
Complex living systems would benefit from being modelled as if they were computer programs.
Biologists have amassed huge data sets, such as the entire human genome, but how the components work together is often a mystery. Most simulations require detailed knowledge of the rates at which components change, which often aren't known.
Now Jasmin Fisher of Microsoft Research in Cambridge, UK, and Thomas Henzinger of the Federal Polytechnic School of Lausanne, Switzerland, suggest using models structured like computer programs, where many "subroutines" run in parallel and produce outputs that depend on each other. By representing proteins, say, as subroutines, it is possible to work out how the overall system works without knowing some details.
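As a rough illustration of the idea (not Fisher and Henzinger's actual formalism), each protein can be treated as a small state machine whose next output depends on the current outputs of the others, and all of them can be stepped in parallel with no rate constants at all. The protein names and update rules below are hypothetical.

    # Toy executable model: proteins as parallel "subroutines" with Boolean states.
    # Names and rules are made up; note that no kinetic rates are needed.

    def step(state):
        # Each rule reads the *current* state and proposes the next one (synchronous update).
        return {
            "A": not state["C"],             # A is repressed by C
            "B": state["A"],                 # B is activated by A
            "C": state["A"] and state["B"],  # C needs both A and B
        }

    state = {"A": True, "B": False, "C": False}
    for t in range(5):
        print(t, state)
        state = step(state)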
Privacy risk for Google cellphone software
A cellphone operating system designed to encourage web surfing on the go could trigger a fresh assault on privacy.
On 5 November, Google and 30 partners unveiled a joint venture called the Open Handset Alliance that aims to develop a Linux-based open-source cellphone operating system to be called Android. Anyone will be able to write applications for Android, and Google hopes this will lead to applications that free users from today's clunky handset browsers and web portals.
"They are trying to take the 'mobile' out of the mobile internet, making it as close to the experience on a PC as possible," says Ben Wood of telecoms consultancy CCS Insight in Solihull, UK.
What worries some privacy experts, though, is the combination of Google's policy of retaining users' search histories and a cellphone's ability to reveal your location and store the numbers you have called.
Transistor Technology Breakthrough Represents Biggest Change To Computer Chips In 40 Years
In one of the biggest advancements in fundamental transistor design, Intel Corporation revealed that it is using two dramatically new materials to build the insulating walls and switching gates of its 45 nanometer (nm) transistors. Hundreds of millions of these microscopic transistors -- or switches -- will be inside the next generation Intel® Core™ 2 Duo, Intel Core 2 Quad and Xeon® families of multi-core processors. The company also said it has five early-version products up and running -- the first of fifteen 45nm processor products planned from Intel. The transistor feat allows the company to continue delivering record-breaking PC, laptop and server processor speeds, while reducing the amount of electrical leakage from transistors that can hamper chip and PC design, size, power consumption, noise and costs. It also ensures Moore's Law, a high-tech industry axiom that transistor counts double about every two years, thrives well into the next decade.
Intel believes it has extended its lead of more than a year over the rest of the semiconductor industry with the first working 45nm processors of its next-generation 45nm family of products -- codenamed "Penryn." The early versions, which will be targeted at five different computer market segments, are running Windows Vista, Mac OS X, Windows XP and Linux operating systems, as well as various applications. The company remains on track for 45nm production in the second half of this year.
Intel's Transistors Get a "High-k and Metal Gate" Make-Over at 45nm
Intel is the first to implement an innovative combination of new materials that drastically reduces transistor leakage and increases performance in its 45nm process technology. The company will use a new material with a property called high-k, for the transistor gate dielectric, and a new combination of metal materials for the transistor gate electrode.
"The implementation of high-k and metal materials marks the biggest change in transistor technology since the introduction of polysilicon gate MOS transistors in the late 1960s," said Intel Co-Founder Gordon Moore.
Transistors are tiny switches that process the ones and zeroes of the digital world. The gate turns the transistor on and off and the gate dielectric is an insulator underneath it that separates it from the channel where current flows. The combination of the metal gates and the high-k gate dielectric leads to transistors with very low current leakage and record high performance.
"As more and more transistors are packed onto a single piece of silicon, the industry continues to research current leakage reduction solutions," said Mark Bohr, Intel senior fellow. "Meanwhile our engineers and designers have achieved a remarkable accomplishment that ensures the leadership of Intel products and innovation. Our implementation of novel high-k and metal gate transistors for our 45nm process technology will help Intel deliver even faster, more energy efficient multi-core products that build upon our successful Intel Core 2 and Xeon family of processors, and extend Moore's Law well into the next decade."
For comparison, approximately 400 of Intel's 45nm transistors could fit on the surface of a single human red blood cell. Just a decade ago, the state-of-the-art process technology was 250nm, meaning transistor dimensions were approximately 5.5 times the size and 30 times the area of the technology announced today by Intel.
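The quoted factors follow directly from the linear dimensions; a quick arithmetic check, treating 250nm and 45nm as linear feature sizes:

    old, new = 250.0, 45.0          # process generations, in nanometres
    linear_ratio = old / new        # roughly 5.6 times larger linear dimensions
    area_ratio = linear_ratio ** 2  # roughly 31 times larger area
    print(round(linear_ratio, 1), round(area_ratio, 1))

which is consistent with the rounded 5.5x and 30x figures above.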
As the number of transistors on a chip roughly doubles every two years in accordance with Moore's Law, Intel is able to innovate and integrate, adding more features and computing processing cores, increasing performance, and decreasing manufacturing costs and cost per transistor. To maintain this pace of innovation, transistors must continue to shrink to ever-smaller sizes. However, using current materials, the ability to shrink transistors is reaching fundamental limits because of increased power and heat issues that develop as feature sizes reach atomic levels. As a result, implementing new materials is imperative to the future of Moore's Law and the economics of the information age.
Intel's High-k, Metal Gate Recipe for 45nm Process Technology
Silicon dioxide has been used to make the transistor gate dielectric for more than 40 years because of its manufacturability and ability to deliver continued transistor performance improvements as it has been made ever thinner. Intel has successfully shrunk the silicon dioxide gate dielectric to as little as 1.2nm thick -- equal to five atomic layers -- on our previous 65nm process technology, but the continued shrinking has led to increased current leakage through the gate dielectric, resulting in wasted electric current and unnecessary heat.
Transistor gate leakage associated with the ever-thinning silicon dioxide gate dielectric is recognized by the industry as one of the most formidable technical challenges facing Moore's Law. To solve this critical issue, Intel replaced the silicon dioxide with a thicker hafnium-based high-k material in the gate dielectric, reducing leakage by more than 10 times compared to the silicon dioxide used for more than four decades.
Because the high-k gate dielectric is not compatible with today's silicon gate electrode, the second part of Intel's 45nm transistor material recipe is the development of new metal gate materials. While the specific metals that Intel uses remain secret, the company will use a combination of different metal materials for the transistor gate electrodes.
The combination of the high-k gate dielectric with the metal gate for Intel's 45nm process technology provides more than a 20 percent increase in drive current, or higher transistor performance. It also reduces source-drain leakage by more than five times, thus improving the energy efficiency of the transistor.
Intel's 45nm process technology also improves transistor density by approximately two times that of the previous generation, allowing the company to either increase the overall transistor count or to make processors smaller. Because the 45nm transistors are smaller than the previous generation, they take less energy to switch on and off, reducing active switching power by approximately 30 percent. Intel will use copper wires with a low-k dielectric for its 45nm interconnects for increased performance and lower power consumption. It will also use innovative design rules and advanced mask techniques to extend the use of 193nm dry lithography to manufacture its 45nm processors because of the cost advantages and high manufacturability it affords.
Penryn Family Will Bring More Energy Efficient Performance
The Penryn family of processors is a derivative of the Intel Core microarchitecture and marks the next step in Intel's rapid cadence of delivering a new process technology and new microarchitecture every other year. The combination of Intel's leading 45nm process technology, high-volume manufacturing capabilities, and leading microarchitecture design enabled the company to already develop its first working 45nm Penryn processors.
The company has more than 15 products based on 45nm in development across desktop, mobile, workstation and enterprise segments. With more than 400 million transistors for dual-core processors and more than 800 million for quad-core, the Penryn family of 45nm processors includes new microarchitecture features for greater performance and power management capabilities, as well as higher core speeds and larger caches. The Penryn family designs also bring approximately 50 new Intel SSE4 instructions that expand capabilities and performance for media and high-performance computing applications.
Security Loophole In Windows Operating System
A group of researchers headed by Dr. Benny Pinkas from the Department of Computer Science at the University of Haifa succeeded in finding a security vulnerability in Microsoft's "Windows 2000" operating system. The significance of the loophole: emails, passwords, credit card numbers and, in fact, any correspondence typed into or sent from a computer running "Windows 2000" are susceptible to tracking. "This is not a theoretical discovery. Anyone who exploits this security loophole can definitely access this information on other computers," remarked Dr. Pinkas. Various security vulnerabilities in different computer operating systems have been discovered over the years. Previous security breaches have enabled hackers to follow correspondence from a computer from the time of the breach onwards. This newly discovered loophole, exposed by a team of researchers that included, along with Dr. Pinkas, Hebrew University graduate students Zvi Gutterman and Leo Dorrendorf, enables hackers to access information that was sent from the computer prior to the security breach and even information that is no longer stored on the computer.
The researchers found the security loophole in the random number generator of Windows. This is a program which is, among other things, a critical building block for file and email encryption, and for the SSL encryption protocol which is used by all Internet browsers.
For example: in correspondence with a bank or any other website that requires typing in a password, or a credit card number, the random number generator creates a random encryption key, which is used to encrypt the communication so that only the relevant website can read the correspondence. The research team found a way to decipher how the random number generator works and thereby compute previous and future encryption keys used by the computer, and eavesdrop on private communication.
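To see why learning the generator's internal state is so damaging, consider a deliberately simplified generator whose state update is invertible: anyone who captures the state once can run it forwards to predict future keys and backwards to recover past ones. This is only a toy illustration of the forward and backward computation the researchers describe, not the actual Windows algorithm.

    # Toy invertible generator (NOT the Windows RNG): state' = (a*state + c) mod m
    a, c, m = 1103515245, 12345, 2**31
    a_inv = pow(a, -1, m)  # modular inverse exists because a is odd (Python 3.8+)

    def forward(s):
        return (a * s + c) % m

    def backward(s):
        return ((s - c) * a_inv) % m

    captured = 123456789            # internal state an attacker has somehow learned
    future_key = forward(captured)  # the next "key" the victim will generate
    past_key = backward(captured)   # the previous "key" the victim already used
    assert forward(past_key) == captured
    print(past_key, captured, future_key)

A well-designed generator makes the backward step computationally infeasible, which is the property the researchers report was lacking here.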
"There is no doubt that hacking into a computer using our method requires advanced planning. On the other hand, simpler security breaches also require planning, and I believe that there is room for concern at large companies, or for people who manage sensitive information using their computers, who should understand that the privacy of their data is at risk," explained Dr. Pinkas.
According to the researchers, who have already notified the Microsoft security response team about their discovery, although they only checked "Windows 2000" (which is currently the third most popular operating system in use) they assume that newer versions of "Windows", XP and Vista, use similar random number generators and may also be vulnerable.
Their conclusion is that Microsoft needs to improve the way it encodes information. They recommend that Microsoft publish the code of their random number generators as well as of other elements of the "Windows" security system to enable computer security experts outside Microsoft to evaluate their effectiveness.
Adaptive Technology Developed For Visually Impaired Engineers
By adding features to commonly used chemical-engineering software packages, researchers at the University of Arkansas, the University of Akron and Chemstations Inc. have developed adaptive technology that allows blind or visually impaired students and working professionals to perform the essential functions of chemical-engineering process design. Led by Bob Beitle, professor of chemical engineering in the College of Engineering at the University of Arkansas, the research team created a system that combines tactile, Braille-like representations that can be "read" by visually impaired chemical engineers. The system also includes an audio, screen-reading component and audible indicators of certain software functions.
Researchers have also overcome a major obstacle associated with the user function of dragging and dropping or copying and pasting. A tablet computer with a customized overlay, a tablet pen functioning as a computer mouse, and alignment holes mapped to the tactile objects help facilitate the drag-and-drop function, which is the method that connects unit operations.
"We are far enough into this project for me say that we have significantly minimized the differences between visually impaired and sighted engineers who do process design," Beitle said. "While we haven't eliminated all differences, we have reached a point where a blind chemical engineer can conduct himself as any engineer by manipulating process-engineering software to achieve improvements or investigate alternatives."
The system has been extensively tested at a process-engineering firm by Noel Romey, a graduate student in the Ralph E. Martin Department of Chemical Engineering. Romey, who has been blind since birth, came to the university to study chemical engineering. Since May, he has tested the system by simulating and designing various chemical facilities. The extensive designs are used by clients of the design firm to improve manufacturing systems.
The teaching and practice of chemical-engineering design traditionally has had a strong visual component due to many visual tools that describe concepts and processes. This reality, combined with the fact that industry-specific software does not include any adaptive-technology features, means that professors and engineering professionals have little experience with visually impaired students, which may contribute to blind and visually impaired students avoiding the profession.
Beitle's team converted GUIs into TUIs. GUI stands for graphical user interface, which describes software that relies heavily on icons and visual tools to represent concepts, functions and processes. Of course, behind any GUI are codes programmed to execute various user commands, such as opening programs or dragging documents. To accommodate those who can't rely on visual cues, the researchers had to alter this visually dependent system into something that could be felt - a tactile user interface. Their system includes a TabletPC or CintiQ - personal computers/screens that simulate notepads - and a pen-based mouse. Most importantly, the system uses custom-made tactiles - small objects embossed with patterns that represent various GUI icons that symbolize parts, such as valves, pumps and reactors - and an overlay that is placed on the screen. The tactiles adhere to the overlay. Alignment holes on the tactiles allow users to place them at desired locations on the overlay and thus build process-flow diagrams. Tactile and graphical interfaces are the same size because when a tactile is clicked, the design is built on the computer screen under it.
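A rough sketch of that mapping idea: each alignment hole on the overlay corresponds to a screen position, so placing a tactile builds the same flow diagram on the display underneath. Everything here (unit names, coordinates, data layout) is hypothetical and only illustrates the GUI-to-TUI translation, not the actual commercial software.

    # Hypothetical illustration: alignment holes on the overlay map to screen positions,
    # so placing a tactile ("pump", "valve", "reactor") also places the icon on screen.
    HOLE_POSITIONS = {(0, 0): (40, 300), (1, 0): (200, 300), (2, 0): (360, 300)}

    flowsheet = []  # (unit_name, screen_xy) pairs placed so far

    def place_tactile(unit_name, hole):
        xy = HOLE_POSITIONS[hole]
        flowsheet.append((unit_name, xy))
        return unit_name + " placed at screen position " + str(xy)

    print(place_tactile("pump", (0, 0)))
    print(place_tactile("reactor", (1, 0)))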
In addition to the computer modifications, the research project has an equally important psychological component, one that Beitle thinks will help both sighted and visually impaired engineers. Whether in the classroom or at an engineering firm, engineers must work as a team on design projects. This reality made Beitle think about the importance of language and the verbal exchange of information between blind and sighted professionals. How can design team members convey technical information when a visual diagram cannot be relied upon?
To answer this question, Beitle and his design students collaborated with Douglas Behrend, professor and chair of the psychology department in the J. William Fulbright College of Arts and Sciences, and Rachel Schwartz, a psychology graduate student. Led by Schwartz and Behrend, who is an expert in cognitive and language development, the researchers studied individuals with different communication styles and measured the reliance on vague language, visual cues and gestures. When working with Romey, sighted students seemed to modify patterns of communication styles in ways that suggested they were considering the dynamics of working with a visually impaired colleague. Behrend said this may be explained by group members using metacognition, which psychologists broadly refer to as individuals' knowledge of and about their own and others' cognitive processes.
"This added dimension of this project will prepare sighted members of a design team to communicate effectively in a technical fashion with less reliance on visual cues," Beitle said.
Monday, November 19, 2007
Smart Meters
Mechanical Engineers' Device Helps Electricity Conservation
Smart meters are small computers that provide real-time information on how much electricity is being used by each customer and when, and can relay information back to the head office over the very same power lines they feed from. Smart meters can bill different rates depending on overall grid usage, to encourage conservation. Users can program them to control appliances, for example running the dishwasher during off-peak hours, when rates are lower. During the winter, we crank up the heat. During the summer, we turn up the air. And all the time we're eating up electricity. Now a new smart meter may help to save energy and save money.
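The savings come from time-of-use pricing. Here is a minimal sketch of the billing idea, using the 2 p.m. to 7 p.m. peak window mentioned later in the piece; the rates themselves are made up for illustration.

    # Hypothetical time-of-use tariff; only the peak window comes from the article.
    PEAK_HOURS = range(14, 19)            # 2 p.m. to 7 p.m.
    PEAK_RATE, OFFPEAK_RATE = 0.30, 0.10  # dollars per kWh (hypothetical)

    def daily_cost(usage_by_hour):
        """usage_by_hour: dict of hour (0-23) -> kWh used in that hour."""
        return sum(kwh * (PEAK_RATE if h in PEAK_HOURS else OFFPEAK_RATE)
                   for h, kwh in usage_by_hour.items())

    # Running a 1.5 kWh dishwasher load at 3 p.m. versus 9 p.m.
    print(round(daily_cost({15: 1.5}), 2))  # 0.45 on peak
    print(round(daily_cost({21: 1.5}), 2))  # 0.15 off peak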
Like most parents, finding ways to save is a priority for Trina Camping. But she thought saving on her electric bill was a lost cause. That is, until she saw the light and started using a SmartMeter.
"This is what makes a SmartMeter smart," Mechanical Engineer Tim Vahlstrom, tells DBIS. "It's mounted onto a typical electric meter."
Engineers at Pacific Gas and Electric in San Francisco are working on SmartMeters. They're mini computers that provide real-time information on how much electricity is being used by each customer and when.
"This utilizes a technology called 'Powerline Carrier.' So it puts the signal back on the lines that actually feed the Meter, back through the power lines, to the transformer, back all the way to the head office," Vahlstrom says.
Everything is done remotely. You can turn your air conditioning, heating and lights on at home -- even when you're on vacation.
Vahlstrom says, "I want my thermostat to raise temperature by five degrees or four degrees if the price of electricity gets above this level. Then automatically the thermostat we can send a command to do that."
Power outages can be detected immediately, and within seconds, power is back on. "It actually does these things remotely without a person on-site," Vahlstrom says.
Peak power hours are from 2 p.m. to 7 p.m. By running her dishwasher and other appliances later at night, the power company gives Camping and her family a discount.
"We've saved about $150 to $200," Camping says. It's the first step to saving electricity and saving money.
Minnesota, Arizona, Pennsylvania and Florida are already using the SmartMeters. Not all of the functions are available right now in every state, but will be soon.
Laser Diagnosis Before Symptoms Appear
It may not rank among the top 10 causes of death, but decompression sickness can be fatal. Instead of waiting for symptoms to appear, a University of Houston professor is developing a laser-based system that can diagnose the sickness in a matter of seconds.
Kirill Larin, assistant professor of biomedical engineering and mechanical engineering, is using a $400,000 grant from the U.S. Navy to develop the first optical non-invasive tool to test those most likely to suffer from decompression sickness, such as scuba divers, submariners and airplane pilots. Decompression sickness affects those who experience sudden, drastic changes in the air or water pressure surrounding their bodies. It can cause anything from joint pain – known as the bends – to seizure, stroke, coma and, in the most extreme cases, death.
“Most of the time, decompression sickness isn’t addressed until the person starts showing clinical symptoms,” Larin said. “It would be better, of course, to treat the problem before the symptoms appear. That would allow individuals to take the appropriate medical actions to reduce the side effects of decompression sickness.”
Larin’s optical device can locate the presence of nitrogen gas – or microbubbles – in blood and tissues, which can restrict the flow of blood throughout the body and cause damage. Larin is developing the tool, which works much like an ultrasound machine, with Dr. Bruce Butler of the UT Health Science Center in Houston. Instead of getting readings using sound waves, however, Larin’s system uses light waves in the form of lasers that bounce back when they encounter resistance, thereby providing a high-resolution image.
The Navy could eventually use this technology on all divers or pilots returning to the surface. By shining the laser on one of these individuals, it would provide an image that would reveal the presence of any microbubbles in the blood or tissue – all in a matter of seconds. If microbubbles are found, then medical steps, such as time in a decompression chamber, could be taken before the symptoms appear.
An early version of the tool has been able to locate microbubbles as small as six micrometers, or six thousandths of a millimeter. Most microbubbles are between five and 15 micrometers, about the size of a red blood cell.
The device also could be used at the International Space Station, where individuals moving from a ship to the station have suffered from the effects of decompression sickness. With continued research, everyone from highly trained naval divers and pilots, to astronauts and seaside vacationers could benefit.
Thursday, November 15, 2007
Small planets forming in the Pleiades
Small, rocky planets that could resemble the Earth or Mars may be forming around a star in the Pleiades star cluster, astronomers reported on Wednesday.
One of the stars in the cluster, also known as the Seven Sisters, is surrounded by an extraordinary number of hot dust particles that could be the "building blocks of planets," said Inseok Song, a staff scientist at NASA's Spitzer Science Center at the California Institute of Technology.
"This is the first clear evidence for planet formation in the Pleiades, and the results we are presenting may well be the first observational evidence that terrestrial planets like those in our solar system are quite common," said Joseph Rhee of the University of California Los Angeles, who led the study.
There is "hundreds of thousands of times as much dust as around our sun," said Benjamin Zuckerman, a UCLA professor of physics and astronomy. "The dust must be the debris from a monster collision, a cosmic catastrophe."
The team used two telescopes to spot the dust, and reported their findings in the Astrophysical Journal.
Located about 400 light years away in the constellation of Taurus, the Pleiades is one of the best known star clusters and among the closest to Earth. A light-year is the distance light travels in a year, about 5.8 trillion miles.
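That mileage figure is easy to verify: light covers roughly 186,000 miles every second, so over a year the distance works out to a little under six trillion miles, close to the rounded figure quoted above.

    miles_per_second = 186_282               # speed of light, in miles per second
    seconds_per_year = 365.25 * 24 * 3600
    print(miles_per_second * seconds_per_year / 1e12)  # about 5.9 trillion miles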
"The cluster actually contains some 1,400 stars," said Song.
Song said the dust can accumulate into comets and small asteroid-size bodies, and then clump together to form planetary embryos, and finally full-fledged planets.
"In the process of creating rocky, terrestrial planets, some objects collide and grow into planets, while others shatter into dust; we are seeing that dust," Song said.
Monday, November 12, 2007
QUAD CORE
Processors are tiny bits of silicon the size of a fingernail that do the grunt work inside your machine.
First came the single, then the dual-core and now Intel and AMD are promising four processors on a chip.
They sparkle like gems and are easily worth as much to mankind: the copper coloured disks of silicon from which the chips are cut are the platters that modern life is built on.
Each little square on these wafers, as they are known, is a fully-blown processor chip, and these small chunks of silicon have changed how we live our lives.
We are not just talking personal computers here: think power stations, cars, television, hospitals and anything and everything is run by bits of silicon.
The first ever microprocessor drove a rudimentary calculating machine but since then, the technology has grown and the chips have got smaller.
Running hot
The problem has always been the hunt for performance. Chips need more and more power to keep up with the latest application and operating system developments and it is playing catch-up that causes the headaches. The faster a chip runs the hotter it gets, so the aim is to make a chip more powerful so it can run more slowly and therefore cooler. But that makes the chip bigger.
So you need to grow its performance but not its size - and that's a difficult trade off.
We are now at the stage where shrinking the core of the chip, the part of the chip that does the thinking, is getting less efficient.
"It takes an extraordinary amount of electrical power to eke out even a small increase in the performance of a single core," said Justin Rattner, chief technical officer at Intel. "So as power has become the predominant concern, we've had to step back and rethink this.
"A much more energy efficient strategy is to limit the individual core power, or even reduce it, and increase the number of cores.
"The challenge is the software required to use those large numbers of cores, which is a problem we're facing."
Physics provides new insights on cataract formation
Using the tools and techniques of soft condensed matter physics, a research team in Switzerland has demonstrated that a finely tuned balance of attractions between proteins keeps the lens of the eye transparent, and that even a small change in this balance can cause proteins to aggregate and de-mix. This leads to cataract formation, the world’s leading cause of blindness.
This work could shed light on other protein aggregation diseases (such as Alzheimer’s disease), and may one day lead to methods for stabilizing protein interactions and thus preventing these problematic aggregations from occurring.
The eye lens is made up of densely packed crystallin proteins, arranged in such a way that light in the visible wavelength range can pass through. But for a variety of reasons including UV radiation exposure and age, the proteins sometimes change their behavior and clump together. As a result, light is scattered once it enters the lens, resulting in cloudy vision or blindness. There is currently no known way to reverse the protein aggregation process once it has begun. Nearly 5 million people every year undergo cataract surgery in which their lenses are removed and replaced with artificial ones.
Thursday, November 8, 2007
Planet orbiting star
A new planet was discovered orbiting a sun-like star 41 light years away, making it the first known planetary quintet outside our solar system, astronomers said Tuesday. The newfound planet joins four others circling the nearby star 55 Cancri in the constellation Cancer. Although it resides in the star's so-called habitable zone, a place where liquid water and mild temperatures should exist, it is more like Saturn than Earth and therefore not likely to support life. Still, scientists have not ruled out the possibility of finding an Earth-like planet within the system as technology improves. "It's a system that appears to be packed with planets," said co-discoverer Debra Fischer, an astronomer at San Francisco State University. Ranked fourth from 55 Cancri, the latest planet is about 45 times the mass of Earth and has an orbit of 260 days. It was detected after nearly two decades of observations by ground-based telescopes using the Doppler technique that measures a planet's stellar wobble.
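For a sense of where a 260-day orbit sits, Kepler's third law in solar units (a^3 = P^2 * M, with the semi-major axis a in astronomical units, the period P in years and the stellar mass M in solar masses) puts the planet somewhat inside Earth's distance from the Sun. Taking the stellar mass as roughly one solar mass is an assumption, suggested only by the star being described as sun-like.

    # Kepler's third law in solar units: a**3 = P**2 * M.
    # M ~ 1.0 solar mass is an assumption based on the "sun-like" description above.
    P_years = 260 / 365.25
    M_solar = 1.0
    a_au = (P_years ** 2 * M_solar) ** (1 / 3)
    print(round(a_au, 2))  # roughly 0.8 AU

which is consistent with the article's placement of the planet in the habitable zone.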
The other planets in the 55 Cancri system were discovered between 1996 and 2004. The innermost planet is believed to resemble Neptune, while the most distant is thought to be Jupiter-like. Scientists have detected about 250 exoplanets, or planets orbiting a star other than the sun. The 55 Cancri star holds the record for number of confirmed planets. Only one other star is known to have four planets, while several others have three or fewer. "We can now say there are stars like the sun that have many worlds around them," said planetary scientist Jonathan Lunine of the University of Arizona, Tucson, who had no role in the discovery. The research will appear in a future issue of the Astrophysical Journal. It was funded by NASA, the National Science Foundation and the University of California. The latest discovery shows that our solar system is not unique, scientists said. "When you look up into the night sky and see the twinkling lights of stars, you can imagine with certainty that they have their own complement of planets," said astronomer Geoff Marcy of the University of California, Berkeley, who was part of the research.
Windows Home Server
The just-released Windows Home Server (WHS) from Microsoft Corp. is a surprisingly powerful networking tool that offers some of the sophisticated networking capabilities you would expect from big-boy servers, but aims them at the home "enthusiast" and power user markets.
In fact, some of WHS's capabilities are powerful enough that it's useful for the home office, not just home enthusiasts. In particular, its backup, file-sharing and remote access capabilities are ideal for users who run an office from their home, and possibly even for a small office of a half-dozen or fewer PCs. This should be no surprise: the software is based on the code of Microsoft Windows Server 2003.
The software is aimed squarely at the home, though, and it does a very good job of doing most of the things that home users need, notably sharing media and other files, and no-fuss automated backup. It's designed for a world in which many households have multiple PCs attached via a home router to share broadband access.
WHS will primarily be sold as a plug-and-play, self-contained box that includes all hardware and software. You plug it in, turn it on and after some basic configuration, you're ready to go. Prices vary according to the capabilities of the hardware, size of hard drives and so on.
This review looks at the server software installed on a test machine with a 1.8-GHz Pentium 4 processor, 1GB of RAM, DVD drive, Ethernet connection and a 75GB hard drive. The minimum requirements for WHS are a 1-GHz Pentium 3, 512MB of RAM, a 70GB hard drive, DVD drive and Ethernet connection. On my test machine, the software ran flawlessly with no slowdowns or performance degradation.
Foldable and stackable electric car
A team at the Cambridge, Mass.-based university is working on a design project for the City Car, a foldable, stackable two-seater vehicle. The frame of the car is designed to fold in half so the cars can be stacked up eight deep in one city parking space. Franco Vairani, a Ph.D. candidate at MIT and one of the original designers in the City Car project, said his team is taking a vending-machine approach to city travel. In his vision of the future, people would find a stack of electric-powered City Cars on nearly every block in the city. When users want to drive somewhere in town, they swipe a smart card or cell phone across an electronic reader and take a car out of the stack. When they get to a business meeting across town, a shopping mall or their doctor's office, they simply leave the car in a stack at their destination. Drivers don't own the cars; they simply rent them. It's fully self-service: the next person takes a car out of the stack, and off they go.
Saturday, November 3, 2007
Can electron spin be rotated with an electric field?
An electron microscope photo of a nanostructure similar to that used in the experiment: the light-grey colors show the metal structure (made of gold) used to create an electric trap (white lines) for the electrons. A voltage (V) that changes with time is applied to the rightmost piece of metal. As a result, the electron, which is locked in the right trap, feels an electric field. This electric field causes the electron to move (white dotted line), so that the position of the electron changes with time.
Researchers at the Delft University of Technology’s Kavli Institute of Nanoscience and the Foundation for Fundamental Research on Matter (FOM) have succeeded in controlling the spin of a single electron merely by using electric fields. This clears the way for a much simpler realization of the building blocks of a (future) super-fast quantum computer.
Novel method for making water
In a familiar high-school chemistry demonstration, an instructor first uses electricity to split liquid water into its constituent gases, hydrogen and oxygen. Then, by combining the two gases and igniting them with a spark, the instructor changes the gases back into water with a loud pop.
Scientists at the University of Illinois have discovered a new way to make water, and without the pop. Not only can they make water from unlikely starting materials, such as alcohols, their work could also lead to better catalysts and less expensive fuel cells. “We found that unconventional metal hydrides can be used for a chemical process called oxygen reduction, which is an essential part of the process of making water,” said Zachariah Heiden, a doctoral student and lead author of a paper accepted for publication in the Journal of the American Chemical Society.
A water molecule (formally known as dihydrogen monoxide) is composed of two hydrogen atoms and one oxygen atom. But you can’t simply take two hydrogen atoms and stick them onto an oxygen atom. The actual reaction to make water is a bit more complicated: 2H2 + O2 = 2H2O + Energy. In English, the equation says: to produce two molecules of water (H2O), two molecules of diatomic hydrogen (H2) must be combined with one molecule of diatomic oxygen (O2). Energy will be released in the process.
“This reaction (2H2 + O2 = 2H2O + Energy) has been known for two centuries, but until now no one has made it work in a homogeneous solution,” said Thomas Rauchfuss, a U. of I. professor of chemistry and the paper’s corresponding author.
The well-known reaction also describes what happens inside a hydrogen fuel cell. In a typical fuel cell, the diatomic hydrogen gas enters one side of the cell, and diatomic oxygen gas enters the other side. The hydrogen molecules lose their electrons and become positively charged through a process called oxidation, while the oxygen molecules gain four electrons and become negatively charged through a process called reduction. The negatively charged oxygen ions combine with positively charged hydrogen ions to form water and release electrical energy.
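As a rough illustration of the stoichiometry and energy balance of that textbook reaction (not of the iridium catalysis described here), the short Python sketch below uses the standard heat of formation of liquid water, about -285.8 kJ/mol, to estimate how much oxygen is consumed and how much heat is released for a given amount of hydrogen; the function name and structure are purely illustrative.

# Back-of-the-envelope check of the textbook reaction 2 H2 + O2 -> 2 H2O + energy.
# The enthalpy value is the standard heat of formation of liquid water; everything
# else follows from simple stoichiometry.

H2O_FORMATION_ENTHALPY_KJ_PER_MOL = -285.8  # kJ/mol, standard conditions

def water_from_hydrogen(moles_h2: float) -> tuple[float, float]:
    """Return (moles of O2 consumed, heat released in kJ) for a given amount of H2."""
    moles_o2 = moles_h2 / 2            # 2 H2 : 1 O2
    moles_h2o = moles_h2               # 2 H2 : 2 H2O
    heat_released = -moles_h2o * H2O_FORMATION_ENTHALPY_KJ_PER_MOL
    return moles_o2, heat_released

if __name__ == "__main__":
    o2, heat = water_from_hydrogen(2.0)   # the classic "2 H2 + O2" demonstration
    print(f"2.0 mol H2 needs {o2:.1f} mol O2 and releases ~{heat:.0f} kJ")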
The “difficult side” of the fuel cell is the oxygen reduction reaction, not the hydrogen oxidation reaction, Rauchfuss said. “We found, however, that new catalysts for oxygen reduction could also lead to new chemical means for hydrogen oxidation.” Rauchfuss and Heiden recently investigated a relatively new generation of transfer hydrogenation catalysts for use as unconventional metal hydrides for oxygen reduction. In their JACS paper, the researchers focus exclusively on the oxidative reactivity of iridium-based transfer hydrogenation catalysts in a homogeneous, non-aqueous solution. They found the iridium complex effects both the oxidation of alcohols and the reduction of oxygen. “Most compounds react with either hydrogen or oxygen, but this catalyst reacts with both,” Heiden said. “It reacts with hydrogen to form a hydride, and then reacts with oxygen to make water; and it does this in a homogeneous, non-aqueous solvent.” The new catalysts could lead to the eventual development of more efficient hydrogen fuel cells, substantially lowering their cost, Heiden said.
Phononic Computer
Most computers today use electrons to carry information, while theoretical optical computers use photons. Recently, physicists from Singapore have proposed a third type of computer: a “phononic computer,” which would use heat, carried by phonons, to perform operations similar to its electronic counterpart.
“Heat is very abundant and very often it is regarded as useless and harmful for information processing,” Professor Baowen Li of the National University of Singapore told PhysOrg.com. “The merit of our paper is that we demonstrate that, in addition to the existing electrons and photons, the phonons can also perform a similar function. This provides an alternative way for information processing. Moreover, the heat can be harnessed to use.”
Logic gates, one of the basic elements of computers, perform an operation on one or more logic inputs to produce a single logic output. In electronic logic gates, the inputs and outputs are represented by different voltages. In a thermal logic gate, however, the inputs and outputs are represented by different temperatures. The key element of the logic gate is the thermal transistor (invented by Li’s group last year), which works much as a field-effect transistor does in controlling electric current. The thermal transistor is composed of two terminals that are weakly coupled, plus a third control terminal. “Like all other theoretical modeling, we use a heat bath to produce heat, which is a kind of random atomic or molecular motion,” Li explained. “To conduct heat, you don't need too much external power. Any temperature difference will lead to heat conduction.”
In the researchers’ model, heat is conducted by lattice vibration. When the vibration spectra of the two terminals are combined, their overlap determines the heat current. When the two spectra overlap, heat can easily travel between the terminals, representing the “on” state. When the vibration spectra do not overlap, very little heat (or no heat) passes through, representing the “off” state. The “negative differential thermal resistance” (NDTR) that arises from the match or mismatch of the vibrational spectra of the terminals’ interface particles makes both the “on” and “off” states stable, which is what makes thermal logic operations possible.
“Like we explain in our Physical Review Letters article, all these logic gate functions can be achieved only when the system has the so-called negative or super response, by which we mean that the large temperature difference (change) will induce the small heat current,” Li said. “This is the so-called ‘negative differential thermal resistance.’” The NDTR phenomenon was also discovered by Li’s group in 2006.
The researchers demonstrate how thermal transistors can be combined to build different thermal logic gates, such as a signal repeater. A signal repeater “digitizes” the heat input, so that when the temperature is higher or lower than a critical value, the output is either “on” or “off,” but nothing in between. By connecting a few thermal transistors in series, the researchers achieved a nearly ideal repeater. Besides signal repeaters, they also demonstrated a NOT gate, which reverses the input signal, and an AND/OR gate, made from the same thermal transistor model.
While the current model simply shows the feasibility of thermal logic gates, Wang and Li predict that an experimental realization of the devices in nanoscale systems may not be too far off. They point out that another thermal device, the solid-state thermal rectifier, was experimentally demonstrated in 2006, just a few years after the theoretical model was proposed. “One advantage of a phononic computer might be that we don't need to consume a lot of electricity,” Li said. “We may use the redundant heat produced by electronic devices or provided by Mother Nature to do useful work. Another advantage is that, one day, human beings can control and use heat wisely so that we may save a lot of energy—which is a big issue nowadays.”
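For readers who want the logic-level abstraction spelled out, here is a toy Python sketch of how thermal gates can be thought of digitally: temperatures above or below an assumed critical value T_C are treated as logic "on" and "off", and a repeater, NOT and AND gate simply map input temperatures to clean output levels. This is not the authors' lattice model, and all numerical values are invented for illustration.

# Schematic digital abstraction of thermal logic (not the Wang/Li lattice model).
# Temperatures are in arbitrary units; T_C is the assumed critical value that
# separates the "off" level from the "on" level.

T_C = 0.10                 # hypothetical critical temperature
T_ON, T_OFF = 0.20, 0.03   # hypothetical output temperatures for logic 1 and 0

def to_bit(temperature: float) -> int:
    """A repeater 'digitizes' a temperature into a clean logic level."""
    return 1 if temperature > T_C else 0

def repeater(t_in: float) -> float:
    return T_ON if to_bit(t_in) else T_OFF

def thermal_not(t_in: float) -> float:
    return T_OFF if to_bit(t_in) else T_ON

def thermal_and(t_a: float, t_b: float) -> float:
    return T_ON if (to_bit(t_a) and to_bit(t_b)) else T_OFF

if __name__ == "__main__":
    noisy_input = 0.14                           # slightly above T_C
    print("repeater:", repeater(noisy_input))    # snaps to T_ON
    print("NOT:     ", thermal_not(noisy_input)) # snaps to T_OFF
    print("AND:     ", thermal_and(0.14, 0.02))  # one low input -> T_OFF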
More information: Wang, Lei, and Li, Baowen. “Thermal Logic Gates: Computation with Phonons.” Physical Review Letters 99, 177208 (2007).
Battlefield stethoscope
Noise can be a major problem when treating military casualties. Evacuate a patient by helicopter and the sound of the engine will drown out any attempt to listen to his or her heartbeat. That's why the US Army has funded the development of a stethoscope designed to work in noise levels of 95 decibels and above, comparable to those inside a helicopter, an airplane or on the battlefield itself.
The trick is to design the stethoscope using a material that allows only sound waves of a certain amplitude and frequency to pass. Other sounds, including louder ones, are simply blocked. This process, known as "acoustic impedance matching", ensures that only sounds related to breathing and the heart are transmitted into the medic's ears.
In case this doesn't work, however, the army's new stethoscope also includes an active heartbeat-detecting system, which sends a radio signal into the body and listens for the Doppler-shifted reflection from a beating heart. The device should also help civilian medics who have to work on noisy city streets.
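The Doppler principle behind that active detector is easy to quantify: a wave reflected from a surface moving at speed v comes back shifted in frequency by roughly 2vf/c. The small Python sketch below runs the numbers for an assumed 2.4 GHz probe signal and an assumed 5 cm/s chest-wall velocity; neither figure is a specification of the Army device.

# Rough Doppler arithmetic for a radio wave reflected from a moving surface.
# Carrier frequency and wall velocity are assumed, illustrative values.

C = 3.0e8   # speed of light, m/s

def doppler_shift(carrier_hz: float, wall_velocity_m_s: float) -> float:
    """Frequency shift of a wave reflected from a surface moving at the given speed."""
    return 2.0 * wall_velocity_m_s * carrier_hz / C

if __name__ == "__main__":
    carrier = 2.4e9    # assumed 2.4 GHz probe signal
    velocity = 0.05    # assumed 5 cm/s heart/chest-wall motion
    shift = doppler_shift(carrier, velocity)
    print(f"Doppler shift ~ {shift:.1f} Hz")  # ~0.8 Hz, tiny next to the carrier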
Ceramic TV displays
Conventional plasma displays work by passing a current through a cavity of low-pressure gas to create a plasma that emits light. But reduce the size of the cavity to less than a millimetre and the process works at atmospheric pressure and converts electrical power into light with much higher efficiency.
Gary Eden and Sung-Jin Park work in the department of electrical and computer engineering at the University of Illinois in Urbana-Champaign, where conventional plasma display screens were developed in the 1960s. Their idea for a new generation of displays is to carve an array of microscopic cavities into a ceramic slab in which each cavity can be switched on and off via an array of electrodes. Passing a current through a microcavity generates ultraviolet light, which stimulates a coloured phosphorescent layer on a sheet of glass above to produce a visible glow.
The advantage of this design, say the researchers, is that the display should work at much higher efficiencies than conventional plasma displays, which convert only 1 per cent of electrical power into light. It should also produce displays with much higher resolution, because the pixels can be made only a few tens of micrometres across. The team has even demonstrated the idea with a 25-centimetre-square display with 250,000 pixels.
Efficiency is an important aspect of modern displays, but so is cost. The big question raised by this work is whether a device based on a ceramic substrate can beat other displays on price. And with plenty of other ideas competing in the race to develop the next generation of displays, this one will have to be cheap as well as efficient to rise to the top of the heap.
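Assuming a square pixel grid, which the article does not state, the pixel pitch of the demonstrated panel works out to about half a millimetre; the tens-of-micrometre figure refers to how small the cavities can in principle be made.

# Pixel-pitch arithmetic for the demonstrated 25 cm square, 250,000-pixel panel
# (square-grid assumption is ours, purely for illustration).

side_mm = 250.0
pixel_count = 250_000
pixels_per_side = pixel_count ** 0.5            # 500 for a square grid
pitch_um = side_mm / pixels_per_side * 1000.0   # ~500 um in the demo panel

print(f"~{pixels_per_side:.0f} pixels per side, pitch ~{pitch_um:.0f} um")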
Microsoft mind reading
Not content with running your computer, Microsoft now wants to read your mind too. The company says that it is hard to properly evaluate the way people interact with computers, since questioning them at the time is distracting and asking questions later may not produce reliable answers. "Human beings are often poor reporters of their own actions," the company says.
Instead, Microsoft wants to read the data straight from the user's brain as he or she works, using electroencephalograms (EEGs) to record electrical signals within the brain. The trouble is that EEG data is filled with artefacts caused, for example, by blinking or involuntary actions, and these are hard to tease apart from the cognitive data that Microsoft would like to study.
So the company has come up with a method for filtering EEG data in such a way that it separates useful cognitive information from the not-so-useful non-cognitive stuff. The company hopes that the data will better enable them to design user interfaces that people find easy to use. Whether users will want Microsoft reading their brain waves is another matter altogether.
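Microsoft has not published the details of its filtering method, but a very common first step for this kind of artefact separation is a simple high-pass filter, since blink artefacts live mostly below about 1 Hz. The sketch below, using NumPy and SciPy with made-up signal values, shows that generic approach only, not Microsoft's technique.

# Generic EEG artefact suppression: a zero-phase high-pass filter removes much
# of the slow eye-blink signal while leaving faster cognitive rhythms intact.
import numpy as np
from scipy.signal import butter, filtfilt

def remove_slow_artifacts(eeg: np.ndarray, fs: float, cutoff_hz: float = 1.0) -> np.ndarray:
    """Zero-phase high-pass filter an EEG trace sampled at fs Hz."""
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="highpass")
    return filtfilt(b, a, eeg)

if __name__ == "__main__":
    fs = 256.0
    t = np.arange(0, 5, 1 / fs)
    cognitive = 20e-6 * np.sin(2 * np.pi * 10 * t)      # toy 10 Hz alpha activity
    blink = 150e-6 * np.exp(-((t - 2.5) ** 2) / 0.05)   # toy slow blink artefact
    cleaned = remove_slow_artifacts(cognitive + blink, fs)
    print("peak before:", np.max(np.abs(cognitive + blink)))
    print("peak after :", np.max(np.abs(cleaned)))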
Thursday, November 1, 2007
Polymer-based Piezoelectric Materials
Polymer-based piezoelectric materials are currently the object of great interest in industry because they enable new applications in sectors such as transport and aeronautics, amongst others.
Piezoelectricity, piezo being Greek for "subjected to pressure", is the generation of electrical polarisation in a material in response to mechanical strain. This phenomenon is known as the direct or generator effect and is applied fundamentally in the manufacture of sensors (mobile phone vibrators, lighters, etc.). Piezoelectric materials are also used in actuators, where they undergo the inverse or motor effect, i.e. a mechanical deformation due to the application of an electrical signal.
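A quick way to see the generator effect in numbers: along the poling axis the charge produced is roughly Q = d33 × F. The Python sketch below assumes a ceramic-like coefficient of 300 pC/N purely for illustration; real coefficients vary widely between materials.

# Toy illustration of the direct (generator) piezoelectric effect: Q = d33 * F.
# The coefficient is an assumed, ceramic-like order of magnitude, not a datasheet value.

def piezo_charge(d33_pC_per_N: float, force_N: float) -> float:
    """Charge in coulombs produced by a force applied along the poling axis."""
    return d33_pC_per_N * 1e-12 * force_N

if __name__ == "__main__":
    d33 = 300.0    # pC/N, assumed
    force = 10.0   # newtons, e.g. a firm press on a lighter element
    print(f"Q ~ {piezo_charge(d33, force) * 1e9:.1f} nC")  # ~3 nC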
Over the last four decades, perovskite-type ceramics (such as lead zirconate titanate) have been the main piezoelectric materials used in acoustic applications, amongst other reasons because of their high elastic modulus, their high dielectric constant and their low dielectric and elastic losses.
However, and although they have also been used successfully in many other applications, ceramic piezoelectric materials have some important drawbacks: limited deformation, fragility and a high mass density that limit their use in sectors such as aeronautics or electrical-electronics. These limitations can be overcome in specific applications using polymeric piezoelectric materials instead of ceramic ones.
The only piezoelectric polymer currently on the market is polyvinylidene difluoride (PVDF). This semi-crystalline polymer has very good piezoelectric properties, but only up to about 90 °C. Hence the interest in synthesising new piezoelectric polymers capable of maintaining their properties at higher temperatures.
Researchers have also developed amorphous piezoelectric polymers for use in conditions of extreme temperature where semi-crystalline polymers cannot be used. To this end, and after prior work with different materials, they opted for polyimides, given their excellent thermal, mechanical and dielectric properties.
Various dipolar groups (-CN, -SO2-, -CF3) have been incorporated into the molecule, varying the number and position of these groups in order to tune their physical and, consequently, their piezoelectric properties.
Moreover, it has been shown that the glass transition temperature is fundamental for these polyimides, as it determines the temperature at which the piezoelectric properties are lost. Specifically, these polymers show piezoelectric stability up to temperatures of 150 °C and do not begin to degrade until above 400 °C.
Quantum Cascade Laser Nanoantenna
In a major feat of nanotechnology engineering, researchers from Harvard University have demonstrated a laser with a wide range of potential applications in chemistry, biology and medicine. Called a quantum cascade (QC) laser nanoantenna, the device is capable of resolving the chemical composition of samples, such as the interior of a cell, with unprecedented detail.
The device consists of an optical antenna fabricated on the facet of a quantum cascade laser emitting infrared light at a wavelength of 7 microns. The Harvard team used nanofabrication techniques to form the optical antenna, which consists of two gold rectangles, each 1.2 microns long, separated by a narrow gap (100 nm). Light from the laser illuminates the antenna, producing an intense spot of light in the gap roughly seventy times smaller than the wavelength. This is far smaller than what would be possible with the conventional approach of forming a spot of light by focusing with a lens; due to the wave nature of light, such a spot would have a diameter of more than 7 microns. The accompanying figure is an electron microscope micrograph of the facet of the QC laser with the built-in nanoantenna, along with an atomic force microscope topographic image of the antenna and an optical image obtained with a near-field scanning optical microscope, showing the highly localized light spot in the antenna gap.
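The "seventy times smaller" figure follows directly from the quoted dimensions, as this back-of-the-envelope check shows (values taken from the article; nothing else assumed).

# Quick arithmetic behind the sub-wavelength confinement claim.
wavelength_nm = 7000.0   # 7 micron mid-infrared QC laser emission
gap_nm = 100.0           # antenna gap from the article

confinement_factor = wavelength_nm / gap_nm
print(f"spot is ~{confinement_factor:.0f}x smaller than the wavelength")          # ~70x
print(f"a diffraction-limited spot would be on the order of {wavelength_nm/1000:.0f} um")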
Wireless Sensors
Researchers at Purdue University, working with the U.S. Air Force, have developed tiny wireless sensors resilient enough to survive the harsh conditions inside jet engines, where they detect when critical bearings are close to failing and help prevent breakdowns. The devices are an example of an emerging technology known as "micro electromechanical systems," or MEMS, which are machines that combine electronic and mechanical components on a microscopic scale.
"The MEMS technology is critical because it needs to be small enough that it doesn't interfere with the performance of the bearing itself," said Farshid Sadeghi, a professor of mechanical engineering. "And the other issue is that it needs to be able to withstand extreme heat."
The engine bearings must function amid temperatures of about 300 degrees Celsius, or 572 degrees Fahrenheit.
The researchers have shown that the new sensors can detect impending temperature-induced bearing failure significantly earlier than conventional sensors.
"This kind of advance warning is critical so that you can shut down the engine before it fails," said Dimitrios Peroulis, an assistant professor of electrical and computer engineering.
Findings will be detailed in a research paper to be presented on Tuesday (Oct. 30) during the IEEE Sensors 2007 conference in Atlanta, sponsored by the Institute of Electrical and Electronics Engineers. The paper was written by electrical and computer engineering graduate student Andrew Kovacs, Peroulis and Sadeghi.
The sensors could be in use in a few years in military aircraft such as fighter jets and helicopters. The technology also has potential applications in commercial products, including aircraft and cars.
"Anything that has an engine could benefit through MEMS sensors by keeping track of vital bearings," Peroulis said. "This is going to be the first time that a MEMS component will be made to work in such a harsh environment. It is high temperature, messy, oil is everywhere, and you have high rotational speeds, which subject hardware to extreme stresses."
The work is an extension of Sadeghi's previous research aimed at developing electronic sensors to measure the temperature inside critical bearings in communications satellites.
"This is a major issue for aerospace applications, including bearings in satellite attitude control wheels to keep the satellites in position," Sadeghi said.
The wheels are supported by two bearings. If mission controllers knew the bearings were going bad on a specific unit, they could turn it off and switch to a backup.
"What happens, however, is that you don't get any indication of a bearing's imminent failure, and all of a sudden the gyro stops, causing the satellite to shoot out of orbit," Sadeghi said. "It can take a lot of effort and fuel to try to bring it back to the proper orbit, and many times these efforts fail."
The Purdue researchers received a grant from the U.S. Air Force in 2006 to extend the work for high-temperature applications in jet engines.
"Current sensor technology can withstand temperatures of up to about 210 degrees Celsius, and the military wants to extend that to about 300 degrees Celsius," Sadeghi said. "At the same time, we will need to further miniaturize the size."
The new MEMS sensors provide early detection of impending failure by directly monitoring the temperature of engine bearings, whereas conventional sensors work indirectly by monitoring the temperature of engine oil, yielding less specific data.
The MEMS devices will not require batteries and will transmit temperature data wirelessly.
"This type of system uses a method we call telemetry because the devices transmit signals without wires, and we power the circuitry remotely, eliminating the need for batteries, which do not perform well in high temperatures," Peroulis said.
Power will be provided using a technique called inductive coupling, which uses coils of wire to generate current.
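As a generic illustration of inductive coupling (not the Purdue design), a sinusoidal current in a primary coil induces a peak voltage of M × I_peak × 2πf in the secondary coil through their mutual inductance M. All values in the sketch below are assumed.

# Generic inductive-coupling arithmetic: V = M * dI/dt for a sinusoidal primary current.
import math

def induced_voltage_peak(mutual_inductance_H: float, i_peak_A: float, freq_hz: float) -> float:
    """Peak voltage induced in the secondary for a sinusoidal primary current."""
    return mutual_inductance_H * i_peak_A * 2.0 * math.pi * freq_hz

if __name__ == "__main__":
    M = 1e-6       # 1 uH mutual inductance, assumed
    I = 0.1        # 100 mA peak primary current, assumed
    f = 13.56e6    # a common inductive-coupling frequency, assumed here
    print(f"peak induced voltage ~ {induced_voltage_peak(M, I, f):.2f} V")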
"The major innovation will be the miniaturization and design of the MEMS device, allowing us to install it without disturbing the bearing itself," Peroulis said.
Data from the onboard devices will not only indicate whether a bearing is about to fail but also how long it is likely to last before it fails, Peroulis said.
The research is based at the Birck Nanotechnology Center in Purdue's Discovery Park and at Sadeghi's mechanical engineering laboratory.
"The MEMS technology is critical because it needs to be small enough that it doesn't interfere with the performance of the bearing itself," said Farshid Sadeghi, a professor of mechanical engineering. "And the other issue is that it needs to be able to withstand extreme heat."
The engine bearings must function amid temperatures of about 300 degrees Celsius, or 572 degrees Fahrenheit.
The researchers have shown that the new sensors can detect impending temperature-induced bearing failure significantly earlier than conventional sensors.
"This kind of advance warning is critical so that you can shut down the engine before it fails," said Dimitrios Peroulis, an assistant professor of electrical and computer engineering.
Findings will be detailed in a research paper to be presented on Tuesday (Oct. 30) during the IEEE Sensors 2007 conference in Atlanta, sponsored by the Institute of Electrical and Electronics Engineers. The paper was written by electrical and computer engineering graduate student Andrew Kovacs, Peroulis and Sadeghi.
The sensors could be in use in a few years in military aircraft such as fighter jets and helicopters. The technology also has potential applications in commercial products, including aircraft and cars.
"Anything that has an engine could benefit through MEMS sensors by keeping track of vital bearings," Peroulis said. "This is going to be the first time that a MEMS component will be made to work in such a harsh environment. It is high temperature, messy, oil is everywhere, and you have high rotational speeds, which subject hardware to extreme stresses."
The work is an extension of Sadeghi's previous research aimed at developing electronic sensors to measure the temperature inside critical bearings in communications satellites.
"This is a major issue for aerospace applications, including bearings in satellite attitude control wheels to keep the satellites in position," Sadeghi said.
The wheels are supported by two bearings. If mission controllers knew the bearings were going bad on a specific unit, they could turn it off and switch to a backup.
"What happens, however, is that you don't get any indication of a bearing's imminent failure, and all of a sudden the gyro stops, causing the satellite to shoot out of orbit," Sadeghi said. "It can take a lot of effort and fuel to try to bring it back to the proper orbit, and many times these efforts fail."
The Purdue researchers received a grant from the U.S. Air Force in 2006 to extend the work for high-temperature applications in jet engines.
"Current sensor technology can withstand temperatures of up to about 210 degrees Celsius, and the military wants to extend that to about 300 degrees Celsius," Sadeghi said. "At the same time, we will need to further miniaturize the size."
The new MEMS sensors provide early detection of impending failure by directly monitoring the temperature of engine bearings, whereas conventional sensors work indirectly by monitoring the temperature of engine oil, yielding less specific data.
The MEMS devices will not require batteries and will transmit temperature data wirelessly.
"This type of system uses a method we call telemetry because the devices transmit signals without wires, and we power the circuitry remotely, eliminating the need for batteries, which do not perform well in high temperatures," Peroulis said.
Power will be provided using a technique called inductive coupling, which uses coils of wire to generate current.
"The major innovation will be the miniaturization and design of the MEMS device, allowing us to install it without disturbing the bearing itself," Peroulis said.
Data from the onboard devices will not only indicate whether a bearing is about to fail but also how long it is likely to last before it fails, Peroulis said.
The research is based at the Birck Nanotechnology Center in Purdue's Discovery Park and at Sadeghi's mechanical engineering laboratory.