Heat Waves under the scope



Scientists have fingerprinted a distinctive atmospheric wave pattern high above the Northern Hemisphere that can foreshadow the emergence of summertime heat waves in the United States more than two weeks in advance.



The new research, led by scientists at the National Center for Atmospheric Research (NCAR), could potentially enable forecasts of the likelihood of U.S. heat waves 15-20 days out, giving society more time to prepare for these often-deadly events.
The research team discerned the pattern by analyzing a 12,000-year simulation of the atmosphere over the Northern Hemisphere. During those times when a distinctive "wavenumber-5" pattern emerged, a major summertime heat wave became more likely to subsequently build over the United States.
"It may be useful to monitor the atmosphere, looking for this pattern, if we find that it precedes heat waves in a predictable way," says NCAR scientist Haiyan Teng, the lead author. "This gives us a potential source to predict heat waves beyond the typical range of weather forecasts."
The wavenumber-5 pattern refers to a sequence of alternating high- and low-pressure systems (five of each) that form a ring circling the northern midlatitudes, several miles above the surface. This pattern can lend itself to slow-moving weather features, raising the odds for stagnant conditions often associated with prolonged heat spells.
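The zonal wavenumber of such a pattern can be quantified by Fourier analysis of a field sampled around a latitude circle. The following is a minimal illustrative sketch, not the study's actual method; the field, units, and function names are invented for illustration.

```python
import numpy as np

# Hypothetical illustration: measure the zonal wavenumber-5 component of a
# field (e.g., upper-level geopotential height anomalies) sampled evenly
# around a single midlatitude circle.
def wavenumber_amplitude(values, k=5):
    """Amplitude of zonal wavenumber k in a field sampled evenly in longitude."""
    values = np.asarray(values, dtype=float)
    n = values.size
    spectrum = np.fft.rfft(values - values.mean())
    # Scale so the result is the amplitude of the k-th harmonic wave.
    return 2.0 * np.abs(spectrum[k]) / n

# Synthetic check: a pure wavenumber-5 anomaly of amplitude 80 (arbitrary units),
# i.e., five ridges and five troughs around the circle.
lons = np.linspace(0, 2 * np.pi, 360, endpoint=False)
field = 80.0 * np.cos(5 * lons)
amp = wavenumber_amplitude(field, k=5)
```

A large value of this amplitude, relative to climatology, would indicate an amplified wavenumber-5 pattern of the kind the study links to later heat waves.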
The study is being published next week in Nature Geoscience. It was funded by the U.S. Department of Energy, NASA, and the National Science Foundation (NSF), which is NCAR's sponsor. NASA scientists helped guide the project and are involved in broader research in this area.
Predicting a lethal event
Heat waves are among the most deadly weather phenomena on Earth. A 2006 heat wave across much of the United States and Canada was blamed for more than 600 deaths in California alone, and a prolonged heat wave in Europe in 2003 may have killed more than 50,000 people.
To see if heat waves can be triggered by certain large-scale atmospheric circulation patterns, the scientists looked at data from relatively modern records dating back to 1948. They focused on summertime events in the United States in which daily temperatures reached the top 2.5 percent of weather readings for that date across roughly 10 percent or more of the contiguous United States. However, since such extremes are rare by definition, the researchers could identify only 17 events that met such criteria -- not enough to tease out a reliable signal amid the noise of other atmospheric behavior.
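The event criterion described above can be sketched in a few lines. This is a hedged illustration of the stated definition, not the study's code; the grid shape, climatology, and thresholds are invented.

```python
import numpy as np

# Sketch of the heat-wave criterion described in the text: a day qualifies when
# daily temperatures reach the top 2.5 percent of readings for that calendar
# date over at least 10 percent of the grid cells covering the region.
def is_heatwave_day(temps, p975, area_frac=0.10):
    """temps: temperature at each grid cell for one day;
    p975: 97.5th-percentile climatology for that date, per cell."""
    exceed = temps >= p975
    return exceed.mean() >= area_frac

rng = np.random.default_rng(0)
clim = rng.normal(30.0, 2.0, size=(50, 100))   # per-cell daily climatology, deg C
p975 = clim + 4.0                              # stand-in extreme threshold per cell
hot_day = clim + 5.0                           # every cell exceeds its threshold
mild_day = clim + 1.0                          # no cell exceeds its threshold
```

Applied to only 17 observed events, such a test yields too few positives for robust statistics, which is why the team turned to a long model simulation.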
The group then turned to an idealized simulation of the atmosphere spanning 12,000 years. The simulation had been created a couple of years before with a version of the NCAR-based Community Earth System Model, which is funded by NSF and the Department of Energy.
By analyzing more than 5,900 U.S. heat waves simulated in the computer model, they determined that the heat waves tended to be preceded by a wavenumber-5 pattern. This pattern is not caused by particular oceanic conditions or heating of Earth's surface, but instead arises from naturally varying conditions of the atmosphere. It was associated with an atmospheric phenomenon known as a Rossby wave train that encircles the Northern Hemisphere along the jet stream.
During the 20 days leading up to a heat wave in the model results, the five ridges and five troughs that make up a wavenumber-5 pattern tended to propagate very slowly westward around the globe, moving against the flow of the jet stream itself. Eventually, a high-pressure ridge moved from the North Atlantic into the United States, shutting down rainfall and setting the stage for a heat wave to emerge.
When wavenumber-5 patterns in the model were more amplified, U.S. heat waves became more likely to form 15 days later. In some cases, the probability of a heat wave was more than quadruple what would be expected by chance.
In follow-up work, the research team returned to the actual U.S. heat waves recorded since 1948. They found that some historical heat waves were indeed preceded by a large-scale circulation pattern resembling a wavenumber-5 event.
Extending forecasts beyond 10 days
The research finding suggests that scientists are making progress on a key meteorological goal: forecasting the likelihood of extreme events more than 10 days in advance. At present, there is very limited skill in such long-term forecasts.
Previous research on extending weather forecasts has focused on conditions in the tropics. For example, scientists have found that El Niño and La Niña, the periodic warming and cooling of surface waters in the central and eastern tropical Pacific Ocean, are correlated with a higher probability of wet or dry conditions in different regions around the globe. In contrast, the wavenumber-5 pattern does not rely on conditions in the tropics. However, the study does not exclude the possibility that tropical rainfall could act to stimulate or strengthen the pattern.
Now that the new study has connected a planetary wave pattern to a particular type of extreme weather event, Teng and her colleagues will continue searching for other circulation patterns that may presage extreme weather events.
"There may be sources of predictability that we are not yet aware of," she says. "This brings us hope that the likelihood of extreme weather events that are damaging to society can be predicted further in advance."
The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship of the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this release are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

VR is on its way



If mere texting, talking, e-mailing and snapping pictures on mobile devices aren’t enough to satisfy your data cravings, now there’s the prospect of accessing and displaying 3-D virtual reality simulations and animations on them. New information architecture from researchers in Offenburg, Germany, puts 3-D visualizations in the palm of your hand to make this possible.



By devising a novel information and communication architecture with optics technology, the researchers created a new approach based on outsourcing to servers all the heavy number crunching required by computer animations and virtual reality simulations. After churning through it, the servers provide the information either as a stream (AVI, Motion JPEG) or as vector-based data (VRML, X3D) displayable in 3-D on mobile devices. Dan Curticapean and his colleagues Andreas Christ and Markus Feisst of Offenburg University of Applied Sciences devised the approach.
"Since the processing power of mobile phones, smart phones and personal digital assistants is increasing—along with expansion in transmission bandwidth—it occurred to us that it is possible to harness this power to create 3-D virtual reality," says Curticapean. "So we designed a system to optimize and send the virtual reality data to the mobile phone or other mobile device."
Their approach works like this: Virtual reality data sent by the server to a mobile phone can be visualized on the phone’s screen or on external display devices, such as a stereoscopic two-video-projector system or a head-mounted stereoscopic display. The displays are connected to the mobile phone by wireless Bluetooth, so the user’s mobility is preserved. To generate stereoscopic views on the mobile display screens, a variety of means can be used: a built-in 3-D screen, lenticular lenses, or anaglyph images viewed through special glasses whose lenses have two different colors to create the illusion of depth.
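The anaglyph technique mentioned above can be sketched very simply: fuse a left/right stereo pair into one image that red/cyan glasses separate back into two views. This is a generic illustration of the technique, not the researchers' implementation; the images here are synthetic.

```python
import numpy as np

# Classic red/cyan anaglyph: the red channel carries the left eye's view,
# while green and blue carry the right eye's view. Viewed through glasses
# with one red and one cyan lens, each eye sees its own image, creating depth.
def make_anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: (H, W, 3) uint8 images of the same scene."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]    # red   <- left view
    out[..., 1] = right_rgb[..., 1]   # green <- right view
    out[..., 2] = right_rgb[..., 2]   # blue  <- right view
    return out

# Tiny synthetic stereo pair: a bright left view and a dark right view.
left = np.full((4, 4, 3), 200, dtype=np.uint8)
right = np.full((4, 4, 3), 50, dtype=np.uint8)
ana = make_anaglyph(left, right)
```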
The upshot of this new approach is improved realistic 3-D presentation, enhanced user ability to visualize and interact with 3-D objects and easier presentation of complex 3-D objects. "Perhaps most important," says Curticapean, "is the prospect of using mobile devices such as cell phones as a user interface to communicate more data with more people as an important component of mobile-Learning (m-Learning), given the ubiquity of mobile devices, particularly in developing countries."
The scientists are presenting their research at the 92nd Annual Meeting of the Optical Society (OSA), being held Oct. 19-23 in Rochester, N.Y.

Quantum and Viruses?



The weird world of quantum mechanics describes the strange, often contradictory, behaviour of small inanimate objects such as atoms. Researchers have now started looking for ways to detect quantum properties in more complex and larger entities, possibly even living organisms.



A German-Spanish research group, split between the Max Planck Institute for Quantum Optics in Garching and the Institute of Photonic Sciences (ICFO), is using the principles of an iconic quantum mechanics thought experiment -- Schrödinger's cat, suspended in a superposition of states -- to test for quantum properties in objects composed of as many as one billion atoms, possibly including the flu virus.
New research published on March 11 in the New Journal of Physics describes the construction of an experiment to test for superposition states in these larger objects.
Quantum optics is a field well rehearsed in detecting quantum properties in single atoms and some small molecules, but the scale these researchers wish to work at is unprecedented.
When physicists try to fathom exactly how the tiniest constituents of matter and energy behave, confusing patterns emerge: the ability of these constituents to do two things at once (referred to as being in a superposition state), and their 'spooky' connection to physically distant sub-atomic brethren (referred to as entanglement).
It is the ability of these tiny objects to do two things at once that Oriol Romero-Isart and his co-workers are preparing to probe.
With this new technique, the researchers suggest that viruses are one type of object that could be probed. Albeit speculatively, the researchers hope that their technique might offer a route to experimentally address questions such as the role of life and consciousness in quantum mechanics.
In order to test for superposition states, the experiment involves finely tuned lasers that capture larger objects such as viruses in an 'optical cavity' (a very tiny space), another laser that slows the object down (putting it into what quantum physicists call a 'ground state'), and the addition of a photon (the basic element of light) in a specific quantum state to the laser to provoke the object into a superposition.
The researchers say, "We hope that this system, apart from providing new quantum technology, will allow us to test quantum mechanics at larger scales, by preparing macroscopic superpositions of objects at the nano and micro scale. This could then enable us to use more complex microorganisms, and thus test the quantum superposition principle with living organisms by performing quantum optics experiments with them."

Joystick vs Hands



Up until recently, users needed a mouse and a keyboard, a touch-screen or a joystick to control a computer system. Researchers in Germany have now developed a new kind of gesture command system that makes it possible to use just the fingers of a hand.

Before a new vehicle rolls off the assembly lines, it first takes shape as a virtual model. In a cave -- a room for the virtual representation of objects -- the developers look at it from all sides. They "sit" in it, they examine and improve it. For example, are all the switches easy to reach? The developers have so far used a joystick to interact with the computer which displays the virtual car model.
In the future, they will be able to do so without such an aid -- their hand alone is intended to be enough to provide the computer with the respective signals. A multi-touch interface, which was developed by Georg Hackenberg during his Master's thesis work at the Fraunhofer Institute for Applied Information Technology FIT, made this possible. His work earned him first place in the Hugo Geiger Prizes. "We are using a camera that, instead of providing color information, provides, pixel by pixel, the distance of each point from the camera. Basically this is a type of gray-scale image in which the shade of gray represents the distance of the objects. The camera thus provides three-dimensional information that the system evaluates with the help of special algorithms," explains Georg Hackenberg.
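One useful property of such a depth image is that the hand is usually the object nearest the camera, so a simple distance threshold can separate it from the background. The following is a simplified sketch of that idea only, with invented depth values; the actual Fraunhofer algorithms are far more sophisticated.

```python
import numpy as np

# In a depth image, each pixel's gray value encodes distance from the camera.
# Segment the nearest object (typically the hand) by keeping all pixels
# within `margin` distance units of the closest point in the scene.
def segment_nearest(depth, margin=100):
    """Return a boolean mask of pixels within `margin` units of the nearest point."""
    nearest = depth.min()
    return depth <= nearest + margin

depth = np.full((6, 6), 2000)   # background roughly 2 m from the camera (mm)
depth[2:4, 2:4] = 600           # a hand-like blob roughly 0.6 m away
mask = segment_nearest(depth)   # True only over the 2x2 "hand" region
```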
Hackenberg's main work consisted of developing the corresponding algorithms. They ensure that the system first recognizes a hand and then follows its movements. The result: the 3D camera system resolves gestures down to the movements of individual fingers and processes them in real time. Until now, comparable finger-tracking processes could only detect how hands moved in the image plane -- they could not resolve depth information, in other words, how far the hand is from the camera system. For this reason it was often difficult to determine which object the hand was interacting with. Is it activating the windshield wipers or is it turning on the radio? Small movements of the hand, such as gripping, have so far been hardly detectable in real time -- or only with great amounts of computing power. That is no problem for the new system.
Gesture commands are also interesting for computer games. A gesture recognition prototype already exists. The researchers want to improve weaknesses in the algorithm now and carry out initial application studies. Hackenberg hopes that the system could be ready for series production within a year, from a technical viewpoint. In the medium term, the researchers hope to further develop it such that it can be used in mobile applications as well, which means that it will also find its way into laptops and cell phones.

Gesture Recognition towards humanity



A system that can recognize human gestures could provide a new way for people with physical disabilities to interact with computers. A related system for the able-bodied could also be used to make virtual worlds more realistic.



Manolya Kavakli of the Virtual and Interactive Simulations of Reality Research Group at Macquarie University, Sydney, Australia, explains that standard input devices -- the keyboard and the computer mouse -- do not closely mimic natural hand motions such as drawing and sketching. Moreover, these devices have not been developed for ergonomic use or for people with disabilities.
She and her colleagues have developed a computer system architecture that can carry out "gesture recognition." In this system, the person wears "datagloves" fitted with illuminated LEDs that are tracked by two pairs of computer webcams, which together produce an all-round binocular view. This allows the computer to monitor the person's hand or shoulder movements. This input can then be fed to a program, such as a game or a simulator, or used to control a character -- an avatar -- in a 3D virtual environment.
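The article does not detail how the webcam pairs recover an LED's position, but binocular tracking of this kind typically relies on standard pinhole-stereo triangulation, where depth follows from the pixel offset (disparity) between the two views. The sketch below is an assumption-laden illustration of that textbook relation, not the Precision Position Tracker's actual implementation; all numbers are invented.

```python
# Pinhole stereo triangulation for a tracked point (e.g., an LED):
#   depth = focal_length_in_pixels * camera_baseline / disparity
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth (meters) of a point seen at pixel column x_left in the left
    camera and x_right in the right camera of a rectified stereo pair."""
    disparity = x_left - x_right   # pixel offset between the two views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# An LED at pixel column 340 in the left image and 300 in the right,
# with an 800-pixel focal length and cameras 10 cm apart:
z = stereo_depth(340, 300, focal_px=800, baseline_m=0.10)
```

With these numbers the disparity is 40 pixels, putting the LED 2 meters from the cameras; nearer objects produce larger disparities.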
"We developed two gesture recognition systems: DESigning In virtual Reality (DesIRe) and DRiving for disabled (DRive). DesIRe allows any user to control dynamically in real-time simulators or other programs. DRive allows a quadriplegic person to control a car interface using input from just two LEDs on an over-shoulder garment. For more precise gestures, a DataGlove user can gesture using their fingers.
The system architecture includes the following components: the Vizard Virtual Reality Toolkit, an immersive projection system (VISOR), an optical tracking system (specifically the Precision Position Tracker (PPT) system) and a data input system, Kavakli explains. The DataGlove input is quite simple at the moment, but future work will increase its sensitivity to specific gestures, such as grasping, strumming, stroking, and other hand movements.

Physics and Fiber



Physicists at the National Institute of Standards and Technology (NIST) have demonstrated an ion trap with a built-in optical fiber that collects light emitted by single ions (electrically charged atoms), allowing quantum information stored in the ions to be measured. The advance could simplify quantum computer design and serve as a step toward swapping information between matter and light in future quantum networks.


Described in a forthcoming issue of Physical Review Letters, the new device is a 1-millimeter-square ion trap with a built-in optical fiber. The authors use ions as quantum bits (qubits) to store information in experimental quantum computing, which may someday solve certain problems that are intractable today. An ion can be adjustably positioned 80 to 100 micrometers from an optical fiber, which detects the ion's fluorescence signals indicating the qubit's information content.

"The design is helpful because of the tight coupling between the ion and the fiber, and also because it's small, so you can get a lot of fibers on a chip," says first author Aaron VanDevender, a NIST postdoctoral researcher.
NIST scientists demonstrated the new device using magnesium ions. Light emitted by an ion passes through a hole in an electrode and is collected in the fiber below the electrode surface (see image). By contrast, conventional ion traps use large external lenses typically located 5 centimeters away from the ions -- about 500 times farther than the fiber -- to collect the fluorescence light. Optical fibers may handle large numbers of ions more easily than the bulky optical systems, because multiple fibers may eventually be attached to a single ion trap.
The fiber method currently captures less light than the lens system but is adequate for detecting quantum information because ions are extremely bright, producing millions of photons (individual particles of light) per second, VanDevender says.

The authors expect to boost efficiency by shaping the fiber tip and using anti-reflection coating on surfaces. The new trap design is intended as a prototype for eventually pairing single ions with single photons, to make an interface enabling matter qubits to swap information with photon qubits in a quantum computing and communications network. Photons are used as qubits in quantum communications, the most secure method known for ensuring the privacy of a communications channel. In a quantum network, the information encoded in the "spins" of individual ions could be transferred to, for example, electric field orientations of individual photons for transport to other processing regions of the network.


The research was supported by the Defense Advanced Research Projects Agency, National Security Agency, Office of Naval Research, Intelligence Advanced Research Projects Activity, and Sandia National Laboratories.