Heat Waves under the scope



Scientists have fingerprinted a distinctive atmospheric wave pattern high above the Northern Hemisphere that can foreshadow the emergence of summertime heat waves in the United States more than two weeks in advance.



The new research, led by scientists at the National Center for Atmospheric Research (NCAR), could potentially enable forecasts of the likelihood of U.S. heat waves 15-20 days out, giving society more time to prepare for these often-deadly events.
The research team discerned the pattern by analyzing a 12,000-year simulation of the atmosphere over the Northern Hemisphere. During those times when a distinctive "wavenumber-5" pattern emerged, a major summertime heat wave became more likely to subsequently build over the United States.
"It may be useful to monitor the atmosphere, looking for this pattern, if we find that it precedes heat waves in a predictable way," says NCAR scientist Haiyan Teng, the lead author. "This gives us a potential source to predict heat waves beyond the typical range of weather forecasts."
The wavenumber-5 pattern refers to a sequence of alternating high- and low-pressure systems (five of each) that form a ring circling the northern midlatitudes, several miles above the surface. This pattern can lend itself to slow-moving weather features, raising the odds for stagnant conditions often associated with prolonged heat spells.
The study is being published next week in Nature Geoscience. It was funded by the U.S. Department of Energy, NASA, and the National Science Foundation (NSF), which is NCAR's sponsor. NASA scientists helped guide the project and are involved in broader research in this area.
Predicting a lethal event
Heat waves are among the most deadly weather phenomena on Earth. A 2006 heat wave across much of the United States and Canada was blamed for more than 600 deaths in California alone, and a prolonged heat wave in Europe in 2003 may have killed more than 50,000 people.
To see if heat waves can be triggered by certain large-scale atmospheric circulation patterns, the scientists looked at data from relatively modern records dating back to 1948. They focused on summertime events in the United States in which daily temperatures reached the top 2.5 percent of weather readings for that date across roughly 10 percent or more of the contiguous United States. However, since such extremes are rare by definition, the researchers could identify only 17 events that met such criteria -- not enough to tease out a reliable signal amid the noise of other atmospheric behavior.
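In code, that event criterion might look like the following sketch. This is our own reading of the definition, not the authors' analysis code, and it assumes a gridded record of daily maximum temperatures over the contiguous United States:

import numpy as np

def heat_wave_days(tmax, area_frac=0.10, pct=97.5):
    # tmax: array of shape (years, 365, ngrid) of daily maximum
    # temperatures at grid points covering the contiguous U.S.
    # Threshold for each calendar date at each grid point, taken
    # across all years (the climatology for that date).
    thresh = np.percentile(tmax, pct, axis=0)      # (365, ngrid)
    extreme = tmax > thresh[None, :, :]            # (years, 365, ngrid)
    coverage = extreme.mean(axis=2)                # fraction of grid area
    # A day qualifies when the top-2.5% threshold is exceeded over
    # at least ~10% of the grid.
    return coverage >= area_frac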
The group then turned to an idealized simulation of the atmosphere spanning 12,000 years. The simulation had been created a couple of years before with a version of the NCAR-based Community Earth System Model, which is funded by NSF and the Department of Energy.
By analyzing more than 5,900 U.S. heat waves simulated in the computer model, they determined that the heat waves tended to be preceded by a wavenumber-5 pattern. This pattern is not caused by particular oceanic conditions or heating of Earth's surface, but instead arises from naturally varying conditions of the atmosphere. It was associated with an atmospheric phenomenon known as a Rossby wave train that encircles the Northern Hemisphere along the jet stream.
During the 20 days leading up to a heat wave in the model results, the five ridges and five troughs that make up a wavenumber-5 pattern tended to propagate very slowly westward around the globe, moving against the flow of the jet stream itself. Eventually, a high-pressure ridge moved from the North Atlantic into the United States, shutting down rainfall and setting the stage for a heat wave to emerge.
When wavenumber-5 patterns in the model were more amplified, U.S. heat waves became more likely to form 15 days later. In some cases, the probability of a heat wave was more than quadruple what would be expected by chance.
In follow-up work, the research team turned again to actual U.S. heat waves since 1948. They found that some historical heat wave events were indeed characterized by a large-scale circulation pattern indicative of a wavenumber-5 event.
Extending forecasts beyond 10 days
The research finding suggests that scientists are making progress on a key meteorological goal: forecasting the likelihood of extreme events more than 10 days in advance. At present, there is very limited skill in such long-term forecasts.
Previous research on extending weather forecasts has focused on conditions in the tropics. For example, scientists have found that El Niño and La Niña, the periodic warming and cooling of surface waters in the central and eastern tropical Pacific Ocean, are correlated with a higher probability of wet or dry conditions in different regions around the globe. In contrast, the wavenumber-5 pattern does not rely on conditions in the tropics. However, the study does not exclude the possibility that tropical rainfall could act to stimulate or strengthen the pattern.
Now that the new study has connected a planetary wave pattern to a particular type of extreme weather event, Teng and her colleagues will continue searching for other circulation patterns that may presage extreme weather events.
"There may be sources of predictability that we are not yet aware of," she says. "This brings us hope that the likelihood of extreme weather events that are damaging to society can be predicted further in advance."
The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this release are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

VR is on its way



If mere texting, talking, e-mailing and snapping pictures on mobile devices aren't enough to satisfy your data cravings, now there's the prospect of accessing and displaying 3-D virtual reality simulations and animations on them. New information architecture from researchers in Offenburg, Germany, puts 3-D visualizations in the palm of your hand to make this possible.



By devising a novel information and communication architecture with optics technology, researchers created a new approach based on outsourcing to servers all the heavy number crunching required by computer animations and virtual reality simulations. After churning through the computations, the servers provide the results either as a stream (AVI, Motion JPEG) or as vector-based data (VRML, X3D) displayable in 3-D on mobile devices. Dan Curticapean and his colleagues Andreas Christ and Markus Feisst of Offenburg University of Applied Sciences devised the approach.
"Since the processing power of mobile phones, smart phones and personal digital assistants is increasing—along with expansion in transmission bandwidth—it occurred to us that it is possible to harness this power to create 3-D virtual reality," says Curticapean. "So we designed a system to optimize and send the virtual reality data to the mobile phone or other mobile device."
Their approach works like this: Virtual reality data sent by the server to a mobile phone can be visualized on the phone's screen or on external display devices, such as a stereoscopic two-video projector system or a head-mounted stereoscopic display. The displays are connected to the mobile phone by wireless Bluetooth, so the user's mobility is preserved. Stereoscopic views can be generated on the mobile display in a variety of ways, such as a built-in 3-D screen, lenticular lenses, or anaglyph images viewed through glasses with lenses of two different colors to create the illusion of depth.
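To make the division of labor concrete, here is a minimal sketch of the client side of such a server-rendered pipeline. The endpoint, parameter names, and format strings below are illustrative placeholders, not the Offenburg group's actual interface:

import requests

# Hypothetical endpoint: the server does the heavy rendering and returns
# either a video-stream chunk or vector (X3D) data that the handset can
# rasterize itself.
SERVER = "http://render-server.example/frame"

def fetch_view(camera_pose, want_vector=False):
    resp = requests.get(SERVER, params={
        "pose": ",".join("%.3f" % v for v in camera_pose),
        "format": "x3d" if want_vector else "mjpeg",
    })
    resp.raise_for_status()
    return resp.content  # X3D XML or a Motion-JPEG frame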
The upshot of this new approach is improved realistic 3-D presentation, enhanced user ability to visualize and interact with 3-D objects and easier presentation of complex 3-D objects. "Perhaps most important," says Curticapean, "is the prospect of using mobile devices such as cell phones as a user interface to communicate more data with more people as an important component of mobile-Learning (m-Learning), given the ubiquity of mobile devices, particularly in developing countries."
The scientists are presenting their research at the 92nd Annual Meeting of the Optical Society (OSA), being held from Oct. 19-23 in Rochester, N.Y.

Quantum and Viruses?



The weird world of quantum mechanics describes the strange, often contradictory, behaviour of small inanimate objects such as atoms. Researchers have now started looking for ways to detect quantum properties in more complex and larger entities, possibly even living organisms.



A German-Spanish research group, split between the Max Planck Institute for Quantum Optics in Garching and the Institute of Photonic Sciences (ICFO), is using the principles of an iconic quantum mechanics thought experiment -- Schrödinger's superpositioned cat -- to test for quantum properties in objects composed of as many as one billion atoms, possibly including the flu virus.
New research published on March 11 in the New Journal of Physics describes the construction of an experiment to test for superposition states in these larger objects.
Quantum optics is a field well rehearsed in detecting quantum properties of single atoms and some small molecules, but the scale at which these researchers wish to work is unprecedented.
When physicists try to fathom exactly how the tiniest constituents of matter and energy behave, confusing patterns emerge: the particles show an ability to do two things at once (referred to as being in a superposition state) and a 'spooky' connection to their physically distant sub-atomic brethren (referred to as entanglement).
It is the ability of these tiny objects to do two things at once that Oriol Romero-Isart and his co-workers are preparing to probe.
With this new technique, the researchers suggest that viruses are one type of object that could be probed. Albeit speculatively, the researchers hope that their technique might offer a route to experimentally address questions such as the role of life and consciousness in quantum mechanics.
In order to test for superposition states, the experiment involves finely tuning lasers to capture larger objects such as viruses in an 'optical cavity' (a very tiny space), using another laser to slow the object down (putting it into what quantum physicists call a 'ground state'), and then adding a photon (the basic element of light) in a specific quantum state to the laser to provoke the object into a superposition.
The researchers say, "We hope that this system, apart from providing new quantum technology, will allow us to test quantum mechanics at larger scales, by preparing macroscopic superpositions of objects at the nano and micro scale. This could then enable us to use more complex microorganisms, and thus test the quantum superposition principle with living organisms by performing quantum optics experiments with them."

Joystick vs Hands



Up until recently, users needed a mouse and a keyboard, a touch-screen or a joystick to control a computer system. Researchers in Germany have now developed a new kind of gesture command system that makes it possible to use just the fingers of a hand.

Before a new vehicle rolls off the assembly lines, it first takes shape as a virtual model. In a cave -- a room for the virtual representation of objects -- the developers look at it from all sides. They "sit" in it, they examine and improve it. For example, are all the switches easy to reach? The developers have so far used a joystick to interact with the computer which displays the virtual car model.
In the future, they will be able to do so without such an aid -- their hand alone is intended to be enough to provide the computer with the respective signals. A multi-touch interface, which was developed by Georg Hackenberg during his Master's thesis work at the Fraunhofer Institute for Applied Information Technology FIT, made this possible. His work earned him first place in the Hugo Geiger Prizes. "We are using a camera that, instead of providing color information, provides, pixel by pixel, the distance of each point from the camera. Basically this is achieved by means of a type of gray-scale image where the shade of gray represents the distance of the objects. The camera thus provides three-dimensional information that the system evaluates with the help of special algorithms," explains Georg Hackenberg.
Hackenberg's main work consisted of developing the corresponding algorithms. They ensure that the system is first able to recognize a hand and then able to follow its movements. The result: the 3D camera system tracks gestures down to the movements of individual fingers and processes them in real time. Until now, comparable finger-tracking processes could only detect how hands moved in the image plane -- they could not resolve the depth information, in other words, how far the hand is from the camera system. For this reason it was often difficult to tell which object the hand was interacting with. Is it activating the windshield wipers or is it turning on the radio? Small movements of the hand, such as gripping, have so far been nearly impossible to detect in real time -- or only with great amounts of computing power. That is no problem for the new system.
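As a rough illustration of the kind of depth-image processing involved, the sketch below segments the object nearest to the camera from a depth map. It is our own simplified example, not Hackenberg's actual algorithm:

import numpy as np

def nearest_blob_mask(depth_mm, band_mm=150):
    # depth_mm: 2-D array of per-pixel distances from the camera (mm),
    # with 0 marking pixels where the sensor returned no reading.
    valid = depth_mm > 0
    nearest = depth_mm[valid].min()
    # Keep everything within band_mm of the closest surface -- in a
    # setup like the one described, that region is usually the hand.
    return valid & (depth_mm < nearest + band_mm)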
Gesture commands are also interesting for computer games. A gesture recognition prototype already exists. The researchers now want to address weaknesses in the algorithm and carry out initial application studies. Hackenberg hopes that, from a technical viewpoint, the system could be ready for series production within a year. In the medium term, the researchers hope to develop it further so that it can be used in mobile applications as well, which means that it will also find its way into laptops and cell phones.

Gesture Recognition towards humanity



A system that can recognize human gestures could provide a new way for people with physical disabilities to interact with computers. A related system for the able-bodied could also be used to make virtual worlds more realistic.



Manolya Kavakli of the Virtual and Interactive Simulations of Reality Research Group at Macquarie University, Sydney, Australia, explains that standard input devices -- the keyboard and computer mouse -- do not closely mimic natural hand motions such as drawing and sketching. Moreover, these devices were developed neither for ergonomic use nor for people with disabilities.
She and her colleagues have developed a computer system architecture that can carry out "gesture recognition." In this system, the person wears "datagloves" fitted with illuminated LEDs that are tracked by two pairs of computer webcams working together to produce an all-round binocular view. This allows the computer to monitor the person's hand or shoulder movements. This input can then be fed to a program, a game, or a simulator, or used to control a character (an avatar) in a 3D virtual environment.
"We developed two gesture recognition systems: DESigning In virtual Reality (DesIRe) and DRiving for disabled (DRive). DesIRe allows any user to control dynamically in real-time simulators or other programs. DRive allows a quadriplegic person to control a car interface using input from just two LEDs on an over-shoulder garment. For more precise gestures, a DataGlove user can gesture using their fingers.
The system architecture includes the following components: the Vizard Virtual Reality Toolkit, an immersive projection system (VISOR), an optical tracking system (specifically the Precision Position Tracker (PPT) system) and a data input system, Kavakli explains. The DataGlove input is fairly basic at the moment, but future work will increase its sensitivity to specific gestures, such as grasping, strumming, stroking, and other hand movements.
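For a sense of how a pair of cameras can recover the 3-D position of an LED marker, here is the standard stereo-triangulation relation. The function is a generic textbook sketch under the assumption of parallel calibrated cameras, not code from the DesIRe or DRive systems:

def led_depth(x_left_px, x_right_px, focal_px, baseline_m):
    # Depth of a marker seen by two parallel cameras a fixed baseline
    # apart: z = f * B / disparity.
    disparity = x_left_px - x_right_px  # pixels
    if disparity <= 0:
        raise ValueError("marker must be in front of both cameras")
    return focal_px * baseline_m / disparity  # metres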

Physics and Fiber



Physicists at the National Institute of Standards and Technology (NIST) have demonstrated an ion trap with a built-in optical fiber that collects light emitted by single ions (electrically charged atoms), allowing quantum information stored in the ions to be measured. The advance could simplify quantum computer design and serve as a step toward swapping information between matter and light in future quantum networks.


Described in a forthcoming issue of Physical Review Letters, the new device is a 1-millimeter-square ion trap with a built-in optical fiber. The authors use ions as quantum bits (qubits) to store information in experimental quantum computing, which may someday solve certain problems that are intractable today. An ion can be adjustably positioned 80 to 100 micrometers from an optical fiber, which detects the ion's fluorescence signals indicating the qubit's information content.

"The design is helpful because of the tight coupling between the ion and the fiber, and also because it's small, so you can get a lot of fibers on a chip," says first author Aaron VanDevender, a NIST postdoctoral researcher.
NIST scientists demonstrated the new device using magnesium ions. Light emitted by an ion passes through a hole in an electrode and is collected in the fiber below the electrode surface (see image). By contrast, conventional ion traps use large external lenses typically located 5 centimeters away from the ions -- about 500 times farther than the fiber -- to collect the fluorescence light. Optical fibers may handle large numbers of ions more easily than the bulky optical systems, because multiple fibers may eventually be attached to a single ion trap.
The fiber method currently captures less light than the lens system but is adequate for detecting quantum information because ions are extremely bright, producing millions of photons (individual particles of light) per second, VanDevender says.

The authors expect to boost efficiency by shaping the fiber tip and using anti-reflection coating on surfaces. The new trap design is intended as a prototype for eventually pairing single ions with single photons, to make an interface enabling matter qubits to swap information with photon qubits in a quantum computing and communications network. Photons are used as qubits in quantum communications, the most secure method known for ensuring the privacy of a communications channel. In a quantum network, the information encoded in the "spins" of individual ions could be transferred to, for example, electric field orientations of individual photons for transport to other processing regions of the network.


The research was supported by the Defense Advanced Research Projects Agency, National Security Agency, Office of Naval Research, Intelligence Advanced Research Projects Activity, and Sandia National Laboratories.

Not good News about Gaming Violence



Playing a violent video game can increase aggression, and when a player keeps thinking about the game, the potential for aggression can last for as long as 24 hours, according to a study in the current Social Psychological and Personality Science (published by SAGE).
Violent video game playing has long been known to increase aggression. This study, conducted by Brad Bushman of The Ohio State University and Bryan Gibson of Central Michigan University, shows that, at least for men, ruminating about the game can prolong its aggression-inducing effects long after the game has been turned off.
The researchers randomly assigned college students to play one of six different video games for 20 minutes. Half the games were violent (e.g., Mortal Kombat) and half were not (e.g., Guitar Hero). To test if ruminating about the game would extend its effect, half of the players were told: "Over the next 24 hours, think about your play of the game, and try to identify ways your game play could improve when you play again."
Bushman and Gibson had the participants return the next day to test their aggressiveness. Among men who didn't think about the game, the violent video game players tested no more aggressive than men who had played non-violent games. But the violent video game players who thought about the game in the interim were more aggressive than the other groups. The researchers also found that women who played the violent video games and thought about the games did not experience increased aggression 24 hours later.
This study is the first laboratory experiment to show that violent video games can stimulate aggression for an extended period of time. The authors noted that it is "reasonable to assume that our lab results will generalize to the 'real world.' Violent gamers usually play longer than 20 minutes, and probably ruminate about their game play in a habitual manner."

Sport Gamers



From Gran Turismo to WWE Smackdown, sports-based video games represent a wide variety of pursuits. When it comes to the people who actually play those games, however, little is known. How do sports video game players fit their games into a larger sports-related context? How does their video game play inform their media usage and general sports fandom?

That's what Concordia University communications professor Mia Consalvo sought to discover when she embarked on a large-scale study of video game players, the results of which were recently published in Convergence: The International Journal of Research into New Media Technologies.
Along with Abe Stein and Konstantin Mitgutsch from the Massachusetts Institute of Technology (MIT), Consalvo, who also holds a Canada Research Chair in Game Studies and Design, conducted an online survey of 1,718 participants to pin down demographics, habits, attitudes and activities of sports video game players.
They found that the majority of those who play sports video games are male (98.4 per cent), white (80 per cent) and in their mid-20s (average of 26 years). In comparison with other representative video game player demographics, the field is less diverse and the average player is younger. Based on the data about the larger game-playing population, it seems that the sports gamers are drawn from a more traditional demographic of game players, at least when it comes to console and certain personal computer-based video games.
"Perhaps one of the biggest findings to emerge from this study is unsurprising, but finally documented," notes Consalvo. "The overwhelming majority of sports gamers' -- 93.3 per cent -- self-identify as sports fans. That identity pushes beyond the playing of sports-themed video games. Attending sporting events, watching them on television, participating in those activities themselves as well as following certain teams or sports were regular parts of their daily lives."
Consalvo says that she still hopes to discover more insights into why there is little diversity in the player demographics, and why female players are in the minority. Says Consalvo, "while this study provides new insights into who sports video game players are and what they play and why, we still lack knowledge on how these players relate their passion for video games to their sports fandom in general." She hopes to address these questions in her forthcoming book, co-authored with Stein and Mitgutsch, titled Sports Videogames.

Schroedinger



Since Erwin Schroedinger's famous 1935 cat thought experiment, physicists around the globe have tried to create large scale systems to test how the rules of quantum mechanics apply to everyday objects.

Researchers at the University of Calgary recently made a significant step forward in this direction by creating a large system that is in two substantially different states at the same time. Until this point, scientists had only managed to recreate quantum effects on much smaller scales.
Professor Alex Lvovsky and associate professor Christoph Simon from the Physics and Astronomy department, together with their graduate students, revealed their findings in a world-leading physics research journal, Nature Physics.
Understanding Schroedinger's cat
In contrast to our everyday experience, quantum physics allows for particles to be in two states at the same time -- so-called quantum superpositions. A radioactive nucleus, for example, can simultaneously be in a decayed and non-decayed state.
Applying these quantum rules to large objects leads to paradoxical and even bizarre consequences. To emphasize this, Erwin Schroedinger, one of the founding fathers of quantum physics, proposed in 1935 a thought experiment involving a cat that could be killed by a mechanism triggered by the decay of a single atomic nucleus. If the nucleus is in a superposition of decayed and non-decayed states, and if quantum physics applies to large objects, the belief is that the cat will be simultaneously dead and alive.
While quantum systems with properties akin to 'Schroedinger's cat' have been achieved at a micro level, the application of this principle to everyday macro objects has proved to be difficult to demonstrate.
"This is because large quantum objects are extremely fragile and tend to disintegrate when subjected to any interaction with the environment," explains Lvovsky.
Photons help to illuminate the paradox
The breakthrough achieved by Calgary quantum physicists is that they were able to contrive a quantum state of light that consists of a hundred million light quanta (photons) and can even be seen by the naked eye. In their state, the "dead" and "alive" components of the "cat" correspond to quantum states that differ by tens of thousands of photons.
"The laws of quantum mechanics which govern the microscopic world are very different from classical physics that rules over large objects such as live beings," explains lead author Lvovsky. "The challenge is to understand where to draw the line and explore whether such a line exists at all. Those are the questions our experiment sheds light on," he states.
While the findings are promising, study co-author Simon admits that many questions remain unanswered.
"We are still very far from being able to do this with a real cat," he says. "But this result suggests there is ample opportunity for progress in that direction.
"

Graphics Innovations



Research presented in a paper by Morgan McGuire, assistant professor of computer science at Williams College, and co-author Dr. David Luebke of NVIDIA, introduces a new algorithm to improve computer graphics for video games.

McGuire and Luebke have developed a new method for computerizing lighting and light sources that will allow video game graphics to approach film quality.
Their paper "Hardware-Accelerated Global Illumination by Image Space Photon Mapping" won a Best Paper award at the 2009 Conference on High Performance Graphics.
Because video games must compute images more quickly than movies, video game developers have struggled with maximizing graphic quality.
Producing light effects involves essentially pushing light into the 3D world and pulling it back to the pixels of the final image. The method created by McGuire and Luebke reverses the process so that light is pulled onto the world and pushed into the image, which is a faster process.
As video games continue to increase the degree of interactivity, graphics processors are expected to become 500 times faster than they are now. McGuire and Luebke's algorithm is well suited to the quickened processing speed, and is expected to be featured in video games within the next two years.
McGuire is author of "Creating Games: Mechanics, Content, and Technology" and is co-chair of the ACM SIGGRAPH Symposium on Non-Photorealistic Animation and Rendering, and previously chaired the ACM Symposium on Interactive 3D Graphics and Games.
He has worked on and consulted for commercial video games such as "Marvel Ultimate Alliance" (2009), "Titan Quest" (2006), and "ROBLOX" (2005).
McGuire received his B.S. from the Massachusetts Institute of Technology in 2000 and his Ph.D. from Brown University in 2006. At Williams since 2006, he teaches courses on computer graphics and game design.

Cloud Computing



Researchers have succeeded in combining the power of quantum computing with the security of quantum cryptography and have shown that perfectly secure cloud computing can be achieved using the principles of quantum mechanics. They have performed an experimental demonstration of quantum computation in which the input, the data processing, and the output remain unknown to the quantum computer.

The international team of scientists will publish the results of the experiment, carried out at the Vienna Center for Quantum Science and Technology (VCQ) at the University of Vienna and the Institute for Quantum Optics and Quantum Information (IQOQI), in the forthcoming issue of Science.
Quantum computers are expected to play an important role in future information processing since they can outperform classical computers at many tasks. Considering the challenges inherent in building quantum devices, it is conceivable that future quantum computing capabilities will exist only in a few specialized facilities around the world -- much like today's supercomputers. Users would then interact with those specialized facilities in order to outsource their quantum computations. The scenario follows the current trend of cloud computing: central remote servers are used to store and process data -- everything is done in the "cloud." The obvious challenge is to make globalized computing safe and ensure that users' data stays private.
The latest research, to appear in Science, reveals that quantum computers can provide an answer to that challenge. "Quantum physics solves one of the key challenges in distributed computing. It can preserve data privacy when users interact with remote computing centers," says Stefanie Barz, lead author of the study. This newly established fundamental advantage of quantum computers enables the delegation of a quantum computation from a user who does not hold any quantum computational power to a quantum server, while guaranteeing that the user's data remain perfectly private. The quantum server performs calculations, but has no means to find out what it is doing -- a functionality not known to be achievable in the classical world.
The scientists in the Vienna research group have demonstrated the concept of "blind quantum computing" in an experiment: they performed the first known quantum computation during which the user's data stayed perfectly encrypted. The experimental demonstration uses photons, or "light particles," to encode the data. Photonic systems are well-suited to the task because quantum computation operations can be performed on them, and they can be transmitted over long distances.
The process works in the following manner. The user prepares qubits -- the fundamental units of quantum computers -- in a state known only to himself and sends these qubits to the quantum computer. The quantum computer entangles the qubits according to a standard scheme. The actual computation is measurement-based: the processing of quantum information is implemented by simple measurements on qubits. The user tailors measurement instructions to the particular state of each qubit and sends them to the quantum server. Finally, the results of the computation are sent back to the user, who can interpret and utilize them. Even if the quantum computer or an eavesdropper tries to read the qubits, they gain no useful information without knowing the initial state; they are "blind."
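The privacy rests on the fact that the measurement instruction the server sees is uniformly random. The toy sketch below is ours, not the Vienna group's code; it simulates only the angle arithmetic of a BFK-style blind computing protocol, not the quantum physics, and shows that the reported angle delta reveals nothing about the true measurement angle phi:

import random
import collections

PI = 3.141592653589793
ANGLES = [k * PI / 4 for k in range(8)]  # angle set used in BFK-style protocols

def blinded_instruction(phi):
    # Client side: theta is the secret rotation baked into the qubit the
    # client prepared; r randomly flips the reported outcome. The server
    # only ever sees delta.
    theta = random.choice(ANGLES)
    r = random.randint(0, 1)
    delta = (phi + theta + r * PI) % (2 * PI)
    return delta, theta, r

# The server's view (delta) is uniform over the eight angles no matter
# what phi is, so it learns nothing about the computation:
counts = collections.Counter()
for _ in range(80000):
    delta, _, _ = blinded_instruction(phi=PI / 4)
    counts[round(delta / (PI / 4)) % 8] += 1
print(sorted(counts.items()))  # roughly 10,000 in each of the 8 bins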
The research at the Vienna Center for Quantum Science and Technology (VCQ) at the University of Vienna and at the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences was undertaken in collaboration with the scientists who originally invented the protocol, based at the University of Edinburgh, the Institute for Quantum Computing (University of Waterloo), the Centre for Quantum Technologies (National University of Singapore), and University College Dublin.

Batteries



Looking toward improved batteries for charging electric cars and storing energy from renewable but intermittent solar and wind, scientists at Oak Ridge National Laboratory have developed the first high-performance, nanostructured solid electrolyte for more energy-dense lithium ion batteries.


Today's lithium-ion batteries rely on a liquid electrolyte, the material that conducts ions between the negatively charged anode and positive cathode. But liquid electrolytes often entail safety issues because of their flammability, especially as researchers try to pack more energy in a smaller battery volume. Building batteries with a solid electrolyte, as ORNL researchers have demonstrated, could overcome these safety concerns and size constraints.
"To make a safer, lightweight battery, we need the design at the beginning to have safety in mind," said ORNL's Chengdu Liang, who led the newly published study in the Journal of the American Chemical Society. "We started with a conventional material that is highly stable in a battery system -- in particular one that is compatible with a lithium metal anode."
The ability to use pure lithium metal as an anode could ultimately yield batteries five to 10 times more powerful than current versions, which employ carbon-based anodes.
"Cycling highly reactive lithium metal in flammable organic electrolytes causes serious safety concerns," Liang said. "A solid electrolyte enables the lithium metal to cycle well, with highly enhanced safety."
The ORNL team developed its solid electrolyte by manipulating a material called lithium thiophosphate so that it could conduct ions 1,000 times faster than its natural bulk form. The researchers used a chemical process called nanostructuring, which alters the structure of the crystals that make up the material.
"Think about it in terms of a big crystal of quartz vs. very fine beach sand," said coauthor Adam Rondinone. "You can have the same total volume of material, but it's broken up into very small particles that are packed together. It's made of the same atoms in roughly the same proportions, but at the nanoscale the structure is different. And now this solid material conducts lithium ions at a much greater rate than the original large crystal."
The researchers are continuing to test lab scale battery cells, and a patent on the team's invention is pending.
"We use a room-temperature, solution-based reaction that we believe can be easily scaled up," Rondinone said. "It's an energy-efficient way to make large amounts of this material."
The study is published as "Anomalous High Ionic Conductivity of Nanoporous β-Li3PS4," and its ORNL coauthors are Zengcai Liu, Wujun Fu, Andrew Payzant, Xiang Yu, Zili Wu, Nancy Dudney, Jim Kiggans, Kunlun Hong, Adam Rondinone and Chengdu Liang. The work was sponsored by the Division of Materials Sciences and Engineering in DOE's Office of Science.

Geothermal



A new method for capturing significantly more heat from low-temperature geothermal resources holds promise for generating virtually pollution-free electrical energy. Scientists at the Department of Energy's Pacific Northwest National Laboratory will determine if their innovative approach can safely and economically extract and convert heat from vast untapped geothermal resources.

The goal is to enable power generation from low-temperature geothermal resources at an economical cost. In addition to being a clean energy source without any greenhouse gas emissions, geothermal is also a steady and dependable source of power.
"By the end of the calendar year, we plan to have a functioning bench-top prototype generating electricity," predicts PNNL Laboratory Fellow Pete McGrail. "If successful, enhanced geothermal systems like this could become an important energy source." A technical and economic analysis conducted by the Massachusetts Institute of Technology estimates that enhanced geothermal systems could provide 10 percent of the nation's overall electrical generating capacity by 2050.
PNNL's conversion system will take advantage of the rapid expansion and contraction capabilities of a new liquid developed by PNNL researchers called biphasic fluid. When exposed to heat brought to the surface from water circulating in moderately hot, underground rock, the thermal-cycling of the biphasic fluid will power a turbine to generate electricity.
To aid in efficiency, scientists have added nanostructured metal-organic heat carriers, or MOHCs, which boost the power generation capacity to near that of a conventional steam cycle. McGrail cited PNNL's nanotechnology and molecular engineering expertise as an important factor in the development, noting that the advancement was an outgrowth of research already underway at the lab.
"Some novel research on nanomaterials used to capture carbon dioxide from burning fossil fuels actually led us to this discovery," said McGrail. "Scientific breakthroughs can come from some very unintuitive connections."
PNNL is receiving $1.2 million as one of 21 DOE Energy Efficiency and Renewable Energy grants through the Geothermal Technologies Program.
Some of the research was conducted in EMSL, DOE's Environmental Molecular Sciences Laboratory on the PNNL campus.

Solar Power



"Many analysts project a higher cost for solar photovoltaic energy because they don't consider recent technological advancements and price reductions," says Joshua Pearce, Adjunct Professor, Department of Mechanical and Materials Engineering. "Older models for determining solar photovoltaic energy costs are too conservative."
Dr. Pearce believes solar photovoltaic systems are near the "tipping point" where they can produce energy for about the same price other traditional sources of energy.
Analysts look at many variables to determine the cost of solar photovoltaic systems for consumers, including installation and maintenance costs, finance charges, the system's life expectancy, and the amount of electricity it generates.
Dr. Pearce says some studies don't consider the 70 per cent reduction in the cost of solar panels since 2009. Furthermore, he says research now shows the productivity of top-of-the-line solar panels only drops between 0.1 and 0.2 per cent annually, which is much less than the one per cent used in many cost analyses.
Equipment costs are determined based on dollars per watt of electricity produced. One 2010 study estimated this cost at $7.61 per watt, while a 2003 study set the amount at $4.16. According to Dr. Pearce, the real cost in 2011 is under $1 per watt for solar panels purchased in bulk on the global market, though he says system and installation costs vary widely.
Dr. Pearce has created a calculator program available for download online that can be used to determine the true costs of solar energy.
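For a sense of what such a calculation involves, here is a minimal levelized-cost sketch. The structure (discounted lifetime cost divided by discounted lifetime output) is standard; every numeric default below is an illustrative assumption rather than a figure from the Queen's study, apart from the 0.2 per cent degradation rate mentioned above:

def lcoe_per_kwh(cost_per_watt, annual_kwh_per_kw, years=25,
                 degradation=0.002, discount=0.05, o_and_m_frac=0.01):
    # Levelized cost of energy: discounted lifetime cost divided by
    # discounted lifetime energy. cost_per_watt is the full installed
    # system cost, not just the panels.
    capex = cost_per_watt * 1000.0  # $ per kW installed
    cost, energy = capex, 0.0
    for t in range(1, years + 1):
        d = (1 + discount) ** t
        cost += capex * o_and_m_frac / d                     # yearly O&M
        energy += annual_kwh_per_kw * (1 - degradation) ** t / d
    return cost / energy

# A hypothetical $3/W installed system in a 1200 kWh/kW-yr location:
print(lcoe_per_kwh(3.0, 1200))  # about $0.21/kWh under these assumptions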
The Queen's study was co-authored by grad students Kadra Branker and Michael Pathak and published in the December edition of Renewable and Sustainable Energy Reviews.

Energy Storage



Though considered a promising large-scale energy storage device, the vanadium redox battery's use has been limited by its inability to work well in a wide range of temperatures and its high cost. But new research indicates that modifying the battery's electrolyte solution significantly improves its performance. So much so that the upgraded battery could improve the electric grid's reliability and help connect more wind turbines and solar panels to the grid.
In a paper published by the journal Advanced Energy Materials, researchers at the Department of Energy's Pacific Northwest National Laboratory found that adding hydrochloric acid to the sulfuric acid typically used in vanadium batteries increased the batteries' energy storage capacity by 70 percent and expanded the temperature range in which they operate.
"Our small adjustments greatly improve the vanadium redox battery," said lead author and PNNL chemist Liyu Li. "And with just a little more work, the battery could potentially increase the use of wind, solar and other renewable power sources across the electric grid."
Unlike traditional power, which is generated in a reliable, consistent stream of electricity by controlling how much coal is burned or water is sent through dam turbines, renewable power production depends on uncontrollable natural phenomena such as sunshine and wind. Storing electricity can help smooth out the intermittency of renewable power while also improving the reliability of the electric grid that transmits it. Vanadium batteries can hold on to renewable power until people turn on their lights and run their dishwashers. Other benefits of vanadium batteries include high efficiency and the ability to quickly generate power when it's needed as well as sit idle for long periods of time without losing storage capacity.
A vanadium battery is a type of flow battery, meaning it generates power by pumping liquid from external tanks to the battery's central stack, or a chamber where the liquids are mixed. The tanks contain electrolytes, which are liquids that conduct electricity. One tank has the positively-charged vanadium ion V5+ floating in its electrolyte. And the other tank holds an electrolyte full of a different vanadium ion, V2+. When energy is needed, pumps move the ion-saturated electrolyte from both tanks into the stack, where a chemical reaction causes the ions to change their charge, creating electricity.
To charge the battery, electricity is sent to the vanadium battery's stack. This causes another reaction that restores the original charge of the vanadium ions. The electrical energy is converted into chemical energy stored in the vanadium ions. The electrolytes with their respective ions are pumped back into their tanks, where they wait until electricity is needed and the cycle is started again.
A battery's capacity to generate electricity is limited by how many ions it can pack into the electrolyte. Vanadium batteries traditionally use pure sulfuric acid for their electrolyte. But sulfuric acid can only absorb so many vanadium ions.
Another drawback is that sulfuric acid-based vanadium batteries only work between about 50 and 104 degrees Fahrenheit (10 to 40 Celsius). Below that temperature range, the ion-infused sulfuric acid crystallizes. The larger concern, however, is the battery overheating, which causes an unwanted solid to form and renders the battery useless. To regulate the temperature, air conditioners or circulating cooling water are used, causing up to 20 percent energy loss and significantly increasing the battery's operating cost, the researchers noted.
Wanting to improve the battery's performance, Li and his colleagues began searching for a new electrolyte. They tried a pure hydrochloric acid electrolyte, but found it caused one of the vanadium ions to form an unwanted solid. Next, they experimented with various mixtures of both hydrochloric and sulfuric acids. PNNL scientists found the ideal balance when they mixed 6 parts hydrochloric acid with 2.5 parts sulfuric acid. They verified the electrolyte and ion molecules present in the solution with a nuclear magnetic resonance instrument and the Chinook supercomputer at EMSL, DOE's Environmental Molecular Sciences Laboratory at PNNL.
Tests showed that the new electrolyte mixture could hold 70 percent more vanadium ions, making the battery's electricity capacity 70 percent higher. The discovery means that smaller tanks can be used to generate the same amount of power as larger tanks filled with the old electrolyte.
And the new mixture allowed the battery to work in both warmer and colder temperatures, between 23 and 122 degrees Fahrenheit (-5 to 50 Celsius), greatly reducing the need for costly cooling systems. At room temperature, a battery with the new electrolyte mixture maintained an 87 percent energy efficiency rate for 20 days, which is about the same efficiency as the old solution.
The results are promising, but more research is needed, the authors noted. The battery's stack and overall physical structure could be improved to increase power generation and decrease cost.
"Vanadium redox batteries have been around for more than 20 years, but their use has been limited by a relatively narrow temperature range," Li said. "Something as simple as adjusting the batteries' electrolyte means they can be used in more places without having to divert power output to regulate heat."
This research was supported by DOE's Office of Electricity Delivery and Energy Reliability and internal PNNL funding.

Oreos



Oct. 15, 2013 — Connecticut College students and a professor of neuroscience have found "America's favorite cookie" is just as addictive as cocaine -- at least for lab rats. And just like most humans, rats go for the middle first.

In a study designed to shed light on the potential addictiveness of high-fat/ high-sugar foods, Professor Joseph Schroeder and his students found rats formed an equally strong association between the pleasurable effects of eating Oreos and a specific environment as they did between cocaine or morphine and a specific environment. They also found that eating cookies activated more neurons in the brain's "pleasure center" than exposure to drugs of abuse.
Schroeder, an assistant professor of neuroscience at Connecticut College, will present the research next month at the Society for Neuroscience conference in San Diego, Calif.
"Our research supports the theory that high-fat/ high-sugar foods stimulate the brain in the same way that drugs do," Schroeder said. "It may explain why some people can't resist these foods despite the fact that they know they are bad for them."
Schroeder said he and his students specifically chose to feed the rats Oreos because they wanted a food that is palatable to humans and contributes to obesity in the same way cocaine is pleasurable and addictive to humans.
The research was the brainchild of neuroscience major Jamie Honohan, who graduated in May. She worked with Schroeder and several other students last year to measure the association between "drug" and environment.
On one side of a maze, they would give hungry rats Oreos and on the other, they would give them a control -- in this case, rice cakes. ("Just like humans, rats don't seem to get much pleasure out of eating them," Schroeder said.) Then, they would give the rats the option of spending time on either side of the maze and measure how long they would spend on the side where they were typically fed Oreos.
While it may not be scientifically relevant, Honohan said it was surprising to watch the rats eat the famous cookie. "They would break it open and eat the middle first," she said.
They compared the results of the Oreo and rice cake test with results from rats that were given an injection of cocaine or morphine, known addictive substances, on one side of the maze and a shot of saline on the other. Schroeder is licensed by the U.S. Drug Enforcement Administration to purchase and use controlled substances for research.
The research showed the rats conditioned with Oreos spent as much time on the "drug" side of the maze as the rats conditioned with cocaine or morphine.
Schroeder and his students then used immunohistochemistry to measure the expression of a protein called c-Fos, a marker of neuronal activation, in the nucleus accumbens, or the brain's "pleasure center."
"It basically tells us how many cells were turned on in a specific region of the brain in response to the drugs or Oreos," said Schroeder.
They found that the Oreos activated significantly more neurons than cocaine or morphine.
"This correlated well with our behavioral results and lends support to the hypothesis that high-fat/ high-sugar foods are addictive," said Schroeder.
And that is a problem for the general public, says Honohan.
"Even though we associate significant health hazards in taking drugs like cocaine and morphine, high-fat/ high-sugar foods may present even more of a danger because of their accessibility and affordability," she said.

Robotics



Human Robot Getting Closer: iCub Robot Must Learn from Its Experiences

A robot that feels, sees and, in particular, thinks and learns like us. It still seems like science fiction, but if University of Twente (UT) researcher Frank van der Velde has his way, it won't stay that way. In his work he wants to implement the cognitive processes of the human brain in robots. The research should lead to the arrival of the latest version of the iCub robot in Twente. This human robot (humanoid) blurs the boundaries between robot and human.

Decades of scientific research into cognitive psychology and the brain have given us knowledge about language, memory, motor skills and perception. We can now use that knowledge in robots, but Frank van der Velde's research goes even further. "The application of cognition in technical systems should also mean that the robot learns from its experiences and the actions it performs. A simple example: a robot that spills too much when pouring a cup of coffee can then learn how it should be done."
Possible first iCub in the Netherlands
The arrival of the iCub robot at the University of Twente should signify the next step in this research. Van der Velde submitted an application together with fellow UT researchers Stefano Stramigioli, Vanessa Evers, Dirk Heylen and Richard van Wezel, all active in robotics and cognition research. At the moment, twenty European laboratories have an iCub, which was developed in Italy (thanks to a European FP7 grant for the IIT). The Netherlands is still missing from the list. Moreover, a newer version is currently being developed, with, for example, haptic sensors. In February it will be announced whether the robotics club will actually bring the latest iCub to the UT. The robot costs a quarter of a million euros, and NWO (Netherlands Organisation for Scientific Research) will reimburse 75% of the costs. TNO (Netherlands Organisation for Applied Scientific Research) and the universities of Groningen, Nijmegen, Delft and Eindhoven can then also make use of it. Within the UT, the iCub can be deployed in different laboratories thanks to a special transport system.
Robot guide dog
The possibilities are endless, according to Van der Velde. "The new iCub has a skin and fingers that have a much better sense of touch and can feel strength. That makes interaction with humans much more natural. We want to ensure that this robot continues to learn and understands how people function. This research ensures, for example, that robots actually gather knowledge by focusing on certain objects or persons. In areas of application like healthcare and nursing, such robots can play an important role. A good example would be that in ten years' time you see a blind person walking with a robot guide dog."
Nano-neural circuits
A recent line of research that fits this profile is the development of electronic circuits that resemble a web of neurons in the human brain. Contacts have already been made to start this research in Twente. In the iCub robot, this can for example be used for the robot's visual perception. This requires a lot of relatively simple operations that must all be performed in parallel, which takes a lot of time and energy in current systems. With electronic circuits in the form of a web of nerve cells this is much easier.
"These connections are only possible at the nanoscale, that is to say the scale at which the material is only a few atoms thick. In combination with the iCub robot, it can be investigated how the experiences of the robot are recorded in such materials and how the robot is controlled by nano-neural circuitry. The bottleneck of existing technical systems is often energy consumption and size. The limits of Moore's Law, the proposition that the number of transistors in a circuit doubles every two years through technological advances, are being reached. In this area we are therefore also on the verge of many new applications."

MTM Tricks & Tips



Microsoft Test Manager Tips and Tricks


Microsoft Test Manager 2010 (MTM) is a testing utility for managing and running manual and automated test cases for a TFS Team Project. It integrates really well with Visual Studio 2010 and the Visual Studio 2010 web client (http://icstfsweb). However, there are several helpful tips and tricks that can make using MTM a better overall experience. This article contains some of those helpful hints and will continue to grow as new things are discovered.

Installing SP1 and FP2

The first thing that must be done to improve the MTM experience is to install Visual Studio 2010 Service Pack 1. SP1 delivered some significant bug fixes and improvements to MTM, prompted by the many customer complaints after the initial release in April 2010. The Microsoft site has the full list of SP1 improvements to MTM. To install SP1, go to: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=75568aa6-8107-475d-948a-ef22627e57a5&displaylang=en.
The Visual Studio 2010 Feature Pack 2 adds a number of testing features that expand MTM, including capture and playback for Silverlight 4 web applications, using action recordings to fast-forward through manual tests, and support for coded UI tests using Firefox. It is cumulative, so FP2 includes everything in FP1. For more information about Visual Studio 2010 FP2, go to: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=75568aa6-8107-475d-948a-ef22627e57a5&displaylang=en.

Pausing and Resuming a Manual Test

There is a handy little trick that Microsoft did a poor job of making visible, so you may not be aware of it. When running a manual test in Test Runner, there may be a need to return to MTM mid-run (for example, to look at the Test Case details). There is a Pause button, but it will not return you to MTM to view details about the Test Plan or Test Cases.
At the top of the Test Runner GUI, there is an icon that will allow you to pause the current run (even if you are creating an Action Recording). If you hover over the icon, the tooltip says, "Return to the Testing Center" (see Figure 1).
ReturnToTestingCenterIcon.png
"Return to the Testing Center" icon (Figure 1).
When inside the Testing Center, you can view Test Cases, change Test Plan properties, or whatever you wish. When you want to resume the running of the manual test, click on the Return to Test Runner icon in the upper right-hand area of the Testing Center (see Figure 2). When hovering over the icon, the tooltip displays "Return to Test Runner." This icon is only visible when you have paused a manual test run.
ReturnToTestRunner.png
"Return to Test Runner" icon (Figure 2).

Speeding Things Up in Test Runner

By default, a local set of Test Settings is created so you can immediately start running manual tests in Test Runner. However, the default settings may not be the best for the testing you are doing. You can go into the Lab Center and create a new set of Test Settings under the Settings tab, by clicking on the New button (see Figure 3).
CreateNewTestSettings.png
Creating New Test Settings (Figure 3).
This will open up a new form where you can specify your customized Test Settings. On the first page of the New Test Settings Manager, type in a name, a description (optional), and whether these settings will be used for Manual Tests or Automated Tests (see Figure 4). Then click on the Next button.
GeneralTestSettings.png
General Test Settings (Figure 4).
On the next page you can select the Roles that your test machine will use. Typically this would be either Desktop Client or Web Client, depending on the type of application you are testing. The simplest Role would be Local if you are just running manual tests on your laptop. After selecting one (or more) of the Roles, click the Next button.
The next page is where you can select and configure the Data and Diagnostics that you want to have saved off for each running Test Case. These settings can affect your performance (see Figure 5).
DataAndDiagnosticsTestSettings.png
Data and Diagnostics Test Settings (Figure 5).
The biggest slowdown can come from having IntelliTrace turned on. Typically you would want this turned off until you have a failing test that you want to debug. In that scenario you would turn it on so you get more information to determine what the problem is.
The Video Recorder setting is a great option to turn on when you have automated tests running overnight. You can actually see what happened while your test was running (you will need Microsoft Expression Encoder installed). There is a setting in the Video Recorder configuration options to save videos of passing tests. It will save a lot of disk space if you deselect that option -- videos of failing tests will still be saved (see Figure 6).
ConfigureVideoRecorder.png
Configure Video Recorder (Figure 6).
Click the Next button to see the Summary page, and then click Finish. You can set these Test Settings to be used by default on all manual tests by going to the Test Plan Properties page.

Test Controller on a lab Virtual Machine

There is a trick to getting a Test Controller to work properly on a Virtual Machine (VM) in the lab. The problem is that if you are at your desk on your own machine, you will not be able to send tests to that Test Controller: even though the VM is registered and added to the TFS Collection, your machine cannot resolve its name. It is a DNS issue. You can do either of two things to overcome the problem (the second option is recommended).
  1. You can add the IP address of the VM to your local hosts file. To do this you would find out the IP address of the VM, then go to (usually) C:\Windows\System32\drivers\etc, and then add the IP address of the VM to the hosts file (see the example entry below).
  2. The other option is to contact the lab team (Alan Keeler, Rodney Harrison, among others) and ask them to add the VM name to the DNS on ldschurch.org. This will allow any user that has rights to see the Test Controller to send tests to it.
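For option 1, each hosts-file entry pairs the VM's IP address with its machine name on a single line. The address and machine name below are made-up placeholders, not real lab values:

10.17.42.5     LabTestControllerVM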
One last thing to remember is that you must have Test Agents registered with the Test Controller to actually run the tests. The Test Controller only does the coordination of sending out the tests to the Agents and routing the results back to the TFS database.

Video Recording

You can turn on Video Recording, which will record a video of tests that are running, or even of ad hoc manual testing. To make this option available, you must install Microsoft Expression Encoder 4.0, a free download from http://www.microsoft.com/download/en/details.aspx?id=24601. The files are stored in the "Expression Encoder Screen Capture" format and saved with a ".XESC" extension. To turn this feature on, see Figure 5 above in the Speeding Things Up in Test Runner section.
Some people may want to replay the recording but don't want to install Expression Encoder. There is a way to get the minimal pieces needed to view the recording: install the codec at http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=10732 and then install the video data adapter.
One other option is to open the video recording .XESC file in Microsoft Expression Encoder and save it in WMV or MP4 format. Test Runner automatically attaches the video recording file, in the default XESC format, to a bug created from a failed Test Case. It is possible to open the file, convert it, attach the WMV or MP4 file to the bug, and then delete the XESC file. Then anyone can replay the videos without installing anything.

Adding Special Suites

By default, the only Work Item Type (WIT) that can be added as a suite (also known as a requirement) to a Test Plan in MTM 2010 is a "User Story." If you try to add any other type of item to the Test Plan, it will fail and possibly crash. The way to add more WITs as acceptable suites or requirements is to use the witadmin utility, which is located in Program Files\Microsoft Visual Studio 10.0\Common7\IDE.
To see the options available to pass on the command line to witadmin.exe, type:
witadmin /?
First, export the existing Categories settings document which specifies the Work Item Types that can be used throughout TFS, Visual Studio, and MTM. To export the XML file, type:
witadmin exportcategories /collection:http://icstfs.ldschurch.org/tfs/UpgradeFromICSTFS2008 /p:<Your Project Name> /f:"C:\Users\<Your User Name>\Documents\TFScategories.xml"
Once that is downloaded to your machine, you can modify it and then import it back to the TFS Server. Open the TFScategories.xml file and find the category to be changed. In the case of adding more acceptable WITs to become suites in MTM, look for the following section:
<CATEGORY refname="Microsoft.RequirementCategory" name="Requirement Category">
  <DEFAULTWORKITEMTYPE name="User Story" />
</CATEGORY>
Make the desired modifications by adding lines that specify the other WITs to be added. The changes should look something like the following:
<CATEGORY refname="Microsoft.RequirementCategory" name="Requirement Category">
  <DEFAULTWORKITEMTYPE name="User Story" />
  <WORKITEMTYPE name="Requirement" />
  <WORKITEMTYPE name="Task" />
</CATEGORY>
Once the XML file contains the desired changes, it needs to be saved and imported to TFS. Type the following to import it:
witadmin importcategories /collection:http://icstfs.ldschurch.org/tfs/UpgradeFromICSTFS2008 /p:<Your Project Name> /f:"C:\Users\<Your User Name>\Documents\TFScategories.xml"
It will be necessary to close and restart Microsoft Test Manager if it was open while making the changes. After restarting MTM, it should be possible to add Work Items of the newly added types to the Test Plan as suites (or requirements).
