Myths About E-Cigarettes

Advocates tout e-cigarettes as a clean alternative to old-fashioned tobacco, one that can even help people quit smoking. But although the companies making these largely unregulated products promote e-cigarettes as safe and pure, the reality is more complicated. Here are four common misconceptions about e-cigarettes, and the scientific evidence against them.

Myth 1: Vapor from e-cigs is pure.

The liquid “vaped” in an e-cigarette contains nicotine, water and a solvent (usually glycerine or propylene glycol). It may also contain flavoring agents, such as oil of wintergreen. Although this mixture may sound pure enough, neither the liquid (called the e-liquid) nor the device’s delivery system are regulated; this means e-cigarettes could produce harmful chemicals.

In fact, recent studies have identified impurities ranging from formaldehyde to heavy metals in e-cig vapor. And vaporized propylene glycol is a known eye and respiratory irritant.

One recent study found formaldehyde, acetaldehyde and acetone in the vapor of several different e-cigarette models and liquid nicotine products. “We found nicotine, of course, but we also found some potentially dangerous compounds,” said study researcher Maciej Goniewicz, an assistant professor of oncology at Roswell Park Cancer Institute in Buffalo, New York.

What’s more, users can amp up the voltage of an e-cig delivery device, resulting in a denser, more nicotine-rich vapor. Goniewicz and his team found that at a higher voltage and hotter temperature, levels of harmful chemicals increased, too.

The vapor had a lower chemical content than tobacco smoke, but there was “huge variability” among the products tested, Goniewicz told Live Science. “It doesn’t mean that each product will expose users to high levels of formaldehyde, but there is a risk for sure,” he said.

Myth 2: E-cigs are safe.

In addition to potential toxicity from chemical byproducts, which could harm users over the long term, e-cigs carry another safety risk. Liquid nicotine is extremely toxic when swallowed, and in some case reports, infants and children have accidentally ingested the substance.

The chances of this happening may increase with flavored liquid nicotine, which may come in enticing-looking packages and can smell tempting, according to new research.

“It mistakenly has this reputation for being safe because it’s purchased over the counter, but it easily can be fatal if it’s taken in high doses,” said Dr. Robert A. Bassett, a medical toxicologist and emergency medicine physician at Einstein Medical Center in Philadelphia. Bassett and his colleagues reported a case of liquid nicotine poisoning in a 10-month-old infant in the May 7 issue of JAMA.

The boy recovered within a few hours, but nicotine poisoning could easily be fatal, Bassett said. A teaspoon of standard liquid nicotine would be enough to kill a person who weighs 200 pounds (90 kilograms), Bassett and his colleagues noted in their report.
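As a rough sanity check on the teaspoon figure, the arithmetic can be sketched in a few lines. Note that the e-liquid concentration (18 mg/mL) and the lethal-dose range (0.5 – 1 mg/kg) used here are illustrative assumptions, not figures from the report:

```python
# Back-of-the-envelope check of the "one teaspoon" claim.
# Assumptions (not from the article): e-liquid is commonly sold at
# around 18 mg/mL nicotine, and older toxicology references put the
# lethal dose at roughly 0.5-1 mg per kg of body weight.

TSP_ML = 5.0                     # one teaspoon, in millilitres
CONC_MG_PER_ML = 18.0            # assumed "standard" e-liquid strength
LETHAL_MG_PER_KG = (0.5, 1.0)    # assumed lethal-dose range
BODY_KG = 90.0                   # the 200-lb (90 kg) person in the report

dose_mg = TSP_ML * CONC_MG_PER_ML          # nicotine in one teaspoon
low, high = (r * BODY_KG for r in LETHAL_MG_PER_KG)

print(f"one teaspoon holds about {dose_mg:.0f} mg of nicotine")
print(f"assumed lethal range for a 90 kg adult: {low:.0f}-{high:.0f} mg")
```

With those assumed numbers, a single teaspoon (about 90 mg) sits at the top of the lethal range for a 90 kg adult, which is at least consistent with the report's warning.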

Myth 3: E-cigs can help you quit smoking.

The few studies looking at whether using e-cigs helps people kick the habit have had mixed results. Some studies have found that people who tried e-cigs wound up smoking fewer regular cigarettes, but they were no more likely to give up smoking entirely.

Overall, the authors of a recent scientific review conclude, “studies that reflect real-world e-cigarette use found that e-cigarette use is not associated with successful quitting … Taken together, the studies suggest that e-cigarettes are not associated with successful quitting in general population-based samples of smokers.”

And there is even some evidence that e-cigs may get non-smokers hooked on nicotine. Studies have found as many as one-third of young e-cigarette users have never tried conventional cigarettes.

Myth 4: E-cigs don’t produce harmful second-hand smoke.

A main selling point of e-cigs is that they can be used anywhere, because they don’t produce toxic smoke that puts others at risk. But breathing in second-hand vapor, also known as “passive vaping,” may not be harmless. The level of toxic chemicals in second-hand vapor is lower than in second-hand smoke, but experts say e-cig vapor contains a similar amount of tiny particles of heavy metals and other substances that can damage the lungs.

The Food and Drug Administration has proposed a rule that would permit the agency to regulate e-cigarettes and similar products. If the proposal becomes final, the agency said, it will be able to use regulatory tools, such as age restrictions and rigorous scientific review of new tobacco products and claims to reduce tobacco-related disease and death.

 

Source:  livescience.com

Baking Soda Kills Cancer

Even the most aggressive cancers that have metastasized have been reversed with baking soda cancer treatments. Although chemotherapy is toxic to all cells, it is the only measure that oncologists employ with almost all cancer patients.

In fact, 9 out of 10 cancer patients agree to chemotherapy first without investigating other less invasive options.

Doctors and pharmaceutical companies make money from it. That’s the only reason chemotherapy is still used: not because it is effective at decreasing morbidity or mortality, or at diminishing any specific cancer rates. In fact, it does the opposite. Chemotherapy boosts cancer growth and long-term mortality rates, and oncologists know it.

A few years ago, University of Arizona Cancer Center member Dr. Mark Pagel received a $2 million grant from the National Institutes of Health to study the effectiveness of personalized baking soda cancer treatment for breast cancer.

Obviously, there are people in the know who have understood that sodium bicarbonate, that same stuff that can save a person’s life in the emergency room in a heartbeat, is a primary cancer treatment option of the safest and most effective kind.

Studies have shown that dietary measures to boost bicarbonate levels can increase the pH of acidic tumors without upsetting the pH of the blood and healthy tissues. Animal models of human breast cancer show that oral sodium bicarbonate does indeed make tumors more alkaline and inhibit metastasis.

Based on these studies, plus the fact that baking soda is safe and well tolerated, world-renowned doctors such as Dr. Julian Whitaker have adopted cancer treatment protocols that include baking soda as part of an overall nutritional and immune support program for patients who are dealing with the disease.

The Whitaker protocol uses 12 g (2 rounded teaspoons) of baking soda mixed in 2 cups water, along with a low-cal sweetener of your choice. (It’s quite salty tasting.)

Sip this mixture over the course of an hour or two and repeat for a total of three times a day. One man even claims to have successfully treated his own cancer using baking soda and molasses.

When taken orally with water, especially water with high magnesium content, and when used transdermally in medicinal baths, sodium bicarbonate becomes a first-line medicinal for the treatment of cancer, and also kidney disease, diabetes, influenza and even the common cold.

It is also a powerful buffer against radiation exposure, so everyone should be up to speed on its use. Everybody’s physiology is under heavy nuclear attack from strong radioactive winds that are circling the northern hemisphere.

Dr. Robert J. Gillies and his colleagues have already demonstrated that pre-treatment of mice with baking soda results in the alkalinization of the area around tumors. The same researchers reported that bicarbonate increases tumor pH and also inhibits spontaneous metastases in mice with breast cancer.

The Baking Soda Formula for Cancer

To make the baking soda natural cancer remedy at home, you need baking soda along with one of the following:

  • maple syrup,
  • molasses or
  • honey.

In Dr. Sircus’ book, he documented how one patient used baking soda and blackstrap molasses to fight the prostate cancer that had metastasized to his bones. On the first day, the patient mixed 1 teaspoon of baking soda with 1 teaspoon of molasses in a cup of water.

He took this for another 3 days after which his saliva pH read 7.0 and his urine pH read 7.5.

Encouraged by these results, the patient took the solution 2 times on day 5 instead of once daily. And from day 6 – 10, he took 2 teaspoons each of baking soda and molasses twice daily.

By the 10th day, the patient’s pH had risen to 8.5 and the only side effects experienced were headaches and night sweats (similar to cesium therapy).

The next day, the patient had a bone scan and other medical tests. His results showed that his PSA (prostate-specific antigen, the protein used to determine the severity of prostate enlargement and prostate cancer) level was down from 22.3 at the point of diagnosis to 0.1.

Another baking soda formula recommends mixing 90 teaspoons of maple syrup with 30 teaspoons of baking soda.

To do this, the maple syrup must be heated to become less viscous. Then the baking soda is added and stirred for 5 minutes until it is fully dissolved.

This preparation should provide about a 10-day supply of the baking soda remedy. The recommended dose for cancer patients is 5 – 7 teaspoons per day.

Care should be taken when using the baking soda remedy to treat cancer. This is because sustaining a high pH level can itself cause metabolic alkalosis and electrolyte imbalance. These can result in edema and also affect the heart and blood pressure.

One does not have to be a doctor to practice pH medicine. Every practitioner of the healing arts and every mother and father needs to understand how to use sodium bicarbonate.

Bicarbonate deficiency is a real problem that deepens with age so it really does pay to understand and appreciate what baking soda is all about.

Do you have baking soda in your house?

 

Source:   humansarefree.com

Cancer Kill Switch

What if you could just flick a switch and turn off cancer? It seems like something you would see in a sci-fi flick, but scientists are working towards a future where that could be a reality. At the Mayo Clinic in Jacksonville, Florida, a group of researchers has made a discovery that could be a kill switch for cancer. They have found a way to reprogram mutating cancer cells back to normal, healthy cells.

Panos Anastasiadis, PhD, head of the Department of Cancer Biology at the Mayo Clinic, and his team were studying the role of adhesion proteins in cells. Anastasiadis’ primary focus was on the p120 catenin protein and the long-held hypothesis that it is a major player in tumor suppression. The team found that p120, along with another adhesion protein, E-cadherin, actually promoted cancer growth. “That led us to believe that these molecules have two faces — a good one, maintaining the normal behavior of the cells, and a bad one that drives tumorigenesis,” Anastasiadis said.

In that research, however, Anastasiadis made a remarkable discovery, “an unexpected new biology that provides the code, the software for turning off cancer.” That would be a partner to the p120 protein, dubbed PLEKHA7. When introduced to tumors, PLEKHA7 was able to “turn off” the cancerous cells’ ability to replicate and return them to a benign state. It stopped the cancer in its tracks.

How it all works is pretty straightforward. Normal, healthy cells are regulated by a sort of biological microprocessor known as microRNAs, which tell the cells to stop replicating when they have reproduced enough. Cancer is caused by a cell’s inability to stop replicating itself, and eventually grows into a cluster of cells that we know as a tumor. Anastasiadis’ team found that PLEKHA7 was an important factor in halting the replication of cells, but that it wasn’t present in the cancerous cells. By reintroducing PLEKHA7, what were once raging cancerous cells returned to normal.

This was done by injecting PLEKHA7 directly into the cells in a controlled lab test. Anastasiadis said they still need to work on “better delivery options,” as these tests were done on human cells in a lab. They did find success, however, in stopping the growth of two very aggressive forms of cancer: breast and bladder. While this isn’t being tested on humans yet, it represents a huge step forward in understanding the nature of cancer and how we can cure it.

 

Source:  Geek.com

Scientists grow 5-week-old human brain

Growing brain tissue in a dish has been done before, but bold new research announced this week shows that scientists’ ability to create human brains in laboratory settings has come a long way quickly.

Researchers at the Ohio State University in the US claim to have developed the most complete laboratory-grown human brain ever, creating a model with the brain maturity of a 5-week-old foetus. The brain, which is approximately the size of a pencil eraser, contains 99 percent of the genes that would be present in a natural human foetal brain.

“It not only looks like the developing brain, its diverse cell types express nearly all genes like a brain,” Rene Anand, professor of biological chemistry and pharmacology at Ohio State and lead researcher on the brain model, said in a statement.

“We’ve struggled for a long time trying to solve complex brain disease problems that cause tremendous pain and suffering. The power of this brain model bodes very well for human health because it gives us better and more relevant options to test and develop therapeutics other than rodents.”

Anand turned to stem cell engineering four years ago after his specialized field of research – examining the relationship between nicotinic receptors and central nervous system disorders – ran into complications using rodent specimens. Despite having limited funds, Anand and his colleagues succeeded with their proprietary technique, which they are in the process of commercializing.

The brain they have developed is a virtually complete recreation of a human foetal brain, primarily missing only a vascular system – in other words, all the blood vessels. But everything else (spinal cord, major brain regions, multiple cell types, signalling circuitry) is there. What’s more, it’s functioning, with high-resolution imaging of the brain model showing active neurons and brain cells.

The researchers say that it takes 15 weeks to grow a lab-developed brain to the equivalent of a 5-week-old foetal human brain, and the longer the maturation process the more complete the organoid will become.

“If we let it go to 16 or 20 weeks, that might complete it, filling in that 1 percent of missing genes. We don’t know yet,” said Anand.

The scientific benefit of growing human brains in laboratory settings is that it enables high-end research into human diseases that cannot be completed using rodents.

“In central nervous system diseases, this will enable studies of either underlying genetic susceptibility or purely environmental influences, or a combination,” said Anand. “Genomic science infers there are up to 600 genes that give rise to autism, but we are stuck there. Mathematical correlations and statistical methods are insufficient in themselves to identify causation. You need an experimental system – you need a human brain.”

The research was presented this week at the Military Health System Research Symposium.

 

Source:  sciencealert.com

Only 1% of 84,000 Chemicals Have Been Tested for Safety

There are around 84,000 chemicals on the market, and we come into contact with many of them every single day. And if that isn’t enough to cause concern, the shocking fact is that only about 1 percent of them have been studied for safety.

In 2010, at a hearing of the Senate Subcommittee on Superfund, Toxics and Environmental Health, Lisa Jackson, then the administrator of the EPA, put our current, hyper-toxic era into sharp perspective: “A child born in America today will grow up exposed to more chemicals than any other generation in our history.”

Just consider your morning routine: If you’re an average male, you use up to nine personal care products every single day: shampoo, toothpaste, soap, deodorant, hair conditioner, lip balm, sunscreen, body lotion and shaving products — amounting to about 85 different chemicals. Many of the ingredients in these products are harmless, but some are carcinogens, neurotoxins and endocrine disruptors.

Women are particularly at risk because they generally use more personal care products than men: 25 percent of women apply 15 or more products daily, including makeup and anti-aging creams, amounting to an average of 168 chemicals. For a pregnant woman, the risk is multiplied as she can pass on those toxins to her unborn child: 300 contaminants have been detected in the umbilical cord blood of newborns.
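Taking the article's own figures at face value, the implied chemicals-per-product averages for the two groups are easy to check:

```python
# Implied average number of chemicals per personal care product,
# using only the figures quoted in the article above.

mens_products, mens_chemicals = 9, 85
womens_products, womens_chemicals = 15, 168

men_avg = mens_chemicals / mens_products
women_avg = womens_chemicals / womens_products

print(f"men:   about {men_avg:.1f} chemicals per product")
print(f"women: about {women_avg:.1f} chemicals per product")
```

Both groups land at roughly 9 – 11 chemicals per product, so the two sets of figures are at least mutually consistent.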

Many people don’t think twice about the chemicals they put on their bodies, perhaps thinking that the government regulates the personal care products that flood the marketplace. In reality, the government plays a very small role, in part because it doesn’t have the legal mandate to protect the public from harmful substances that chemical companies and manufacturers sell in their products. Federal rules designed to ensure product safety haven’t been updated in more than 75 years. New untested chemicals appear on store shelves all the time.

“Under federal law, cosmetics companies don’t have to disclose chemicals or gain approval for the 2,000 products that go on the market every year,” notes environment writer Jane Kay in Scientific American. “And removing a cosmetic from sale takes a battle in federal court.”

It’s high time these rules were revisited. Not only have thousands of new chemicals entered the market in the past several decades, but there is overwhelming evidence that the public is unnecessarily exposed to health hazards from consumer products. In 2013, the American College of Obstetricians and Gynecologists issued a report that found “robust” evidence linking “toxic environmental agents” — which include consumer products — to “adverse reproductive and developmental health outcomes.”

Formaldehyde is a good example. It is a known carcinogen used as a preservative to kill or inhibit the growth of microorganisms in a wide range of personal care products, from cosmetics, soaps, shampoos and lotions to deodorants, nail polishes and hair gels. It is also used in pressed-wood products, permanent-press fabrics, paper product coatings and insulation, and as a fungicide, germicide, disinfectant and preservative. The general public is also exposed to formaldehyde through automobile tailpipe emissions. Formaldehyde has been linked to spontaneous abortion and low birth weight.

While the main concern about formaldehyde exposure centers around industrial use (e.g., industrial workers, embalmers and salon workers), the Cosmetic Ingredient Review, an independent panel of experts that determines the safety of individual chemical compounds as they are used in cosmetics, recommends that for health and safety reasons cosmetics should not contain formaldehyde at amounts greater than 0.2 percent. It’s a small amount, but the problem is that the FDA doesn’t regulate the use of formaldehyde in cosmetics (except for nail polish), and companies aren’t required by law to follow CIR’s recommendations.

 

Source:  alternet.org

NSA planned to infect Samsung with spyware

 

If you’re in the business of writing spyware or malware, smartphones are a tempting target. For many people, a phone or tablet is now the primary computing device they use to surf the web, access content, and explore new software. Google has had problems keeping the Google Play store free from malware and spyware, but new information suggests that both Google and Samsung almost faced a much more potent opponent — the NSA itself.

A report from The Intercept highlights how the NSA explored options for hacking the App Store and Google Play over several workshops held in Australia and Canada between November 2011 and February 2012. The projects used the Internet-monitoring XKeyscore system to identify smartphone traffic, then trace that traffic back to app stores. This led to a project dubbed Irritant Horn, the point of which was to develop the ability to distribute “implants” that could be installed when the smartphones in question attempted to connect to Google or Samsung app stores.

The NSA has targeted mobile devices ever since the post-Patriot Act era made such warrantless comprehensive spying legal, but it’s never been clear how the organization managed to tap certain hardware in the first place. The goal was twofold: First, use app stores to launch spyware campaigns and second, gather information about the phone users themselves by infiltrating the app stores in question.

The reference to “another Arab Spring” refers to the fact that the events of 2010-2011 apparently caught Western intelligence agencies off-guard, with few resources that could quickly be brought to bear. The NSA wanted to be aware of future events before they happened. Note, however, that this has precious little to do with the direct goal of protecting the United States from terrorism.

Few would argue that the US should not monitor the activities of known threats, but where was the threat from internal strife and the possible toppling of autocratic governments? It’s true that in the longer run, some new governments might pursue policies that the United States found less desirable than those of the previous regime, but there’s an enormous leap between “We don’t like Country X’s new trade policy,” and “Country X is actively assisting terrorist groups to carry out an attack on the United States.”

The NSA was primarily interested in the activities of African countries. But in the course of investigating these possibilities, it discovered significant security flaws in a program called UC Browser, used by nearly half a billion people in East Asia. Instead of disclosing the security vulnerability, the NSA and other foreign intelligence groups chose to exploit it — thereby increasing the chances that other criminal elements would have time to find and exploit it as well.

These issues are at the heart of the debate over what the NSA’s role should be in the future. There’s always been tension over whether the NSA should weaken or strengthen the cryptographic standards that allow for secure communication. That discussion may be even more nuanced when it involves software produced by foreign companies. There are few signs, however, that such nuanced discussions of capability have ever occurred. Instead, we continue to see intelligence resources deployed with the goal of vacuuming up all information from any source, regardless of legal precedent or cooperation.

The future of the Patriot Act and the scope of the NSA’s future powers remain in some doubt. Senator Rand Paul gave a 10-hour speech yesterday aimed at derailing support for the Patriot Act (his actions were not properly a filibuster, because a vote on the renewal of Section 215 wasn’t actually before the chamber at the time). Others in the House of Representatives have called for a full repeal of the Patriot Act’s provisions, and the Federal Appeals Court for the Second Circuit recently ruled that the current spying program is illegal under the Patriot Act as it stands.

 

Source:  extremetech.com

 

Reprogrammed bacteria able to detect cancer

The fight against cancer has risen to a fever pitch in the last decade, with new research avenues increasing almost by the day. If we are to believe Ray Kurzweil and the singularity folk, the specter of cancer may soon be a thing of the past. Lending credence to such optimism, new research by a team at MIT and UC San Diego employs genetically engineered bacteria to detect cancer, and perhaps someday treat it as well. Enlisting the help of bacteria in the battle against cancer may prove key in turning the tables on this awful menace.

The basis for this new form of cancer diagnosis is the unusual relationship between cancer and bacteria. Whereas healthy human tissue will aggressively fight off most bacterial infestations, the immune system within tumors has been compromised by the many mutations taking place there, and so bacteria accumulate in them at a higher-than-normal rate. The researchers exploited this characteristic to devise a means of detecting tumors long before other methods could catch them.

By removing a snippet of DNA programming found in fireflies and transferring it to a harmless form of E. coli bacteria, the researchers were able to cause these bacteria to fluoresce at the critical concentrations that occur within tumors. The analogy would be creating a flashlight that automatically turns on when it finds a tumor. The ability to detect tumors as small as one cubic millimeter makes this one of the most sensitive diagnostic tools to date. In treating cancer, early detection is pivotal, since the sooner a tumor is detected, the easier it is to contain and eliminate.

But before letting out a collective sigh of relief, it should be kept in mind that this method has only been successfully applied to liver cancers. Early on in the study, the researchers realized the orally ingested bacteria would not reach sufficient concentrations throughout the whole body to successfully detect all tumors therein. For instance, the blood-brain barrier prevents the bacteria from entering the human brain, as would be necessary for this method to detect brain tumors. The liver, however, proved an exception, in that the E. coli bacteria in question naturally occur there and would multiply rapidly in the presence of a tumor.

Despite its limitations, this is nonetheless a significant development. Many tumors that begin in the colon quickly spread to the liver, where they prove difficult to detect and go on to infect other parts of the body. Therefore, catching liver cancer early can play a key role in preventing cancers in many other parts of the body.

The scientists involved in the study, including Tal Danino and Arthur Prindle, are now hopeful that the same bacteria can be programmed to fight cancer as well. The goal is to engineer the bacteria to cause genetic disruption of cancer cell function, deliver drugs, or signal the immune system to destroy the cancer itself. In the future, the cup of yogurt you have in the morning may not only improve digestive health, but simultaneously track down and eliminate cancers growing within the body.

 

Source:  extremetech.com

 

100,000 German Beekeepers Call for GMO Ban

German beekeepers have called for a nationwide ban on cultivating GM plants, reports the German NGO keine-gentechnik.de.

The call by the German Beekeepers Association (DIB), which represents almost 100,000 beekeepers, comes after Europe adopted controversial legislation enabling member states to opt-out of the cultivation of GMOs that have been approved at the EU level.

Under the law, a member state can ban a GMO in part or all of its territory. But the law has come under heavy criticism for failing to provide a solid basis for such bans.

The beekeepers are urging Agriculture Minister Christian Schmidt (CSU) to implement a Germany-wide ban on cultivation. The minister, however, argues for letting each state decide individually.

The beekeepers counter that a piecemeal approach will not work. Bees fly up to eight kilometres in search of food, the DIB said, so a juxtaposition of GM crop cultivation zones and GMO-free zones within Germany would be “environmentally and agriculturally unacceptable”.

“Bees know no borders,” the DIB added.

The beekeepers’ demand for a nationwide ban could bring them into direct conflict with the new opt-out law, as experts warn that such bans may not be legally solid.

National GMO cultivation bans will be tough to uphold

At a conference on the new European legislation hosted by the Hungarian Ministry of Agriculture in Budapest, Hungary, in April 2015, Dr H.-Christoph von Heydebrand of the German Federal Ministry of Food and Agriculture warned that a nationwide ban on GMO cultivation would be much harder to justify under the new law than a regional or local ban.

A lawyer from the EU Council, Matthew Moore, speaking at the same conference in a personal capacity, agreed that it would be far easier under the law to defend national measures that “do not extend to the whole territory”.

Mr Moore gave an example of the type of challenge that would-be opting-out countries will be faced with. If they argue that GMOs threaten small-scale and agroecological farmers in their nation, they could be asked: “Is the entirety of your agricultural sector really composed of small farmers whose domination by a large agro-industrial company and its single pesticide motivated you to act?”

Mr Moore explained that the principle of proportionality is written into the new law, as well as being a general principle of EU law.

This means that the ECJ will be more inclined to accept GMO cultivation opt-outs “in relation to a defined region than in relation to the entirety of the territory of a country the size of Hungary”. Any measure taken by an opting-out country to ban or restrict the cultivation of GMOs must not go beyond what is necessary to achieve the stated aim.

Mr Moore made clear that if opt-outs were challenged, for example, by the GMO industry, the case would end up in the European Court of Justice. And the ECJ has a presumption in favour of the EU single market.

In simple terms, that means the ECJ could take a lot of convincing to allow a country or even a region to opt out of cultivating a GMO that the European Food Safety Authority (EFSA) asserts is safe. Such an opt-out, if allowed to stand, could create divisions in the European single market and might bring the member state into conflict with the ECJ.

The current situation in Germany, with beekeepers ranged against government officials and pro-GMO farmers, also suggests that the new opt-out law will create internal divisions within a country.

The GMO industry may go down in history as having broken apart the European Union and set one sector of the food and agriculture industry against another.

 

Source:  globalresearch.ca

Harvest electricity from evaporating water

Scientists in the US have shown that evaporating water could be an abundant new source of clean, renewable energy, and it’s already powerful enough to light up a small LED and power a miniature car.

With around 70 percent of the planet covered in water that’s constantly evaporating into the atmosphere, the new technology has huge potential to help us power our homes, transportation and industries, without producing greenhouse gas emissions.

“Evaporation is a fundamental force of nature,” lead researcher Ozgur Sahin from Columbia University said in a press release. “It’s everywhere, and it’s more powerful than other forces like wind and waves.” If the technology can be scaled up, his team believes we could one day place giant floating power generators on top of lakes, dams and rivers.

Although scientists have long known that evaporation was a constant and powerful force, they’ve struggled to find a way to use this energy to generate electricity. But last year, Sahin made a seemingly unrelated discovery – that when common soil bacteria spores shrink and swell as a result of changing humidity, they can push and pull other objects with surprising force.

With this in mind, Sahin and his team stuck Bacillus subtilis spores onto thin strips of tape, similar to cassette tape. This made the tape contract when the air surrounding it was dry, and expand when it was humid. “Several of these strips together can contract with enough force to lift small weights of 0.2 lbs to 0.7 lbs [0.09 to 0.3 kg] – 50 times the weight of the strips themselves,” writes Kiona Smith-Strickland for Discover magazine.
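As a rough sanity check (my own back-of-envelope arithmetic, not a figure from the paper), the reported lifting numbers imply the spore-coated strips themselves weigh only a few grams:

```python
# Back-of-envelope check of the article's figures (approximate values).
lift_kg = (0.09, 0.3)   # weights the strips can lift, in kg
ratio = 50              # "50 times the weight of the strips themselves"

# Implied mass of the strips doing the lifting.
strip_mass_kg = tuple(w / ratio for w in lift_kg)
print(strip_mass_kg)    # roughly 0.0018 to 0.006 kg, i.e. about 2 to 6 grams
```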

The researchers then used these strips – which they’re calling hygroscopy-driven artificial muscles, or HYDRAs – to build a shuttered structure that floats on water. The humidity produced by the evaporating water causes the tape to expand, opening up the shutters and allowing the device to dry out. As the tape shrinks again in the dry air, the shutters are pulled shut, which allows the humidity to build up again, repeating the cycle.

“When we placed water beneath the device, it suddenly came to life, moving on its own,” said Xi Chen, a postdoctoral fellow in Sahin’s lab.

The team used this opening and shutting as a rudimentary piston and linked it to a generator, producing enough electricity to cause a small LED light to flash on and off.

“We turned evaporation from a pool of water into light,” said Sahin. Chen speculates that an improved version, with stickier plastic tape and more spores, could potentially generate even more power per unit area than a wind farm.

They also created a ‘Moisture Mill’ – a plastic wheel covered in the spore-covered tape, which is half covered in a humid environment, and half exposed to a dry environment. This change in moisture causes the tabs to curve and straighten, producing enough force to turn the wheel continuously.

Using the system, the team was able to make a small toy car weighing 0.1 kg roll forwards on its own. Sahin suggests that a larger version of this mill could produce as much electricity as a wind turbine.

The research has been published in Nature Communications, and the team is now focussing on scaling up the devices and investigating other ways that the technology can help generate electricity. “Evaporation-driven engines may find applications in powering robotic systems, sensors, devices and machinery that function in the natural environment,” they write.

In the meantime they’ve put a call-out for other researchers to take their idea and run with it. Fingers crossed that further experiments confirm that the technology is as powerful as it seems, because this idea could be an exciting contender in the renewable energy space.

 

Source:  sciencealert.com

Half Of All Jobs Will Be Automated By 2034

Almost half of all jobs could be automated by computers within two decades and “no government is prepared” for the tsunami of social change that will follow, according to The Economist.

The magazine’s 2014 analysis of the impact of technology paints a pretty bleak picture of the future.

It says that while innovation (aka “the elixir of progress”) has always resulted in job losses, usually economies have eventually been able to develop new roles for those workers to compensate, such as in the industrial revolution of the 19th century, or the food production revolution of the 20th century.

But the pace of change this time around appears to be unprecedented, its leader column claims. And the result is a huge amount of uncertainty for both developed and under-developed economies about where the next ‘lost generation’ is going to find work.

It quotes a 2013 Oxford Martin School study that estimates 47% of all jobs could be automated in the next 20 years:

“Our findings thus imply that as technology races ahead, low-skill workers will reallocate to tasks that are non-susceptible to computerisation – i.e., tasks requiring creative and social intelligence. For workers to win the race, however, they will have to acquire creative and social skills,” that study says.

The Economist also points out that current unemployment levels are startlingly high, but that “this wave of technological disruption to the job market has only just started”.

Specifically, The Economist points to new tech like driverless cars, improved household gadgets, faster and more efficient online communications and ‘big data’ analysis as areas in which humans are quickly being superseded. And while new start-ups are raising billions, they employ few people – Instagram, sold to Facebook in 2012 for $1 billion, employed just 30 people at the time.

Those conclusions are echoed elsewhere. Another study (‘Are You Ready For #GenMobile?’), to be released in full on 21 January by Aruba Networks, points out just how fast traditional working models are changing.

It says that 72% of British people now believe they work more efficiently at home, and that 63% need a WiFi network to complete their tasks – not bad for a technology that was barely standardised 10 years ago.

Meanwhile in ‘The Second Machine Age’, out this week, Erik Brynjolfsson and Andrew McAfee argue workers are under unprecedented pressure from the automation of skilled and unskilled jobs.

In a recent Salon interview Brynjolfsson said: “technology has always been destroying jobs, and it’s always been creating jobs, and it’s been roughly a wash for the last 200 years. But starting in the 1990s the employment to population ratio really started plummeting and it’s now fallen off a cliff and not getting back up. We think that it should be the focus of policymakers right now to figure out how to address that.”

The BBC also produced a report earlier this month which claimed, in stark tones, that “the robots are coming to steal our jobs”.

“AIs are embedded in the fabric of our everyday lives,” the head of AI at Singularity University, Neil Jacobstein, told the Beeb.

“They are used in medicine, in law, in design and throughout the automotive industry.”

That report too pointed out that the change will affect jobs of all kinds – from Chinese manufacturer Hon Hai, which has announced plans to replace 500,000 workers with robots within three years, to lawyers, surgeons and public sector workers.

Opinions remain divided on the impact and future of technological innovation on the jobs market, and wealth inequality. The Economist leader argues that governments have a responsibility to innovate in education, taxation and embracing progress, though the solutions are by no means obvious or without uncertainty.

If only we could automate the process of making and implementing those political decisions – now that would really be something.

 

Source:  huffingtonpost.co.uk

Animals’ internal compass possibly found

Scientists have long known that animals have some kind of internal compass that allows them to use Earth’s magnetic field to navigate. This ability allows species such as the monarch butterfly to travel up to an incredible 5,000 km across the US to the exact same location, year after year, and the Arctic tern to travel 71,000 km between Greenland and Antarctica annually. But these magnetic fields are pretty much invisible to humans, and we’ve never been able to find the sensor that lets animals detect them.

Now a team of researchers from the University of Texas at Austin in the US has identified a tiny antenna-like structure in the brain of worms that allows them to sense Earth’s magnetic field, and they suspect the same structure could be the key to helping other species navigate, too.

“Chances are that the same molecules will be used by cuter animals like butterflies and birds,” one of the researchers, Jon Pierce-Shimomura, said in a press release. “This gives us a first foothold in understanding magnetosensation in other animals.”

Back in 2012, scientists found cells in pigeons that process information about magnetic fields, but this is the first time researchers have ever found the actual sensor in animals.

“It’s been a competitive race to find the first magnetosensory neuron,” said Pierce-Shimomura. “And we think we’ve won with worms, which is a big surprise because no one suspected that worms could sense the Earth’s magnetic field.”

The team made the discovery while conducting Alzheimer’s research in small soil worms, C. elegans. They noticed that when worms from Texas soil were hungry, they moved downwards to look for food. But worms that came from other parts of the world – Hawaii, England and Australia, for example – didn’t move down; they moved at a precise angle to the magnetic field that would have corresponded to down if they’d been in their home country.

The team then altered the magnetic field around the worms’ enclosure using a special magnetic coil system, and found that they changed their behaviour accordingly.

But the real breakthrough came when they worked with worms that had been genetically engineered to block a structure called the AFD neuron from forming in the brain. These worms didn’t change their behaviour when the magnetic fields around their enclosure were altered – in fact, they seemed unable to detect the magnetic fields at all.

The AFD neuron is a tiny structure at the end of a neuron that gives worms the ability to sense carbon dioxide levels and temperature while underground. To confirm its additional role in sensing magnetic fields, the team used a technique called calcium imaging to show that changes in the magnetic field caused the AFD neuron to light up. Their findings have been published in the journal eLife. 

The next step will be to confirm that this AFD neuron exists in other species and that it works in the same way. If that’s the case, we might finally have an explanation for the incredible navigation abilities of animals, and perhaps a roadmap for how humans could one day achieve the same ability.

 

Source:  sciencealert.com

Majority of Americans are now Obese

The number of overweight and obese adults in the United States continues to rise, according to a new study that’s found more than two-thirds of adult Americans aged 25 years or older are now overweight or obese.

The research analysed data from the National Health and Nutrition Examination Survey, which ran from 2007 to 2012, and included information on a sample of 15,208 men and women. Based on the data, the researchers estimate that 39.96 percent of US men (36.3 million) are overweight and 35.04 percent (31.8 million) are obese.

 For women, the estimates are 29.74 percent (28.9 million) of them are overweight, while 36.84 percent (35.8 million) are obese. If you do the maths, sure enough, the number of obese adult Americans (67.6 million) now eclipses those who are only overweight (65.2 million).
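The article’s head-counts can be checked directly from the figures quoted above; a quick arithmetic sketch:

```python
# Totals implied by the survey figures quoted above (millions of adults).
men_overweight, men_obese = 36.3, 31.8
women_overweight, women_obese = 28.9, 35.8

overweight_only = men_overweight + women_overweight   # 65.2 million
obese = men_obese + women_obese                       # 67.6 million
assert obese > overweight_only  # obese adults now outnumber the merely overweight

# Combined rates: roughly 75% of men and 67% of women are overweight or obese.
men_pct = 39.96 + 35.04     # 75.0
women_pct = 29.74 + 36.84   # 66.58
```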

What’s so remarkable about the research, conducted by the Washington University School of Medicine and published in The Journal of the American Medical Association, is just how stark the numbers are for the US population. Three out of every four men are overweight or obese, and the same can be said for two out of every three women.

In other words, people in healthy weight ranges now make up only a minority of the US population, especially when you consider that some portion of the remainder in these figures will be people who are actually underweight.

The researchers found the African American community has the biggest problem with obesity – affecting 39 percent of black men and 57 percent of black women – followed by Mexican Americans and then whites.

A similar study published back in 1999 found that 63 percent of men and 55 percent of women aged 25 and older were overweight or obese, so clearly the problem has only gotten worse in the years since, despite efforts from the government and the health community to educate people on food and lifestyle choices.

“This is a wakeup call to implement policies and practices designed to combat overweight and obesity,” said Lin Yang, the study’s lead, in a statement. “An effort that spans multiple sectors must be made to stop or reverse this trend that is compromising and shortening the lives of many.”

Scary stuff, but hopefully this latest research will help galvanise efforts to turn weights around in the US and put healthy eating and living squarely back on the agenda.

 

Source:  sciencealert.com

Human cyborgs within 200 years

Within the next 200 years, humans will have become so merged with technology that we’ll have evolved into “God-like cyborgs”, according to Yuval Noah Harari, an historian and author from the Hebrew University of Jerusalem in Israel.

Harari researches the history of the human species, and after writing a new book on our past, he now believes that we’re just a few short centuries away from being able to use technology to avoid death altogether – if we can afford it, that is.

“I think it is likely in the next 200 years or so Homo sapiens will upgrade themselves into some idea of a divine being, either through biological manipulation or genetic engineering, or by the creation of cyborgs: part organic, part non-organic,” Harari said during his presentation at the Hay Festival in the UK, as Sarah Knapton reports for the Telegraph. “It will be the greatest evolution in biology since the appearance of life … we will be as different from today’s humans as chimps are now from us.”

Obviously, we should take Harari’s predictions with a grain of salt, but while they sound more suited to science fiction than real life, they’re not actually that out-there. Many researchers believe that we’ve already started down the path towards a cyborg future; after all, many of us already rely on bionic ears and eyes, insulin pump technology and prosthetics to help us survive. And with researchers recently learning how to send people’s thoughts across the web, subconsciously control bionic limbs and use liquid metal to heal severed nerves, it’s not hard to imagine how we could continue to use technology to supplement our vulnerable human bodies further.

Interestingly, Harari’s comments came just a few days after UK-based neuroscientist Hannah Critchlow from Cambridge University got the Internet excited by saying that it could be possible to upload our brains into computers, if we could build computers with 100 trillion circuit connections. “People could probably live inside a machine. Potentially, I think it is definitely a possibility,” Critchlow said during her presentation at the festival.

But Harari warned that these upgrades may only be available to the wealthiest members of society, and that could cause a growing biological divide between rich and poor – especially if some of us can afford to pay for the privilege of living forever while the rest of the species dies out.

If that sounds depressing, the alternative is a future where instead of us taking advantage of technology, technology takes advantage of us, and artificial intelligence poses a threat to our survival, as Elon Musk, Stephen Hawking, and Bill Gates have all predicted.

Either way, one thing seems pretty clear – our future as a species is now inextricably linked with the technology we’ve created. For better or for worse.

 

Source:  sciencealert.com

Friends genetically more similar than strangers

Our friends seem to be genetically more similar to us than strangers are, according to a new US scientific study led by prominent Greek-American Nicholas Christakis, professor of sociology and medicine at Yale University, and James Fowler, professor of medical genetics and political science at the University of California.

The researchers, who published their findings in the Proceedings of the National Academy of Sciences (PNAS), analyzed the genomes of 1,932 people and compared pairs of friends with pairs of strangers.

None of these people were biologically related to one another; the pairs differed only in the level of social relations between them.

The study showed that, on average, each person’s DNA was more similar to their friends’ than to strangers’. The researchers noted that this finding partly reflects the tendency of people to make friends with others of a similar racial (and hence genetic) background.

The genetic similarity between friends was greater than the expected similarity between people who share a common national and genetic inheritance. It is not clear yet by what mechanisms this occurs.

But how similar are we with our friends?

On average, according to the study, a friend of ours has a genetic affinity comparable to our fourth cousin, which means that we share about 1% of our genes with our friends.

“1% does not sound like a big deal, but it is for geneticists. It is noteworthy that most people do not even know who their fourth cousins are, but somehow, from the countless possible cases, we choose to make friends with people who are genetically similar to us,” said Prof Christakis.

Christakis and Fowler even developed a “friendship score”, which predicts who will befriend whom with nearly the same accuracy with which scientists can predict, on the basis of genetic analysis, a person’s chances of obesity or schizophrenia.

Focusing on individual genes, the research shows that friends are more likely to have similar genes related to the sense of smell, but different genes that control immunity; thus friends vary genetically in their protection against various diseases.

It seems to be an evolutionary mechanism that serves society in general: the fact that people associate with those who are vulnerable to different diseases acts as a barrier to the quick spread of an epidemic from person to person. Another notable finding is that the genes we share with our friends seem to evolve more rapidly than others.

Prof. Christakis suggests this may be why human evolution appears to have accelerated over the past 30,000 years: a social environment in which linguistic communication plays an important role is itself a vital evolutionary factor.

 

Source:  humansarefree.com

Multitasking lowers your IQ

Envisage the switched-on new-millennium male – his iPhone in one hand while he switches between emails and business reports on his computer screen – a vision of productivity in this wondrous age of apps.

Wrong. He’s seriously dumbing himself down.

Several scientific studies around the world have concluded the brain doesn’t switch tasks like an expert juggler. Quite the opposite. Multitasking can reduce your IQ by as much as 10 points, cause mental blanks and reduce your productivity by 40 per cent.

What about women? They’re legends at multitasking and concentrating on several things at once. Nope. Not a single psychological study concludes women are better at multitasking than men, and some research indicates they can be worse.

One Australian researcher in the field, Dr Julia Irwin, senior lecturer in psychology at Macquarie University, advises people to abandon their apps, turn off their mobiles and ignore their emails while they concentrate on one task at a time. “At the end of the day, they will have been a lot more productive,” she says.

“If you’re sending an email while also working on an assignment, one downside is that withdrawing your attention from one task to another creates a split-second in which the brain’s in no-man’s land. It’s called a post-refractory pause.

“Over time these pauses add up and can mean your mind wasn’t on the job for a couple of minutes.”

Dr Irwin says such mental blanks can be dangerous when doing something of critical importance like keeping an eye out for a child in a playground. “If, in that pause, a child wobbles on their bicycle, it’s obviously a worry. You just haven’t got your attention on it.

“The other aspect is, if you’re deeply immersed in writing something and turn your attention to an email that’s just come in, there are studies that show it can take you up to 15 minutes to get yourself back into that same degree of immersion.”

One early study by the Institute of Psychiatry in London involved more than 1000 workers and found multitasking with electronic media caused a temporary 10-point decrease in IQ – a worse effect than smoking marijuana or losing a night’s sleep.

The study’s leader, Dr Glenn Wilson, an adjunct professor at the University of Nevada, called it “infomania”, a condition created by using multiple electronic devices and employers’ growing demands to tackle more than one task at a time.

“This is a very real and widespread phenomenon,” he told CNN. “We have found that this obsession with looking at messages, if unchecked, will damage a worker’s performance by reducing their mental sharpness. Companies should encourage a more balanced and appropriate way of working.”

Another study, by Professor David Meyer, director of the University of Michigan’s Brain Cognition and Action Laboratory, concluded that even brief mental blocks created by shifting between tasks cost as much as 40 per cent of someone’s productive time.

Dr Irwin’s own Australian research concludes clearly that in today’s multitasking multi-app world, people should turn off their devices when doing something that merits their full attention.

One of her studies also defies a widespread belief that women are better at multitasking. “One of the very first studies I did was with young students driving and either talking to passengers or on a mobile,” she says. “I thought, oh, the women are going to ace this, but the women actually scored worse on the phones than the men.

“When I looked in the literature, there is not a single study in psychology that shows that women are better at multitasking. But what I did find in the sociological literature is that they perform multiple tasks more often.

“This has led to the belief that women are better at multitasking, but the more studies are done, the fewer differences they find between female and male brains.”

 

Source:  theage.com.au

Reality doesn’t exist, quantum experiment confirms

Australian scientists have recreated a famous experiment and confirmed quantum physics’s bizarre predictions about the nature of reality, by proving that reality doesn’t actually exist until we measure it – at least, not on the very small scale.

That all sounds a little mind-meltingly complex, but the experiment poses a pretty simple question: if you have an object that can either act like a particle or a wave, at what point does that object ‘decide’?

Our general logic would assume that the object is either wave-like or particle-like by its very nature, and our measurements will have nothing to do with the answer. But quantum theory predicts that the result all depends on how the object is measured at the end of its journey. And that’s exactly what a team from the Australian National University has now found.

“It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” lead researcher and physicist Andrew Truscott said in a press release.

Known as John Wheeler’s delayed-choice thought experiment, the test was first proposed back in 1978 using light beams bounced by mirrors, but at the time the technology needed to carry it out was pretty much unattainable. Now, almost 40 years later, the Australian team has managed to recreate the experiment using helium atoms scattered by laser light.

“Quantum physics predictions about interference seem odd enough when applied to light, which seems more like a wave, but to have done the experiment with atoms, which are complicated things that have mass and interact with electric fields and so on, adds to the weirdness,” said Roman Khakimov, a PhD student who worked on the experiment.

To recreate the experiment, the team trapped a group of helium atoms in a suspended state known as a Bose-Einstein condensate, then ejected them until only a single atom was left.

This chosen atom was then dropped through a pair of laser beams, which made a grating pattern that acted as a crossroads that would scatter the path of the atom, much like a solid grating would scatter light.

They then randomly added a second grating that recombined the paths, but only after the atom had already passed the first grating.

When this second grating was added, it led to constructive or destructive interference, which is what you’d expect if the atom had travelled both paths, like a wave would. But when the second grating was not added, no interference was observed, as if the atom chose only one path.
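The logic of the two cases can be illustrated with a minimal two-path amplitude calculation. This is not the team’s helium-atom setup; it’s the standard textbook toy model (a Mach-Zehnder-style interferometer) that captures the same idea: with the second grating the path amplitudes recombine and interfere, without it the two outcomes are 50/50.

```python
import numpy as np

# A grating/beam-splitter as a unitary acting on the two path amplitudes (|A>, |B>).
BS = np.array([[1, 1j],
               [1j, 1]], dtype=complex) / np.sqrt(2)

psi = np.array([1, 0], dtype=complex)  # atom enters on path A

# Second grating present: paths recombine -> full interference.
p_with = np.abs(BS @ BS @ psi) ** 2
print(p_with)     # [0. 1.] -- all probability funnelled into one output port

# Second grating absent: paths never recombine -> no interference.
p_without = np.abs(BS @ psi) ** 2
print(p_without)  # [0.5 0.5] -- as if the atom took one path or the other
```

The interference pattern in the first case only makes sense if the atom traversed both paths, which is exactly the wave-like behaviour the experiment observed.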

The fact that this second grating was only added after the atom passed through the first crossroads suggests that the atom hadn’t yet determined its nature before being measured a second time.

So if you believe that the atom did take a particular path or paths at the first crossroad, this means that a future measurement was affecting the atom’s path, explained Truscott. “The atoms did not travel from A to B. It was only when they were measured at the end of the journey that their wave-like or particle-like behaviour was brought into existence,” he said.

Although this all sounds incredibly weird, it’s actually just a validation for the quantum theory that already governs the world of the very small. Using this theory, we’ve managed to develop things like LEDs, lasers and computer chips, but up until now, it’s been hard to confirm that it actually works with a lovely, pure demonstration such as this one.

Source:  Sciencedaily.com

Google turns your clothes into touchscreens

Last week Google unveiled a wealth of new innovations and initiatives at its annual I/O developer conference, and one of the big reveals was Project Jacquard. It’s part of the Google ATAP (Advanced Technology and Projects) division and it’s the company’s plan for the future of clothing: touch-sensitive materials that you can interact with in the same way as your smartphone display.

Project Jacquard uses touch-sensitive, metallic yarns that are woven in with normal material – cotton, silk or polyester – to give it the kind of capabilities that you don’t usually find outside of science fiction movies. The yarn is connected to a small receiver and controller the size of a button, with the idea that one day you might be able to tap your lapel to switch on the washing machine, or flick your cuff to change the volume on your smart television set.

One of the demos that Google showed off at I/O 2015 was a touch-enabled outfit controlling a set of Philips Hue lights. A quick tap on the clothing turned the lights on and off, while swiping left and right changed the colour, and swiping up and down adjusted the brightness. You wouldn’t have to take your phone out of your jeans pocket to do all this – the pocket itself would act as the controller.

Monitoring capabilities can be included too, so your pillow could track your breathing or your t-shirt could monitor your heart rate without the need for any other equipment. Google is expecting to work with a number of different partners on the technology in the future, and already has an agreement in place with denim manufacturer Levi Strauss & Co in the US.

What makes the technology so exciting is its invisibility. There’s no need to wear a clunky headset or a smart wristwatch to get connected – it’s essentially the ultimate in wearables. Project Jacquard is still at the early stages, but a lot of progress has been made in a short space of time, and Google thinks the interactive yarn will have an important role to play in our sartorial future.

“The complementary components are engineered to be as discreet as possible,” explains the official Project Jacquard page. “We developed innovative techniques to attach the conductive yarns to connectors and tiny circuits, no larger than the button on a jacket. These miniaturised electronics capture touch interactions, and various gestures can be inferred using machine-learning algorithms.”

The smart clothing is stretchable and washable, and Google says it’s up to the designer whether the special yarn is highlighted on the material or kept completely invisible. It can be restricted to a certain patch of clothing or spread over the whole garment.

Jacquard, by the way, is a type of loom used in the 19th century. Google says that the new touch-enabled clothing can be made at scale using equipment that already exists, so when it’s ready for the mass market it can be cheaply and easily produced.

Ultimately, we could see all kinds of smart clothing, furnishings and textiles that look identical to the ‘dumb’ versions that came before them. Google doesn’t have a timescale for launching Project Jacquard out into the world just yet, but you can sign up for updates at the project page.

 

Source:  sciencedaily.com

Woolly mammoth genome mapped

An international team of researchers has sequenced the nearly complete genome of two Siberian woolly mammoths — revealing the most complete picture to date — including new information about the species’ evolutionary history and the conditions that led to its mass extinction at the end of the Ice Age.

“This discovery means that recreating extinct species is a much more real possibility, one we could in theory realize within decades,” says evolutionary geneticist Hendrik Poinar, director of the Ancient DNA Centre at McMaster University and a researcher at the Institute for Infectious Disease Research, the senior Canadian scientist on the project.

“With a complete genome and this kind of data, we can now begin to understand what made a mammoth a mammoth — when compared to an elephant — and some of the underlying causes of their extinction which is an exceptionally difficult and complex puzzle to solve,” he says.

While scientists have long argued that climate change and human hunting were major factors behind the mammoth’s extinction, the new data suggests multiple factors were at play over their long evolutionary history.

Researchers from McMaster, Harvard Medical School, the Swedish Museum of Natural History, Stockholm University and others produced high-quality genomes from specimens taken from the remains of two male woolly mammoths, which lived about 40,000 years apart.

One had lived in northeastern Siberia and is estimated to be nearly 45,000 years old. The other, believed to be from one of the last surviving mammoth populations, lived approximately 4,300 years ago on Russia’s Wrangel Island, located in the Arctic Ocean.

“We found that the genome from one of the world’s last mammoths displayed low genetic variation and a signature consistent with inbreeding, likely due to the small number of mammoths that managed to survive on Wrangel Island during the last 5,000 years of the species’ existence,” says Love Dalén, an associate professor of Bioinformatics and Genetics at the Swedish Museum of Natural History.

Scientists used sophisticated technology to tease bits and pieces of highly fragmented DNA from the ancient specimens, which they then used to sequence the genomes. Through careful analysis, they determined the animal populations had suffered and recovered from a significant setback roughly 250,000 to 300,000 years ago. However, say researchers, another severe decline occurred in the final days of the Ice Age, marking the end.

“The dates on these current samples suggest that when Egyptians were building pyramids, there were still mammoths living on these islands,” says Poinar. “Having this quality of data can help with our understanding of the evolutionary dynamics of elephants in general and possible efforts at de-extinction.”

The latest research is the continuation of the pioneering work Poinar and his team began in 2006, when they first mapped a partial mammoth genome, using DNA extracted from carcasses found in permafrost in the Yukon and Siberia.

 

Source:  Sciencedaily.com

Iris scanner that works 12 metres away


Imagine an advertising billboard or a smart door that can recognise you from across the street – that’s the futuristic type of technology that’s on the way after researchers in the US developed an iris scanner that can work at a distance of 12 metres (40 feet).

We’re starting to see primitive eye scanners appear in consumer electronics, but this new device takes the innovation one step further. The team from Carnegie Mellon University (CMU) has developed a scanning system that can spot and identify a driver sitting in a car, so whether you’re passing through a toll booth or exceeding the speed limit, the cameras will know who you are.

That brings up a whole series of difficult questions about user privacy and the capabilities of law enforcement agencies across the world. Given a positive spin, it could mean a dangerous criminal is spotted ahead of time; seen more cynically, the technology could be used to track citizens without their knowledge.

This long-range iris scanning system is primarily the work of CMU engineering professor Marios Savvides. As our irises are as distinctive as our fingerprints, the technology is very accurate – but as with fingerprints, your eyeballs will already need to be on file for you to be spotted.

Savvides thinks the technology is helpful rather than scary. “Fingerprints, they require you to touch something. Iris, we can capture it at a distance, so we’re making the whole user experience much less intrusive, much more comfortable,” he told The Atlantic. “There’s no X-marks-the-spot. There’s no place you have to stand. Anywhere between six and 12 metres, it will find you, it will zoom in and capture both irises and full face.”

If nothing else, it could speed up queues at the airport. But in the wrong hands or used in the wrong way, it could be just as dangerous as it is convenient. There’s no chance of these types of biometric technology going backwards, so rigorous laws on how it can be used become increasingly important.

Savvides thinks we’re already in a new era of surveillance, and that his invention won’t change that. “People are being tracked,” he says. “Their every move, their purchasing, their habits, where they are every day, through credit card transactions, through advantage cards – if someone really wanted to know what you were doing every moment of the day, they don’t need facial recognition or iris recognition to do that. That’s already out there.”

Like many recent advancements in biometrics, increased convenience and accuracy comes at a cost – it’s all a question of how the technology is used. Just don’t be surprised if in the near future your office door spots you well before you reach it.

 

Source:  Sciencealert.com

Japanese scientists reverse aging in human cells

By altering the behavior of two genes responsible for the production of simple amino acids in human cells, scientists have gained a better understanding of how the process of ageing works, and how we could delay or perhaps even reverse it.

The team, led by Jun-Ichi Hayashi at the University of Tsukuba, targeted two genes that produce the amino acid glycine in the cell’s mitochondria, and figured out how to switch them on and off. By doing this, they could either accelerate the process of ageing within the cell, which caused significant defects to arise, or they could reverse the process of ageing, which restored the capacity for cellular respiration. Using this technique to produce more glycine in a 97-year-old cell line for 10 days, the researchers restored cellular respiration, effectively reversing the cell line’s age.

The finding calls into question the popular, but more recently controversial, mitochondrial theory of ageing, which puts forward the notion that an accumulation of mutations in mitochondrial DNA leads to age-related defects in the mitochondria – often referred to as the cell’s powerhouses because they are responsible for energy production and cellular respiration. These mitochondrial defects, and the accumulation of DNA damage behind them, are linked to age-related hair loss, weight loss, spine curvature, osteoporosis, and a decreased lifespan.

But is this theory accurate? The results of Hayashi’s study support an alternative theory to ageing, which proposes that age-associated mitochondrial defects are caused not by the accumulation of mutations in mitochondrial DNA, but by certain crucial genes being turned on and off as we get older.

The team worked with human fibroblast cell lines gathered from young people – from foetus-age to 12 years old – and the elderly, from 80 to 97 years old. They compared the capacity for cellular respiration in the young and old cells, and found that while the capacity was indeed lower in the cells of the elderly, there was almost no difference in the amount of DNA damage between the two. This calls into question the mitochondrial theory of ageing, the team reports in the journal Scientific Reports, and suggests instead that the age-related effects they were seeing were being caused by a process known as epigenetic regulation.

Epigenetic regulation describes the process where the physical structure of DNA – not the DNA sequence – is altered by the addition or subtraction of chemical structures or proteins, which is regulated by the turning on and off of certain genes. “Unlike mutations that damage that sequence, as in the other, aforementioned theory of ageing, epigenetic changes could possibly be reversed by genetically reprogramming cells to an embryonic stem cell-like state, effectively turning back the clock on ageing,” says Eric Mack at Gizmag.

Hayashi and his team supported this theory by showing that they could turn off the genes that regulate the production of glycine to achieve cellular ageing, or turn them on for the restoration of cellular respiration. This suggests, they say, that glycine treatment could effectively reverse the age-associated respiration defects present in their elderly human fibroblasts.

“Whether or not this process could be a potential fountain of youth for humans and not just human fibroblast cell lines still remains to be seen, with much more testing required,” Mack points out at Gizmag. “However, if the theory holds, glycine supplements could one day become a powerful tool for life extension.”

We’ll just have to wait and see. The faster we can solve the debate over how ageing actually works, the faster we can figure out how to delay it.

 

Source:  sciencedaily.com

US military robots will leave humans defenceless


Killer robots which are being developed by the US military ‘will leave humans utterly defenceless‘, an academic has warned.

Two programmes commissioned by the US Defense Advanced Research Projects Agency (DARPA) are seeking to create drones which can track and kill targets even when out of contact with their handlers.

Writing in the journal Nature, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, said the research could breach the Geneva Convention and leave humanity in the hands of amoral machines.

“Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans,” he said.

“Existing AI and robotics components can provide physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. They just need to be combined.

“In my view, the overriding concern should be the probable endpoint of this technological trajectory.

“Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.”

• Killer robots a small step away and must be outlawed, says UN official
• Britain prepared to develop ‘killer robots’, minister says

The robots, called LAWS – lethal autonomous weapons systems – are likely to be armed quadcopters or mini-tanks that can decide without human intervention who should live or die.

DARPA is currently working on two projects which could lead to killer bots. One is Fast Lightweight Autonomy (FLA), which is designing a tiny rotorcraft to manoeuvre unaided at high speed in urban areas and inside buildings. The other, Collaborative Operations in Denied Environment (CODE), aims to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission — find, fix, track, target, engage, assess” in situations in which enemy signal-jamming makes communication with a human commander impossible.

Last year Angela Kane, the UN’s high representative for disarmament, said killer robots were just a ‘small step’ away and called for a worldwide ban. But the Foreign Office has said while the technology had potentially “terrifying” implications, Britain “reserves the right” to develop it to protect troops.

Professor Russell said: “LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’.

“Debates should be organized at scientific meetings; arguments studied by ethics committees. Doing nothing is a vote in favour of continued development and deployment.”

• The US army tests a killer robot tank
• Future robots will resemble ostriches or dinosaurs, scientists say

However, Dr Sabine Hauert, a lecturer in robotics at the University of Bristol, said that the public did not need to fear the developments in artificial intelligence.

“My colleagues and I spend dinner parties explaining that we are not evil but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the ocean,” she said.

 

Source:   telegraph.co.uk

Freelance NSA Spies Private Conversations


Thanks to Edward Snowden, we know that the National Security Agency collects the phone records of every American in order to keep the country safe from terrorism. But for the past eight months a group of artists claiming to work for the NSA on “a freelance, pro bono basis” have been recording people’s private conversations in popular bars, restaurants, and gyms in Lower Manhattan to ensure that no actionable intelligence falls through the cracks.

“We’re looking for terrorism, we’re looking for signs of plots and schemes that could put the homeland at risk,” one of the group’s “agents” tells us.

The project’s website, We Are Always Listening, includes snippets of actual conversations recorded by tiny, hidden tape recorders placed in The Brindle Room, Café Mogador, and the Crunch Gym in Union Square, among other popular public spaces.

In the recordings, a group of men talk about how a friend is “trying too hard to be one of us,” a woman complains about paying more than $2,000/month in rent, and a man describes a former boyfriend’s fetish: “He wanted me to like, fake double over in pain. Like we’re doing a scene from Batman Returns.”

None of the recordings contain any last names or other forms of information that would allow the people in the recordings to be directly identified, but first names flow freely.

“The reason we broadcast small, small, small, fractions of what we’ve gathered is because we’ve also heard members of the American public say they want a more transparent window into how data is collected,” said the “agent,” who asked to speak anonymously because New York State law requires the consent of at least one party in order to record a conversation (as Governor Cuomo famously discovered).

“Our agents would dispute that having a conversation at a restaurant or a gym is private. There should not be an assumption of privacy.”

The Manhattan DA’s office declined to comment on the group’s activities.

The project is seemingly designed to shake Americans (and, based on the locations the group placed their recorders, the Downtown bourgeoisie) out of their torpor with respect to how the NSA collects data and the federal government’s reliance on millions of independent contractors with security clearances.

“We imagine people are fine with this type of surveillance,” the “agent” said, tongue firmly in cheek. “The general public has mostly spoken in a unified voice saying, well, it’s just what you need to do to keep the country safe.”

For those who believe that posting audio of private conversations online is wrong, or that it surpasses what even the NSA considers appropriate, a button marked “Angry?” on the group’s website directs users to the ACLU’s website that allows you to contact your federal representatives and urge them to kill the portion of the Patriot Act that allows for the NSA’s blanket surveillance (the Senate recently voted to block a bill from the House designed to curtail the government’s collection of phone data).

The “agent” told us that New Yorkers should expect more leaked conversations. If you’ve hung out at 61 Local in Cobble Hill recently, you might want to keep your eye on the group’s website: a tape recorder has been listening there for some time.

 

Source:  gothamist.com

Google closer to developing human-like intelligence


Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.

The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors”.

Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”

The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”

Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence: mastering natural, conversational language, and the ability to make leaps of logic.

He painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film, Her, in which Joaquin Phoenix falls in love with his intelligent operating system.

“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”

In the past two years, scientists have already made significant progress in overcoming this challenge.

Richard Socher, an artificial intelligence scientist at Stanford University, recently developed a program called NaSent that he taught to recognise human sentiment by training it on 12,000 sentences taken from the film review website Rotten Tomatoes.

Part of the initial motivation for developing “thought vectors” was to improve translation software, such as Google Translate, which currently uses dictionaries to translate individual words and searches through previously translated documents to find typical translations for phrases. Although these methods often provide the rough meaning, they are also prone to delivering nonsense and dubious grammar.

Thought vectors, Hinton explained, work at a higher level by extracting something closer to actual meaning.

The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical “meaning space” or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector.

The “thought” serves as the bridge between the two languages because it can be transferred into the French version of the meaning space and decoded back into a new path between words.

The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences.

At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud capture the way humans use them – effectively a map of their meanings.

Hinton said that the idea that language can be deconstructed with almost mathematical precision is surprising, but true. “If you take the vector for Paris and subtract the vector for France and add Italy, you get Rome,” he said. “It’s quite remarkable.”
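Hinton’s Paris/France/Italy observation can be sketched numerically. The tiny 3-dimensional vectors below are hand-crafted purely for illustration so that the arithmetic works out exactly; real embeddings are learned from large text corpora, have hundreds of dimensions, and satisfy such analogies only approximately.

```python
# Toy sketch of the "Paris - France + Italy = Rome" arithmetic quoted above.
# These vectors are hand-made for illustration, not learned from text.
WORD_VECTORS = {
    "Paris":  (1.0, 1.0, 0.0),   # "capital-ness" + "France-ness"
    "France": (0.0, 1.0, 0.0),
    "Rome":   (1.0, 0.0, 1.0),   # "capital-ness" + "Italy-ness"
    "Italy":  (0.0, 0.0, 1.0),
    "Berlin": (1.0, 0.5, 0.5),   # a distractor so the search is non-trivial
}

def analogy(a: str, b: str, c: str) -> str:
    """Return the vocabulary word closest to vec(a) - vec(b) + vec(c)."""
    va, vb, vc = WORD_VECTORS[a], WORD_VECTORS[b], WORD_VECTORS[c]
    target = tuple(x - y + z for x, y, z in zip(va, vb, vc))

    def sq_dist(word: str) -> float:
        return sum((t - u) ** 2 for t, u in zip(target, WORD_VECTORS[word]))

    # Exclude the query words themselves, as word2vec-style tools do.
    return min((w for w in WORD_VECTORS if w not in (a, b, c)), key=sq_dist)

print(analogy("Paris", "France", "Italy"))  # -> Rome
```

A learned system arrives at such positions through the error-driven feedback loop the article describes, rather than by hand-assigning them.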

Dr Hermann Hauser, a Cambridge computer scientist and entrepreneur, said that Hinton and others could be on the way to solving what programmers call the “genie problem”.

“With machines at the moment, you get exactly what you wished for,” Hauser said. “The problem is we’re not very good at wishing for the right thing. When you look at humans, the recognition of individual words isn’t particularly impressive, the important bit is figuring out what the guy wants.”

“Hinton is our number one guru in the world on this at the moment,” he added.

Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits.”

A flirtatious program would “probably be quite simple” to create, however. “It probably wouldn’t be subtly flirtatious to begin with, but it would be capable of saying borderline politically incorrect phrases,” he said.

Many of the recent advances in AI have sprung from the field of deep learning, which Hinton has been working on since the 1980s. At its core is the idea that computer programs learn how to carry out tasks by training on huge datasets, rather than being taught a set of inflexible rules.

With the advent of huge datasets and powerful processors, the approach pioneered by Hinton decades ago has come into the ascendency and underpins the work of Google’s artificial intelligence arm, DeepMind, and similar programs of research at Facebook and Microsoft.

Hinton played down concerns about the dangers of AI raised by those such as the American entrepreneur Elon Musk, who has described the technologies under development as humanity’s greatest existential threat. “The risk of something seriously dangerous happening is in the five year timeframe. Ten years at most,” Musk warned last year.

“I’m more scared about the things that have already happened,” said Hinton in response. “The NSA is already bugging everything that everybody does. Each time there’s a new revelation from Snowden, you realise the extent of it.”

“I am scared that if you make the technology work better, you help the NSA misuse it more,” he added. “I’d be more worried about that than about autonomous killer robots.”

 

Source:  theguardian.com

Iron levels hasten Alzheimer’s disease


High levels of iron in the brain could increase the risk of developing Alzheimer’s disease and hasten the cognitive decline that comes with it, new research suggests.

The results of the study, which tracked the brain degeneration of people with Alzheimer’s over a seven-year period, suggest it might be possible to halt the disease with drugs that reduce iron levels in the brain.

 “We think that iron is contributing to the disease progression of Alzheimer’s disease,” neuroscientist Scott Ayton, from the University of Melbourne in Australia, told Anna Salleh at ABC Science.

“This is strong evidence to base a clinical trial on lowering iron content in the brain to see if that would impart a cognitive benefit.”

Alzheimer’s is a devastating disease that researchers suspect “begins when two abnormal protein fragments, known as plaques and tangles, accumulate in the brain and start killing our brain cells,” explains Fiona Macdonald for ScienceAlert.

It starts by destroying the hippocampus – the region of the brain where memories are formed and stored – and eventually damages the region where language is processed, making it difficult for advanced Alzheimer’s patients to communicate. As the disease’s gradual takeover continues, people lose the ability to regulate their emotions and behaviour, and to make sense of the world around them.

But previous studies have shown that people with Alzheimer’s disease also have elevated levels of brain iron, which may also be a risk factor for the disease.

“There has been debate for a long period of time whether this is important or whether it’s just a coincidence,” Ayton told ABC Science.

The long-term impact of elevated iron levels on the disease outcome has not been investigated, the researchers say.

So Ayton’s team decided to test this, examining the link between brain iron levels and cognitive decline in three groups of people over seven years. The participants included 91 people with normal cognition, 144 people with mild cognitive impairment, and 67 people with diagnosed Alzheimer’s disease.

At the beginning of the study, the researchers determined the patients’ brain iron levels by measuring the amount of ferritin in the cerebrospinal fluid around the brain. Ferritin is a protein that stores and releases iron.

The researchers did regular tests and MRI scans to track cognitive decline and changes in the brain over the study period.

They found that people with higher levels of ferritin – in all groups – had faster declines in cognitive abilities and accelerated shrinking of the hippocampus. Levels of ferritin were also linked to a greater likelihood of people with mild cognitive impairment developing Alzheimer’s.

Their data contained some other interesting takeaways: The researchers found higher levels of ferritin corresponded to earlier ages for diagnoses – roughly three months for every 1 nanogram per millilitre increase.

They also found that people with the APOE-e4 gene variant, which is known to be the strongest genetic risk factor for the disease, had the highest levels of iron in their brains.

This suggests that APOE-e4 may be increasing Alzheimer’s disease risk by increasing iron levels in the brain, Ayton told ABC Science.

The researchers say their findings, which were published in the journal Nature Communications, justify the revival of clinical trials to explore drugs to target brain iron levels.

In a study carried out 24 years ago, a drug called deferiprone halved the rate of Alzheimer’s cognitive decline, Ayton told Clare Wilson at NewScientist. “Perhaps it’s time to refocus the field on looking at iron as a target.”

“Lowering CSF ferritin, as might be expected from a drug like deferiprone, could conceivably delay mild cognitive impairment conversion to Alzheimer’s disease by as much as three years,” the team wrote.
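The two figures above fit together with simple arithmetic: at roughly three months of earlier diagnosis per 1 ng/mL of CSF ferritin, a three-year delay corresponds to a reduction of about 12 ng/mL. A minimal back-of-the-envelope sketch (the 12 ng/mL input is implied by the quoted numbers, not a value reported by the study):

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
MONTHS_PER_NG_ML = 3.0  # reported association, not a causal rate

def months_delayed(ferritin_reduction_ng_ml: float) -> float:
    """Estimated months of delayed diagnosis for a given CSF ferritin drop."""
    return MONTHS_PER_NG_ML * ferritin_reduction_ng_ml

# A 12 ng/mL reduction (illustrative) gives 36 months, i.e. three years.
print(months_delayed(12.0) / 12.0)  # -> 3.0 years
```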

PTSD Linked to Accelerated Aging


In recent years, public health concerns about post-traumatic stress disorder (PTSD) have risen significantly, driven in part by affected military veterans returning from conflicts in the Middle East and elsewhere. PTSD is associated with a number of psychological maladies, among them chronic depression, anger, insomnia, eating disorders and substance abuse.

Writing in the May 7 online issue of American Journal of Geriatric Psychiatry, researchers at University of California, San Diego School of Medicine and Veterans Affairs San Diego Healthcare System suggest that people with PTSD may also be at risk for accelerated aging or premature senescence.

“This is the first study of its type to link PTSD, a psychological disorder with no established genetic basis, which is caused by external, traumatic stress, with long-term, systemic effects on a basic biological process such as aging,” said Dilip V. Jeste, MD, Distinguished Professor of Psychiatry and Neurosciences and director of the Center on Healthy Aging and Senior Care at UC San Diego, who is the senior author of this study.

Researchers had previously noted a potential association between psychiatric conditions, such as schizophrenia and bipolar disorder, and acceleration of the aging process. Jeste and colleagues set out to see if PTSD might show a similar association by conducting a comprehensive review of published empirical studies relevant to early aging in PTSD, covering multiple databases going back to 2000.

There is no standardized definition of what constitutes premature or accelerated senescence. For guidance, the researchers looked at early aging phenomena associated with non-psychiatric conditions, such as Hutchinson-Gilford progeria syndrome, HIV infection and Down’s syndrome. The majority of evidence fell into three categories: biological indicators or biomarkers, such as leukocyte telomere length (LTL); earlier occurrence or higher prevalence of medical conditions associated with advanced age; and premature mortality.

In their literature review, the UC San Diego team identified 64 relevant studies; 22 were suitable for calculating overall effect sizes for biomarkers, 10 for mortality.

All six studies looking specifically at LTL found reduced telomere length in persons with PTSD. Leukocytes are white blood cells. Telomeres are stretches of protective, repetitive nucleotide sequences at the ends of chromosomes. These sequences shorten with every cell replication and are considered a strong measure of the aging process in cells.

The scientists also found consistent evidence of increased pro-inflammatory markers, such as C-reactive protein and tumor necrosis factor alpha, associated with PTSD.

A majority of reviewed studies found increased medical comorbidity of PTSD with several targeted conditions associated with normal aging, including cardiovascular disease, type 2 diabetes, gastrointestinal ulcer disease and dementia.

Seven of 10 studies indicated a mild-to-moderate association of PTSD with earlier mortality, consistent with an early onset or acceleration of aging in PTSD.

“These findings do not speak to whether accelerated aging is specific to PTSD, but they do argue the need to re-conceptualize PTSD as something more than a mental illness,” said first author James B. Lohr, MD, professor of psychiatry. “Early senescence, increased medical morbidity and premature mortality in PTSD have implications in health care beyond simply treating PTSD symptoms. Our findings warrant a deeper look at this phenomenon and a more integrated medical-psychiatric approach to their care.”

 Barton Palmer, PhD, professor of psychiatry and a coauthor of the study, cautioned that “prospective longitudinal studies are needed to directly demonstrate accelerated aging in PTSD and to establish underlying mechanisms.”
Source:  scienceblog.com

Plasma made from matter and antimatter

Pulsar has atmosphere of matter and antimatter.

One of the all-time great mysteries in physics is why our Universe contains more matter than antimatter, which is the equivalent of matter but with the opposite charge. To tackle this question, our international team of researchers have managed to create a plasma of equal amounts of matter and antimatter – a condition we think made up the early Universe.

Matter as we know it appears in four different states: solid, liquid, gas, and plasma, which is a really hot gas where the atoms have been stripped of their electrons. However, there is also a fifth, exotic state: a matter-antimatter plasma, in which there is complete symmetry between negative particles (electrons) and positive particles (positrons).

This peculiar state of matter is believed to be present in the atmosphere of extreme astrophysical objects, such as black holes and pulsars. It is also thought to have been the fundamental constituent of the Universe in its infancy, in particular during the Leptonic era, starting approximately one second after the Big Bang.

One of the problems with creating matter and antimatter particles together is that they strongly dislike each other – disappearing in a burst of light whenever they meet. However, this doesn’t happen straight away, and it is possible to study the behaviour of the plasma for the fraction of a second in which it is alive.

Understanding how matter behaves in this exotic state is crucial if we want to understand how our Universe has evolved and, in particular, why the Universe as we know it is made up mainly of matter. This is a puzzling feature, as the theory of relativistic quantum mechanics suggests we should have equal amounts of the two. In fact, no current model of physics can explain the discrepancy.

Despite its fundamental importance for our understanding of the Universe, an electron-positron plasma had never been produced before in the laboratory, not even in huge particle accelerators such as CERN. Our international team, involving physicists from the UK, Germany, Portugal, and Italy, finally managed to crack the nut by completely changing the way we look at these objects.

Instead of focusing our attention on immense particle accelerators, we turned to the ultra-intense lasers available at the Central Laser Facility at the Rutherford Appleton Laboratory in Oxfordshire, UK. We used an ultra-high vacuum chamber with an air pressure corresponding to a hundredth of a millionth of our atmosphere to shoot an ultra-short and intense laser pulse (a hundred billion billion times more intense than sunlight at the Earth’s surface) onto nitrogen gas. This stripped the gas of its electrons and accelerated them to a speed extremely close to that of light.

The electron beam then collided with a block of lead, which slowed the electrons down again. As they slowed, they emitted particles of light (photons), which created pairs of electrons and their antiparticle, the positron, when they collided with nuclei of the lead sample. A chain reaction of this process gave rise to the plasma.

However, this experimental achievement was not without effort. The laser beam had to be guided and controlled with micrometer precision, and the detectors had to be finely calibrated and shielded – resulting in frequent long nights in the laboratory.

But it was well worth it as the development means an exciting branch of physics is opening up. Apart from investigating the important matter-antimatter asymmetry, by looking at how these plasmas interact with ultra powerful laser beams, we can also study how this plasma propagates in vacuum and in a low-density medium. This would be effectively recreating conditions similar to the generation of gamma-ray bursts, some of the most luminous events ever recorded in our Universe.

 

Source:  sciencealert.com

Holographic micro battery is 10 micrometers thick

holographic microbattery

Researchers and companies alike have been scrambling to come up with a next-generation battery, but one of the more unlikely places we’d expect to hear about it is from the study of holography. Recently, a team of engineers at the University of Illinois, Urbana-Champaign demonstrated that porous, three-dimensional electrodes can boost a lithium-ion micro battery’s power output by three orders of magnitude, as first reported in Chemical & Engineering News. But now the team has gone a step further, and has optimized the electrode structure with holograms (the three-dimensional interference patterns of multiple laser beams) in order to generate porous blocks that could be used as a sort of scaffolding for building electrodes.

The result: a holographic micro battery that’s only 2mm wide and 10 micrometers thick, with an area of 4mm squared, and 12% capacity fade. The researchers said it’s compatible with existing fabrication techniques, and ideal for large-scale on-chip integration with all kinds of microelectronic devices, including medical implants, sensors, and radio transmitters. To get an idea of scale, the photo above shows the battery’s electrodes in a 2mm by 2mm square on a glass substrate. Batteries like this could power implants small enough to track certain aspects of someone’s health in real time, and without the comparatively vast bulk of existing blood glucose and cardiac monitors, just to cite one example.

“This 3D micro battery has exceptional performance and scalability, and we think it will be of importance for many applications,” said Paul Braun, a professor of materials science and engineering at Illinois, in a statement. “Micro-scale devices typically utilize power supplied off-chip because of difficulties in miniaturizing energy storage technologies.”

Braun said that a supercapacitor-like, on-chip battery of this diminutive size would be ideal for autonomous microscale actuators, distributed wireless sensors and transmitters, monitors, and portable and implantable medical devices. Fabricating the batteries isn’t trivial, because the interfering optical beams used in 3D holographic lithography must be precisely controlled. But “recent advances have significantly simplified the required optics, enabling creation of structures via a single incident beam and standard photoresist processing,” said professor John Rogers, who assisted Braun and his team in developing the technology.
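The optical principle can be sketched in a few lines: superposing coherent plane waves produces a stationary interference pattern whose bright and dark regions determine where the photoresist is exposed. The wavelength and beam angles below are hypothetical illustration values, not the parameters the Illinois team used:

```python
import math
import cmath

def intensity(x, wavelength, angles_deg):
    """Intensity at position x (metres) from superposed unit-amplitude
    plane waves arriving at the given angles (1D cross-section)."""
    k = 2 * math.pi / wavelength
    field = sum(cmath.exp(1j * k * math.sin(math.radians(a)) * x)
                for a in angles_deg)
    return abs(field) ** 2

# Two 500 nm beams at +/-30 degrees give fringes with period
# lambda / (2 sin 30) = 500 nm: bright at x = 0, dark at x = 250 nm.
wl = 500e-9
print(round(intensity(0, wl, [30, -30]), 6))       # 4.0
print(round(intensity(250e-9, wl, [30, -30]), 6))  # 0.0
```

Adding more beams at different angles turns this 1D fringe pattern into the 3D periodic lattice that serves as the porous electrode scaffold.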

This isn’t the first time we’ve seen such tiny micro batteries developed. Back in 2013, researchers 3D-printed a battery that’s just 1mm wide, and in 2014, we saw a graphene-based microbattery that could also power implants. But it’s arguably the most sophisticated and realistic design yet. On the slightly larger front, last month a team of Stanford researchers developed an aluminum graphite battery that could charge up a smartphone in just 60 seconds. But in the end, it may be no surprise that holograms help us engineer better batteries — after all, we could be living inside a hologram all this time.

 

Source:  extremetech.com

 

Carbon Billionaire Al Gore

Al Gore Becomes First ‘Carbon Billionaire’

Former US Vice President and global warming advocate Al Gore has become the world’s first ‘carbon billionaire’ after landing a major carbon deal with Haerwusu, a Chinese coal mining company and one of the top ten coal mining companies in the world.

Al Gore and his partner David Blood, both principals at Generation Investment Management (GIM), have landed the most lucrative carbon deal to date, estimated by experts at $12 billion in carbon shares, although official numbers have not yet been disclosed.

Haerwusu, which has often been criticized by Amnesty International and other human rights groups for the poor working conditions of its employees, is believed to have sealed the carbon deal to “improve its international image” in an attempt to facilitate commerce with Europe and America, according to specialists.

The former vice-president announced the news to shareholders earlier this week during a press conference at GIM headquarters in London, England.

“I am proud to say that this is just the beginning,” he told shareholders, visibly delighted by the recent deal.

“I told the world 20 years ago that the ice caps would be melted by now. Although we are lucky this has not happened yet, we have been at the forefront of the global warming movement all along, and today we are reaping what we have sown,” he said with great pride.

“When a system of carbon taxes and carbon trading is set up all over the world in the near future, GIM will be at the epicenter of this green revolution, and believe me, this is just the beginning,” he added.

The $12 billion deal, signed for a period of 10 years with the Haerwusu company, could encourage other companies to join the global carbon trade, a great thing for GIM shareholders, whose profits experts estimate will skyrocket in the coming years.

 

Source:  worldnewsdailyreport.com

Test catches cancer 13 years before it hits

Test can predict cancer up to 13 years

Scientists have developed a new test that can predict with 100 per cent accuracy whether someone will develop cancer up to 13 years in the future.

The discovery of tiny but significant changes taking place in the body more than a decade before cancer was diagnosed helped researchers at Harvard and Northwestern University make the breakthrough.

Their research, published in the online journal EBioMedicine, found that protective caps on the ends of chromosomes, which prevent DNA damage, were more worn down in those who went on to develop cancer.

Known as telomeres, these were much shorter than they should have been and continued to get shorter until around four years before the cancer developed, when they suddenly stopped shrinking.
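The trajectory described above, steady shortening that suddenly levels off a few years before diagnosis, can be sketched as a simple rate-change detector. The series below is hypothetical illustration data, not measurements from the study:

```python
def plateau_year(years, lengths, tolerance=0.01):
    """Return the first year at which telomere length stops shrinking
    (year-over-year loss drops below `tolerance`), else None."""
    for i in range(1, len(lengths)):
        if lengths[i - 1] - lengths[i] < tolerance:
            return years[i]
    return None

# Hypothetical 13-year series: 0.15 units lost per year, then flat
# from year 9 onward (the sudden stop the researchers describe).
years = list(range(1, 14))
lengths = [7.0 - 0.15 * min(y, 9) for y in years]
print(plateau_year(years, lengths))  # 10
```

In the study's framing, spotting this plateau years ahead of diagnosis is what would make telomere length usable as a predictive biomarker.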

“Because we saw a strong relationship in the pattern across a wide variety of cancers, with the right testing these procedures could eventually be used to diagnose a wide variety of cancers,” Dr Lifang Hou, the lead study author, told The Telegraph.

“Understanding this pattern of telomere growth may mean it can be a predictive biomarker for cancer….We found cancer has hijacked the telomere shortening in order to flourish in the body.”

Source:  independent.co.uk

Scientists discover key driver of reversing aging process

A study tying the aging process to the deterioration of tightly packaged bundles of cellular DNA could lead to methods of preventing and treating age-related diseases such as cancer, diabetes and Alzheimer’s disease, experts say. In the study, scientists at the Salk Institute and the Chinese Academy of Science found that the genetic mutations underlying Werner syndrome, a disorder that leads to premature aging and death, resulted in the deterioration of bundles of DNA known as heterochromatin.

The discovery, made possible through a combination of cutting-edge stem cell and gene-editing technologies, could lead to ways of countering age-related physiological declines by preventing or reversing damage to heterochromatin.

“Our findings show that the gene mutation that causes Werner syndrome results in the disorganization of heterochromatin, and that this disruption of normal DNA packaging is a key driver of aging,” says Juan Carlos Izpisua Belmonte, a senior author on the paper. “This has implications beyond Werner syndrome, as it identifies a central mechanism of aging–heterochromatin disorganization–which has been shown to be reversible.”

Werner syndrome is a genetic disorder that causes people to age more rapidly than normal. It affects around one in every 200,000 people in the United States. People with the disorder suffer age-related diseases early in life, including cataracts, type 2 diabetes, hardening of the arteries, osteoporosis and cancer, and most die in their late 40s or early 50s.

The disease is caused by a mutation to the Werner syndrome RecQ helicase-like gene, known as the WRN gene for short, which generates the WRN protein. Previous studies showed that the normal form of the protein is an enzyme that maintains the structure and integrity of a person’s DNA. When the protein is mutated in Werner syndrome it disrupts the replication and repair of DNA and the expression of genes, which was thought to cause premature aging. However, it was unclear exactly how the mutated WRN protein disrupted these critical cellular processes.

In their study, the Salk scientists sought to determine precisely how the mutated WRN protein causes so much cellular mayhem. To do this, they created a cellular model of Werner syndrome by using a cutting-edge gene-editing technology to delete the WRN gene in human stem cells. This stem cell model of the disease gave the scientists the unprecedented ability to study rapidly aging cells in the laboratory. The resulting cells mimicked the genetic mutation seen in actual Werner syndrome patients, and the cells began to age more rapidly than normal. On closer examination, the scientists found that the deletion of the WRN gene also led to disruptions in the structure of heterochromatin, the tightly packed DNA found in a cell’s nucleus.

This bundling of DNA acts as a switchboard for controlling genes’ activity and directs a cell’s complex molecular machinery. On the outside of the heterochromatin bundles are chemical markers, known as epigenetic tags, which control the structure of the heterochromatin. For instance, alterations to these chemical switches can change the architecture of the heterochromatin, causing genes to be expressed or silenced.

The Salk researchers discovered that deletion of the WRN gene leads to heterochromatin disorganization, pointing to an important role for the WRN protein in maintaining heterochromatin. And, indeed, in further experiments, they showed that the protein interacts directly with molecular structures known to stabilize heterochromatin–revealing a kind of smoking gun that, for the first time, directly links mutated WRN protein to heterochromatin destabilization.

“Our study connects the dots between Werner syndrome and heterochromatin disorganization, outlining a molecular mechanism by which a genetic mutation leads to a general disruption of cellular processes by disrupting epigenetic regulation,” says Izpisua Belmonte. “More broadly, it suggests that accumulated alterations in the structure of heterochromatin may be a major underlying cause of cellular aging. This begs the question of whether we can reverse these alterations–like remodeling an old house or car–to prevent, or even reverse, age-related declines and diseases.”

Izpisua Belmonte added that more extensive studies will be needed to fully understand the role of heterochromatin disorganization in aging, including how it interacts with other cellular processes implicated in aging, such as shortening of the end of chromosomes, known as telomeres. In addition, the Izpisua Belmonte team is developing epigenetic editing technologies to reverse epigenetic alterations with a role in human aging and disease.

 

Source:  sciencedaily.com

Coffee Antioxidant 500 times greater than vitamin C


The coffee industry plays a major role in the global economy. It also has a significant impact on the environment, producing more than 2 billion tonnes of coffee by-products annually. Coffee silverskin (the epidermis of the coffee bean) is usually removed during processing, after the beans have been dried, while the coffee grounds are normally directly discarded.

It has traditionally been assumed that these by-products – coffee grounds and coffee silverskin – have few practical uses and applications. Spent coffee grounds are sometimes employed as homemade skin exfoliants or as abrasive cleaning products. They are also known to make great composting agents for fertilizing certain plants. But apart from these limited applications, coffee by-products are by and large deemed to be virtually useless. As such, practically all of this highly contaminating ‘coffee waste’ ends up in landfills across the globe and has a considerable knock-on effect on the environment.

However, a UGR research team led by José Ángel Rufíán Henares set out to determine the extent to which these by-products could be recycled for nutritional purposes, thereby reducing the amount of waste being generated, as well as benefitting coffee producers, recycling companies, the health sector, and consumers.

In an article published in the academic journal Food Science and Technology, the researchers demonstrate the powerful antioxidant and antimicrobial properties of the coffee grounds and silverskin, which are highly rich in fibre and phenols. Indeed, their findings indicate that the antioxidant effects of these coffee grounds are 500 times greater than those found in vitamin C and could be employed to create functional foods with significant health benefits.

Moreover, Professor Rufián Henares points out: “They also contain high levels of melanoidins, which are produced during the roasting process and give coffee its brown colour. The biological properties of these melanoidins could be harnessed for a range of practical applications, such as preventing harmful pathogens from growing in food products.” However, he also adds: “If we are to harness the beneficial prebiotic effects of the coffee by-products, first of all we need to remove the melanoidins, since they interfere with such beneficial prebiotic properties.”

The researchers conclude that processed coffee by-products could potentially be recycled as sources of new food ingredients. This would also greatly diminish the environmental impact of discarded coffee by-products.

The Ministry of Economics and Finance has recently allocated a new research project to the team under the ‘State R&D programme’, in order to enable them to conduct further studies in the area and re-assess the potential value of coffee by-products.

 

Source:  sciencedaily.com

Doctor who discovered Cancer blames lack of Oxygen

The Man Who Discovered Cancer

Dr. Otto H. Warburg won a Nobel Prize for discovering the cause of cancer. There is one aspect of our bodies that is the key to preventing cancer: pH levels.

What Dr. Warburg figured out is that when there is a lack of oxygen, cancer cells develop. As Dr. Warburg said, “All normal cells have an absolute requirement for oxygen, but cancerous cells can live without oxygen – a rule without exception. Deprive a cell of 35% of its oxygen for 48 hours and it may become cancerous.” Cancer cells therefore cannot live in a highly oxygenated state, like the one that develops when your body’s pH levels are alkaline, and not acidic.

Most people’s diets promote the creation of too much acid, which shifts our body’s natural pH from slightly alkaline to acidic. Maintaining an alkaline pH level can prevent health conditions like cancer, osteoporosis, cardiovascular diseases, diabetes, and acid reflux. Eating processed foods like refined sugars, refined grains, GMOs, and other unnatural foods can lead to a pH level that supports the development of these conditions, and leads to overall bad health. In fact, most health conditions that are common today, including infections by parasites, bacteria, and viruses, are attributed to an overly acidic pH level.

There is a natural remedy that you can use at home that is simple, and readily available. All you need is 1/3 tablespoon of baking soda, and 2 tablespoons of lemon juice or apple cider vinegar. Mix the ingredients into 8 ounces of cold water, and stir well. The baking soda will react with the lemon juice or ACV and begin to fizz. Drink the mixture all at once. The combination will naturally reduce the acidity in your body and help prevent the conditions associated with an acidic pH level. Maintaining a healthy pH level will do wonders for your health, and you will notice the difference after only a few days of the treatment.

 

Source:  buynongmoseeds.com

Success Regenerating Spinal Cords

Regenerated nerves after spinal cord injury

Working with paralysed rats, scientists in the US have shown how they might be able to regenerate spines after injury and help paralysed people to one day walk again.

The team, from Tufts University School of Medicine, crushed the spines of lab rats at the dorsal root, which is the main bundle of nerve fibres that branches off the spine, and carries signals of sensation from the body to the brain. They then treated the spines with a protein called artemin, known to help neurons grow and function. After the two-week treatment, the nerve fibres regenerated and successfully passed signals over a distance of 4 centimetres.

“This is a significantly longer length of central nervous system regeneration than has been reported earlier,” said one of the team, physiologist Eric Frank. “But still a long way to go!”

Reporting in a study published by the Proceedings of the National Academy of Sciences, the team says the artemin treatment was successful in regenerating both large and small sensory neurons.

And while that 4-centimetre distance is important, Frank says that’s not all that counts: “The regenerating nerve fibres are growing back to the right places in the spinal cord and brainstem.” He adds that this is pretty impressive, given that their subjects were several months old, which isn’t young in rat years.

The results suggest that the chemical guidance cues that allow the nerve fibres to reach their correct target areas persist in the adult spinal cord, says Frank. This means that while artemin may not help regenerate all nerve fibres – some aren’t receptive to it – it’s likely to help with other types of neurons too. “If it becomes possible to get these other types of nerve fibres to regenerate for long distances as well, there is a reasonable chance that they can also grow back to their original target areas,” says Frank.

The challenge is getting regenerated nerve fibres to reconnect so they can do what they are supposed to do, which just might be possible, considering these results. If scientists could achieve that, it would be a big leap forward in improving the lives of paralysed people.

Source:  sciencealert.com

Most likely culprit for schizophrenia


Researchers have found a gene that links the three previously unrelated biological changes most commonly blamed for causing schizophrenia, making it one of the most promising culprits for the disease so far, and a good target for future treatments.

Schizophrenia is a debilitating mental disorder that usually appears in late adolescence, and changes the way people think, act and perceive reality. For decades, scientists have struggled to work out what causes the hallucinations and strange behaviour associated with the disorder, and keep coming back to three neuronal changes that seem to be to blame. The only problem is that the changes seemed to be unrelated, and, in some cases, even contradictory.

But now, researchers from Duke University in the US have managed to find a link between these three hypotheses, and have shown that all three changes can be brought about by a malfunction in the same gene.

Publishing in Nature Neuroscience, the researchers explain that their results could lead to new treatment strategies that target the underlying cause of the disease, rather than the visible changes or phenotypes, associated with schizophrenia.

“The most exciting part was when all the pieces of the puzzle fell together,” lead researcher, Scott Soderling, a professor of cell biology and neurobiology from Duke University, said in a press release. “When [co-researcher Il Hwan Kim] and I finally realised that these three outwardly unrelated phenotypes … were actually functionally interrelated with each other, that was really surprising and also very exciting for us.”

So what are these three phenotypes? The first is spine pruning, which means that the neurons of people with schizophrenia have fewer dendritic spines – the small protrusions on a neuron’s branches that receive signals from other brain cells. Some people with schizophrenia also have hyperactive neurons, and excess dopamine production.

But these changes just didn’t seem to make sense together. After all, how could neurons be overactive if they didn’t have enough dendritic spines to pass messages back and forth, and why would either of these symptoms trigger excess dopamine production? Now, researchers believe that a mutation in the gene Arp2/3 may be to blame.

Soderling and his team originally spotted the gene during previous studies, which identified thousands of genes linked to schizophrenia. But Arp2/3 was of particular interest, as it controls the formation of synapses, or links, between neurons.

To test its effect, the researchers engineered mice that didn’t have the Arp2/3 gene and, surprisingly, found that they behaved very similarly to humans with schizophrenia. The mice also got worse with age and improved slightly with antipsychotic medications, both traits of human schizophrenia.

But most fascinating was the fact that the mice also had all three of the unrelated brain changes – fewer dendritic spines, overactive neurons and excess dopamine production.

They also took things one step further and showed, for the first time, that this lack of dendritic spines can actually trigger hyperactive neurons. This is because the mice’s brain cells rewire themselves to bypass these spines, effectively skipping the ‘filter’ that usually keeps their activity in check.

They also showed that these overactive neurons at the front of the brain were then stimulating other neurons to dump out dopamine.

“Overall, the combined results reveal how three separate pathologies, at the tiniest molecular level, can converge and fuel a psychiatric disorder,” Susan Scutti explains over at Medical Daily.

The group will now study the role Arp2/3 plays in different parts of the brain, and how it’s linked to other schizophrenia symptoms. The research is still in its very early stages, and has only been demonstrated in mice, not humans. But it’s a promising first step towards understanding this mysterious disease.

“We’re very excited about using this type of approach, where we can genetically rescue Arp2/3 function in different brain regions and normalise behaviours,” Soderling said. “We’d like to use that as a basis for mapping out the neural circuitry and defects that also drive these other behaviours.”

Source:  sciencealert.com

Enzyme that can change a person’s blood type

Blood Type

Scientists have discovered that a particular type of enzyme can cut away antigens in blood types A and B, to make them more like Type O – considered the ‘universal’ blood type, because it’s the only type that can be donated to anyone without the risk of provoking a life-threatening immune response.

The team, from the University of British Columbia in Canada, worked with a family of enzymes called family 98 glycoside hydrolases, extracted from a strain of Streptococcus pneumoniae. Over many generations, they were able to engineer a super high-powered enzyme that can very effectively snip away blood antigens where previous generations of the enzyme struggled. “A major limitation has always been the efficiency of the enzymes,” one of the team, Stephen Withers, said in a press release. “Impractically large amounts of enzyme were needed.”

Getting the right type of blood when you need it is crucial, and it has to do with the different types of residue that can accumulate on the surface of red blood cells. Both blood types A and B have this residue – A has an N-acetylgalactosamine residue, and B has a galactose residue – and Type AB has a mixture of both. Only Blood Type O is free from this residue, which means it can be received by any patient, no matter what type they’re carrying.

Withers and his team managed to create their ‘mutant’ enzyme strain using a technique called directed evolution, which allowed them to insert many different mutations into the gene that codes for the enzyme. By progressively selecting the strains best at snipping away the blood antigens, they were able to create an enzyme that’s 170 times more effective than its parent strain. They published their results in the Journal of the American Chemical Society.
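At its core, directed evolution is a mutate-screen-select loop: generate many variants of the current best candidate, score each one, keep the winner, and repeat. The sketch below illustrates that loop only; the sequences and the fitness function are hypothetical stand-ins, not the lab's actual screen for antigen-cleaving activity:

```python
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids

def mutate(seq, rate):
    """Randomly substitute each residue with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def directed_evolution(parent, fitness, rounds=20, offspring=50, rate=0.05):
    """Each round, screen mutants of the current best and keep the fittest."""
    best = parent
    for _ in range(rounds):
        mutants = [mutate(best, rate) for _ in range(offspring)]
        best = max(mutants + [best], key=fitness)
    return best

# Hypothetical screen: score = matches to an ideal sequence, standing in
# for measured enzyme activity.
target = "MKTAYIAKQR"
fitness = lambda s: sum(a == b for a, b in zip(s, target))

evolved = directed_evolution("A" * len(target), fitness)
print(fitness(evolved) >= fitness("A" * len(target)))  # True
```

Because the current best is always carried into the next round, the score never decreases; in the lab the same logic is implemented with mutagenesis and activity assays rather than a loop and a scoring function.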

“The concept is not new, but until now we needed so much of the enzyme to make it work that it was impractical,” said Withers. “Now I’m confident that we can take this a whole lot further.”

While the current enzyme strain is not yet capable of removing 100 percent of the antigens from Blood Types A and B, which is where it needs to get if the researchers want to make any real use of it, the team is confident that they’ll get it there so they can try it out in clinical trials. Even the smallest amount of antigen in donated blood can set off a dangerous immune response in the recipient.

“Given our success so far, we are optimistic that this will work,” says Withers.

Source:  sciencealert.com