Baking Soda kills Cancer

Even the most aggressive cancers that have metastasized have been reversed with baking soda treatments. And although chemotherapy is toxic to all cells, it remains the one measure that oncologists employ for almost all cancer patients.

In fact, 9 out of 10 cancer patients agree to chemotherapy first without investigating other less invasive options.

Doctors and pharmaceutical companies make money from it. That is the only reason chemotherapy is still used: not because it is effective, decreases morbidity or mortality, or diminishes any specific cancer rates. In fact, it does the opposite. Chemotherapy boosts cancer growth and long-term mortality rates, and oncologists know it.

A few years ago, University of Arizona Cancer Center member Dr. Mark Pagel received a $2 million grant from the National Institutes of Health to study the effectiveness of personalized baking soda cancer treatment for breast cancer.

Obviously, there are people in the know who have understood that sodium bicarbonate, the same stuff that can save a person’s life in the emergency room in a heartbeat, is among the safest and most effective primary cancer treatment options.

Studies have shown that dietary measures to boost bicarbonate levels can increase the pH of acidic tumors without upsetting the pH of the blood and healthy tissues. Animal models of human breast cancer show that oral sodium bicarbonate does indeed make tumors more alkaline and inhibit metastasis.
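
For context, the buffer chemistry behind such pH claims is captured by the standard Henderson-Hasselbalch relation for the bicarbonate system. Below is a minimal sketch using textbook plasma values; it only illustrates why more bicarbonate at a fixed CO2 level means a higher pH, and is not treatment guidance.

    import math

    def bicarbonate_ph(hco3_mmol_per_l, pco2_mmhg, pka=6.1, co2_solubility=0.03):
        # Henderson-Hasselbalch for the bicarbonate buffer:
        #   pH = pKa + log10([HCO3-] / (0.03 * pCO2))
        # pKa 6.1 and the CO2 solubility factor 0.03 are standard
        # textbook values for human plasma at 37 degrees C.
        return pka + math.log10(hco3_mmol_per_l / (co2_solubility * pco2_mmhg))

    # Normal arterial blood: [HCO3-] ~ 24 mmol/L, pCO2 ~ 40 mmHg -> pH ~ 7.40
    print(round(bicarbonate_ph(24, 40), 2))
    # More bicarbonate at the same pCO2 nudges the pH upward
    print(round(bicarbonate_ph(30, 40), 2))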

Based on these studies, plus the fact that baking soda is safe and well tolerated, world renowned doctors such as Dr. Julian Whitaker have adopted successful cancer treatment protocols as part of an overall nutritional and immune support program for patients who are dealing with the disease.

The Whitaker protocol uses 12 g (2 rounded teaspoons) of baking soda mixed in 2 cups water, along with a low-cal sweetener of your choice. (It’s quite salty tasting.)

Sip this mixture over the course of an hour or two, and repeat for a total of three times a day. One man claims to have found a cure for cancer using baking soda and molasses, and to have successfully treated his own disease with it.

When taken orally with water, especially water with high magnesium content, and when used transdermally in medicinal baths, sodium bicarbonate becomes a first-line medicinal for the treatment of cancer, and also kidney disease, diabetes, influenza and even the common cold.

It is also a powerful buffer against radiation exposure, so everyone should be up to speed on its use. Everybody’s physiology is under heavy nuclear attack from strong radioactive winds that are circling the northern hemisphere.

Dr. Robert J. Gillies and his colleagues have already demonstrated that pre-treatment of mice with baking soda results in the alkalinization of the area around tumors. The same researchers reported that bicarbonate increases tumor pH and also inhibits spontaneous metastases in mice with breast cancer.

The Baking Soda Formula for Cancer

To make the baking soda natural cancer remedy at home, you need one of the following to go along with the baking soda:

  • maple syrup,
  • molasses or
  • honey.

In Dr. Sircus’ book, he documented how one patient used baking soda and blackstrap molasses to fight the prostate cancer that had metastasized to his bones. On the first day, the patient mixed 1 teaspoon of baking soda with 1 teaspoon of molasses in a cup of water.

He took this for another 3 days after which his saliva pH read 7.0 and his urine pH read 7.5.

Encouraged by these results, the patient took the solution 2 times on day 5 instead of once daily. And from day 6 – 10, he took 2 teaspoons each of baking soda and molasses twice daily.

By the 10th day, the patient’s pH had risen to 8.5, and the only side effects experienced were headaches and night sweats (similar to cesium therapy).

The next day, the patient had a bone scan and two other medical tests. His results showed that his PSA (prostate-specific antigen, the protein used to gauge the severity of prostate enlargement and prostate cancer) level was down from 22.3 at the point of diagnosis to 0.1.

Another baking soda formula recommends mixing 90 teaspoons of maple syrup with 30 teaspoons of baking soda.

To do this, the maple syrup must be heated to become less viscous. Then the baking soda is added and stirred for 5 minutes until it is fully dissolved.

This preparation should provide about a 10-day supply of the baking soda remedy. The recommended dose for cancer patients is 5 to 7 teaspoons per day.

Care should be taken when using the baking soda remedy to treat cancer. This is because sustaining a high pH level can itself cause metabolic alkalosis and electrolyte imbalance. These can result in edema and also affect the heart and blood pressure.

One does not have to be a doctor to practice pH medicine. Every practitioner of the healing arts and every mother and father needs to understand how to use sodium bicarbonate.

Bicarbonate deficiency is a real problem that deepens with age so it really does pay to understand and appreciate what baking soda is all about.

Do you have baking soda in your house?

 

Source:   humansarefree.com

Cancer Kill Switch

What if you could just flick a switch and turn off cancer? It seems like something you would see in a sci-fi flick, but scientists are working towards a future where that could be a reality. At the Mayo Clinic in Jacksonville, Florida, a group of researchers have made a discovery that could be a kill switch for cancer. They have found a way to reprogram mutating cancer cells back to normal, healthy cells.

Panos Anastasiadis, PhD, head of the Department of Cancer Biology at the Mayo Clinic, and his team were studying the role of adhesion proteins in cells. Anastasiadis’ primary focus was on the p120 catenin protein and the long-held hypothesis that it is a major player in tumor suppression. The team found that p120, along with another adhesion protein, E-cadherin, actually promoted cancer growth. As Anastasiadis put it: “That led us to believe that these molecules have two faces — a good one, maintaining the normal behavior of the cells, and a bad one that drives tumorigenesis.”

In that research, however, Anastasiadis made a remarkable discovery, “an unexpected new biology that provides the code, the software for turning off cancer.” That code turned out to be a partner to the p120 protein, dubbed PLEKHA7. When introduced to tumors, PLEKHA7 was able to “turn off” the cancerous cells’ ability to replicate and return them to a benign state. It stopped the cancer in its tracks.

How it all works is pretty straightforward. Normal, healthy cells are regulated by a sort of biological microprocessor known as microRNAs, which tell the cells to stop replicating when they have reproduced enough. Cancer is caused by a cell’s inability to stop replicating itself; the runaway cell eventually grows into the cluster of cells that we know as a tumor. Anastasiadis’ team found that PLEKHA7 was an important factor in halting the replication of cells, but that it wasn’t present in the cancerous cells. By reintroducing PLEKHA7, what were once raging cancerous cells returned to normal.

This was done by injecting PLEKHA7 directly into the cells, in a controlled lab test. Anastasiadis said they still need to work on “better delivery options,” as these tests were done on human cells in a lab. They did find success, however, in stopping the growth of two very aggressive forms of cancer: breast and bladder. While this isn’t being tested on humans yet, it represents a huge step forward in understanding the nature of cancer and how we can cure it.

 

Source:  Geek.com

14-Million-Year-Old Vehicle Tracks

According to a Russian geologist, these traces were left by vehicles that belonged to an advanced ancient civilization that inhabited our planet 14 million years ago.

We all know that numerous religious texts speak of giants that roamed the Earth in the distant past.

Experts in different fields have differing opinions about this possibility, but there are those who believe that Ancient Giants did exist and that we can find many traces of their existence today.

Geologist Alexander Koltypin believes that the mysterious markings that extend along the Phrygian Valley, in central Turkey, were made by an intelligent race between 12 and 14 million years ago.

“We can assume that ancient vehicles with ‘wheels’ were driven into the soft ground, perhaps a wet surface,” said the geologist.

“Because of the great weight of these vehicles, they left behind very deep grooves which eventually petrified and turned into evidence.”

Geologists are familiar with such phenomena as they have found petrified footprints of dinosaurs that were preserved in the same way.

Together with three colleagues, Dr. Koltypin, director of the Natural Science Scientific Research Centre at Moscow’s International Independent Ecological-Political University, traveled to the site in Anatolia, Turkey where these markings can be found.

Upon returning from his trip, he described what he had observed as ‘petrified tracking ruts in rocky tuffaceous [made from compacted volcanic ash] deposits’.

Dr Koltypin said: ‘All these rocky fields were covered with the ruts left some millions of years ago… we are not talking about human beings.’

‘We are dealing with some kind of cars or all-terrain vehicles. The pairs of ruts are crossing each other from time to time and some ruts are more deep than the others,’ he added.

According to Dr Koltypin, these traces were left behind by vehicles between 12 and 14 million years ago.

‘The methodology of specifying the age of volcanic rocks is very well-studied and worked out,’ he said.

Dr Koltypin is one of the few experts who believe that science needs to change its approach to such matters. He believes that many archaeologists avoid the subject, since it would ruin all the classic theories.

‘As a geologist, I can certainly tell you that unknown antediluvian [pre-Biblical] all-terrain vehicles drove around Central Turkey some 12-to-14 million years ago.’ said Dr. Koltypin.

He said: ‘I think we are seeing the signs of the civilisation which existed before the classic creation of this world. Maybe the creatures of that pre-civilisation were not like modern human beings.’

According to Dr. Koltypin and the other archaeologists and experts who have adopted new ways of thinking, these ancient “car tracks” are among the best-preserved pieces of evidence that undoubtedly prove the existence of highly advanced ancient civilizations that inhabited our planet in the distant past.

Many researchers believe that there are several pieces of evidence pointing towards the existence of highly advanced ancient civilizations that existed on Earth millions of years ago.

“There was no comprehensible system for the tracks, but the distance between each pair of tracks is always the same,” added Dr Koltypin.

‘The maximum depth of a rut is about three feet (one metre). On the sides of ruts there can be seen horizontal scratches, it looks like they were left by the ends of the axles used for ancient wheels.

‘We found many ruts with such scratches,’ he said.

Is it possible that Dr Koltypin is right? And is it possible that mainstream scientists have ignored these giant pieces of evidence in hopes of preserving their classic and old thinking methods?

Is it possible that mainstream experts are afraid of adopting a new approach to ancient history?

There are many who believe that with a classic approach, science is becoming less objective.

 

Source:  humansarefree.com

Scientists grow 5-week-old human brain

Growing brain tissue in a dish has been done before, but bold new research announced this week shows that scientists’ ability to create human brains in laboratory settings has come a long way quickly.

Researchers at the Ohio State University in the US claim to have developed the most complete laboratory-grown human brain ever, creating a model with the brain maturity of a 5-week-old foetus. The brain, which is approximately the size of a pencil eraser, contains 99 percent of the genes that would be present in a natural human foetal brain.

“It not only looks like the developing brain, its diverse cell types express nearly all genes like a brain,” Rene Anand, professor of biological chemistry and pharmacology at Ohio State and lead researcher on the brain model, said in a statement.

“We’ve struggled for a long time trying to solve complex brain disease problems that cause tremendous pain and suffering. The power of this brain model bodes very well for human health because it gives us better and more relevant options to test and develop therapeutics other than rodents.”

Anand turned to stem cell engineering four years ago after his specialized field of research – examining the relationship between nicotinic receptors and central nervous system disorders – ran into complications using rodent specimens. Despite having limited funds, Anand and his colleagues succeeded with their proprietary technique, which they are in the process of commercializing.

The brain they have developed is a virtually complete recreation of a human foetal brain, primarily missing only a vascular system – in other words, all the blood vessels. But everything else (spinal cord, major brain regions, multiple cell types, signalling circuitry) is there. What’s more, it’s functioning, with high-resolution imaging of the brain model showing functioning neurons and brain cells.

The researchers say that it takes 15 weeks to grow a lab-developed brain to the equivalent of a 5-week-old foetal human brain, and the longer the maturation process the more complete the organoid will become.

“If we let it go to 16 or 20 weeks, that might complete it, filling in that 1 percent of missing genes. We don’t know yet,” said Anand.

The scientific benefit of growing human brains in laboratory settings is that it enables high-end research into human diseases that cannot be completed using rodents.

“In central nervous system diseases, this will enable studies of either underlying genetic susceptibility or purely environmental influences, or a combination,” said Anand. “Genomic science infers there are up to 600 genes that give rise to autism, but we are stuck there. Mathematical correlations and statistical methods are insufficient to in themselves identify causation. You need an experimental system – you need a human brain.”

The research was presented this week at the Military Health System Research Symposium.

 

Source:  sciencealert.com

Only 1% of 84,000 Chemicals Have Been Tested

There are around 84,000 chemicals on the market, and we come into contact with many of them every single day. And if that isn’t enough to cause concern, the shocking fact is that only about 1 percent of them have been studied for safety.

In 2010, at a hearing of the Senate Subcommittee on Superfund, Toxics and Environmental Health, Lisa Jackson, then the administrator of the EPA, put our current, hyper-toxic era into sharp perspective: “A child born in America today will grow up exposed to more chemicals than any other generation in our history.”

Just consider your morning routine: If you’re an average male, you use up to nine personal care products every single day: shampoo, toothpaste, soap, deodorant, hair conditioner, lip balm, sunscreen, body lotion and shaving products — amounting to about 85 different chemicals. Many of the ingredients in these products are harmless, but some are carcinogens, neurotoxins and endocrine disruptors.

Women are particularly at risk because they generally use more personal care products than men: 25 percent of women apply 15 or more products daily, including makeup and anti-aging creams, amounting to an average of 168 chemicals. For a pregnant woman, the risk is multiplied as she can pass on those toxins to her unborn child: 300 contaminants have been detected in the umbilical cord blood of newborns.

Many people don’t think twice about the chemicals they put on their bodies, perhaps thinking that the government regulates the personal care products that flood the marketplace. In reality, the government plays a very small role, in part because it doesn’t have the legal mandate to protect the public from harmful substances that chemical companies and manufacturers sell in their products. Federal rules designed to ensure product safety haven’t been updated in more than 75 years. New untested chemicals appear on store shelves all the time.

“Under federal law, cosmetics companies don’t have to disclose chemicals or gain approval for the 2,000 products that go on the market every year,” notes environment writer Jane Kay in Scientific American. “And removing a cosmetic from sale takes a battle in federal court.”

It’s high time these rules were revisited. Not only have thousands of new chemicals entered the market in the past several decades, but there is also overwhelming evidence that the public is unnecessarily exposed to health hazards from consumer products. In 2013, the American College of Obstetricians and Gynecologists issued a report that found “robust” evidence linking “toxic environmental agents” — which include consumer products — to “adverse reproductive and developmental health outcomes.”

Formaldehyde is a good example. It is a known carcinogen used as a preservative to kill or inhibit the growth of microorganisms in a wide range of personal care products, from cosmetics, soaps, shampoos and lotions to deodorants, nail polishes and hair gels. It is also used in pressed-wood products, permanent-press fabrics, paper product coatings and insulation, and as a fungicide, germicide, disinfectant and preservative. The general public is also exposed to formaldehyde through automobile tailpipe emissions. Formaldehyde has been linked to spontaneous abortion and low birth weight.

While the main concern about formaldehyde exposure centers around industrial use (e.g., industrial workers, embalmers and salon workers), the Cosmetic Ingredient Review, an independent panel of experts that determines the safety of individual chemical compounds as they are used in cosmetics, recommends that for health and safety reasons cosmetics should not contain formaldehyde at amounts greater than 0.2 percent. It’s a small amount, but the problem is that the FDA doesn’t regulate the use of formaldehyde in cosmetics (except for nail polish), and companies aren’t required by law to follow CIR’s recommendations.

 

Source:  alternet.org

NSA planned to infect Samsung with spyware

 

If you’re in the business of writing spyware or malware, smartphones are a tempting target. For many people, their phone or tablet is now the primary computing device they use to surf the web, access content, and explore new software. Google has had problems keeping the Google Play store free from malware and spyware, but new information suggests that both Google and Samsung almost faced a much more potent opponent — the NSA itself.

A report from The Intercept highlights how the NSA explored options for hacking the App Store and Google Play over several workshops held in Australia and Canada between November 2011 and February 2012. The projects used the Internet-monitoring XKEYSCORE system to identify smartphone traffic, then trace that traffic back to app stores. This led to a project dubbed Irritant Horn, the point of which was to develop the ability to distribute “implants” that could be installed when the smartphones in question attempted to connect to Google or Samsung app stores.

The NSA has targeted mobile devices ever since the post-Patriot Act era made such warrantless comprehensive spying legal, but it’s never been clear how the organization managed to tap certain hardware in the first place. The goal was twofold: First, use app stores to launch spyware campaigns and second, gather information about the phone users themselves by infiltrating the app stores in question.

The mention of “another Arab Spring” alludes to the fact that the events of 2010-2011 apparently caught western intelligence agencies off-guard, with few resources that could quickly be brought to bear. The NSA wanted to be aware of future events before they happened. Note, however, that this has precious little to do with the direct goal of protecting the United States from terrorism.

Few would argue that the US should not monitor the activities of known threats, but where was the threat from internal strife and the possible toppling of autocratic governments? It’s true that in the longer run, some new governments might pursue policies that the United States found less desirable than those of the previous regime, but there’s an enormous leap between “We don’t like Country X’s new trade policy,” and “Country X is actively assisting terrorist groups to carry out an attack on the United States.”

 The NSA was primarily interested in the activities of African countries. But in the course of investigating these possibilities, it discovered significant security flaws in a program called UC Browser, used by nearly half a billion people in East Asia. Instead of disclosing the security vulnerability, the NSA and other foreign intelligence groups chose to exploit it — thereby increasing the chances that other criminal elements would have time to find and exploit it as well.

These issues are at the heart of the debate over what the NSA’s role should be in the future. There’s always been tension over whether the NSA should weaken or strengthen the cryptographic standards that allow for secure communication. That discussion may be even more nuanced when it involves software produced by foreign companies. There are few signs, however, that such nuanced discussions of capability have ever occurred. Instead, we continue to see intelligence resources deployed with the goal of vacuuming up all information from any source, regardless of legal precedent or cooperation.

The future of the Patriot Act and the scope of the NSA’s future powers remain in some doubt. Senator Rand Paul gave a 10-hour speech yesterday aimed at derailing support for the Patriot Act (his actions were not properly a filibuster, because a vote on the renewal of Section 215 wasn’t actually before the chamber at the time). Others in the House of Representatives have called for a full repeal of the Patriot Act’s provisions, and the Federal Appeals Court for the Second Circuit recently ruled that the current spying program is illegal under the Patriot Act as it stands.

 

Source:  extremetech.com

 

Reprogrammed bacteria able to detect cancer

The fight against cancer has risen to a fever pitch in the last decade, with new research avenues increasing almost by the day. If we are to believe Ray Kurzweil and the singularity folk, the specter of cancer may soon be a thing of the past. Lending credence to such optimism, new research by a team at MIT and UC San Diego employs genetically engineered bacteria to detect cancer, and perhaps someday treat it as well. Enlisting the help of bacteria in the battle against cancer may prove key in turning the tables on this awful menace.

The basis for this new form of cancer diagnosis is the unusual relationship between cancer and bacteria. Whereas healthy human tissue will aggressively fight off most bacterial infestations, the immune system within tumors has been compromised by the many mutations taking place there, and so bacteria accumulate in them at a higher-than-normal rate. The researchers exploited this characteristic to devise a means of detecting tumors long before other methods could catch them.

By removing a snippet of DNA programming found in fireflies and transferring it to a harmless form of E. coli bacteria, the researchers were able to cause these bacteria to fluoresce at the critical concentrations that occur within tumors. The analogy would be a flashlight that automatically turns on when it finds a tumor. The ability to detect tumors as small as one cubic millimeter makes this one of the most sensitive diagnostic tools to date. In treating cancer, early detection is pivotal, since the sooner a tumor is detected, the easier it is to contain and eliminate.

But before letting out a collective sigh of relief, it should be kept in mind that this method has only been successfully applied to liver cancers. Early on in the study, the researchers realized the orally ingested bacteria would not reach sufficient concentrations throughout the whole body to successfully detect all tumors therein. For instance, the blood-brain barrier prevents the bacteria from entering the human brain, as would be necessary for this method to detect brain tumors. The liver, however, proved an exception, in that the E. coli bacteria in question naturally occur there and would multiply rapidly in the presence of a tumor.

Despite its limitations, this is nonetheless a significant development. Many tumors that begin in the colon quickly spread to the liver, where they prove difficult to detect and go on to infect other parts of the body. Therefore, catching liver cancer early can play a key role in preventing cancers in many other places of the body.

The scientists involved in the study, including Tal Danino and Arthur Prindle, are now hopeful that the same bacteria can be programmed to fight cancer as well. The goal is to engineer the bacteria to cause genetic disruption of cancer cell function, deliver drugs, or signal the immune system to destroy the cancer itself. In the future, the cup of yogurt you have in the morning may not only improve digestive health, but simultaneously track down and eliminate cancers growing within the body.

 

Source:  extremetech.com

 

100,000 Germans Call for GMO Ban

German beekeepers have called for a nationwide ban on cultivating GM plants, reports the German NGO keine-gentechnik.de.

The call by the German Beekeepers Association (DIB), which represents almost 100,000 beekeepers, comes after Europe adopted controversial legislation enabling member states to opt-out of the cultivation of GMOs that have been approved at the EU level.

Under the law, a member state can ban a GMO in part or all of its territory. But the law has come under heavy criticism for failing to provide a solid basis for such bans.

The beekeepers are urging Agriculture Minister Christian Schmidt (CSU) to implement a Germany-wide ban on cultivation. The Minister, however, argues for letting each federal state decide individually.

The beekeepers counter that a piecemeal approach will not work. Bees fly up to eight kilometres in search of food, the DIB said, so a juxtaposition of GM crop cultivation zones and GMO-free zones within Germany would be “environmentally and agriculturally unacceptable”.

“Bees know no borders,” the DIB added.

The beekeepers’ demand for a nationwide ban could bring them into direct conflict with the new opt-out law, as experts warn that such bans may not be legally solid.

National GMO cultivation bans will be tough to uphold

At a conference on the new European legislation hosted by the Hungarian Ministry of Agriculture in Budapest, Hungary, in April 2015, Dr H.-Christoph von Heydebrand of the German Federal Ministry of Food and Agriculture warned that a nationwide ban on GMO cultivation would be much harder to justify under the new law than a regional or local ban.

A lawyer from the EU Council, Matthew Moore, speaking at the same conference in a personal capacity, agreed that it would be far easier under the law to defend national measures that “do not extend to the whole territory”.

Mr Moore gave an example of the type of challenge that would-be opting-out countries will be faced with. If they argue that GMOs threaten small-scale and agroecological farmers in their nation, they could be asked: “Is the entirety of your agricultural sector really composed of small farmers whose domination by a large agro-industrial company and its single pesticide motivated you to act?”

Mr Moore explained that the principle of proportionality is written into the new law, as well as being a general principle of EU law.

This means that the ECJ will be more inclined to accept GMO cultivation opt-outs “in relation to a defined region than in relation to the entirety of the territory of a country the size of Hungary”. Any measure taken by an opting-out country to ban or restrict the cultivation of GMOs must not go beyond what is necessary to achieve the stated aim.

Mr Moore made clear that if opt-outs were challenged, for example, by the GMO industry, the case would end up in the European Court of Justice. And the ECJ has a presumption in favour of the EU single market.

In simple terms, that means the ECJ could take a lot of convincing to allow a country or even a region to opt out of cultivating a GMO that the European Food Safety Authority (EFSA) asserts is safe. Such an opt-out, if allowed to stand, could create divisions in the European single market and might bring the member state into conflict with the ECJ.

The current situation in Germany, with beekeepers ranged against government officials and pro-GMO farmers, also suggests that the new opt-out law will create internal divisions within a country.

The GMO industry may go down in history as having broken apart the European Union and set one sector of the food and agriculture industry against another.

 

Source:  globalresearch.ca

Gay Marriage Study Faked!

We’ve heard it over and over again in the mainstream media – a majority of Americans now support “same-sex” marriage. It is the rationale those pushing for this radical change to U.S. culture are using to press ahead with their agenda.

But, a new report just out shoots down this assumption being hammered home by the left.  The Daily Caller reports:

A study purporting to show that people’s views on gay marriage could change simply by meeting gay people has been retracted following revelations that its data was fabricated.

The study was published last December in Science, and prior to publication drew a great deal of attention from the American media. Vox, for instance, described the findings in the study as “kind of miraculous.” As it turns out, that’s exactly what they were, because they were apparently made up.

According to the study, people from communities hostile to gay marriage could have their opinions shift dramatically after spending just a few minutes speaking with a gay person who canvassed their neighborhood promoting gay marriage. Not only that, but this could have a spillover effect, making not just the people themselves more pro-gay but also other people who lived in the same household.

The study, among other things, lent support to the notion that those opposed to gay marriage simply don’t know or interact with open homosexuals. More broadly, it was seen as an important development in the science of how people can be convinced to change their minds on ideologically-charged issues.

I don’t know why this should surprise us. The left always plays “fast and loose” with the facts to spew their propaganda. Perception is reality as far as they are concerned.

The Daily Caller provides more details:

The study began to fall apart when students at the University of California at Berkeley sought to conduct additional research building off of it, only to find major irregularities in how its research was apparently conducted. For example, the feeling thermometers used to measure participants’ attitudes produced consistent, reliable information, even though such scales are known for producing relatively unreliable numbers.

Also, the data recovered had an exceptionally consistent distribution, with not a single one of the 12,000 supposed participants providing anomalous or unusual results. In other words, the study’s data was too perfect to be believable.
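
As a toy illustration of that red flag (not the investigators’ actual method, and with invented numbers), one can compare the wave-to-wave consistency of realistically noisy survey responses against fabricated responses that barely move from the first wave:

    import random, statistics

    def correlation(xs, ys):
        # Pearson correlation between two waves of survey responses
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    random.seed(1)
    wave1 = [random.uniform(0, 100) for _ in range(12000)]  # 0-100 "feeling thermometer"

    # Real respondents drift between survey waves ...
    noisy_wave2 = [min(100, max(0, x + random.gauss(0, 15))) for x in wave1]
    # ... while fabricated follow-ups that barely move look implausibly reliable.
    fabricated_wave2 = [min(100, max(0, x + random.gauss(0, 2))) for x in wave1]

    print(round(correlation(wave1, noisy_wave2), 3))      # roughly 0.9: believable
    print(round(correlation(wave1, fabricated_wave2), 3)) # roughly 0.998: too perfect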

Donald Green, a professor at Columbia University and a co-author of the paper, made the decision to retract it after confronting co-author Michael LaCour, a graduate student at UCLA. While LaCour maintained that he hadn’t fabricated the data, he was also unable to produce the original source files supposedly used to produce it. When he failed to write up a retraction, Green took the initiative and did so himself.

“I am deeply embarrassed by this turn of events and apologize to the editors, reviewers, and readers of Science,” Green told Retraction Watch, a science watchdog website.

How much damage this “fake” study has already inflicted on America is not known, but don’t look to anyone in the mainstream media to correct the record. They are completely sold on the idea of “same-sex” marriage and all of the “transgender fluidity” nonsense now being pushed by the same people who pushed for acceptance of homosexual behavior as normative.

It may take the American people some time to see through this most recent fraud, but truth has a funny way of coming out; especially when all of the lies start to fall like a house of cards.

 

Source:  thefederalistpapers.org

Half Of All Jobs Will Be Automated By 2034

Almost half of all jobs could be automated by computers within two decades and “no government is prepared” for the tsunami of social change that will follow, according to the Economist.

The magazine’s 2014 analysis of the impact of technology paints a pretty bleak picture of the future.

It says that while innovation (aka “the elixir of progress”) has always resulted in job losses, usually economies have eventually been able to develop new roles for those workers to compensate, such as in the industrial revolution of the 19th century, or the food production revolution of the 20th century.

But the pace of change this time around appears to be unprecedented, its leader column claims. And the result is a huge amount of uncertainty for both developed and under-developed economies about where the next ‘lost generation’ is going to find work.

It quotes a 2013 Oxford Martin School study that estimates 47% of all jobs could be automated in the next 20 years:

“Our findings thus imply that as technology races ahead, low-skill workers will reallocate to tasks that are non-susceptible to computerisation – i.e., tasks requiring creative and social intelligence. For workers to win the race, however, they will have to acquire creative and social skills,” that study says.

The Economist also points out that current unemployment levels are startlingly high, but that “this wave of technological disruption to the job market has only just started”.

Specifically, the Economist points to new tech like driverless cars, improved household gadgets, faster and more efficient online communications and ‘big data’ analysis as areas where humans are quickly being superseded. And while new start-ups are raising billions, they employ few people – Instagram, sold to Facebook in 2012 for $1 billion, employed just 30 people at the time.

Those conclusions are echoed elsewhere. Another study (‘Are You Ready For #GenMobile?’), to be released in full on 21 January by Aruba Networks, points out just how fast traditional working models are changing.

It says that 72% of British people now believe they work more efficiently at home, and that 63% need a WiFi network to complete their tasks – not bad for a technology that was barely standardised 10 years ago.

Meanwhile in ‘The Second Machine Age’, out this week, Erik Brynjolfsson and Andrew McAfee argue workers are under unprecedented pressure from the automation of skilled and unskilled jobs.

In a recent Salon interview Brynjolfsson said: “technology has always been destroying jobs, and it’s always been creating jobs, and it’s been roughly a wash for the last 200 years. But starting in the 1990s the employment to population ratio really started plummeting and it’s now fallen off a cliff and not getting back up. We think that it should be the focus of policymakers right now to figure out how to address that.”

The BBC also produced a report earlier this month which claimed, in stark tones, that “the robots are coming to steal our jobs”.

“AIs are embedded in the fabric of our everyday lives,” head of AI at Singularity University, Neil Jacobstein, told the Beeb.

“They are used in medicine, in law, in design and throughout the automotive industry.”

That report too pointed out the change will affect jobs of all kinds – from factory work at Taiwanese manufacturing giant Hon Hai, which has announced plans to replace 500,000 workers with robots in three years, to lawyers, surgeons and public sector workers.

Opinions remain divided on the impact and future of technological innovation on the jobs market, and wealth inequality. The Economist leader argues that governments have a responsibility to innovate in education, taxation and embracing progress, though the solutions are by no means obvious or without uncertainty.

If only we could automate the process of making and implementing those political decisions – now that would really be something.

 

Source:  huffingtonpost.co.uk

Animals’ internal compass possibly found

Scientists have long known that animals have some kind of internal compass that allows them to use Earth’s magnetic field to navigate. This ability allows species such as the monarch butterfly to travel up to an incredible 5,000 km across the US to the exact same location, year after year, and the Arctic tern to travel 71,000 km between Greenland and Antarctica annually. But these magnetic fields are pretty much invisible to humans, and we’ve never been able to find the sensor that lets animals detect them.

Now a team of researchers from the University of Texas at Austin in the US has identified a tiny antenna-like structure in the brain of worms that allows them to sense Earth’s magnetic field, and they suspect the same structure could be the key to helping other species navigate, too.

“Chances are that the same molecules will be used by cuter animals like butterflies and birds,” one of the researchers, Jon Pierce-Shimomura, said in a press release. “This gives us a first foothold in understanding magnetosensation in other animals.”

Back in 2012, scientists found cells in pigeons that process information about magnetic fields, but this is the first time researchers have ever found the actual sensor in animals.

“It’s been a competitive race to find the first magnetosensory neuron,” said Pierce-Shimomura. “And we think we’ve won with worms, which is a big surprise because no one suspected that worms could sense the Earth’s magnetic field.”

The team made the discovery while conducting Alzheimer’s research in small soil worms, C. elegans. They noticed that when worms from Texas soil were hungry, they moved downwards to look for food. But worms that came from other parts of the world – Hawaii, England and Australia, for example – didn’t move down; they moved at a precise angle to the magnetic field that would have corresponded to down if they’d been in their home country.

The team then altered the magnetic field around the worms’ enclosure using a special magnetic coil system, and found that they changed their behaviour accordingly.

But the real breakthrough came when they worked with worms that had been genetically engineered to block a structure called the AFD neuron from forming in the brain. These worms didn’t change their behaviour when the magnetic fields around their enclosure were altered – in fact, they seemed unable to detect the magnetic fields at all.

The AFD neuron is a tiny structure at the end of a neuron that gives worms the ability to sense carbon dioxide levels and temperature while underground. To confirm its additional role in sensing magnetic fields, the team used a technique called calcium imaging to show that changes in the magnetic field caused the AFD neuron to light up. Their findings have been published in the journal eLife. 
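
For readers unfamiliar with the technique, calcium imaging usually reports activity as ΔF/F, the fractional change in fluorescence relative to a pre-stimulus baseline. Here is a minimal sketch with made-up numbers (not the team’s actual analysis code):

    def delta_f_over_f(trace, baseline_frames=50):
        # (F - F0) / F0, where F0 is the mean fluorescence before the stimulus
        f0 = sum(trace[:baseline_frames]) / baseline_frames
        return [(f - f0) / f0 for f in trace]

    # Hypothetical trace: flat baseline, then a calcium transient after the
    # magnetic field around the enclosure is changed at frame 50.
    raw = [100.0] * 50 + [100.0 + 60.0 * (0.95 ** t) for t in range(50)]

    dff = delta_f_over_f(raw)
    print(round(max(dff), 2))  # 0.6 -> the neuron "lights up" after the stimulus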

The next step will be to confirm that this AFD neuron exists in other species and that it works in the same way. If that’s the case, we might finally have an explanation for the incredible navigation abilities of animals, and perhaps a roadmap for how humans could one day achieve the same ability.

 

Source:  sciencealert.com

Majority of Americans Are Now Obese

The number of overweight and obese adults in the United States continues to rise, according to a new study that’s found more than two-thirds of adult Americans aged 25 years or older are now overweight or obese.

The research analysed data from the National Health and Nutrition Examination Survey, which ran from 2007 to 2012, and included information on a sample of 15,208 men and women. Based on the data, the researchers estimate that 39.96 percent of US men (36.3 million) are overweight and 35.04 percent (31.8 million) are obese.

For women, the estimates are that 29.74 percent (28.9 million) are overweight, while 36.84 percent (35.8 million) are obese. If you do the maths, sure enough, the number of obese adult Americans (67.6 million) now eclipses those who are only overweight (65.2 million).
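
The maths in question is just the sum of the study’s estimates for men and women; a quick check of the article’s own figures:

    # The study's estimates, as quoted above (counts in millions)
    overweight = {"men": 36.3, "women": 28.9}
    obese      = {"men": 31.8, "women": 35.8}

    print(round(sum(overweight.values()), 1))  # 65.2 million overweight
    print(round(sum(obese.values()), 1))       # 67.6 million obese, which eclipses it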

What’s so remarkable about the research, conducted by the Washington University School of Medicine and published in The Journal of the American Medical Association, is just how stark the numbers are for the US population. Three out of every four men are overweight or obese, and the same can be said for two out of every three women.

In other words, people in healthy weight ranges in the US make up only a distinct minority of the population, especially when you consider that some portion of the remainder in these figures will be people who are actually underweight.

The researchers found the African American community has the biggest problem with obesity – affecting 39 percent of black men and 57 percent of black women – followed by Mexican Americans and then whites.

A similar study was published back in 1999, finding that 63 percent of men and 55 percent of women aged 25 and older were overweight or obese, so clearly the problem has only gotten worse over the last two decades, despite efforts from the government and the health community to educate people on how to take care of themselves when it comes to food and lifestyle choices.

“This is a wakeup call to implement policies and practices designed to combat overweight and obesity,” said Lin Yang, the study’s lead author, in a statement. “An effort that spans multiple sectors must be made to stop or reverse this trend that is compromising and shortening the lives of many.”

Scary stuff, but hopefully this latest research will help galvanise efforts to turn weights around in the US and put healthy eating and living squarely back on the agenda.

 

Source:  sciencealert.com

Human cyborgs within 200 years

Within the next 200 years, humans will have become so merged with technology that we’ll have evolved into “God-like cyborgs”, according to Yuval Noah Harari, an historian and author from the Hebrew University of Jerusalem in Israel.

Harari researches the history of the human species, and after writing a new book on our past, he now believes that we’re just a few short centuries away from being able to use technology to avoid death altogether – if we can afford it, that is.

“I think it is likely in the next 200 years or so Homo sapiens will upgrade themselves into some idea of a divine being, either through biological manipulation or genetic engineering, or by the creation of cyborgs: part organic, part non-organic,” Harari said during his presentation at the Hay Festival in the UK, as Sarah Knapton reports for the Telegraph. “It will be the greatest evolution in biology since the appearance of life … we will be as different from today’s humans as chimps are now from us.”

Obviously, we should take Harari’s predictions with a grain of salt, but while they sound more suited to science fiction than real life, they’re not actually that out-there. Many researchers believe that we’ve already started down the path towards a cyborg future; after all, many of us already rely on bionic ears and eyes, insulin pump technology and prosthetics to help us survive. And with researchers recently learning how to send people’s thoughts across the web, subconsciously control bionic limbs and use liquid metal to heal severed nerves, it’s not hard to imagine how we could continue to use technology to supplement our vulnerable human bodies further.

Interestingly, Harari’s comments came just a few days after UK-based neuroscientist Hannah Critchlow from Cambridge University got the Internet excited by saying that it could be possible to upload our brains into computers, if we could build computers with 100 trillion circuit connections. “People could probably live inside a machine. Potentially, I think it is definitely a possibility,” Critchlow said during her presentation at the festival.

But Harari warned that these upgrades may only be available to the wealthiest members of society, and that could cause a growing biological divide between rich and poor – especially if some of us can afford to pay for the privilege of living forever while the rest of the species dies out.

If that sounds depressing, the alternative is a future where instead of us taking advantage of technology, technology takes advantage of us, and artificial intelligence poses a threat to our survival, as Elon Musk, Stephen Hawking, and Bill Gates have all predicted.

Either way, one thing seems pretty clear – our future as a species is now inextricably linked with the technology we’ve created. For better or for worse.

 

Source:  sciencealert.com

Friends genetically more similar than strangers

Our friends seem to be genetically more similar to us than strangers, according to a new U.S. scientific study led by Nicholas Christakis, the prominent Greek-American professor of sociology and medicine at Yale University, and James Fowler, professor of medical genetics and political science at the University of California, San Diego.

The researchers, who published their findings in the Proceedings of the National Academy of Sciences (PNAS), analyzed the genomes of 1,932 people and compared pairs of friends with pairs of strangers.

None of these people were biologically related; the only difference was the level of social connection between them.

The study showed that, on average, each person’s DNA was more similar to that of their friends than to that of strangers. The researchers noted that this finding partly reflects the tendency of people to make friends with those of a similar ethnic (and hence genetic) background.

The genetic similarity between friends was greater than the expected similarity between people who share a common national and genetic inheritance. It is not clear yet by what mechanisms this occurs.
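
To give a feel for the kind of comparison involved, here is a hedged sketch with invented genotypes and a crude matching score (the study itself used formal kinship estimates, not raw matching): genotypes are coded 0, 1 or 2 copies of an allele at each site, and pairs are scored by how often they agree.

    import random

    def allele_sharing(g1, g2):
        # Fraction of SNP sites at which two genotype vectors carry the
        # same 0/1/2 code; real studies use weighted kinship coefficients.
        return sum(a == b for a, b in zip(g1, g2)) / len(g1)

    random.seed(0)
    n_snps = 20000
    person = [random.choice([0, 1, 2]) for _ in range(n_snps)]

    # A "friend" built to agree slightly more often than chance, a crude
    # stand-in for the small excess similarity the study reports:
    friend = [a if random.random() < 0.02 else random.choice([0, 1, 2]) for a in person]
    stranger = [random.choice([0, 1, 2]) for _ in range(n_snps)]

    print(round(allele_sharing(person, friend), 3))    # slightly above 1/3
    print(round(allele_sharing(person, stranger), 3))  # ~1/3, the chance baseline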

But how similar are we with our friends?

On average, according to the study, a friend of ours has a genetic affinity comparable to that of our fourth cousin, which means that we share about 1% of our genes with our friends.

“1% does not sound like a big deal, but it is for geneticists. It is noteworthy that most people do not even know who their fourth cousins are, but somehow, from the countless possible cases, we choose to make friends with people who are genetically similar to us,” said Prof Christakis.

Christakis and Fowler even developed a “friendship score”, which predicts who will befriend whom with nearly the same accuracy as scientists can predict, on the basis of genetic analysis, a person’s chances of obesity or schizophrenia.

Focusing on individual genes, the research shows that friends are more likely to have similar genes related to the sense of smell, but different genes that control immunity; thus friends vary genetically in their protection against various diseases.

It seems to be an evolutionary mechanism that serves the society in general, since the fact that people hang out with those who are vulnerable to different diseases constitutes a barrier to the quick spread of an epidemic from person to person. Another notable finding is that the common genes we share with our friends seem to evolve more rapidly than others.

Prof. Christakis explains that this is probably why human evolution seems to have accelerated over the past 30,000 years, as the social environment, with the important role it gives to linguistic communication, is a vital evolutionary factor.

 

Source:  humansarefree.com

Multitasking lowers your IQ

Envisage the switched-on new-millennium male – his iPhone in one hand while he switches between emails and business reports on his computer screen – a vision of productivity in this wondrous age of apps.

Wrong. He’s seriously dumbing himself down.

Several scientific studies around the world have concluded that the brain doesn’t switch tasks like an expert juggler. Quite the opposite: multitasking can reduce your IQ by as much as 10 points, cause mental blanks and reduce your productivity by 40 per cent.

What about women? They’re legends at multitasking and concentrating on several things at once. Nope. Not a single psychological study concludes women are better at multitasking than men, and some research indicates they can be worse.

One Australian researcher in the field, Dr Julia Irwin, senior lecturer in psychology at Macquarie University, advises people to abandon their apps, turn off their mobiles and ignore their emails while they concentrate on one task at a time. “At the end of the day, they will have been a lot more productive,” she says.

“If you’re sending an email while also working on an assignment, one downside is that withdrawing your attention from one task to another creates a split-second in which the brain’s in no-man’s land. It’s called a post-refractory pause.

“Over time these pauses add up and can mean your mind wasn’t on the job for a couple of minutes.”
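
To see how those pauses add up, here is a back-of-envelope sketch with assumed numbers (the switch rate and per-switch lapse are illustrative, not figures from Dr Irwin’s research):

    # Suppose you glance at email every 30 seconds while writing,
    # and each switch costs a roughly one-second attentional lapse.
    switches_per_hour = 120
    pause_seconds = 1.0

    lost_minutes = switches_per_hour * pause_seconds / 60
    print(lost_minutes)  # 2.0 -> "a couple of minutes" off-task every hour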

Dr Irwin says such mental blanks can be dangerous when doing something of critical importance like keeping an eye out for a child in a playground. “If, in that pause, a child wobbles on their bicycle, it’s obviously a worry. You just haven’t got your attention on it.

“The other aspect is, if you’re deeply immersed in writing something and turn your attention to an email that’s just come in, there are studies that show it can take you up to 15 minutes to get yourself back into that same degree of immersion.”

One early study by the Institute of Psychiatry in London involved more than 1000 workers and found multitasking with electronic media caused a temporary 10-point decrease in IQ – a worse effect than smoking marijuana or losing a night’s sleep.

The study’s leader, an adjunct professor at the University of Nevada, Dr Glenn Wilson, called it “infomania”, a condition created by using multiple electronic devices and employers’ growing demands to tackle more than one task at a time.

“This is a very real and widespread phenomenon,” he told CNN. “We have found that this obsession with looking at messages, if unchecked, will damage a worker’s performance by reducing their mental sharpness. Companies should encourage a more balanced and appropriate way of working.”

Another study, by Professor David Meyer, director of the University of Michigan’s Brain Cognition and Action Laboratory, concluded that even brief mental blocks created by shifting between tasks cost as much as 40 per cent of someone’s productive time.

Dr Irwin’s own Australian research concludes clearly that in today’s multitasking multi-app world, people should turn off their devices when doing something that merits their full attention.

One of her studies also defies a widespread belief that women are better at multitasking. “One of the very first studies I did was with young students driving and either talking to passengers or on a mobile,” she says. “I thought, oh, the women are going to ace this, but the women actually scored worse on the phones than the men.

“When I looked in the literature, there is not a single study in psychology that shows that women are better at multitasking. But what I did find in the sociological literature is that they perform multiple tasks more often.

“This has led to the belief that women are better at multitasking, but the more studies are done, the fewer differences they find between female and male brains.”

 

Source:  theage.com.au

Reality doesn’t exist, quantum experiment confirms

 

Australian scientists have recreated a famous experiment and confirmed the bizarre predictions of quantum physics about the nature of reality, by proving that reality doesn’t actually exist until we measure it – at least, not on the very small scale.

That all sounds a little mind-meltingly complex, but the experiment poses a pretty simple question: if you have an object that can either act like a particle or a wave, at what point does that object ‘decide’?

Our general logic would assume that the object is either wave-like or particle-like by its very nature, and our measurements will have nothing to do with the answer. But quantum theory predicts that the result all depends on how the object is measured at the end of its journey. And that’s exactly what a team from the Australian National University has now found.

“It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” lead researcher and physicist Andrew Truscott said in a press release.

Known as John Wheeler’s delayed-choice thought experiment, it was first proposed back in 1978 using light beams bounced by mirrors, but at the time, the technology needed to carry it out was out of reach. Now, almost 40 years later, the Australian team has managed to recreate the experiment using helium atoms scattered by laser light.

“Quantum physics predictions about interference seem odd enough when applied to light, which seems more like a wave, but to have done the experiment with atoms, which are complicated things that have mass and interact with electric fields and so on, adds to the weirdness,” said Roman Khakimov, a PhD student who worked on the experiment.

To successfully recreate the experiment, the team trapped a bunch of helium atoms in a suspended state known as a Bose-Einstein condensate, and then ejected them all until there was only a single atom left.

This chosen atom was then dropped through a pair of laser beams, which made a grating pattern that acted as a crossroads that would scatter the path of the atom, much like a solid grating would scatter light.

They then randomly added a second grating that recombined the paths, but only after the atom had already passed the first grating.

When this second grating was added, it led to constructive or destructive interference, which is what you’d expect if the atom had travelled both paths, like a wave would. But when the second grating was not added, no interference was observed, as if the atom chose only one path.
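
The two outcomes match the standard two-path arithmetic. As a toy model (a textbook calculation, not the ANU team’s analysis): with the recombining grating in place, the amplitudes of the two paths add, so the detection probability oscillates with their relative phase; without it, the probability is a flat one half.

    import math

    def detection_probability(phase, second_grating):
        # With the second grating, the two path amplitudes interfere:
        #   P = cos^2(phase / 2)
        # Without it, which-path information survives and P = 1/2 always.
        if second_grating:
            return math.cos(phase / 2) ** 2
        return 0.5

    for phi in (0.0, math.pi / 2, math.pi):
        print(round(detection_probability(phi, True), 3),
              detection_probability(phi, False))
    # Output swings 1.0 -> 0.5 -> 0.0 with the grating (wave-like),
    # but stays at 0.5 without it (particle-like).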

The fact that this second grating was only added after the atom passed through the first crossroads suggests that the atom hadn’t yet determined its nature before being measured a second time.

So if you believe that the atom did take a particular path or paths at the first crossroad, this means that a future measurement was affecting the atom’s path, explained Truscott. “The atoms did not travel from A to B. It was only when they were measured at the end of the journey that their wave-like or particle-like behaviour was brought into existence,” he said.

Although this all sounds incredibly weird, it’s actually just a validation for the quantum theory that already governs the world of the very small. Using this theory, we’ve managed to develop things like LEDs, lasers and computer chips, but up until now, it’s been hard to confirm that it actually works with a lovely, pure demonstration such as this one.

Source:  Sciencedaily.com

Google turns your clothes into touchscreens

Last week Google unveiled a wealth of new innovations and initiatives at its annual I/O developer conference, and one of the big reveals was Project Jacquard. It’s part of the Google ATAP (Advanced Technology and Projects) division and it’s the company’s plan for the future of clothing: touch-sensitive materials that you can interact with in the same way as your smartphone display.

Project Jacquard uses touch-sensitive, metallic yarns that are woven in with normal material – cotton, silk or polyester – to give it the kind of capabilities that you don’t usually find outside of science fiction movies. The yarn is connected to a small receiver and controller the size of a button, with the idea that one day you might be able to tap your lapel to switch on the washing machine, or flick your cuff to change the volume on your smart television set.

One of the demos that Google showed off at I/O 2015 was a touch-enabled outfit controlling a set of Philips Hue lights. A quick tap on the clothing turned the lights on and off, while swiping left and right changed the colour, and swiping up and down adjusted the brightness. You wouldn’t have to take your phone out of your jeans pocket to do all this – the pocket itself would act as the controller.

Monitoring capabilities can be included too, so your pillow could track your breathing or your t-shirt could monitor your heart rate without the need for any other equipment. Google is expecting to work with a number of different partners on the technology in the future, and already has an agreement in place with denim manufacturer Levi Strauss & Co in the US.

What makes the technology so exciting is its invisibility. There’s no need to wear a clunky headset or a smart wristwatch to get connected – it’s essentially the ultimate in wearables. Project Jacquard is still at the early stages, but a lot of progress has been made in a short space of time, and Google thinks the interactive yarn will have an important role to play in our sartorial future.

“The complementary components are engineered to be as discreet as possible,” explains the official Project Jacquard page. “We developed innovative techniques to attach the conductive yarns to connectors and tiny circuits, no larger than the button on a jacket. These miniaturised electronics capture touch interactions, and various gestures can be inferred using machine-learning algorithms.”
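Google has not published that inference code, but the idea is easy to caricature. The sketch below is a hypothetical, rule-based stand-in for the machine-learning step the quote describes: it takes a short trace of touch positions from the yarn grid and labels it as a tap or a directional swipe. The signal format and thresholds are invented for illustration.

```python
# Hypothetical gesture inference for a conductive-yarn touch grid.
# Input: a list of (x, y) touch positions sampled over one gesture.
def classify_gesture(trace):
    if len(trace) < 2:
        return "tap"
    dx = trace[-1][0] - trace[0][0]   # net horizontal movement
    dy = trace[-1][1] - trace[0][1]   # net vertical movement
    if abs(dx) < 0.5 and abs(dy) < 0.5:
        return "tap"                  # barely moved
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy > 0 else "swipe_down"

print(classify_gesture([(0, 0), (0.1, 0.0)]))         # tap
print(classify_gesture([(0, 0), (1, 0), (3, 0.2)]))   # swipe_right
print(classify_gesture([(0, 0), (0, -1), (0, -3)]))   # swipe_down
```

A production system would, as the quote says, learn these boundaries from example gestures rather than hard-coding them.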

The smart clothing is stretchable and washable, and Google says it’s up to the designer whether the special yarn is highlighted on the material or kept completely invisible. It can be restricted to a certain patch of clothing or spread over the whole garment.

Jacquard, by the way, is a type of loom used in the 19th century. Google says that the new touch-enabled clothing can be made at scale using equipment that already exists, so when it’s ready for the mass market it can be cheaply and easily produced.

Ultimately, we could see all kinds of smart clothing, furnishings and textiles that look identical to the ‘dumb’ versions that came before them. Google doesn’t have a timescale for launching Project Jacquard out into the world just yet, but you can sign up for updates at the project page.

 

Source:  sciencedaily.com

Woolly mammoth genomes mapped

An international team of researchers has sequenced the nearly complete genomes of two Siberian woolly mammoths — revealing the most complete picture to date — including new information about the species’ evolutionary history and the conditions that led to its extinction at the end of the Ice Age.

“This discovery means that recreating extinct species is a much more real possibility, one we could in theory realize within decades,” says evolutionary geneticist Hendrik Poinar, director of the Ancient DNA Centre at McMaster University and a researcher at the Institute for Infectious Disease Research, the senior Canadian scientist on the project.

“With a complete genome and this kind of data, we can now begin to understand what made a mammoth a mammoth — when compared to an elephant — and some of the underlying causes of their extinction which is an exceptionally difficult and complex puzzle to solve,” he says.

While scientists have long argued that climate change and human hunting were major factors behind the mammoth’s extinction, the new data suggests multiple factors were at play over their long evolutionary history.

Researchers from McMaster, Harvard Medical School, the Swedish Museum of Natural History, Stockholm University and others produced high-quality genomes from specimens taken from the remains of two male woolly mammoths, which lived about 40,000 years apart.

One had lived in northeastern Siberia and is estimated to be nearly 45,000 years old. The other, believed to be from one of the last surviving mammoth populations, lived approximately 4,300 years ago on Russia’s Wrangel Island in the Arctic Ocean.

“We found that the genome from one of the world’s last mammoths displayed low genetic variation and a signature consistent with inbreeding, likely due to the small number of mammoths that managed to survive on Wrangel Island during the last 5,000 years of the species’ existence,” says Love Dalén, an associate professor of Bioinformatics and Genetics at the Swedish Museum of Natural History.

Scientists used sophisticated technology to tease bits and pieces of highly fragmented DNA from the ancient specimens, which they then used to sequence the genomes. Through careful analysis, they determined the animal populations had suffered and recovered from a significant setback roughly 250,000 to 300,000 years ago. However, say researchers, another severe decline occurred in the final days of the Ice Age, marking the end.

“The dates on these current samples suggest that when Egyptians were building pyramids, there were still mammoths living on these islands,” says Poinar. “Having this quality of data can help with our understanding of the evolutionary dynamics of elephants in general and possible efforts at de-extinction.”

The latest research is the continuation of the pioneering work Poinar and his team began in 2006, when they first mapped a partial mammoth genome, using DNA extracted from carcasses found in permafrost in the Yukon and Siberia.

 

Source:  Sciencedaily.com

Iris scanner that works 12 metres away

Imagine an advertising billboard or a smart door that can recognise you from across the street – that’s the futuristic type of technology that’s on the way after researchers in the US developed an iris scanner that can work at a distance of 12 metres (40 feet).

We’re starting to see primitive eye scanners appear in consumer electronics, but this new device takes the innovation one step further. The team from Carnegie Mellon University (CMU) has developed a scanning system that can spot and identify a driver sitting in a car, so whether you’re crossing a toll bridge or exceeding the speed limit, the cameras will know who you are.

That brings up a whole series of difficult questions about user privacy and the capabilities of law enforcement agencies across the world. Given a positive spin, it means a dangerous criminal getting spotted ahead of time; seen more cynically, the technology could be used to track citizens without their knowledge.

This long-range iris scanning system is primarily the work of CMU engineering professor, Marios Savvides. As our irises are as distinctive as our fingerprints, the technology is very accurate – but as with fingerprints, your eyeballs will already need to be on file for you to be spotted.

Savvides thinks the technology is helpful rather than scary. “Fingerprints, they require you to touch something. Iris, we can capture it at a distance, so we’re making the whole user experience much less intrusive, much more comfortable,” he told The Atlantic. “There’s no X-marks-the-spot. There’s no place you have to stand. Anywhere between six and 12 metres, it will find you, it will zoom in and capture both irises and full face.”
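The matching step behind systems like this is worth a quick sketch. Iris recognition in the tradition of John Daugman’s work reduces each iris to a binary “iris code” and compares codes by fractional Hamming distance; anything below a tuned threshold counts as a match. The 16-bit codes and the 0.32 cut-off below are illustrative stand-ins, since real codes run to roughly 2,000 bits.

```python
# Compare two binary iris codes by fractional Hamming distance.
def hamming_fraction(code_a, code_b):
    assert len(code_a) == len(code_b)
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

enrolled = "1011001110001011"   # template on file
captured = "1011001010001111"   # code from a new capture

score = hamming_fraction(enrolled, captured)
print(f"fractional Hamming distance: {score:.3f}")   # 0.125
print("match" if score < 0.32 else "no match")       # match
```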

If nothing else, it could speed up queues at the airport. But in the wrong hands or used in the wrong way, it could be just as dangerous as it is convenient. There’s no chance of these types of biometric technology going backwards, so rigorous laws on how it can be used become increasingly important.

Savvides thinks we’re already in a new era of surveillance, and that his invention won’t change that. “People are being tracked,” he says. “Their every move, their purchasing, their habits, where they are every day, through credit card transactions, through advantage cards – if someone really wanted to know what you were doing every moment of the day, they don’t need facial recognition or iris recognition to do that. That’s already out there.”

Like many recent advancements in biometrics, increased convenience and accuracy come at a cost – it’s all a question of how the technology is used. Just don’t be surprised if in the near future your office door spots you well before you reach it.

 

Source:  Sciencealert.com

Japanese scientists reverse ageing in human cells

By altering the behaviour of two genes that regulate the production of a simple amino acid in human cells, scientists have gained a better understanding of how the process of ageing works, and how we could delay or perhaps even reverse it.

The team, led by Jun-Ichi Hayashi at the University of Tsukuba, targeted two genes that produce the amino acid glycine in the cell’s mitochondria, and figured out how to switch them on and off. By doing this, they could either accelerate the process of ageing within the cell, which caused significant defects to arise, or they could reverse the process of ageing, which restored the capacity for cellular respiration. Using this technique to produce more glycine in a 97-year-old cell line for 10 days, the researchers restored cellular respiration, effectively reversing the cell line’s age.

 The finding brings into question the popular, but more recently controversial, mitochondrial theory of ageing, which puts forward the notion that an accumulation of mutations in mitochondrial DNA leads to age-related defects in the mitochondria – often referred to as the cell’s powerhouses because they are responsible for energy production and cellular respiration. Defects in the cell’s mitochondria lead to damage in the DNA, and an accumulation of DNA damage is linked to age-related hair loss, weight loss, spine curvature, osteoporosis, and a decreased lifespan.

But is this theory accurate? The results of Hayashi’s study support an alternative theory to ageing, which proposes that age-associated mitochondrial defects are caused not by the accumulation of mutations in mitochondrial DNA, but by certain crucial genes being turned on and off as we get older.

The team worked with human fibroblast cell lines gathered from young people – from foetus-age to 12 years old – and the elderly, from 80 to 97 years old. They compared the capacity for cellular respiration in the young and old cells, and found that while the capacity was indeed lower in the cells of the elderly, there was almost no difference in the amount of DNA damage between the two. This calls into question the mitochondrial theory of ageing, the team reports in the journal Scientific Reports, and suggests instead that the age-related effects they were seeing were being caused by a process known as epigenetic regulation.

Epigenetic regulation describes the process where the physical structure of DNA – not the DNA sequence – is altered by the addition or subtraction of chemical structures or proteins, which is regulated by the turning on and off of certain genes. “Unlike mutations that damage that sequence, as in the other, aforementioned theory of ageing, epigenetic changes could possibly be reversed by genetically reprogramming cells to an embryonic stem cell-like state, effectively turning back the clock on ageing,” says Eric Mack at Gizmag.

Hayashi and his team supported this theory by showing that they could turn off the genes that regulate the production of glycine to achieve cellular ageing, or turn them on for the restoration of cellular respiration. This suggests, they say, that glycine treatment could effectively reverse the age-associated respiration defects present in their elderly human fibroblasts.

“Whether or not this process could be a potential fountain of youth for humans and not just human fibroblast cell lines still remains to be seen, with much more testing required,” Mack points out at Gizmag. “However, if the theory holds, glycine supplements could one day become a powerful tool for life extension.”

We’ll just have to wait and see. The faster we can solve the debate over how ageing actually works, the faster we can figure out how to delay it.

 

Source:  sciencedaily.com

US military robots will leave humans defenceless

Killer robots which are being developed by the US military ‘will leave humans utterly defenceless‘, an academic has warned.

Two programmes commissioned by the US Defense Advanced Research Projects Agency (DARPA) are seeking to create drones which can track and kill targets even when out of contact with their handlers.

Writing in the journal Nature, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, said the research could breach the Geneva Convention and leave humanity in the hands of amoral machines.

“Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans,” he said.

“Existing AI and robotics components can provide physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. They just need to be combined.

“In my view, the overriding concern should be the probable endpoint of this technological trajectory.

“Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.”

• Killer robots a small step away and must be outlawed, says UN official
• Britain prepared to develop ‘killer robots’, minister says

The robots, called LAWS – lethal autonomous weapons systems – are likely to be armed quadcopters or mini-tanks that can decide without human intervention who should live or die.

DARPA is currently working on two projects which could lead to killer bots. One is Fast Lightweight Autonomy (FLA), which is designing a tiny rotorcraft able to manoeuvre unaided at high speed in urban areas and inside buildings. The other, Collaborative Operations in Denied Environment (CODE), is aiming to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission — find, fix, track, target, engage, assess” in situations in which enemy signal-jamming makes communication with a human commander impossible.

Last year Angela Kane, the UN’s high representative for disarmament, said killer robots were just a ‘small step’ away and called for a worldwide ban. But the Foreign Office has said while the technology had potentially “terrifying” implications, Britain “reserves the right” to develop it to protect troops.

Professor Russell said: “LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’.

“Debates should be organized at scientific meetings; arguments studied by ethics committees. Doing nothing is a vote in favour of continued development and deployment.”

• The US army tests a killer robot tank
• Future robots will resemble ostriches or dinosaurs, scientists say

However, Dr Sabine Hauert, a lecturer in robotics at the University of Bristol, said that the public did not need to fear the developments in artificial intelligence.

“My colleagues and I spend dinner parties explaining that we are not evil but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the ocean,” she said.

 

Source:   telegraph.co.uk

Iron levels hasten Alzheimer’s disease

High levels of iron in the brain could increase the risk of developing Alzheimer’s disease and hasten the cognitive decline that comes with it, new research suggests.

The results of the study, which tracked the brain degeneration of people with Alzheimer’s over a seven-year period, suggest it might be possible to halt the disease with drugs that reduce iron levels in the brain.

 “We think that iron is contributing to the disease progression of Alzheimer’s disease,” neuroscientist Scott Ayton, from the University of Melbourne in Australia, told Anna Salleh at ABC Science.

“This is strong evidence to base a clinical trial on lowering iron content in the brain to see if that would impart a cognitive benefit.”

Alzheimer’s is a devastating disease that researchers suspect “begins when two abnormal protein fragments, known as plaques and tangles, accumulate in the brain and start killing our brain cells,” explains Fiona Macdonald for ScienceAlert.

It starts by destroying the hippocampus – the region of the brain where memories are formed and stored – and eventually damages the region where language is processed, making it difficult for advanced Alzheimer’s patients to communicate. As the disease’s gradual takeover continues, people lose the ability to regulate their emotions and behaviour, and to make sense of the world around them.

But previous studies have shown that people with Alzheimer’s disease also have elevated levels of brain iron, which may also be a risk factor for the disease.

“There has been debate for a long period of time whether this is important or whether it’s just a coincidence,” Ayton told ABC Science.

The long-term impact of elevated iron levels on the disease outcome has not been investigated, the researchers say.

So Ayton’s team decided to test this, examining the link between brain iron levels and cognitive decline in three groups of people over seven years. The participants included 91 people with normal cognition, 144 people with mild cognitive impairment, and 67 people with diagnosed Alzheimer’s disease.

At the beginning of the study, the researchers determined the patients’ brain iron levels by measuring the amount of ferritin in the cerebrospinal fluid around the brain. Ferritin is a protein that stores and releases iron.

The researchers did regular tests and MRI scans to track cognitive decline and changes in the brain over the study period.

They found that people with higher levels of ferritin – in all groups – had faster declines in cognitive abilities and accelerated shrinking of the hippocampus. Levels of ferritin were also linked to a greater likelihood of people with mild cognitive impairment developing Alzheimer’s.

Their data contained some other interesting takeaways: The researchers found higher levels of ferritin corresponded to earlier ages for diagnoses – roughly three months for every 1 nanogram per millilitre increase.
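That correlation invites a quick back-of-envelope calculation. In the sketch below, only the three-months-per-nanogram slope comes from the study; the example patient value is invented.

```python
# Reported correlation: ~3 months earlier diagnosis per 1 ng/mL
# of extra ferritin in the cerebrospinal fluid.
MONTHS_PER_NG_PER_ML = 3

def earlier_diagnosis_months(ferritin_excess_ng_per_ml):
    return MONTHS_PER_NG_PER_ML * ferritin_excess_ng_per_ml

# A hypothetical patient 10 ng/mL above a reference level:
print(earlier_diagnosis_months(10), "months earlier")   # 30 months, ~2.5 years
```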

They also found that people with the APOE-e4 gene variant, which is known to be the strongest genetic risk factor for the disease, had the highest levels of iron in their brains.

This suggests that APOE-e4 may be increasing Alzheimer’s disease risk by increasing iron levels in the brain, Ayton told ABC Science.

The researchers say their findings, which were published in the journal Nature Communications, justify the revival of clinical trials to explore drugs to target brain iron levels.

In a study carried out 24 years ago, a drug called deferiprone halved the rate of Alzheimer’s cognitive decline, Ayton told Clare Wilson at NewScientist. “Perhaps it’s time to refocus the field on looking at iron as a target.”

“Lowering CSF ferritin, as might be expected from a drug like deferiprone, could conceivably delay mild cognitive impairment conversion to Alzheimer’s disease by as much as three years,” the team wrote.

FDA Covers Up Deaths in Drug Trials

Does the habitual use of antidepressants do more harm than good to many patients? Absolutely, says one expert in a new British Medical Journal report. Moreover, he says that the federal Food and Drug Administration might even be hiding the truth about antidepressant lethality.

In his portion of the report, Peter C. Gotzsche, a professor at the Nordic Cochrane Centre in Denmark, said that nearly all psychotropic drug use could be ended today without deleterious effects, adding that such “drugs are responsible for the deaths of more than half a million people aged 65 and older each year in the Western world.”

Gotzsche, author of the 2013 book Deadly Medicines and Organized Crime: How Big Pharma Has Corrupted Healthcare, further notes in the BMJ that “randomized trials that have been conducted do not properly evaluate the drugs’ effects.” He adds, “Almost all of them are biased because they included patients already taking another psychiatric drug.”

Hiding or fabricating data about harmful side effects

The FDA’s data is incomplete at best and intentionally skewed at worst, he insisted:

Under-reporting of deaths in industry funded trials is another major flaw. Based on some of the randomised trials that were included in a meta-analysis of 100,000 patients by the US Food and Drug Administration, I have estimated that there are likely to have been 15 times more suicides among people taking antidepressants than reported by the FDA – for example, there were 14 suicides in 9,956 patients in trials with fluoxetine and paroxetine, whereas the FDA had only five suicides in 52,960 patients, partly because the FDA only included events up to 24 hours after patients stopped taking the drug.

He said that he was most concerned about three classes of drugs: antipsychotics, benzodiazepines and antidepressants, saying they are responsible for 3,693 deaths a year in Denmark alone. When scaling up that figure in relation to the U.S. and European Union together, he estimated that 539,000 people die every year because of the medications.
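Both headline figures can be sanity-checked from the numbers given in the text. The suicide comparison is a simple rate ratio, and the half-million estimate is a population scaling; the population values below are rough 2015 figures and are not from the article.

```python
# Rate ratio behind the "15 times more suicides" estimate:
trial_rate = 14 / 9_956    # suicides per patient in the trials cited
fda_rate = 5 / 52_960      # suicides per patient in the FDA count
print(f"rate ratio: {trial_rate / fda_rate:.1f}x")   # ~14.9x

# Scaling Denmark's 3,693 deaths to the US and EU combined:
denmark_pop = 5.6e6        # approximate
us_eu_pop = 828e6          # ~320M (US) + ~508M (EU), approximate
print(f"scaled deaths: {3_693 * us_eu_pop / denmark_pop:,.0f}")  # ~546,000
```

The scaled figure lands close to the 539,000 deaths per year quoted above.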

“Given their lack of benefit, I estimate we could stop almost all psychotropic drugs without causing harm – by dropping all antidepressants, ADHD drugs, and dementia drugs (as the small effects are probably the result of unblinding bias) and using only a fraction of the antipsychotics and benzodiazepines we currently use,” Gotzsche wrote.

“This would lead to healthier and more long lived populations. Because psychotropic drugs are immensely harmful when used long-term, they should almost exclusively be used in acute situations and always with a firm plan for tapering off, which can be difficult for many patients,” he added.

Gotzsche’s views were disputed in the same BMJ piece by Allan Young, professor of mood disorders at King’s College London, and psychiatric patient John Crace.

“More than a fifth of all health-related disability is caused by mental ill health, studies suggest, and people with poor mental health often have poor physical health and poorer (long-term) outcomes in both aspects of health,” they wrote.

They also insisted that psychiatric drugs are “rigorously examined for efficacy and safety, before and after regulatory approval.”

 

Source:  globalresearch.ca

PTSD Linked to Accelerated Aging

In recent years, public health concerns about post-traumatic stress disorder (PTSD) have risen significantly, driven in part by affected military veterans returning from conflicts in the Middle East and elsewhere. PTSD is associated with a number of psychological maladies, among them chronic depression, anger, insomnia, eating disorders and substance abuse.

Writing in the May 7 online issue of American Journal of Geriatric Psychiatry, researchers at University of California, San Diego School of Medicine and Veterans Affairs San Diego Healthcare System suggest that people with PTSD may also be at risk for accelerated aging or premature senescence.

“This is the first study of its type to link PTSD, a psychological disorder with no established genetic basis, which is caused by external, traumatic stress, with long-term, systemic effects on a basic biological process such as aging,” said Dilip V. Jeste, MD, Distinguished Professor of Psychiatry and Neurosciences and director of the Center on Healthy Aging and Senior Care at UC San Diego, who is the senior author of this study.

Researchers had previously noted a potential association between psychiatric conditions, such as schizophrenia and bipolar disorder, and acceleration of the aging process. Jeste and colleagues determined to see if PTSD might show a similar association by conducting a comprehensive review of published empirical studies relevant to early aging in PTSD, covering multiple databases going back to 2000.

There is no standardized definition of what constitutes premature or accelerated senescence. For guidance, the researchers looked at early aging phenomena associated with non-psychiatric conditions, such as Hutchinson-Gilford progeria syndrome, HIV infection and Down’s syndrome. The majority of evidence fell into three categories: biological indicators or biomarkers, such as leukocyte telomere length (LTL); earlier occurrence or higher prevalence of medical conditions associated with advanced age; and premature mortality.

In their literature review, the UC San Diego team identified 64 relevant studies; 22 were suitable for calculating overall effect sizes for biomarkers, 10 for mortality.

All six studies looking specifically at LTL found reduced telomere length in persons with PTSD. Leukocytes are white blood cells. Telomeres are stretches of protective, repetitive nucleotide sequences at the ends of chromosomes. These sequences shorten with every cell replication and are considered a strong measure of the aging process in cells.
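For readers wondering what “calculating overall effect sizes” involves: the standard ingredient is a standardized mean difference such as Cohen’s d, which expresses the gap between two groups in units of their pooled standard deviation. The sketch below uses invented telomere-length numbers purely for illustration.

```python
import math

# Cohen's d: standardized mean difference between two groups.
def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical leukocyte telomere lengths in kilobases:
d = cohens_d(mean_a=7.0, sd_a=0.8, n_a=100,    # control group
             mean_b=6.6, sd_b=0.8, n_b=100)    # PTSD group
print(f"Cohen's d = {d:.2f}")   # 0.50, conventionally a medium effect
```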

The scientists also found consistent evidence of increased pro-inflammatory markers, such as C-reactive protein and tumor necrosis factor alpha, associated with PTSD.

A majority of reviewed studies found increased medical comorbidity of PTSD with several targeted conditions associated with normal aging, including cardiovascular disease, type 2 diabetes, gastrointestinal ulcer disease and dementia.

Seven of 10 studies indicated a mild-to-moderate association of PTSD with earlier mortality, consistent with an early onset or acceleration of aging in PTSD.

“These findings do not speak to whether accelerated aging is specific to PTSD, but they do argue the need to re-conceptualize PTSD as something more than a mental illness,” said first author James B. Lohr, MD, professor of psychiatry. “Early senescence, increased medical morbidity and premature mortality in PTSD have implications in health care beyond simply treating PTSD symptoms. Our findings warrant a deeper look at this phenomenon and a more integrated medical-psychiatric approach to their care.”

 Barton Palmer, PhD, professor of psychiatry and a coauthor of the study, cautioned that “prospective longitudinal studies are needed to directly demonstrate accelerated aging in PTSD and to establish underlying mechanisms.”
Source:  scienceblog.com

Bionic Lens 3x better than 20/20

Imagine being able to see three times better than 20/20 vision without wearing glasses or contacts — even at age 100 or more — with the help of bionic lenses implanted in your eyes.

Dr. Garth Webb, an optometrist in British Columbia who invented the Ocumetics Bionic Lens, says patients would have perfect vision and that driving glasses, progressive lenses and contact lenses would become a dim memory as the eye-care industry is transformed.

Dr. Garth Webb says the bionic lens would allow people to see to infinity and replace the need for eyeglasses and contact lenses. (Darryl Dyck/Canadian Press)

Webb says people who have the specialized lenses surgically inserted would never get cataracts because their natural lenses, which decay over time, would have been replaced.

Perfect eyesight would result “no matter how crummy your eyes are,” Webb says, adding the Bionic Lens would be an option for someone who depends on corrective lenses and is over about age 25, when the eye structures are fully developed.

“This is vision enhancement that the world has never seen before,” he says, showing a Bionic Lens, which looks like a tiny button.

“If you can just barely see the clock at 10 feet, when you get the Bionic Lens you can see the clock at 30 feet away,” says Webb, demonstrating how a custom-made lens that folded like a taco in a saline-filled syringe would be placed in an eye, where it would unravel itself within 10 seconds.

8-minute surgery

He says the painless procedure, identical to cataract surgery, would take about eight minutes and a patient’s sight would be immediately corrected.

Webb, who is the CEO of Ocumetics Technology Corp., has spent the last eight years and about $3 million researching and developing the Bionic Lens, getting international patents and securing a biomedical manufacturing facility in Delta, B.C.

Webb says people who have the specialized lenses surgically inserted would never get cataracts because their natural lenses, which decay over time, would have been replaced. (Laitr Keiows/Wikicommons)

His mission is fuelled by the “obsession” he’s had to free himself and others from corrective lenses since he was in Grade 2, when he was saddled with glasses.

“My heroes were cowboys, and cowboys just did not wear glasses,” Webb says.

“At age 45 I had to struggle with reading glasses, which like most people, I found was a great insult. To this day I curse my progressive glasses. I also wear contact lenses, which I also curse just about every day.”

Webb’s efforts culminated in his recent presentation of the lens to 14 top ophthalmologists in San Diego the day before an annual gathering of the American Society of Cataract and Refractive Surgery.

Dr. Vincent DeLuise, an ophthalmologist who teaches at Yale University in New Haven, Conn., and at Weill Cornell Medical College in New York City, says he arranged several meetings on April 17, when experts in various fields learned about the lens.

He says the surgeons, from Canada, the United States, Australia and the Dominican Republic, were impressed with what they heard and some will be involved in clinical trials for Webb’s “very clever” invention.

“There’s a lot of excitement about the Bionic Lens from very experienced surgeons who perhaps had some cynicism about this because they’ve seen things not work in the past. They think that this might actually work and they’re eager enough that they all wish to be on the medical advisory board to help him on his journey,” DeLuise says.

“I think this device is going to bring us closer to the holy grail of excellent vision at all ranges — distant, intermediate and near.”

 

Source:  cbc.ca

Gut feeling about your CEO is spot on

That gut feeling many workers, laborers and other underlings have about their CEOs is spot on, according to three recent studies in the Journal of Management, the Journal of Management Studies and the Journal of Leadership and Organizational Studies that say CEO greed is bad for business.

But how do you define greed? Are compassionate CEOs better for business? How do you know if the leader is doing more harm than good? And can anybody rein in the I-Me-Mine type leader anyway?

University of Delaware researcher Katalin Takacs Haynes and three collaborators — Michael A. Hitt and Matthew Josefy of Texas A&M University and Joanna Tochman Campbell of the University of Cincinnati — have chased such questions for several years, digging into annual reports, comparing credentials with claims and developing useful definitions that could shed more light on the impact of a company’s top leader on employees, business partners and investors.

They test the assumption that self-interest is a universal trait of CEOs (spoiler alert: it’s alive and well), show that too much altruism can harm company performance, reveal the dark, self-destructive tendencies of some entrepreneurs and family-owned businesses and provide a way to measure and correlate greed, arrogance and company performance.

“We tried to look at what we think greed is more objectively,” said Haynes, who was recently promoted to associate professor of management in UD’s Alfred Lerner College of Business and Economics. “What we’re trying to do is clean up some of the definitions and make sure we’re all talking about the same concepts.”

In their studies, researchers offer plenty of evidence that some leaders are insatiable when it comes to compensation. How much is too much? They don’t put a number on that. But they do add plenty of nuance to the question and point to a mix of motivations that goes beyond raw greed.

“It’s not for us to judge what too much is for anybody else,” said Haynes, “but we can see when the outcome of somebody’s work is the greater good, and when it is not just greed that is operating in them.”

Greed seems all too apparent to many workers. The recent recession left millions without jobs and many companies sinking into a sea of red. At the same time, though, stunning bonuses and other perks were landing in the laps of people at the helm.

Haynes, who joined the UD faculty in 2011, has found the range of pay within companies an intriguing question, too.

“Why is it that in some companies there is a huge difference between the pay of the top executive and the average worker or the lowest-paid employee and in other companies the pay is a lot closer?” she said.

Many a minimum-wage worker, making $15,080 per year, has wondered that, and so have those in the middle class, who may work a year to make what some CEOs make in a day.
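The arithmetic behind that comparison is simple enough to spell out; the CEO figure below is purely hypothetical, chosen only to match the “a year’s pay in a day” framing.

```python
# Federal minimum wage annualized: $7.25/hour, 40 hours/week, 52 weeks.
MIN_WAGE_ANNUAL = 7.25 * 40 * 52          # = $15,080, as cited

ceo_daily_pay = MIN_WAGE_ANNUAL           # a worker's year in one day
ceo_annual_pay = ceo_daily_pay * 260      # ~260 working days per year

ratio = ceo_annual_pay / MIN_WAGE_ANNUAL
print(f"worker: ${MIN_WAGE_ANNUAL:,.0f}/yr  CEO: ${ceo_annual_pay:,.0f}/yr  ({ratio:.0f}x)")
```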

But if you make more than anyone else, does that mean you’re greedy?

The question is more complicated than water-cooler conversations might suggest. And Haynes and her collaborators go to the data for answers, leaving emotion, indignation and cries for justice to others. They leave others to correlate the data with names, too.

Instead, they offer definitions and analytical tools that add clarity, allow for apples-to-apples comparisons and shed new light on how a leader’s objectives shape company performance.

“It’s possible that high pay is perfectly deserved because of high contributions, high skill sets,” Haynes said, “and just because somebody doesn’t have high pay doesn’t mean they aren’t greedy.”

The marks of greed are found elsewhere — in a reporting category that tracks “other” compensation and perquisites, in the pay rates of other top executives, in compensation demands during times of company stress, for example.

Haynes’ studies included interviews (with anonymity assured), publicly reported data, written surveys, essays and a review of published information and interviews with CEOs.

The studies also examined managerial hubris and how it differs from self-confidence.

“Hubris is an extreme manifestation of confidence, characterized by preoccupation with fantasies of success and power, excessive feelings of self-importance, as well as arrogance,” researchers wrote.

“Say I’m a stunt driver and I have jumped across five burning cars before with my car,” Haynes said. “I’m pretty confident I can do that — and maybe even six. Say I’m not a stunt driver. To say I could jump through six burning cars would be arrogance. And if I drag you to go with me, it could be criminal.”

Risk aversion can harm a company. But risk for short-term gain without thought of the company’s future is a sign of greed.

“Some CEOs take risks and it will pay off,” she said. “They will have reliable performance and we can forecast that. We know their track record. Others take foolish risks not based on their previous performance.”

Such risks may be especially prevalent among young entrepreneurs, who underestimate the resources needed to help a startup succeed and fail to recognize that more than money is at stake.

“While financial capital is an important concern with these behaviors, the effects on human and social capital are often overlooked, despite the fact that they are highly critical for the success and ultimate survival of entrepreneurial ventures,” the researchers wrote.

Generally, researchers found that greed is worse among short-term leaders with weak boards.

The good news, Haynes said, is that strong corporate governance can rein in CEO greed and keep both self-interest and altruism in proper balance. And that is where the greatest success is found.

“Overall, we conclude that measured self-interest keeps managers focused on the firm’s goals and measured altruism helps the firm to build and maintain strong human and social capital,” researchers wrote.

 

Source:  sciencedaily.com

Plasma made from matter and antimatter

One of the all-time great mysteries in physics is why our Universe contains more matter than antimatter, which is the equivalent of matter but with the opposite charge. To tackle this question, our international team of researchers have managed to create a plasma of equal amounts of matter and antimatter – a condition we think made up the early Universe.

Matter as we know it appears in four different states: solid, liquid, gas, and plasma, which is a really hot gas where the atoms have been stripped of their electrons. However, there is also a fifth, exotic state: a matter-antimatter plasma, in which there is complete symmetry between negative particles (electrons) and positive particles (positrons).

This peculiar state of matter is believed to be present in the atmosphere of extreme astrophysical objects, such as black holes and pulsars. It is also thought to have been the fundamental constituent of the Universe in its infancy, in particular during the Leptonic era, starting approximately one second after the Big Bang.

One of the problems with creating matter and antimatter particles together is that they strongly dislike each other – disappearing in a burst of light whenever they meet. However, this doesn’t happen straight away, and it is possible to study the behaviour of the plasma for the fraction of a second in which it is alive.

Understanding how matter behaves in this exotic state is crucial if we want to understand how our Universe has evolved and, in particular, why the Universe as we know it is made up mainly of matter. This is a puzzling feature, as the theory of relativistic quantum mechanics suggests we should have equal amounts of the two. In fact, no current model of physics can explain the discrepancy.

Despite its fundamental importance for our understanding of the Universe, an electron-positron plasma had never been produced before in the laboratory, not even in huge particle accelerators such as CERN. Our international team, involving physicists from the UK, Germany, Portugal, and Italy, finally managed to crack the nut by completely changing the way we look at these objects.

Instead of focusing our attention on immense particle accelerators, we turned to the ultra-intense lasers available at the Central Laser Facility at the Rutherford Appleton Laboratory in Oxfordshire, UK. We used an ultra-high vacuum chamber with an air pressure corresponding to a hundredth of a millionth of our atmosphere to shoot an ultra-short and intense laser pulse (a hundred billion billion times more intense than sunlight at the Earth’s surface) onto nitrogen gas. This stripped off the gas’s electrons and accelerated them to a speed extremely close to that of light.

The electron beam then collided with a block of lead, which slowed the electrons down again. As they slowed down, they emitted particles of light, photons, which created pairs of electrons and their antiparticles, positrons, when they collided with the nuclei of the lead sample. A chain reaction of this process gave rise to the plasma.
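That pair-creation step has a hard physical floor worth noting: a photon can only materialise an electron-positron pair if it carries at least the pair’s combined rest energy (a nearby lead nucleus absorbs the recoil momentum). This is textbook physics rather than anything specific to the experiment.

```python
# Minimum photon energy for electron-positron pair production.
ELECTRON_REST_ENERGY_MEV = 0.511          # m_e * c^2, standard value

threshold = 2 * ELECTRON_REST_ENERGY_MEV
print(f"pair-production threshold: {threshold:.3f} MeV")   # 1.022 MeV
```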

However, this experimental achievement was not without effort. The laser beam had to be guided and controlled with micrometer precision, and the detectors had to be finely calibrated and shielded – resulting in frequent long nights in the laboratory.

But it was well worth it as the development means an exciting branch of physics is opening up. Apart from investigating the important matter-antimatter asymmetry, by looking at how these plasmas interact with ultra powerful laser beams, we can also study how this plasma propagates in vacuum and in a low-density medium. This would be effectively recreating conditions similar to the generation of gamma-ray bursts, some of the most luminous events ever recorded in our Universe.

 

Source:  sciencealert.com

Carbon Billionaire Al Gore

Former US Vice President and Global Warming advocate, Al Gore, has become the world’s first ‘carbon billionaire’ after landing a major carbon deal with Chinese coal mining company Haerwusu, one of the top ten coal mining companies in the world.

Al Gore and his partner David Blood, both principals at Generation Investment Management (GIM), have landed the most lucrative carbon deal to date – estimated by experts at $12 billion in carbon shares, although official numbers have not yet been disclosed.

Haerwusu, which has often been criticized by Amnesty International and other human rights groups for the poor working conditions of its employees, is believed by specialists to have sealed the carbon deal to “improve its international image” in an attempt to facilitate commerce with Europe and America.

The former vice-president announced the news to shareholders earlier this week during a press conference at GIM headquarters in London, England.

“I am proud to say that this is just the beginning,” he told shareholders, visibly enchanted by the recent deal.

“I told the world 20 years ago that the ice caps would be melted by now. Although we are lucky this has not happened yet, we have been at the forefront of the Global Warming movement all along, and today we are reaping what we have sown,” he admitted with great pride.

“When a system of carbon taxes and carbon trade is set up all over the world in the near future, GIM will be at the epicenter of this green revolution, and believe me, this is just the beginning,” he acknowledged prophetically.

The $12 billion deal, signed for a period of 10 years with the Haerwusu company, could encourage other companies to join the global carbon trade – a great thing for GIM shareholders, whose profits experts estimate will skyrocket in the coming years.

 

Source:  worldnewsdailyreport.com

Test catches cancer 13 years before it hits

Scientists have developed a new test that can predict with 100 per cent accuracy whether someone will develop cancer up to 13 years in the future.

The discovery of tiny but significant changes taking place in the body more than a decade before cancer was diagnosed helped researchers at Harvard and Northwestern University make the breakthrough.

Their research, published in the online journal EBioMedicine, found that the protective caps on the ends of chromosomes, which prevent DNA damage, were more worn down in those who went on to develop cancer.

Known as telomeres, these were much shorter than they should have been and continued to get shorter until around four years before the cancer developed, when they suddenly stopped shrinking.
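That shorten-then-plateau signature is, in effect, a pattern in a yearly time series, and a toy detector shows how it might be flagged. The measurements below are fabricated; only the shape of the pattern comes from the study.

```python
# Flag the first year in which telomere shortening effectively stops.
def find_plateau_year(lengths, tolerance=0.01):
    for i in range(1, len(lengths)):
        if lengths[i - 1] - lengths[i] < tolerance:
            return i
    return None

# Hypothetical yearly telomere lengths (kilobases) for one person:
telomere_kb = [7.0, 6.7, 6.4, 6.1, 5.9, 5.9, 5.9, 5.9]
print(f"shortening stalls at year {find_plateau_year(telomere_kb)}")  # year 5
```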

“Because we saw a strong relationship in the pattern across a wide variety of cancers, with the right testing these procedures could be used eventually to diagnose a wide variety of cancers,” Dr Lifang Hou, the lead study author, told The Telegraph.

“Understanding this pattern of telomere growth may mean it can be a predictive biomarker for cancer….We found cancer has hijacked the telomere shortening in order to flourish in the body.”

Source:  independent.co.uk

Scientists discover key driver of aging, and it may be reversible

A study tying the aging process to the deterioration of tightly packaged bundles of cellular DNA could lead to methods of preventing and treating age-related diseases such as cancer, diabetes and Alzheimer’s disease, experts say. In the study, scientists at the Salk Institute and the Chinese Academy of Science found that the genetic mutations underlying Werner syndrome, a disorder that leads to premature aging and death, resulted in the deterioration of bundles of DNA known as heterochromatin.

The discovery, made possible through a combination of cutting-edge stem cell and gene-editing technologies, could lead to ways of countering age-related physiological declines by preventing or reversing damage to heterochromatin.

“Our findings show that the gene mutation that causes Werner syndrome results in the disorganization of heterochromatin, and that this disruption of normal DNA packaging is a key driver of aging,” says Juan Carlos Izpisua Belmonte, a senior author on the paper. “This has implications beyond Werner syndrome, as it identifies a central mechanism of aging–heterochromatin disorganization–which has been shown to be reversible.”

Werner syndrome is a genetic disorder that causes people to age more rapidly than normal. It affects around one in every 200,000 people in the United States. People with the disorder suffer age-related diseases early in life, including cataracts, type 2 diabetes, hardening of the arteries, osteoporosis and cancer, and most die in their late 40s or early 50s.

The disease is caused by a mutation to the Werner syndrome RecQ helicase-like gene, known as the WRN gene for short, which generates the WRN protein. Previous studies showed that the normal form of the protein is an enzyme that maintains the structure and integrity of a person’s DNA. When the protein is mutated in Werner syndrome it disrupts the replication and repair of DNA and the expression of genes, which was thought to cause premature aging. However, it was unclear exactly how the mutated WRN protein disrupted these critical cellular processes.

In their study, the Salk scientists sought to determine precisely how the mutated WRN protein causes so much cellular mayhem. To do this, they created a cellular model of Werner syndrome by using a cutting-edge gene-editing technology to delete the WRN gene in human stem cells. This stem cell model of the disease gave the scientists the unprecedented ability to study rapidly aging cells in the laboratory. The resulting cells mimicked the genetic mutation seen in actual Werner syndrome patients, so the cells began to age more rapidly than normal. On closer examination, the scientists found that the deletion of the WRN gene also led to disruptions to the structure of heterochromatin, the tightly packed DNA found in a cell’s nucleus.

This bundling of DNA acts as a switchboard for controlling genes’ activity and directs a cell’s complex molecular machinery. On the outside of the heterochromatin bundles are chemical markers, known as epigenetic tags, which control the structure of the heterochromatin. For instance, alterations to these chemical switches can change the architecture of the heterochromatin, causing genes to be expressed or silenced.

The Salk researchers discovered that deletion of the WRN gene leads to heterochromatin disorganization, pointing to an important role for the WRN protein in maintaining heterochromatin. And, indeed, in further experiments, they showed that the protein interacts directly with molecular structures known to stabilize heterochromatin–revealing a kind of smoking gun that, for the first time, directly links mutated WRN protein to heterochromatin destabilization.

“Our study connects the dots between Werner syndrome and heterochromatin disorganization, outlining a molecular mechanism by which a genetic mutation leads to a general disruption of cellular processes by disrupting epigenetic regulation,” says Izpisua Belmonte. “More broadly, it suggests that accumulated alterations in the structure of heterochromatin may be a major underlying cause of cellular aging. This begs the question of whether we can reverse these alterations–like remodeling an old house or car–to prevent, or even reverse, age-related declines and diseases.”

Izpisua Belmonte added that more extensive studies will be needed to fully understand the role of heterochromatin disorganization in aging, including how it interacts with other cellular processes implicated in aging, such as shortening of the end of chromosomes, known as telomeres. In addition, the Izpisua Belmonte team is developing epigenetic editing technologies to reverse epigenetic alterations with a role in human aging and disease.

 

Source:  sciencedaily.com

Doctor who discovered the cause of cancer blames lack of oxygen

Dr. Otto H. Warburg won a Nobel Prize for discovering the cause of cancer. There is one aspect of our bodies that is the key to preventing cancer: pH levels.

What Dr. Warburg figured out is that when there is a lack of oxygen, cancer cells develop. As Dr. Warburg said, “All normal cells have an absolute requirement for oxygen, but cancerous cells can live without oxygen – a rule without exception. Deprive a cell of 35% of its oxygen for 48 hours and it may become cancerous.” Cancer cells therefore cannot live in a highly oxygenated state, like the one that develops when your body’s pH levels are alkaline, and not acidic.

Most people’s diets promote the creation of too much acid, which throws our body’s natural pH levels from a slightly alkaline nature to an acidic nature. Maintaining an alkaline pH level can prevent health conditions like cancer, osteoporosis, cardiovascular diseases, diabetes, and acid reflux. Eating processed foods like refined sugars, refined grains, GMOs, and other unnatural foods can lead to a pH level that supports the development of these conditions, and leads to overall bad health. In fact, most health conditions that are common today – including those involving parasites, bacteria, and viruses – are attributed to an overly acidic pH level.
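For context on the chemistry being invoked here, blood pH is governed by the bicarbonate buffer system, conventionally described by the Henderson-Hasselbalch equation: pH = 6.1 + log10([HCO3-] / (0.03 x pCO2)), with bicarbonate in mmol/L and carbon dioxide partial pressure in mmHg. A worked example with typical physiological values shows how tightly the result sits at the textbook norm of about 7.4.

```python
import math

# Henderson-Hasselbalch equation for the bicarbonate buffer in blood.
def blood_ph(bicarbonate_mmol_l, pco2_mmhg):
    return 6.1 + math.log10(bicarbonate_mmol_l / (0.03 * pco2_mmhg))

# Typical values: 24 mmol/L bicarbonate, 40 mmHg CO2 partial pressure.
print(f"pH = {blood_ph(24, 40):.2f}")   # ~7.40
```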

There is a natural remedy that you can use at home that is simple, and readily available. All you need is 1/3 tablespoon of baking soda and 2 tablespoons of lemon juice or apple cider vinegar. Mix the ingredients into 8 ounces of cold water and stir well. The baking soda will react with the lemon juice or ACV and begin to fizz. Drink the mixture all at once. The combination will naturally balance the pH levels in your body and prevent the conditions associated with an acidic pH level. Maintaining a healthy pH level will do wonders for your health, and you will notice the difference after only a few days of the treatment.

 

Source:  buynongmoseeds.com

Success Regenerating Spinal Cords

Working with paralysed rats, scientists in the US have shown how they might be able to regenerate spines after injury and help paralysed people to one day walk again.

The team, from Tufts University School of Medicine, crushed the spines of lab rats at the dorsal root, which is the main bundle of nerve fibres that branches off the spine, and carries signals of sensation from the body to the brain. They then treated the spines with a protein called artemin, known to help neurons grow and function. After the two-week treatment, the nerve fibres regenerated and successfully passed signals over a distance of 4 centimetres.

“This is a significantly longer length of Central Nervous System regeneration than has been reported earlier,” said one of the team, physiologist Eric Frank, “but still a long way to go!”

Reporting in a study published in the Proceedings of the National Academy of Sciences, the team says the artemin treatment was successful in regenerating both large and small sensory neurons.

And while that 4-centimetre distance is important, Frank says that’s not all that counts: “The regenerating nerve fibres are growing back to the right places in the spinal cord and brainstem.” He adds that this is pretty impressive, given that their subjects were several months old, which isn’t young in rat years.

The results suggest that the chemical guidance cues that allow the nerve fibres to get to their correct target areas persist in the adult spinal cord, says Frank. This means that while artemin may not help regenerate all nerve fibres – some aren’t receptive to it – it’s likely to help with other types of neurons too. “If it becomes possible to get these other types of nerve fibres to regenerate for long distances as well, there is a reasonable chance that they can also grow back to their original target areas,” says Frank.

The challenge is getting regenerated nerve fibres to reconnect, so they can do what they are supposed to do, which just might be possible, considering these results. If scientists could achieve that, it would be a big leap forward in improving the lives of paralysed people.

Source:  sciencealert.com

Most likely culprit for schizophrenia

Researchers have found a gene that links the three previously unrelated biological changes most commonly blamed for causing schizophrenia, making it one of the most promising culprits for the disease so far, and a good target for future treatments.

Schizophrenia is a debilitating mental disorder that usually appears in late adolescence, and changes the way people think, act and perceive reality. For decades, scientists have struggled to work out what causes the hallucinations and strange behaviour associated with the disorder, and keep coming back to three neuronal changes that seem to be to blame. The only problem is that the changes seemed to be unrelated, and, in some cases, even contradictory.

But now, researchers from Duke University in the US have managed to find a link between these three hypotheses, and have shown that all three changes can be brought about by a malfunction in the same gene.

Publishing in Nature Neuroscience, the researchers explain that their results could lead to new treatment strategies that target the underlying cause of the disease, rather than the visible changes or phenotypes, associated with schizophrenia.

“The most exciting part was when all the pieces of the puzzle fell together,” lead researcher, Scott Soderling, a professor of cell biology and neurobiology from Duke University, said in a press release. “When [co-researcher Il Hwan Kim] and I finally realised that these three outwardly unrelated phenotypes … were actually functionally interrelated with each other, that was really surprising and also very exciting for us.”

So what are these three phenotypes? The first is spine pruning, which means that the neurons of people with schizophrenia have fewer dendritic spines – the tiny protrusions on brain cells that help pass signals from neuron to neuron. Some people with schizophrenia also have hyperactive neurons, and excess dopamine production.

But these changes just didn’t seem to make sense together. After all, how could neurons be overactive if they didn’t have enough dendritic spines to pass messages back and forth, and why would either of these symptoms trigger excess dopamine production? Now, researchers believe that a mutation in the gene Arp2/3 may be to blame.

Soderling and his team originally spotted the gene during previous studies, which identified thousands of genes linked to schizophrenia. But Arp2/3 was of particular interest, as it controls the formation of synapses, or links, between neurons.

To test its effect, the researchers engineered mice that didn’t have the Arp2/3 gene and, surprisingly, found that they behaved very similarly to humans with schizophrenia. The mice also got worse with age and improved slightly with antipsychotic medications, both traits of human schizophrenia.

But most fascinating was the fact that the mice also had all three of the unrelated brain changes – fewer dendritic spines, overactive neurons and excess dopamine production.

They also took things one step further and showed, for the first time, that this lack of dendritic spines can actually trigger hyperactive neurons. This is because the mice’s brain cells rewire themselves to bypass these spines, effectively skipping the ‘filter’ that usually keeps their activity in check.

They also showed that these overactive neurons at the front of the brain were then stimulating other neurons to dump out dopamine.

“Overall, the combined results reveal how three separate pathologies, at the tiniest molecular level, can converge and fuel a psychiatric disorder,” Susan Scutti explains over at Medical Daily.

The group will now study the role Arp2/3 plays in different parts of the brain, and how it's linked to other schizophrenia symptoms. The research is still in its very early stages, and obviously has only been demonstrated in mice and not humans. But it's a promising first step towards understanding this mysterious disease.

“We’re very excited about using this type of approach, where we can genetically rescue Arp2/3 function in different brain regions and normalise behaviours,” Soderling said. “We’d like to use that as a basis for mapping out the neural circuitry and defects that also drive these other behaviours.”

Source:  sciencealert.com

Enzyme that changes a person's blood type

Scientists have discovered that a particular type of enzyme can cut away antigens in blood types A and B, to make them more like Type O – considered the ‘universal’ blood type, because it’s the only type that can be donated to anyone without the risk of provoking a life-threatening immune response.

The team, from the University of British Columbia in Canada, worked with an enzyme family known as family 98 glycoside hydrolases, extracted from a strain of Streptococcus pneumoniae. Over many generations, they were able to engineer a highly active enzyme variant that can very effectively snip away blood antigens where earlier generations of the enzyme struggled. "A major limitation has always been the efficiency of the enzymes," one of the team, Stephen Withers, said in a press release. "Impractically large amounts of enzyme were needed."

Getting the right type of blood when you need it is crucial, and it has to do with the different types of residue that can accumulate on the surface of red blood cells. Both blood types A and B have this residue – A has an N-acetylgalactosamine residue, and B has a galactose residue – and Type AB has a mixture of both. Only blood Type O is free from this residue, which means it can be received by any patient, no matter what type they're carrying.
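To make that compatibility logic concrete, here is a minimal Python sketch – an illustration of the idea only, not anything from the study – that models the antigen residues named above and checks whether a donor's red cells are safe for a recipient. It deliberately ignores the Rh factor and every other antigen system.

```python
# Toy model of ABO red-cell compatibility (illustrative only; ignores Rh
# and all other antigen systems).

ANTIGENS = {
    "A": {"N-acetylgalactosamine"},
    "B": {"galactose"},
    "AB": {"N-acetylgalactosamine", "galactose"},
    "O": set(),  # no A/B antigen residue on the cell surface
}

def red_cells_compatible(donor: str, recipient: str) -> bool:
    """Donated red cells are safe only if they carry no antigen
    that the recipient's own cells lack."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

def enzyme_converted(blood_type: str) -> str:
    """Idealised effect of the antigen-snipping enzyme: strip every
    A/B residue, leaving O-like cells (assumes 100% removal, which
    the article notes has not yet been achieved)."""
    return "O"

assert red_cells_compatible("O", "AB")                   # O donates to anyone
assert not red_cells_compatible("A", "B")                # A residue would trigger a response
assert red_cells_compatible(enzyme_converted("A"), "B")  # converted cells behave like Type O
```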

Withers and his team managed to create their 'mutant' enzyme using a technique called directed evolution, which allowed them to insert many different types of mutations into the gene that codes for the enzyme. By progressively selecting the variants that were best at snipping away the blood antigens, they were able to create an enzyme that's 170 times more effective than its parent. They published their results in the Journal of the American Chemical Society.
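For a rough intuition of how directed evolution compounds many small gains into a large one, here is a toy Python simulation. The model and every number in it are invented for illustration – nothing here comes from the UBC work.

```python
# Toy directed-evolution loop: mutate a population of enzyme variants,
# keep the most active ones each round, and let small gains compound.

import random

def mutate(activity: float) -> float:
    # Most mutations change activity only slightly; some help, many hurt.
    return max(0.0, activity * random.gauss(1.0, 0.1))

def evolve(rounds: int = 30, pop: int = 200, keep: int = 10) -> float:
    population = [1.0] * pop  # parent enzyme's activity, normalised to 1
    for _ in range(rounds):
        offspring = [mutate(random.choice(population)) for _ in range(pop)]
        population = sorted(offspring, reverse=True)[:keep]  # selection step
    return max(population)

print(f"best variant is ~{evolve():.0f}x as active as the parent")
```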

“The concept is not new, but until now we needed so much of the enzyme to make it work that it was impractical,” said Withers. “Now I’m confident that we can take this a whole lot further.”

The current enzyme variant is not yet capable of removing 100 percent of the antigens from blood types A and B – and even the smallest amount of antigen in donated blood can set off a dangerous immune response in the recipient – but the team is confident they can get it there and then test it in clinical trials.

“Given our success so far, we are optimistic that this will work,” says Withers.

Source:  sciencealert.com

Humans Played Role in Neanderthal Extinction

Ancient teeth from Italy suggest that the arrival of modern humans in Western Europe coincided with the demise of Neanderthals there, researchers said.

This finding suggests that modern humans may have caused Neanderthals to go extinct, either directly or indirectly, scientists added.

Neanderthals are the closest extinct relatives of modern humans. Recent findings suggest that Neanderthals, who once lived in Europe and Asia, were closely enough related to humans to interbreed with the ancestors of modern humans — about 1.5 to 2.1 percent of the DNA of anyone outside Africa is Neanderthal in origin. Recent findings suggest that Neanderthals disappeared from Europe between about 41,000 and 39,000 years ago.

Scientists have hotly debated whether Neanderthals were driven into extinction because of modern humans. To solve this mystery, researchers have tried pinpointing when modern humans entered Western Europe.

Modern human or Neanderthal?

The Protoaurignacians, who first appeared in southern Europe about 42,000 years ago, could shed light on the entrance of modern humans into the region. This culture was known for its miniature blades and for simple ornaments made of shells and bones.

Scientists had long viewed the Protoaurignacians as the precursors of the Aurignacians – modern humans, named after the site of Aurignac in southern France, who spread across Europe between about 45,000 and 35,000 years ago. Researchers had thought the Protoaurignacians reflected the westward spread of modern humans from the Near East – the part of Asia between the Mediterranean Sea and India that includes the Middle East.

However, the classification of the Protoaurignacians as modern human or Neanderthal has long been uncertain. Fossils recovered from Protoaurignacian sites were not conclusively identified as either.

Now scientists analyzing two 41,000-year-old teeth from two Protoaurignacian sites in Italy find that the fossils belonged to modern humans.

"We finally have proof for the argument that says that modern humans were there when the Neanderthals went extinct in Europe," said study lead author Stefano Benazzi, a paleoanthropologist at the University of Bologna in Ravenna, Italy.

A fossil tooth found at an Italian site called Grotta di Fumane came from a modern human, scientists say.

The researchers investigated a lower incisor tooth from Riparo Bombrini, an excavation site in Italy, and found it had relatively thick enamel. Prior research suggested modern human teeth had thicker enamel than those of Neanderthals, perhaps because modern humans were healthier or developed more slowly. They also compared DNA from an upper incisor tooth found at another site in Italy, Grotta di Fumane, with DNA from 52 present-day modern humans, 10 ancient modern humans, a chimpanzee, 10 Neanderthals, two members of a recently discovered human lineage known as the Denisovans, and one member of an unknown human lineage from Spain. The comparison showed that the Protoaurignacian DNA was modern human.
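That comparison boils down to asking which reference group the tooth's DNA most resembles. Here is a minimal Python sketch of a nearest-panel classifier with made-up toy sequences – an assumed simplification of the idea, not the authors' actual pipeline.

```python
# Assign a DNA sample to whichever reference panel it matches best,
# scored by average per-site mismatch (toy sequences, equal lengths).

def hamming(a: str, b: str) -> int:
    """Count the positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(sample: str, panels: dict[str, list[str]]) -> str:
    avg_distance = {
        group: sum(hamming(sample, ref) for ref in refs) / len(refs)
        for group, refs in panels.items()
    }
    return min(avg_distance, key=avg_distance.get)

# Hypothetical stand-ins for the real reference panels in the study.
panels = {
    "modern human": ["ACGTACGT", "ACGTACGA"],
    "Neanderthal":  ["ACTTACCT", "ACTTTCCT"],
}
print(classify("ACGTACGA", panels))  # -> modern human
```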

“This research really could not have been done without the collaboration of researchers in many different scientific research fields — paleoanthropologists, molecular anthropologists, physical anthropologists, paleontologists and physicists working on dating the fossils,” Benazzi said.

Killing off Neanderthals

Since the Protoaurignacians first appeared in Europe about 42,000 years ago and the Neanderthals disappeared from Europe between about 41,000 and 39,000 years ago, these new findings suggest that Protoaurignacians “caused, directly or indirectly, the demise of Neanderthals,” Benazzi said.
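The timing argument can be spelled out as a simple interval-overlap check. Below is a small Python sketch using the dates given in the article, with one added assumption flagged in the comments: that the Protoaurignacian presence lasted through at least about 39,000 years ago.

```python
# Dates are in years before present, written (older_bound, younger_bound).

protoaurignacian = (42_000, 39_000)  # appeared ~42,000 ya; persistence to
                                     # ~39,000 ya is an assumption, not a
                                     # figure stated in the article
neanderthal_end = (41_000, 39_000)   # disappearance window from the article

def overlaps(a: tuple[int, int], b: tuple[int, int]) -> bool:
    """Two before-present intervals overlap if each begins (is older)
    no later than the point where the other ends."""
    return a[1] <= b[0] and b[1] <= a[0]

print(overlaps(protoaurignacian, neanderthal_end))  # True: the groups coexisted
```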

These 3D models show incisor teeth from two Italian sites, Riparo Bombrini (left) and Grotta di Fumane (right).
Credit: Daniele Panetta, CNR Institute of Clinical Physiology, Pisa, Italy

It remains unclear just how modern humans might have driven Neanderthals into extinction, Benazzi cautioned. Modern humans might have competed with Neanderthals, or they might simply have assimilated Neanderthals into their populations.

Moreover, prior research suggests that Neanderthals in Europe might have been headed toward extinction before modern humans even arrived on the continent. Neanderthals apparently experienced a decline in genetic diversity about the time when modern humans began turning up in Europe.

“The only way we might have proof of how modern humans caused the decline of Neanderthals is if we ever find a modern human burying a knife into the head of a Neanderthal,” Benazzi joked.

The researchers now hope to find more Protoaurignacian human remains. “Hopefully, we can find DNA that may say something about whether these modern humans and Neanderthals interbred,” Benazzi said.

Source:  livescience.com