Baking Soda kills Cancer

Even the most aggressive cancers, those that have already metastasized, have been reversed with baking soda cancer treatments. Although chemotherapy is toxic to all cells, it remains the only measure that oncologists offer to almost all cancer patients.

In fact, 9 out of 10 cancer patients agree to chemotherapy first without investigating other less invasive options.

Doctors and pharmaceutical companies make money from it. That’s the only reason chemotherapy is still used. Not because it’s effective, reduces morbidity or mortality, or diminishes any specific cancer rates. In fact, it does the opposite. Chemotherapy boosts cancer growth and long-term mortality rates, and oncologists know it.

A few years ago, University of Arizona Cancer Center member Dr. Mark Pagel received a $2 million grant from the National Institutes of Health to study the effectiveness of personalized baking soda cancer treatment for breast cancer.

Obviously, there are people in the know who have understood that sodium bicarbonate, that same stuff that can save a person’s life in the emergency room in a heartbeat, is a primary cancer treatment option of the safest and most effective kind.

Studies have shown that dietary measures to boost bicarbonate levels can increase the pH of acidic tumors without upsetting the pH of the blood and healthy tissues. Animal models of human breast cancer show that oral sodium bicarbonate does indeed make tumors more alkaline and inhibit metastasis.

Based on these studies, plus the fact that baking soda is safe and well tolerated, world-renowned doctors such as Dr. Julian Whitaker have adopted successful cancer treatment protocols as part of an overall nutritional and immune support program for patients who are dealing with the disease.

The Whitaker protocol uses 12 g (2 rounded teaspoons) of baking soda mixed in 2 cups water, along with a low-cal sweetener of your choice. (It’s quite salty tasting.)

Sip this mixture over the course of an hour or two and repeat for a total of three times a day. One man claims to have found a cure for cancer using baking soda and molasses, and to have successfully treated his own disease with it.
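For a sense of scale, the arithmetic behind this protocol can be laid out in a few lines. A minimal sketch in Python follows; the molar masses used to estimate the sodium load are standard chemistry values, not figures from the article.

```python
# Back-of-the-envelope arithmetic for the Whitaker protocol described above.
grams_per_dose = 12   # 12 g (2 rounded teaspoons) per mixture, per the protocol
doses_per_day = 3     # the mixture is taken three times a day

# Sodium bicarbonate (NaHCO3) is about 27% sodium by mass
# (molar masses: Na = 23 g/mol, NaHCO3 = 84 g/mol -- standard values).
sodium_fraction = 23 / 84

daily_bicarbonate = grams_per_dose * doses_per_day   # 36 g/day
daily_sodium = daily_bicarbonate * sodium_fraction   # ~9.9 g/day

print(f"{daily_bicarbonate} g baking soda/day, ~{daily_sodium:.1f} g sodium/day")
```

That sodium load is worth keeping in mind when reading the caution about electrolyte imbalance later in this article.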

When taken orally with water, especially water with high magnesium content, and when used transdermally in medicinal baths, sodium bicarbonate becomes a first-line medicinal for the treatment of cancer, and also kidney disease, diabetes, influenza and even the common cold.

It is also a powerful buffer against radiation exposure, so everyone should be up to speed on its use. Everybody’s physiology is under heavy nuclear attack from strong radioactive winds that are circling the northern hemisphere.

Dr. Robert J. Gillies and his colleagues have already demonstrated that pre-treatment of mice with baking soda results in the alkalinization of the area around tumors. The same researchers reported that bicarbonate increases tumor pH and also inhibits spontaneous metastases in mice with breast cancer.

The Baking Soda Formula for Cancer

To make the baking soda natural cancer remedy at home, you need the baking soda plus one of the following:

  • maple syrup,
  • molasses or
  • honey.

In his book, Dr. Sircus documented how one patient used baking soda and blackstrap molasses to fight the prostate cancer that had metastasized to his bones. On the first day, the patient mixed 1 teaspoon of baking soda with 1 teaspoon of molasses in a cup of water.

He took this for another 3 days, after which his saliva pH read 7.0 and his urine pH read 7.5.

Encouraged by these results, the patient took the solution twice on day 5 instead of once daily. And from days 6 – 10, he took 2 teaspoons each of baking soda and molasses twice daily.

By the 10th day, the patient’s pH had risen to 8.5, and the only side effects experienced were headaches and night sweats (similar to cesium therapy).

The next day, the patient had a bone scan and two other medical tests. His results showed that his PSA (prostate-specific antigen, the protein used to determine the severity of prostate enlargement and prostate cancer) level was down from 22.3 at the point of diagnosis to 0.1.

Another baking soda formula recommends mixing 90 teaspoons of maple syrup with 30 teaspoons of baking soda.

To do this, the maple syrup must be heated to become less viscous. Then the baking soda is added and stirred for 5 minutes until it is fully dissolved.

This preparation should provide about a 10-day supply of the baking soda remedy. The recommended dose for cancer patients is 5 – 7 teaspoons per day.
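The quantities in this formula can be sanity-checked with a little arithmetic. A minimal sketch follows; the grams-per-teaspoon figure is an assumption carried over from the earlier Whitaker protocol (“12 g = 2 rounded teaspoons”), not from this recipe.

```python
# Illustrative arithmetic for the maple syrup formula above.
syrup_tsp, soda_tsp = 90, 30
soda_fraction = soda_tsp / (syrup_tsp + soda_tsp)  # 0.25: a 3:1 syrup-to-soda mix

grams_per_tsp = 6  # assumption: from "12 g (2 rounded teaspoons)" earlier

for daily_dose_tsp in (5, 7):                      # the recommended dose range
    soda_grams = daily_dose_tsp * soda_fraction * grams_per_tsp
    print(f"{daily_dose_tsp} tsp of mix/day ≈ {soda_grams:.1f} g baking soda/day")
```

At the recommended dose, then, the mixture delivers roughly 7.5 – 10.5 g of baking soda per day.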

Care should be taken when using the baking soda remedy to treat cancer. This is because sustaining a high pH level can itself cause metabolic alkalosis and electrolyte imbalance. These can result in edema and also affect the heart and blood pressure.

One does not have to be a doctor to practice pH medicine. Every practitioner of the healing arts and every mother and father needs to understand how to use sodium bicarbonate.

Bicarbonate deficiency is a real problem that deepens with age so it really does pay to understand and appreciate what baking soda is all about.

Do you have baking soda in your house?

 

Source:   humansarefree.com

Cancer Kill Switch

What if you could just flick a switch and turn off cancer? It seems like something you would see in a sci-fi flick, but scientists are working towards a future where that could be a reality. At the Mayo Clinic in Jacksonville, Florida, a group of researchers have made a discovery that could be a kill switch for cancer. They have found a way to reprogram mutating cancer cells back to normal, healthy cells.

Panos Anastasiadis, PhD, head of the Department of Cancer Biology at the Mayo Clinic, and his team were studying the role of adhesion proteins in cells. Anastasiadis’ primary focus was on the p120 catenin protein and the long-held hypothesis that it is a major player in tumor suppression. The team found that p120, along with another adhesion protein, E-cadherin, actually promoted cancer growth. “That led us to believe that these molecules have two faces — a good one, maintaining the normal behavior of the cells, and a bad one that drives tumorigenesis.”

In that research, however, Anastasiadis made a remarkable discovery, “an unexpected new biology that provides the code, the software for turning off cancer.” That would be a partner to the p120 protein, dubbed PLEKHA7. When introduced to tumors, PLEKHA7 was able to “turn off” the cancerous cells’ ability to replicate and return them to a benign state. It stopped the cancer in its tracks.

How it all works is pretty straightforward. Normal, healthy cells are regulated by a sort of biological microprocessor known as microRNAs, which tell the cells to stop replicating when they have reproduced enough. Cancer is caused by a cell’s inability to stop replicating itself, and eventually grows into a cluster of cells that we know as a tumor. Anastasiadis’ team found that PLEKHA7 was an important factor in halting the replication of cells, but that it wasn’t present in the cancerous cells. By reintroducing PLEKHA7, what were once raging cancerous cells returned to normal.

This was done by injecting PLEKHA7 directly into the cells in a controlled lab test. Anastasiadis said they still need to work on “better delivery options,” as these tests were done on human cells in a lab. They did find success, however, in stopping the growth of two very aggressive forms of cancer: breast and bladder. While this isn’t being tested on humans yet, it represents a huge step forward in understanding the nature of cancer and how we can cure it.

 

Source:  Geek.com

Scientists grow 5-week-old human brain

Growing brain tissue in a dish has been done before, but bold new research announced this week shows that scientists’ ability to create human brains in laboratory settings has come a long way quickly.

Researchers at the Ohio State University in the US claim to have developed the most complete laboratory-grown human brain ever, creating a model with the brain maturity of a 5-week-old foetus. The brain, which is approximately the size of a pencil eraser, contains 99 percent of the genes that would be present in a natural human foetal brain.

“It not only looks like the developing brain, its diverse cell types express nearly all genes like a brain,” Rene Anand, professor of biological chemistry and pharmacology at Ohio State and lead researcher on the brain model, said in a statement.

“We’ve struggled for a long time trying to solve complex brain disease problems that cause tremendous pain and suffering. The power of this brain model bodes very well for human health because it gives us better and more relevant options to test and develop therapeutics other than rodents.”

Anand turned to stem cell engineering four years ago after his specialized field of research – examining the relationship between nicotinic receptors and central nervous system disorders – ran into complications using rodent specimens. Despite having limited funds, Anand and his colleagues succeeded with their proprietary technique, which they are in the process of commercializing.

The brain they have developed is a virtually complete recreation of a human foetal brain, primarily missing only a vascular system – in other words, all the blood vessels. But everything else (spinal cord, major brain regions, multiple cell types, signalling circuitry) is there. What’s more, it’s functioning, with high-resolution imaging of the brain model showing functioning neurons and brain cells.

The researchers say that it takes 15 weeks to grow a lab-developed brain to the equivalent of a 5-week-old foetal human brain, and the longer the maturation process the more complete the organoid will become.

“If we let it go to 16 or 20 weeks, that might complete it, filling in that 1 percent of missing genes. We don’t know yet,” said Anand.

The scientific benefit of growing human brains in laboratory settings is that it enables high-end research into human diseases that cannot be completed using rodents.

“In central nervous system diseases, this will enable studies of either underlying genetic susceptibility or purely environmental influences, or a combination,” said Anand. “Genomic science infers there are up to 600 genes that give rise to autism, but we are stuck there. Mathematical correlations and statistical methods are insufficient in themselves to identify causation. You need an experimental system – you need a human brain.”

The research was presented this week at the Military Health System Research Symposium.

 

Source:  sciencealert.com

1 in 3 Americans are Alcoholics

About 30 percent of adults in the United States misuse alcohol at some point in their lives, but the large majority don’t seek treatment, a new study suggests.

Researchers also found that in a given year, about 14 percent of American adults misuse alcohol, which researchers refer to as having “alcohol use disorder.” This yearly rate translates to an estimated 32.6 million Americans with drinking problems during a 12-month period.
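As a quick consistency check, the two figures in that sentence together imply a US adult population of roughly 233 million, which is in line with census figures for the period (illustrative arithmetic only):

```python
# Checking that the reported rate and count fit together.
yearly_rate = 0.14   # 14 percent of American adults in a given year
affected = 32.6e6    # 32.6 million Americans with alcohol use disorder

implied_adult_population = affected / yearly_rate
print(f"~{implied_adult_population / 1e6:.0f} million adults")  # ~233 million
```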

“The study found that the risk of alcohol use disorders appears to be going up in the last decade,” said George Koob, director of the National Institute on Alcohol Abuse and Alcoholism (NIAAA), the agency that conducted the research.

Not only is problem drinking becoming more widespread, but the intensity of drinking is also going up, Koob said. Instead of having three drinks on a night out, more people may be drinking heavily and having at least five, or even eight or 10 drinks at a time.

“Alcohol use disorder” is a relatively new term. Prior to May 2013, people who had drinking problems were diagnosed with either “alcohol abuse” or “alcohol dependence.”

Now, rather than categorizing these problems as two separate conditions, the latest edition of the American Psychiatric Association’s “Diagnostic and Statistical Manual of Mental Disorders” (American Psychiatric Publishing, 2013) considers the two a single diagnosis known as “alcohol use disorder.” A person with the disorder is further classified as having a mild, moderate or severe form of the condition, based on the number of symptoms the individual has.

Adults who meet at least two of the 11 diagnostic criteria are considered to have an alcohol use disorder. Criteria include having strong cravings for alcohol, making unsuccessful efforts to cut down consumption, and drinking that causes problems at work, home or school.
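That grading scheme is simple enough to write down as a rule. The sketch below is illustrative; the specific cut-offs (2 – 3 mild, 4 – 5 moderate, 6 or more severe) are the standard DSM-5 thresholds, which the article itself does not spell out.

```python
def aud_severity(criteria_met: int) -> str:
    """Grade alcohol use disorder by the number of DSM-5 criteria met.

    Cut-offs are the standard DSM-5 thresholds (2-3 mild, 4-5 moderate,
    6+ severe); the article only states the two-of-eleven minimum.
    """
    if criteria_met < 2:
        return "no diagnosis"
    if criteria_met <= 3:
        return "mild"
    if criteria_met <= 5:
        return "moderate"
    return "severe"

print(aud_severity(4))  # -> moderate
```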

The results, published online today (June 3) in the journal JAMA Psychiatry, are the first to estimate nationwide prevalence rates for alcohol misuse since the diagnostic criteria were changed.

 

Source:  livescience.com

Human cyborgs within 200 years

Within the next 200 years, humans will have become so merged with technology that we’ll have evolved into “God-like cyborgs”, according to Yuval Noah Harari, an historian and author from the Hebrew University of Jerusalem in Israel.

Harari researches the history of the human species, and after writing a new book on our past, he now believes that we’re just a few short centuries away from being able to use technology to avoid death altogether – if we can afford it, that is.

“I think it is likely in the next 200 years or so Homo sapiens will upgrade themselves into some idea of a divine being, either through biological manipulation or genetic engineering or by the creation of cyborgs: part organic, part non-organic,” Harari said during his presentation at the Hay Festival in the UK, as Sarah Knapton reports for the Telegraph. “It will be the greatest evolution in biology since the appearance of life … we will be as different from today’s humans as chimps are now from us.”

Obviously, we should take Harari’s predictions with a grain of salt, but while they sound more suited to science fiction than real life, they’re not actually that out-there. Many researchers believe that we’ve already started down the path towards a cyborg future; after all, many of us already rely on bionic ears and eyes, insulin pump technology and prosthetics to help us survive. And with researchers recently learning how to send people’s thoughts across the web, subconsciously control bionic limbs and use liquid metal to heal severed nerves, it’s not hard to imagine how we could continue to use technology to supplement our vulnerable human bodies further.

Interestingly, Harari’s comments came just a few days after UK-based neuroscientist Hannah Critchlow from Cambridge University got the Internet excited by saying that it could be possible to upload our brains into computers, if we could build computers with 100 trillion circuit connections. “People could probably live inside a machine. Potentially, I think it is definitely a possibility,” Critchlow said during her presentation at the festival.

But Harari warned that these upgrades may only be available to the wealthiest members of society, and that could cause a growing biological divide between rich and poor – especially if some of us can afford to pay for the privilege of living forever while the rest of the species dies out.

If that sounds depressing, the alternative is a future where instead of us taking advantage of technology, technology takes advantage of us, and artificial intelligence poses a threat to our survival, as Elon Musk, Stephen Hawking, and Bill Gates have all predicted.

Either way, one thing seems pretty clear – our future as a species is now inextricably linked with the technology we’ve created. For better or for worse.

 

Source:  sciencealert.com

US military robots will leave humans defenceless

Killer robots which are being developed by the US military ‘will leave humans utterly defenceless‘, an academic has warned.

Two programmes commissioned by the US Defense Advanced Research Projects Agency (DARPA) are seeking to create drones which can track and kill targets even when out of contact with their handlers.

Writing in the journal Nature, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, said the research could breach the Geneva Convention and leave humanity in the hands of amoral machines.

“Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans,” he said.

“Existing AI and robotics components can provide physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. They just need to be combined.

“In my view, the overriding concern should be the probable endpoint of this technological trajectory.

“Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.”


The robots, called LAWS – lethal autonomous weapons systems – are likely to be armed quadcopters or mini-tanks that can decide without human intervention who should live or die.

DARPA is currently working on two projects which could lead to killer bots. One is Fast Lightweight Autonomy (FLA), which is designing a tiny rotorcraft to manoeuvre unaided at high speed in urban areas and inside buildings. The other, Collaborative Operations in Denied Environment (CODE), is aiming to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission — find, fix, track, target, engage, assess” in situations in which enemy signal-jamming makes communication with a human commander impossible.

Last year Angela Kane, the UN’s high representative for disarmament, said killer robots were just a ‘small step’ away and called for a worldwide ban. But the Foreign Office has said while the technology had potentially “terrifying” implications, Britain “reserves the right” to develop it to protect troops.

Professor Russell said: “LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’.

“Debates should be organized at scientific meetings; arguments studied by ethics committees. Doing nothing is a vote in favour of continued development and deployment.”


However, Dr Sabine Hauert, a lecturer in robotics at the University of Bristol, said that the public did not need to fear the developments in artificial intelligence.

“My colleagues and I spend dinner parties explaining that we are not evil but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the ocean,” she said.

 

Source:   telegraph.co.uk

Freelance NSA Spies Private Conversations

Thanks to Edward Snowden, we know that the National Security Agency collects the phone records of every American in order to keep the country safe from terrorism. But for the past eight months a group of artists claiming to work for the NSA on “a freelance, pro bono basis” have been recording people’s private conversations in popular bars, restaurants, and gyms in Lower Manhattan to ensure that no actionable intelligence falls through the cracks.

“We’re looking for terrorism, we’re looking for signs of plots and schemes that could put the homeland at risk,” one of the group’s “agents” tells us.

The project’s website, We Are Always Listening, includes snippets of actual conversations recorded by tiny, hidden tape recorders placed in The Brindle Room, Café Mogador, and the Crunch Gym in Union Square, among other popular public spaces.

In the recordings, a group of men talk about how a friend is “trying too hard to be one of us,” a woman complains about paying more than $2,000/month in rent, and a man describes a former boyfriend’s fetish: “He wanted me to like, fake double over in pain. Like we’re doing a scene from Batman Returns.”

None of the recordings contain any last names or other forms of information that would allow the people in the recordings to be directly identified, but first names flow freely.

“The reason we broadcast small, small, small, fractions of what we’ve gathered is because we’ve also heard members of the American public say they want a more transparent window into how data is collected,” said the “agent,” who asked to speak anonymously because New York State law requires the consent of at least one party in order to record a conversation (as Governor Cuomo famously discovered).

“Our agents would dispute that having a conversation at a restaurant or a gym is private. There should not be an assumption of privacy.”

The Manhattan DA’s office declined to comment on the group’s activities.

The project is seemingly designed to shake Americans (and, based on the locations the group placed their recorders, the Downtown bourgeoisie) out of their torpor with respect to how the NSA collects data and the federal government’s reliance on millions of independent contractors with security clearances.

“We imagine people are fine with this type of surveillance,” the “agent” said, tongue firmly in cheek. “The general public has mostly spoken in a unified voice saying, well, it’s just what you need to do to keep the country safe.”

For those who believe that posting audio of private conversations online is wrong, or that it surpasses what even the NSA considers appropriate, a button marked “Angry?” on the group’s website directs users to an ACLU page that allows you to contact your federal representatives and urge them to kill the portion of the Patriot Act that allows for the NSA’s blanket surveillance. (The Senate recently voted to block a bill from the House designed to curtail the government’s collection of phone data.)

The “agent” told us that New Yorkers should expect more leaked conversations. If you’ve hung out at 61 Local in Cobble Hill recently, you might want to keep your eye on the group’s website: a tape recorder has been listening there for some time.

 

Source:  gothamist.com

Google closer to developing human-like intelligence

Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.

The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors”.

Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”

The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”

Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence: mastering natural, conversational language, and the ability to make leaps of logic.

He painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film, Her, in which Joaquin Phoenix falls in love with his intelligent operating system.

“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”

In the past two years, scientists have already made significant progress in overcoming this challenge.

Richard Socher, an artificial intelligence scientist at Stanford University, recently developed a program called NaSent that he taught to recognise human sentiment by training it on 12,000 sentences taken from the film review website Rotten Tomatoes.

Part of the initial motivation for developing “thought vectors” was to improve translation software, such as Google Translate, which currently uses dictionaries to translate individual words and searches through previously translated documents to find typical translations for phrases. Although these methods often provide the rough meaning, they are also prone to delivering nonsense and dubious grammar.

Thought vectors, Hinton explained, work at a higher level by extracting something closer to actual meaning.

The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical “meaning space” or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector.

The “thought” serves as the bridge between the two languages because it can be transferred into the French version of the meaning space and decoded back into a new path between words.

The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences.

At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud capture the way humans use them – effectively a map of their meanings.
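As a concrete illustration of that training loop, here is a minimal sketch using the open-source gensim library (gensim 4.x API). The toy corpus and parameters are invented for the example; production systems train on billions of words.

```python
from gensim.models import Word2Vec

# A toy corpus of tokenized sentences.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["a", "cat", "chased", "a", "dog"],
]

# Each word starts at a random position in a 16-dimensional "cloud";
# training iteratively nudges the positions until they reflect usage.
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, epochs=200)

print(model.wv["cat"][:4])                # first few coordinates of "cat"
print(model.wv.similarity("cat", "dog"))  # words used alike end up close together
```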

Hinton said that the idea that language can be deconstructed with almost mathematical precision is surprising, but true. “If you take the vector for Paris and subtract the vector for France and add Italy, you get Rome,” he said. “It’s quite remarkable.”
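That arithmetic is easy to demonstrate. The sketch below uses hand-built 2-D vectors, constructed so the analogy holds, purely to show the mechanics; learned embeddings such as word2vec exhibit the same behaviour in hundreds of dimensions.

```python
import numpy as np

# Toy 2-D "meaning space": each capital is its country's vector
# plus a shared capital-city offset, so the analogy holds by construction.
offset = np.array([0.5, 0.5])
words = {
    "France": np.array([1.0, 0.0]),
    "Italy": np.array([0.0, 1.0]),
}
words["Paris"] = words["France"] + offset
words["Rome"] = words["Italy"] + offset

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "Paris - France + Italy" should land nearest to "Rome".
query = words["Paris"] - words["France"] + words["Italy"]
best = max((w for w in words if w not in ("Paris", "France", "Italy")),
           key=lambda w: cosine(words[w], query))
print(best)  # -> Rome
```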

Dr Hermann Hauser, a Cambridge computer scientist and entrepreneur, said that Hinton and others could be on the way to solving what programmers call the “genie problem”.

“With machines at the moment, you get exactly what you wished for,” Hauser said. “The problem is we’re not very good at wishing for the right thing. When you look at humans, the recognition of individual words isn’t particularly impressive, the important bit is figuring out what the guy wants.”

“Hinton is our number one guru in the world on this at the moment,” he added.

Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits.”

A flirtatious program would “probably be quite simple” to create, however. “It probably wouldn’t be subtly flirtatious to begin with, but it would be capable of saying borderline politically incorrect phrases,” he said.

Many of the recent advances in AI have sprung from the field of deep learning, which Hinton has been working on since the 1980s. At its core is the idea that computer programs learn how to carry out tasks by training on huge datasets, rather than being taught a set of inflexible rules.

With the advent of huge datasets and powerful processors, the approach pioneered by Hinton decades ago has come into the ascendency and underpins the work of Google’s artificial intelligence arm, DeepMind, and similar programs of research at Facebook and Microsoft.

Hinton played down concerns about the dangers of AI raised by those such as the American entrepreneur Elon Musk, who has described the technologies under development as humanity’s greatest existential threat. “The risk of something seriously dangerous happening is in the five year timeframe. Ten years at most,” Musk warned last year.

“I’m more scared about the things that have already happened,” said Hinton in response. “The NSA is already bugging everything that everybody does. Each time there’s a new revelation from Snowden, you realise the extent of it.”

“I am scared that if you make the technology work better, you help the NSA misuse it more,” he added. “I’d be more worried about that than about autonomous killer robots.”

 

Source:  theguardian.com

Iron levels hasten Alzheimer’s disease

High levels of iron in the brain could increase the risk of developing Alzheimer’s disease and hasten the cognitive decline that comes with it, new research suggests.

The results of the study, which tracked the brain degeneration of people with Alzheimer’s over a seven-year period, suggest it might be possible to halt the disease with drugs that reduce iron levels in the brain.

 “We think that iron is contributing to the disease progression of Alzheimer’s disease,” neuroscientist Scott Ayton, from the University of Melbourne in Australia, told Anna Salleh at ABC Science.

“This is strong evidence to base a clinical trial on lowering iron content in the brain to see if that would impart a cognitive benefit.”

Alzheimer’s is a devastating disease that researchers suspect “begins when two abnormal protein fragments, known as plaques and tangles, accumulate in the brain and start killing our brain cells,” explains Fiona Macdonald for ScienceAlert.

It starts by destroying the hippocampus – the region of the brain where memories are formed and stored – and eventually damages the region where language is processed, making it difficult for advanced Alzheimer’s patients to communicate. As the disease’s gradual takeover continues, people lose the ability to regulate their emotions and behaviour, and to make sense of the world around them.

But previous studies have shown that people with Alzheimer’s disease also have elevated levels of brain iron, which may also be a risk factor for the disease.

“There has been debate for a long period of time whether this is important or whether it’s just a coincidence,” Ayton told ABC Science.

The long-term impact of elevated iron levels on the disease outcome has not been investigated, the researchers say.

So Ayton’s team decided to test this, examining the link between brain iron levels and cognitive decline in three groups of people over seven years. The participants included 91 people with normal cognition, 144 people with mild cognitive impairment, and 67 people with diagnosed Alzheimer’s disease.

At the beginning of the study, the researchers determined the patients’ brain iron levels by measuring the amount of ferritin in the cerebrospinal fluid around the brain. Ferritin is a protein that stores and releases iron.

The researchers did regular tests and MRI scans to track cognitive decline and changes in the brain over the study period.

They found that people with higher levels of ferritin – in all groups – had faster declines in cognitive abilities and accelerated shrinking of the hippocampus. Levels of ferritin were also linked to a greater likelihood of people with mild cognitive impairment developing Alzheimer’s.

Their data contained some other interesting takeaways: The researchers found that higher levels of ferritin corresponded to earlier ages at diagnosis – roughly three months earlier for every 1 nanogram per millilitre increase, so a 10 ng/mL difference in ferritin corresponds to a diagnosis arriving about two and a half years sooner.

They also found that people with the APOE-e4 gene variant, which is known to be the strongest genetic risk factor for the disease, had the highest levels of iron in their brains.

This suggests that APOE-e4 may be increasing Alzheimer’s disease risk by increasing iron levels in the brain, Ayton told ABC Science.

The researchers say their findings, which were published in the journal Nature Communications, justify the revival of clinical trials to explore drugs to target brain iron levels.

In a study carried out 24 years ago, a drug called deferiprone halved the rate of Alzheimer’s cognitive decline, Ayton told Clare Wilson at NewScientist. “Perhaps it’s time to refocus the field on looking at iron as a target.”

“Lowering CSF ferritin, as might be expected from a drug like deferiprone, could conceivably delay mild cognitive impairment conversion to Alzheimer’s disease by as much as three years,” the team wrote.

FDA Covers Up Deaths in Drug Trials

Does the habitual use of antidepressants do more harm than good to many patients? Absolutely, says one expert in a new British Medical Journal report. Moreover, he says that the federal Food and Drug Administration might even be hiding the truth about antidepressant lethality.

In his portion of the report, Peter C. Gotzsche, a professor at the Nordic Cochrane Centre in Denmark, said that nearly all psychotropic drug use could be ended today without deleterious effects, adding that such “drugs are responsible for the deaths of more than half a million people aged 65 and older each year in the Western world.”

Gotzsche, author of the 2013 book Deadly Medicines and Organized Crime: How Big Pharma Has Corrupted Healthcare, further notes in the BMJ that “randomized trials that have been conducted do not properly evaluate the drugs’ effects.” He adds, “Almost all of them are biased because they included patients already taking another psychiatric drug.”

Hiding or fabricating data about harmful side effects

The FDA’s data is incomplete at best and intentionally skewed at worst, he insisted:

Under-reporting of deaths in industry funded trials is another major flaw. Based on some of the randomised trials that were included in a meta-analysis of 100,000 patients by the US Food and Drug Administration, I have estimated that there are likely to have been 15 times more suicides among people taking antidepressants than reported by the FDA – for example, there were 14 suicides in 9,956 patients in trials with fluoxetine and paroxetine, whereas the FDA had only five suicides in 52,960 patients, partly because the FDA only included events up to 24 hours after patients stopped taking the drug.
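The “15 times” estimate follows directly from the per-patient rates in the figures quoted above, as this quick check shows:

```python
# Reproducing the "15 times more suicides" estimate from the quoted figures.
trial_rate = 14 / 9_956  # suicides per patient in the fluoxetine/paroxetine trials
fda_rate = 5 / 52_960    # suicides per patient in the FDA's meta-analysis

print(f"ratio ≈ {trial_rate / fda_rate:.1f}")  # ratio ≈ 14.9, i.e. about 15-fold
```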

He said that he was most concerned about three classes of drugs: antipsychotics, benzodiazepines and antidepressants, saying they are responsible for 3,693 deaths a year in Denmark alone. When scaling up that figure in relation to the U.S. and European Union together, he estimated that 539,000 people die every year because of the medications.

“Given their lack of benefit, I estimate we could stop almost all psychotropic drugs without causing harm – by dropping all antidepressants, ADHD drugs, and dementia drugs (as the small effects are probably the result of unblinding bias) and using only a fraction of the antipsychotics and benzodiazepines we currently use,” Gotzsche wrote.

“This would lead to healthier and longer-lived populations. Because psychotropic drugs are immensely harmful when used long-term, they should almost exclusively be used in acute situations and always with a firm plan for tapering off, which can be difficult for many patients,” he added.

Gotzsche’s views were disputed in the same BMJ piece by Allan Young, professor of mood disorders at King’s College London, and psychiatric patient John Crace.

“More than a fifth of all health-related disability is caused by mental ill health, studies suggest, and people with poor mental health often have poor physical health and poorer (long-term) outcomes in both aspects of health,” they wrote.

They also insisted that psychiatric drugs are “rigorously examined for efficacy and safety, before and after regulatory approval.”

 

Source:  globalresearch.ca

Bionic Lens 3x better than 20/20

Imagine being able to see three times better than 20/20 vision without wearing glasses or contacts — even at age 100 or more — with the help of bionic lenses implanted in your eyes.

Dr. Garth Webb, an optometrist in British Columbia who invented the Ocumetics Bionic Lens, says patients would have perfect vision and that driving glasses, progressive lenses and contact lenses would become a dim memory as the eye-care industry is transformed.


Webb says people who have the specialized lenses surgically inserted would never get cataracts because their natural lenses, which decay over time, would have been replaced.

Perfect eyesight would result “no matter how crummy your eyes are,” Webb says, adding the Bionic Lens would be an option for someone who depends on corrective lenses and is over about age 25, when the eye structures are fully developed.

“This is vision enhancement that the world has never seen before,” he says, showing a Bionic Lens, which looks like a tiny button.

“If you can just barely see the clock at 10 feet, when you get the Bionic Lens you can see the clock at 30 feet away,” says Webb, demonstrating how a custom-made lens that folded like a taco in a saline-filled syringe would be placed in an eye, where it would unravel itself within 10 seconds.
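Webb’s clock example maps directly onto the Snellen notation behind the “3x better than 20/20” headline. A rough sketch follows (illustrative arithmetic only; the distances are the ones Webb quotes):

```python
# Rough Snellen arithmetic for "three times better than 20/20".
barely_see_now_ft = 10  # distance at which the unaided eye just resolves the clock
with_lens_ft = 30       # distance at which the Bionic Lens resolves it

gain = with_lens_ft / barely_see_now_ft  # a 3.0x improvement in acuity
snellen_denominator = 20 / gain          # smaller denominator = sharper vision

print(f"20/20 improved {gain:.0f}x is roughly 20/{snellen_denominator:.1f}")
```

On that scale, tripled acuity corresponds to roughly 20/6.7 vision.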

8-minute surgery

He says the painless procedure, identical to cataract surgery, would take about eight minutes and a patient’s sight would be immediately corrected.

Webb, who is the CEO of Ocumetics Technology Corp., has spent the last eight years and about $3 million researching and developing the Bionic Lens, getting international patents and securing a biomedical manufacturing facility in Delta, B.C.


His mission is fuelled by the “obsession” he’s had to free himself and others from corrective lenses since he was in Grade 2, when he was saddled with glasses.

“My heroes were cowboys, and cowboys just did not wear glasses,” Webb says.

“At age 45 I had to struggle with reading glasses, which like most people, I found was a great insult. To this day I curse my progressive glasses. I also wear contact lenses, which I also curse just about every day.”

Webb’s efforts culminated in his recent presentation of the lens to 14 top ophthalmologists in San Diego the day before an annual gathering of the American Society of Cataract and Refractive Surgery.

Dr. Vincent DeLuise, an ophthalmologist who teaches at Yale University in New Haven, Conn., and at Weill Cornell Medical College in New York City, says he arranged several meetings on April 17, when experts in various fields learned about the lens.

He says the surgeons, from Canada, the United States, Australia and the Dominican Republic, were impressed with what they heard and some will be involved in clinical trials for Webb’s “very clever” invention.

“There’s a lot of excitement about the Bionic Lens from very experienced surgeons who perhaps had some cynicism about this because they’ve seen things not work in the past. They think that this might actually work and they’re eager enough that they all wish to be on the medical advisory board to help him on his journey,” DeLuise says.

“I think this device is going to bring us closer to the holy grail of excellent vision at all ranges — distant, intermediate and near.”

 

Source:  cbc.ca

Coffee Antioxidant 500 times greater than vitamin C

The coffee industry plays a major role in the global economy. It also has a significant impact on the environment, producing more than 2 billion tonnes of coffee by-products annually. Coffee silverskin (the epidermis of the coffee bean) is usually removed during processing, after the beans have been dried, while the coffee grounds are normally directly discarded.

It has traditionally been assumed that these by-products – coffee grounds and coffee silverskin – have few practical uses and applications. Spent coffee grounds are sometimes employed as homemade skin exfoliants or as abrasive cleaning products. They are also known to make great composting agents for fertilizing certain plants. But apart from these limited applications, coffee by-products are by and large deemed to be virtually useless. As such, practically all of this highly contaminating ‘coffee waste’ ends up in landfills across the globe and has a considerable knock-on effect on the environment.

However, a UGR research team led by José Ángel Rufián Henares set out to determine the extent to which these by-products could be recycled for nutritional purposes, thereby reducing the amount of waste being generated, as well as benefitting coffee producers, recycling companies, the health sector, and consumers.

In an article published in the academic journal Food Science and Technology, the researchers demonstrate the powerful antioxidant and antimicrobial properties of the coffee grounds and silverskin, which are highly rich in fibre and phenols. Indeed, their findings indicate that the antioxidant effects of these coffee grounds are 500 times greater than those found in vitamin C and could be employed to create functional foods with significant health benefits.

Moreover, Professor Rufián Henares points out: “They also contain high levels of melanoidins, which are produced during the roasting process and give coffee its brown colour. The biological properties of these melanoidins could be harnessed for a range of practical applications, such as preventing harmful pathogens from growing in food products.” However, he also adds: “If we are to harness the beneficial prebiotic effects of the coffee by-products, first of all we need to remove the melanoidins, since they interfere with such beneficial prebiotic properties.”

The researchers conclude that processed coffee by-products could potentially be recycled as sources of new food ingredients. This would also greatly diminish the environmental impact of discarded coffee by-products.

The Ministry of Economics and Finance has recently allocated a new research project to the team under the ‘State R&D programme’, in order to enable them to conduct further studies in the area and re-assess the potential value of coffee by-products.

 

Source:  sciencedaily.com

Doctor who discovered Cancer blames lack of Oxygen

Dr. Otto H. Warburg won a Nobel Prize for discovering the cause of cancer. There is one aspect of our bodies that is the key to preventing cancer: pH levels.

What Dr. Warburg figured out is that when there is a lack of oxygen, cancer cells develop. As Dr. Warburg said, “All normal cells have an absolute requirement for oxygen, but cancerous cells can live without oxygen – a rule without exception. Deprive a cell of 35% of its oxygen for 48 hours and it may become cancerous.” Cancer cells therefore cannot live in a highly oxygenated state, like the one that develops when your body’s pH levels are alkaline, and not acidic.

Most people’s diets promote the creation of too much acid, which throws our body’s natural pH levels from a slightly alkaline nature to an acidic nature. Maintaining an alkaline pH level can prevent health conditions like cancer, osteoporosis, cardiovascular diseases, diabetes, and acid reflux. Eating processed foods like refined sugars, refined grains, GMOs, and other unnatural foods can lead to a pH level that supports the development of these conditions, and leads to overall bad health. In fact, most health conditions that are common today stem from a pH level that is too acidic; parasites, bacteria, and viruses all thrive at an acidic pH level.

There is a natural remedy that you can use at home that is simple, and readily available. All you need is 1/3 tablespoon of baking soda, and 2 tablespoons of lemon juice or apple cider vinegar. Mix the ingredients into 8 ounces of cold water, and stir well. The baking soda will react with the lemon juice or ACV and begin to fizz. Drink the mixture all at once. The combination will naturally reduce the acidity in your body and prevent the conditions associated with an acidic pH level. Maintaining a healthy pH level will do wonders for your health, and you will notice the difference after only a few days of the treatment.

 

Source:  buynongmoseeds.com

Success Regenerating Spinal Cords

Working with paralysed rats, scientists in the US have shown how they might be able to regenerate spines after injury and help paralysed people to one day walk again.

The team, from Tufts University School of Medicine, crushed the spines of lab rats at the dorsal root, which is the main bundle of nerve fibres that branches off the spine, and carries signals of sensation from the body to the brain. They then treated the spines with a protein called artemin, known to help neurons grow and function. After the two-week treatment, the nerve fibres regenerated and successfully passed signals over a distance of 4 centimetres.

“This is a significantly longer length of Central Nervous System regeneration than has been reported earlier,” said one of the team, physiologist Eric Frank. “But still a long way to go!”

Reporting in a study published in the Proceedings of the National Academy of Sciences, the team says the artemin treatment was successful in regenerating both large and small sensory neurons.

And while that 4-centimetre distance is important, Frank says that’s not all that counts: “The regenerating nerve fibres are growing back to the right places in the spinal cord and brainstem.” He adds that this is pretty impressive, given that their subjects were several months old, which isn’t young in rat years.

The results suggest that the chemical guidance cues that allow the nerve fibres to get to their correct target areas persist in the adult spinal cord, says Frank. This means that while artemin may not help regenerate all nerve fibres – some aren’t receptive to it – it’s likely to help with other neurons too. “If it becomes possible to get these other types of nerve fibres to regenerate for long distances as well, there is a reasonable chance that they can also grow back to their original target areas,” says Frank.

The challenge is getting regenerated nerve fibres to reconnect, so they can do what they are supposed to do, which just might be possible, considering these results. If scientists could achieve that, it would be a big leap forward in improving the lives of paralysed people.

Source:  sciencealert.com

Most likely culprit for schizophrenia

Researchers have found a gene that links the three previously unrelated biological changes most commonly blamed for causing schizophrenia, making it one of the most promising culprits for the disease so far, and a good target for future treatments.

Schizophrenia is a debilitating mental disorder that usually appears in late adolescence, and changes the way people think, act and perceive reality. For decades, scientists have struggled to work out what causes the hallucinations and strange behaviour associated with the disorder, and keep coming back to three neuronal changes that seem to be to blame. The only problem is that the changes seemed to be unrelated, and, in some cases, even contradictory.

But now, researchers from Duke University in the US have managed to find a link between these three hypotheses, and have shown that all three changes can be brought about by a malfunction in the same gene.

Publishing in Nature Neuroscience, the researchers explain that their results could lead to new treatment strategies that target the underlying cause of the disease, rather than the visible changes, or phenotypes, associated with schizophrenia.

“The most exciting part was when all the pieces of the puzzle fell together,” lead researcher, Scott Soderling, a professor of cell biology and neurobiology from Duke University, said in a press release. “When [co-researcher Il Hwan Kim] and I finally realised that these three outwardly unrelated phenotypes … were actually functionally interrelated with each other, that was really surprising and also very exciting for us.”

So what are these three phenotypes? The first is spine pruning, which means that the neurons of people with schizophrenia have fewer dendritic spines – the small protrusions that pass signals between brain cells. Some people with schizophrenia also have hyperactive neurons, and excess dopamine production.

But these changes just didn’t seem to make sense together. After all, how could neurons be overactive if they didn’t have enough dendritic spines to pass messages back and forth, and why would either of these symptoms trigger excess dopamine production? Now, researchers believe that a mutation in the gene Arp2/3 may be to blame.

Soderling and his team originally spotted the gene during previous studies, which identified thousands of genes linked to schizophrenia. But Arp2/3 was of particular interest, as it controls the formation of synapses, or links, between neurons.

To test its effect, the researchers engineered mice that didn’t have the Arp2/3 gene and, surprisingly, found that they behaved very similarly to humans with schizophrenia. The mice also got worse with age and improved slightly with antipsychotic medications, both traits of human schizophrenia.

But most fascinating was the fact that the mice also had all three of the unrelated brain changes – fewer dendritic spines, overactive neurons and excess dopamine production.

They also took things one step further and showed, for the first time, that this lack of dendritic spines can actually trigger hyperactive neurons. This is because the mice’s brain cells rewire themselves to bypass these spines, effectively skipping the ‘filter’ that usually keeps their activity in check.

They also showed that these overactive neurons at the front of the brain were then stimulating other neurons to dump out dopamine.

“Overall, the combined results reveal how three separate pathologies, at the tiniest molecular level, can converge and fuel a psychiatric disorder,” Susan Scutti explains over at Medical Daily.

The group will now study the role Arp2/3 plays in different parts of the brain, and how it’s linked to other schizophrenia symptoms. The research is still in its very early stages, and obviously has only been demonstrated in mice and not humans. But it’s a promising first step towards understanding this mysterious disease.

“We’re very excited about using this type of approach, where we can genetically rescue Arp2/3 function in different brain regions and normalise behaviours,” Soderling said. “We’d like to use that as a basis for mapping out the neural circuitry and defects that also drive these other behaviours.”

Source:  sciencealert.com

Babies using Smart Phones

More than one-third of babies are tapping on smartphones and tablets even before they learn to walk or talk, and by 1 year of age, one in seven toddlers is using devices for at least an hour a day, according to a study to be presented Saturday, April 25 at the Pediatric Academic Societies (PAS) annual meeting in San Diego.

The American Academy of Pediatrics discourages the use of entertainment media such as televisions, computers, smartphones and tablets by children under age 2. Little is known, however, about when youngsters actually start using mobile devices.

Researchers developed a 20-item survey to find out when young children are first exposed to mobile media and how they use devices. The questionnaire was adapted from the “Zero to Eight” Common Sense Media national survey on media use in children.

Parents of children ages 6 months to 4 years old who were at a hospital-based pediatric clinic that serves a low-income, minority community were recruited to fill out the survey. Participants were asked about what types of media devices they have in their household, children’s age at initial exposure to mobile media, frequency of use, types of activities and if their pediatrician had discussed media use with them.

Results from 370 parents showed that 74 percent were African-American, 14 percent were Hispanic and 13 percent had less than a high school education. Media devices were ubiquitous, with 97 percent having TVs, 83 percent having tablets, 77 percent having smartphones and 59 percent having Internet access.

Children younger than 1 year of age were exposed to media devices in surprisingly large numbers: 52 percent had watched TV shows, 36 percent had touched or scrolled a screen, 24 percent had called someone, 15 percent used apps and 12 percent played video games.

By 2 years of age, most children were using mobile devices.

Lead author Hilda Kabali, MD, a third-year resident in the Pediatrics Department at Einstein Healthcare Network, said the results surprised her.

“We didn’t expect children were using the devices from the age of 6 months,” she said. “Some children were on the screen for as long as 30 minutes.”

Results also showed 73 percent of parents let their children play with mobile devices while doing household chores, 60 percent while running errands, 65 percent to calm a child and 29 percent to put a child to sleep.

Time spent on devices increased with age, with 26 percent of 2-year-olds and 38 percent of 4-year-olds using devices for at least an hour a day.

Finally, only 30 percent of parents said their child’s pediatrician had discussed media use with them.

Source:  disinformation.com

Magic mushrooms permanently change personality

Psilocybe cubensis, commonly referred to as magic mushrooms, have the potential to make a lasting change to one’s personality. This is a preliminary conclusion from a study conducted by Johns Hopkins researchers and published in the Journal of Psychopharmacology.

A single dose of ‘shrooms’ was enough to make a lasting impression on the personality in 30 of the 51 participants, or nearly 60%. Those who had a hallucinatory or mystical experience after consuming the mushrooms showed an increase in the personality trait ‘openness’, which is closely related to creativity and curiosity. This increase was measured 2 months and even 14 months after the last session, which suggests long-term effects.

Study leader Roland Griffiths, a professor of psychiatry, finds this lasting impact on a personality trait remarkable: “Normally, if anything, openness tends to decrease as people get older.” Openness is one of five traits that were tested and the only one that changed during the study. Along with the other factors extroversion, neuroticism, agreeableness and conscientiousness, openness is one of the major personality traits that are known to be constant throughout one’s lifetime.

According to the researchers, this study is the first to identify a short-term means by which long-term personality changes can be made. “There may be applications for this we can’t even imagine at this point,” says Griffiths. “It certainly deserves to be systematically studied.”

There is currently another study under way to determine whether or not psilocybin can help cancer patients deal with feelings of anxiety and depression.

 

Source:  azarius.pt

MRI Shows Meditation Rebuilds Brain’s Gray Matter

Test subjects taking part in an 8-week program of mindfulness meditation showed results that astonished even the most experienced neuroscientists at Harvard University. The study was led by a Harvard-affiliated team of researchers based at Massachusetts General Hospital, and the team’s MRI scans documented for the very first time in medical history how meditation produced massive changes inside the brain’s gray matter.

“Although the practice of meditation is associated with a sense of peacefulness and physical relaxation, practitioners have long claimed that meditation also provides cognitive and psychological benefits that persist throughout the day,” says study senior author Sara Lazar of the MGH Psychiatric Neuroimaging Research Program and a Harvard Medical School instructor in psychology. “This study demonstrates that changes in brain structure may underlie some of these reported improvements and that people are not just feeling better because they are spending time relaxing.”

Sue McGreevey of MGH writes: “Previous studies from Lazar’s group and others found structural differences between the brains of experienced meditation practitioners and individuals with no history of meditation, observing thickening of the cerebral cortex in areas associated with attention and emotional integration. But those investigations could not document that those differences were actually produced by meditation.” Until now, that is. The participants spent an average of 27 minutes per day practicing mindfulness exercises, and this is all it took to stimulate a major increase in gray matter density in the hippocampus, which is important for learning and memory, and in structures associated with self-awareness, compassion, and introspection. McGreevey adds: “Participant-reported reductions in stress also were correlated with decreased gray-matter density in the amygdala, which is known to play an important role in anxiety and stress. None of these changes were seen in the control group, indicating that they had not resulted merely from the passage of time.”

“It is fascinating to see the brain’s plasticity and that, by practicing meditation, we can play an active role in changing the brain and can increase our well-being and quality of life,” says Britta Hölzel, first author of the paper and a research fellow at MGH and Giessen University in Germany.

 

Source:  feelguide.com

Protein Treatment Staves Off Alzheimer’s Disease Symptoms

Alzheimer’s disease is the sixth leading cause of death in the United States, with over 1,200 individuals developing the disease every day. A new paper in the Journal of Neuroscience from lead author Dena Dubal of the University of California, San Francisco describes how manipulating levels of a protein associated with memory can stave off Alzheimer’s symptoms, even in the presence of the disease-causing toxins.

Klotho is a transmembrane protein associated with longevity. The body makes less of this protein over time, and low levels of klotho are connected to a number of diseases including osteoporosis, heart disease, increased risk of stroke, and decreased cognitive function. These factors lead to diminished quality of life and even early death.

Previous research has shown that increasing klotho levels in healthy mice leads to increased cognitive function. This current paper from Dubal’s team builds on that research by increasing klotho in mice that also express large amounts of amyloid-beta and tau, proteins associated with the onset of Alzheimer’s disease. Remarkably, even with high levels of these toxic, disease-causing proteins, the mice with elevated klotho levels were able to retain their cognitive function.

“It’s remarkable that we can improve cognition in a diseased brain despite the fact that it’s riddled with toxins,” Dubal said in a press release. “In addition to making healthy mice smarter, we can make the brain resistant to Alzheimer-related toxicity. Without having to target the complex disease itself, we can provide greater resilience and boost brain functions.”

The mechanism behind this cognitive preservation appears to be klotho’s interaction with a glutamate receptor called NMDA, which is critically important to synaptic transmission and thus influences learning, memory, and executive function. Alzheimer’s disease typically damages these receptors, but the mice with elevated klotho were able to retain both NMDA function and cognition. Part of the success also appears to be due to the preservation of the NMDA subunit GluN2B, which was present in significantly larger numbers than in the control mice. The mechanism and the results of this study will need to be investigated further before the approach can be developed into a possible treatment for humans.

“The next step will be to identify and test drugs that can elevate klotho or mimic its effects on the brain,” added senior author Lennart Mucke from Gladstone Institutes. “We are encouraged in this regard by the strong similarities we found between klotho’s effects in humans and mice in our earlier study. We think this provides good support for pursuing klotho as a potential drug target to treat cognitive disorders in humans, including Alzheimer’s disease.”

 

Source:  iflscience.com

Brain’s neural firing patterns explained

Researchers at the University of Rochester may have answered one of neuroscience’s most vexing questions—how can it be that our neurons, which are responsible for our crystal-clear thoughts, seem to fire in utterly random ways?

 In the November issue of Nature Neuroscience, the Rochester study shows that the brain’s cortex uses seemingly chaotic, or “noisy,” signals to represent the ambiguities of the real world—and that this noise dramatically enhances the brain’s processing, enabling us to make decisions in an uncertain world.

“You’d think this is crazy because engineers are always fighting to reduce the noise in their circuits, and yet here’s the best computing machine in the universe—and it looks utterly random,” says Alex Pouget, associate professor of brain and cognitive sciences at the University of Rochester.

Pouget’s work for the first time connects two of the brain’s biggest mysteries: why it’s so noisy, and how it can perform such complex calculations. As counter-intuitive as it sounds, the noise seems integral to making those calculations possible.

In the last decade, Pouget and his colleagues in the University of Rochester’s Department of Brain and Cognitive Sciences have blazed a new path to understanding our gray matter. The traditional approach assumed the brain used the same method computing in general used up until the mid-’80s: you see an image and you relate that image to one stored in your head. But the reality of the cranial world seems to be a confusing array of possibilities and probabilities, all of which are somehow, mysteriously, properly calculated.

The science of drawing answers from such a variety of probabilities is called Bayesian computing, after the minister Thomas Bayes, who founded this branch of mathematics in the 18th century. Pouget says that when we seem to be struck by an idea from out of the blue, our brain has actually just resolved many probabilities it’s been fervently calculating.

“We’ve known for several years that at the behavioral level, we’re ‘Bayes optimal,’ meaning we are excellent at taking various bits of probability information, weighing their relative worth, and coming to a good conclusion quickly,” says Pouget. “But we’ve always been at a loss to explain how our brains are able to conduct such complex Bayesian computations so easily.”

Two years ago, while talking with a physics friend, some probabilities in Pouget’s own head suddenly resolved.

 “One day I had a drink with some machine-learning researchers, and we suddenly said, ‘Oh, it’s not noise,’ because noise implies something’s wrong,” says Pouget. “We started to realize then that what looked like noise may actually be the brain’s way of running at optimal performance.”

Bayesian computing can be done most efficiently when data comes in what’s called a “Poisson distribution.”

And the neural noise, Pouget noticed, looked suspiciously like this optimal distribution.

This idea led Pouget and his team to investigate whether our neurons’ noise really fits this Poisson distribution, and in his current Nature Neuroscience paper he found that it fits extremely well.

“The cortex appears wired at its foundation to run Bayesian computations as efficiently as possible,” says Pouget. His paper says the uncertainty of the real world is represented by this noise, and the noise itself is in a format that reduces the resources needed to compute with it. Anyone familiar with log tables and slide rules knows that while multiplying large numbers is difficult, adding their logarithms is relatively undemanding.

The brain is apparently designed in a similar manner—”coding” the possibilities it encounters into a format that makes it tremendously easier to compute an answer.
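
To see concretely how Poisson-style noise could make Bayesian computation cheap, consider the following minimal Python sketch. It is our illustration of the general “probabilistic population code” idea, not the study’s actual model, and every parameter in it is invented: because the simulated spike counts are Poisson, the log of the posterior over the stimulus is a weighted sum of the counts, so fusing two noisy cues amounts to simple addition.

    import numpy as np

    # Illustrative sketch only -- not the study's actual model. Neurons with
    # Gaussian tuning curves emit Poisson spike counts; the Poisson
    # log-likelihood of a stimulus s is then LINEAR in those counts, so
    # combining two independent cues reduces to adding population responses.

    rng = np.random.default_rng(seed=1)
    preferred = np.linspace(-10, 10, 101)   # preferred stimuli of 101 neurons
    s_grid = np.linspace(-10, 10, 401)      # candidate stimulus values

    def rates(s, gain):
        """Mean firing rates of the population for stimulus s."""
        return gain * np.exp(-0.5 * ((s - preferred) / 2.0) ** 2)

    def log_posterior(counts, gain):
        """log P(s | counts) up to a constant, for independent Poisson neurons."""
        F = np.stack([rates(s, gain) for s in s_grid])        # (401, 101)
        return (counts * np.log(F + 1e-12) - F).sum(axis=1)   # linear in counts

    s_true = 1.5
    r1 = rng.poisson(rates(s_true, gain=5))    # noisy response to a weak cue
    r2 = rng.poisson(rates(s_true, gain=20))   # noisy response to a reliable cue

    lp1, lp2 = log_posterior(r1, 5), log_posterior(r2, 20)
    print("cue 1 estimate:   ", s_grid[lp1.argmax()])
    print("cue 2 estimate:   ", s_grid[lp2.argmax()])
    print("combined estimate:", s_grid[(lp1 + lp2).argmax()])  # fusion by addition

The combined estimate is dominated by the more reliable cue, which is exactly the kind of weighting by relative worth that Pouget describes.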

Pouget now prefers to call the noise “variability.” Our neurons are responding to the light, sounds, and other sensory information from the world around us. But if we want to do something, such as jump over a stream, we need to extract data that is not inherently part of that information. We need to process all the variables we see, including how wide the stream appears, what the consequences of falling in might be, and how far we know we can jump. Each neuron responds to a particular variable and the brain will decide on a conclusion about the whole set of variables using Bayesian inference.

As you reach your decision, you’d have a lot of trouble articulating most of the variables your brain just processed for you. Similarly, intuition may be less a burst of insight than a rough consensus among your neurons.

Pouget and his team are now extending their work across the entire cortex, because every part of our highly developed cortex displays a similar underlying Bayes-optimal structure.

“If the structure is the same, that means there must be something fundamentally similar among vision, movement, reasoning, loving—anything that takes place in the human cortex,” says Pouget. “The way you learn language must be essentially the same as the way a doctor reasons out a diagnosis, and right now our lab is pushing hard to find out exactly how that noise makes all these different aspects of being human possible.”

Pouget’s work still has its skeptics, but this, his fourth paper in Nature Neuroscience on the topic, is starting to win converts.

“If you ask me, this is the coming revolution,” says Pouget. “It hit machine learning and cognitive science, and I think it’s just hitting neuroscience. In 10 or 20 years, I think the way everybody thinks about the brain is going to be in these terms.”

Not all of Pouget’s neurons are in agreement, however.

“…but I’ve been wrong before,” he shrugs.

 

Source:  phys.org

Coffee protein mimics effects of morphine

Brazilian scientists have discovered a protein in coffee that has effects similar to those of the pain reliever morphine, researchers at the state University of Brasilia (UnB) and state-owned Brazilian Agricultural Research Corporation Embrapa said Saturday.

Embrapa said its genetics and biotech division, teaming up with UnB scientists, had discovered “previously unknown protein fragments” with morphine-like effects in that they possess “analgesic and mildly tranquilizing” qualities.

The company added that tests on laboratory mice showed that the opioid peptides, which are naturally occurring biological molecules, appeared to have a longer-lasting effect on the mice than morphine itself.

Embrapa said the discovery has “biotechnological potential” for the health foods industry and could also help to alleviate stress in animals bound for the slaughterhouse.

In 2004, Embrapa managed to sequence coffee’s functional genome, a major step in efforts by the firm and UnB to combine coffee genes with a view to improving bean quality.

Imaging test for autism spectrum disorder

Virginia Tech Carilion Research Institute scientists have developed a brain-imaging technique that may be able to identify children with autism spectrum disorder in just two minutes.

This test, while far from being used as the clinical standard of care, offers promising diagnostic potential once it undergoes more research and evaluation.

“Our brains have a perspective-tracking response that monitors, for example, whether it’s your turn or my turn,” said Read Montague, the Virginia Tech Carilion Research Institute professor who led the study.

“This response is removed from our emotional input, so it makes a great quantitative marker,” he said. “We can use it to measure differences between people with and without autism spectrum disorder.”

The finding, expected to be published online next week in Clinical Psychological Science, demonstrates that the perspective-tracking response can be used to determine whether someone has autism spectrum disorder.

Usually, diagnosis – an unquantifiable process based on clinical judgment – is time consuming and trying on children and their families. That may change with this new diagnostic test.

The path to this discovery has been a long, iterative one. In a 2006 study by Montague and others, pairs of subjects had their brains scanned using functional magnetic resonance imaging, or fMRI, as they played a game requiring them to take turns.

From those images, researchers found that the middle cingulate cortex became more active when it was the subject’s turn.

“A response in that part of the brain is not an emotional response, and we found that intriguing,” said Montague, who also directs the Computational Psychiatry Unit at the Virginia Tech Carilion Research Institute and is a professor of physics at Virginia Tech. “We realized the middle cingulate cortex is responsible for distinguishing between self and others, and that’s how it was able to keep track of whose turn it was.”

That realization led the scientists to investigate how the middle cingulate cortex response differs in individuals at different developmental levels. In a 2008 study, Montague and his colleagues asked athletes to watch a brief clip of a physical action, such as kicking a ball or dancing, while undergoing functional MRI.

The athletes were then asked either to replay the clips in their mind, like watching a movie, or to imagine themselves as participants in the clips.

“The athletes had the same responses as the game participants from our earlier study,” Montague said. “The middle cingulate cortex was active when they imagined themselves dancing – in other words, when they needed to recognize themselves in the action.”

In the 2008 study, the researchers also found that in subjects with autism spectrum disorder, the more subdued the response, the more severe the symptoms.

Montague and his team hypothesized that a clear biomarker for self-perspective exists and that they could track it using functional MRI. They also speculated that the biomarker could be used as a tool in the clinical diagnosis of people with autism spectrum disorder.

In 2012, the scientists designed another study to see whether they could elicit a brain response to help them compute the unquantifiable. And they could: By presenting self-images while scanning the brains of adults, they elicited the self-perspective response they had previously observed in social interaction games.

In the current study, with children, subjects were shown 15 images of themselves and 15 images of a child matched for age and gender for four seconds per image in a random order.
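
As a concrete illustration, here is a hypothetical Python sketch of that presentation schedule; the file names and the printing loop are invented, and only the image counts and the four-second duration come from the study as reported.

    import random

    # Hypothetical sketch of the stimulus schedule described above: 15 images
    # of the child and 15 of a matched child, 4 seconds each, random order.
    self_images = [f"self_{i:02d}.png" for i in range(15)]     # invented names
    other_images = [f"other_{i:02d}.png" for i in range(15)]   # invented names

    schedule = self_images + other_images
    random.shuffle(schedule)

    DURATION_S = 4.0
    for i, image in enumerate(schedule):
        print(f"t = {i * DURATION_S:5.1f} s  ->  {image}")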

Like the control adults, the control children had a high response in the middle cingulate cortex when viewing their own pictures. In contrast, children with autism spectrum disorder had a significantly diminished response.

Importantly, Montague’s team could detect this difference in individuals using only a single image.

Montague and his group realized they had developed a single-stimulus functional MRI diagnostic technique. The single-stimulus part is important, Montague points out, as it enables speed. Children with autism spectrum disorder cannot stay in the scanner for long, so the test must be quick.

“We went from a slow, average depiction of brain activity in a cognitive challenge to a quick test that is significantly easier for children to do than spend hours under observation,” Montague said. “The single-stimulus functional MRI could also open the door to developing MRI-based applications for screening of other cognitive disorders.”

By mapping psychological differences through brain scans, scientists are adding a critical component to the typical process of neuropsychiatric diagnosis – math.

Montague has been a pioneering figure in this field, for which he coined the term “computational psychiatry.” The idea is that scientists can link mental disorders to the disrupted mechanisms of neural tissue through mathematical approaches. Doctors can then use measurable data for earlier diagnosis and treatment.

An earlier diagnosis can also have a tremendous impact on the children and their families, Montague said.

“The younger children are at the time of diagnosis,” Montague said, “the more they can benefit from a range of therapies that can transform their lives.”

ASPARTAME DISEASE: AN FDA-APPROVED EPIDEMIC

“Diet” products containing the chemical sweetener aspartame can have multiple neurotoxic, metabolic, allergenic, fetal and carcinogenic effects. My database of 1,200 aspartame reactors – based on logical diagnostic criteria, including predictable recurrence on rechallenge – is reviewed.

The existence of aspartame disease continues to be denied by the FDA and powerful corporate entities. Its magnitude, however, warrants removal of this chemical as an “imminent public health threat.” The use of aspartame products by over two-thirds of the population, and inadequate evaluation by corporate-partial investigators underscore this opinion.

About Aspartame

The FDA approved aspartame as a low-nutritive sweetener for use in solid form during 1981, and in soft drinks during 1983. It is a synthetic chemical consisting of two amino acids, phenylalanine (50 percent) and aspartic acid (40 percent), and a methyl ester (10 percent) that promptly becomes free methyl alcohol (methanol; wood alcohol). The latter is universally considered a severe poison.

Senior FDA scientists and consultants vigorously protested approving the release of aspartame products. Their objections related to disturbing findings in animal studies (especially the frequency of brain tumors), seemingly flawed experimental data, and the absence of extensive pre-marketing trials on humans using real-world products over prolonged periods.

Aspartame reactions may be caused by the compound itself, its three components, stereoisomers of the amino acids, toxic breakdown products (including formaldehyde), or combinations thereof. They often occur in conjunction with severe caloric restriction and excessive exercise to lose weight.

Various metabolic and physiologic disturbances explain the clinical complications. Only a few are listed:

  • Damage to the retina or optic nerves is largely due to methyl alcohol exposure. Unlike most animals, humans cannot efficiently metabolize it.
  • High concentrations of phenylalanine and aspartic acid occur in the brain after aspartame intake, unlike the modest levels of amino acids following conventional protein consumption.
  • Aspartame alters the function of major amino acid-derived neurotransmitters, especially in obese persons and after carbohydrate intake.
  • Phenylalanine stimulates the release of insulin and growth hormone.
  • The ambiguous signals to the satiety center following aspartame intake may result either in increased food consumption or severe anorexia.
  • Large amounts of the radioactive-carbon label from oral aspartame intake have been detected in DNA.

The current “acceptable daily intake” (ADI) of 50 mg aspartame/kg body weight makes no sense. It represents the projection of animal studies based on lifetime intake! This was clearly stated by previous FDA Commissioner Dr. Frank Young during a U.S. Senate hearing on November 3, 1987. Furthermore, it disregards the usual 100-fold safety factor used by the FDA as a guideline for regulated food additives. The maximum daily intake tolerated by most reactors in my series, based on the predictable recurrence of induced symptoms and signs, ranged from 10 to 18.3 mg/kg.
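
To put those thresholds in everyday terms, here is a back-of-the-envelope calculation in Python. The figure of roughly 180 mg of aspartame per 355 ml can of diet soda is a commonly cited estimate and our assumption, not a number from this article.

    body_weight_kg = 70
    mg_per_can = 180   # assumed aspartame content of one 355 ml diet soda

    adi_mg = 50 * body_weight_kg             # FDA ADI of 50 mg/kg -> 3,500 mg/day
    reactor_low_mg = 10 * body_weight_kg     # lowest tolerated dose in the series
    reactor_high_mg = 18.3 * body_weight_kg  # highest tolerated dose in the series

    print(f"ADI for a 70 kg adult: {adi_mg} mg (~{adi_mg / mg_per_can:.0f} cans/day)")
    print(f"Reactor thresholds: {reactor_low_mg:.0f}-{reactor_high_mg:.0f} mg "
          f"(~{reactor_low_mg / mg_per_can:.1f}-{reactor_high_mg / mg_per_can:.1f} cans/day)")

    # Per the article, 10% of any aspartame dose becomes free methanol:
    print(f"Methanol released per can: {0.10 * mg_per_can:.0f} mg")

By the author’s own numbers, then, sensitive individuals would react at a small fraction of the official limit.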

“We had better be sure that the questions that have been raised about the safety of this product are answered. I must say at the outset, this product was approved by the FDA in circumstances that can only be described as troubling.”

I have devoted more than two decades to analyzing aspartame disease, a widespread but largely ignored disorder. Its existence continues to be reflexively denied by the Food and Drug Administration (FDA), the American Medical Association (AMA), and many public health/regulatory organizations.

The medical profession and consumers have been assured by the Council on Scientific Affairs of the AMA and the Centers for Disease Control (CDC) that aspartame is “completely safe.” Moreover, the impression is left that reports of serious reactions are a “health rumor” fabrication … notwithstanding the CDC report in 1984 of 649 aspartame reactors with many attributed disorders.

Aspartame Intake

Many reactors consumed prodigious amounts of aspartame, especially during hot weather. Conversely, some experienced convulsions, headache, or other severe symptoms after exposure to small amounts (e.g., chewing aspartame gum; placing an aspartame strip on the tongue; babies while breast-feeding as the mother drank an aspartame beverage).

Interval Between Cessation and Improvement

Nearly two-thirds of aspartame reactors experienced symptomatic improvement within two days after avoiding aspartame. With continued abstinence, their complaints generally disappeared.

Causation

The causative role of aspartame products has been repeatedly shown by (a) the prompt improvement of symptoms (grand mal seizures, headache, itching, rashes, severe gastrointestinal reactions) after stopping aspartame products, and (b) their recurrence within minutes or hours after resuming them. The latter included self-testing on numerous occasions, inadvertent ingestion, and formal rechallenge.

Some aspartame reactors with convulsions purposefully rechallenged themselves on one or several occasions “to be absolutely certain.” This was unique among six pilots who had lost their licenses for unexplained seizures while consuming aspartame products. (All had been in otherwise excellent health.) They sought to have their licenses reinstated by such objective confirmation on rechallenge.

High-Risk Individuals

These groups include pregnant and lactating women, young children, older persons, those at risk for phenylketonuria (PKU), the relatives of aspartame reactors (see above), and patients with liver disease, iron-deficiency anemia, kidney impairment, migraine, diabetes, hypoglycemia, and hypothyroidism.

 

Source: wnho.net  By H. J. Roberts

Schizophrenia is eight different diseases

New research shows that schizophrenia is not a single disease, but a group of eight distinct disorders, each caused by changes in clusters of genes that lead to different sets of symptoms.

The finding sets the stage for scientists to develop better ways to diagnose and treat schizophrenia, a mental illness that can be devastating when not adequately managed, says C. Robert Cloninger, co-author of the study published Monday in the American Journal of Psychiatry.

“We are really opening a new era of psychiatric diagnosis,” says Cloninger, professor of psychiatry and genetics at the Washington University School of Medicine in St. Louis. Cloninger says he hopes his work will “allow for the development of a personalized diagnosis, opening the door to treating the cause, rather than just the symptoms, of schizophrenia.”

Cloninger and colleagues found that certain genetic profiles matched particular symptoms. While people with one genetic cluster have odd and disorganized speech – what is sometimes called “word salad” – people with another genetic profile hear voices, according to the study, funded by the National Institutes of Health.

Some genetic clusters gave people higher risks of the disease than others, according to the study, which compared the DNA of 4,200 people with schizophrenia to that of 3,800 healthy people.

One set of genetic changes, for example, confers a 95% chance of developing schizophrenia. In the new study, researchers describe a woman with this genetic profile who developed signs of the disorder by age 5, when she taped over the mouths of her dolls to make them stop whispering to her and calling her name. Another patient – whose genetic profile gave her a 71% risk of schizophrenia – experienced a more typical disease course and began hearing voices at age 17.

The average person has less than a 1% risk of developing schizophrenia, Cloninger says.

Psychiatrists such as Stephen Marder describe the study as a step forward. Today, doctors diagnose patients with mental illness through a process akin to a survey, asking about the person’s family history and symptoms, says Marder, a professor at the David Geffen School of Medicine at the University of California-Los Angeles.

“It underlines that the way we diagnose schizophrenia is relatively primitive,” Marder says.

Patients may wait years for an accurate diagnosis, and even longer to find treatments that help them without causing intolerable side effects.

Doctors have long known that schizophrenia can run in families, says Robert Freedman, editor in chief of the American Journal of Psychiatry and chair of psychiatry at the University of Colorado Anschutz Medical Campus. If one identical twin has schizophrenia, for example, there is an 80% chance that the other twin has the disease, as well.

In the past, doctors looked for single genes that might cause schizophrenia, without real success, Freedman says.

The new paper suggests that genes work together like a winning or losing combination of cards in poker, Freedman says. “This shows us that there are some very bad hands out there,” Freedman says.

 In some cases – in which a genetic profile conveys close to a 100% risk of schizophrenia – people may not be able to escape the disease, Cloninger says. But if doctors could predict who is at high risk, they might also be able to tailor an early intervention to help a patient better manage their condition, such as by managing stress.

Doctors don’t yet know why one person with a 70% risk of schizophrenia develops the disease and others don’t, Cloninger says. It’s possible that environment plays a key role, so that a child with a supportive family and good nutrition might escape the diagnosis, while someone who experiences great trauma or deprivation might become very ill.

The study also reflects how much has changed in the way that scientists think about the genetic causes of common diseases, Marder says. He notes that diseases caused by a single gene – such as sickle-cell anemia and cystic fibrosis – affect very few people. Most common diseases, such as cancer, are caused by combinations of genes. Even something as apparently simple as height is caused by combinations of genes, he says.

Doctors have known for years that breast cancer is not one disease, for example, but at least half a dozen diseases driven by different genes, says study co-author Igor Zwir, research associate in psychiatry at Washington University. Doctors today have tests to predict a woman’s risk of some types of breast cancer, and other tests that help them select the most effective drugs.

Those sorts of tests could be extremely helpful for people with schizophrenia, who often try two or three drugs before finding one that’s effective, Cloninger says.

“Most treatment today is trial and error,” Cloninger says.

If doctors could pinpoint which drugs could be the most effective, they might be able to use lower doses, producing fewer of the bothersome side effects that lead many patients to stop taking their medication, Cloninger says.

 

Source:  usatoday.com

The $1 Million Race For The Cure To End Aging

The hypothesis is so absurd it seems as though it popped right off the pages of a science-fiction novel. Some scientists in Palo Alto are offering a $1 million prize to anyone who can end aging. “Based on the rapid rate of biomedical breakthroughs, we believe the question is not if we can crack the aging code, but when will it happen,” says Keith Powers, director of the Palo Alto Longevity Prize.

It’s a fantastical idea: curing the one thing we will all surely die of if nothing else gets us before that. I sat down with Aubrey de Grey, the chief science officer of the SENS Research Foundation and co-author of “Ending Aging,” to discuss this very topic a few days back. According to him, ending aging comes with the promise to not just stop the hands of time, but to actually reverse the clock. We could, according to him, actually choose the age we’d like to exist at for the rest of our (unnatural?) lives. But we are far off from possibly seeing this happen in our lifetime, says de Grey. “With sufficient funding we have a 50/50 chance of getting this all working within the next 25 years, but it could also happen in the next 100,” he says.

If you ask Ray Kurzweil, life extension expert, futurist and part-time adviser to Google’s somewhat stealth Calico project, we’re actually tip-toeing upon the cusp of living forever. “We’ll get to a point about 15 years from now where we’re adding more than a year every year to your life expectancy,” he told the New York Times in early 2013. He also wrote in the book he co-authored with Terry Grossman, M.D., that “Immortality is within our grasp.” That’s a bit optimistic to de Grey (the two are good friends), but he’s not surprised this prize is coming out of Silicon Valley. “Things are changing here first. We have a high density of visionaries who like to think high.”

And he believes much of what Kurzweil says is true with the right funding. “Give me large amounts of money to get the research to happen faster,” says de Grey. He then points out that Google’s Calico funds are virtually unlimited.

Whether it’s 15, 25 or even 100 years off, we need to spur a revolution in aging research, according to Joon Yun, one of the sponsors of the prize. “The aim of the prize is to catalyze that revolution,” says Yun. His personal assistant actually came up with the initial idea. She just happens to be an acquaintance of Wendy Schmidt, wife of Google’s Eric Schmidt. But it was the passing of Yun’s 68-year-old father-in-law and some conversations with his friends that got him thinking about how to take on aging as a whole.

The Palo Alto Prize is also working with a number of angel investors, venture capital firms, corporate venture arms, institutions and private foundations within Silicon Valley to create health-related incentive prize competitions in the future. This first $1 million prize comes from Yun’s own pockets.

The initial prize will be divided into two $500,000 awards. Half a million dollars will go to the first team to demonstrate that it can restore heart rate variability (HRV) to that of a young adult. The other half of the $1 million will be awarded to the first team that can extend lifespan by 50 percent. So far 11 teams from all over the world have signed up for the challenge.
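
The article does not spell out how heart rate variability would be scored, so as a concrete, hedged example, here is one standard HRV metric, RMSSD (the root mean square of successive differences between heartbeats), in Python; the RR intervals below are invented.

    import math

    def rmssd(rr_ms):
        """Root mean square of successive differences between RR intervals (ms)."""
        diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
        return math.sqrt(sum(d * d for d in diffs) / len(diffs))

    young = [812, 845, 790, 860, 805, 838, 795]   # more beat-to-beat variation
    older = [802, 806, 799, 804, 801, 803, 800]   # flatter rhythm

    print(f"RMSSD (young-like heart): {rmssd(young):5.1f} ms")
    print(f"RMSSD (older-like heart): {rmssd(older):5.1f} ms")

Higher values indicate the more variable, “younger” rhythm a winning team would presumably have to restore.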

Source:  techcrunch.com

Monsanto’s Dark Connections to the “Military Industrial Complex”

A US peer-reviewed study conducted last year and published in the scientific journal Entropy linked Monsanto’s herbicide Roundup – the most popular weed killer in the world – to infertility, cancers and Parkinson’s disease, among other ailments. The authors of the study were Stephanie Seneff, a research scientist at the Massachusetts Institute of Technology, and Anthony Samsel, a retired science consultant from Arthur D. Little, Inc. and a former private environmental government contractor. The main ingredient in Roundup is the “insidious” glyphosate, which the study found to be a deeply harmful chemical:

“Glyphosate enhances the damaging effects of other food borne chemical residues and environmental toxins. Negative impact on the body is insidious and manifests slowly over time as inflammation damages cellular systems throughout the body […] Consequences are most of the diseases and conditions associated with a Western diet, which include gastrointestinal disorders, obesity, diabetes, heart disease, depression, autism, infertility, cancer and Alzheimer’s disease” (Samsel and Seneff, 2013).

The Executive Director of the Institute for Responsible Technology (IRT), Jeffrey M. Smith, discovered a link between gluten disorders and GM foods in a study he conducted last year. Gluten disorders have risen sharply over the past two decades, which correlates with the introduction of GM foods into the food supply. Smith asserts that GM foods – including soy and corn – are possible “environmental triggers” that have contributed to the rapid increase of gluten disorders, which affect close to 20 million Americans today:

“Bt-toxin, glyphosate, and other components of GMOs, are linked to five conditions that may either initiate or exacerbate gluten-related disorders […] If glyphosate activates retinoic acid, and retinoic acid activates gluten sensitivity, eating GMOs soaked with glyphosate may play a role in the onset of gluten-related disorders” (Smith, 2013).

One of the more damning studies on the safety of GM foods was led by biologist Dr. Gilles-Eric Seralini of the University of Caen, and was the first to examine the long-term effects on rats of consuming Monsanto’s GM corn and its Roundup herbicide. The study was conducted over a 2-year period – the average life-span of a rat – as opposed to Monsanto’s usual period of 90 days. The peer-reviewed study found horrifying effects on the rats’ health, with a 200% to 300% increase in large tumours, severe organ damage to the kidney and liver, and premature death in 70% of the female rats. The first tumours appeared only 4 to 7 months into the research, highlighting the need for longer trials.

Initially the study was published in the September issue of Food and Chemical Toxicology, but it was later retracted after the publisher deemed the study “inconclusive”, although there was no suspicion of fraud or intentional deceit. Dr. Seralini strongly protested the decision and believed “economic interests” were behind it, as a former Monsanto employee had joined the journal. Monsanto is infamous for employing swaths of lobbyists to influence the political, scientific and administrative decisions relating to the organisation, and this incident was a major whitewash by the GM producer to stop the barrage of negative media reports relating to the toxic effects of their products. The study led by Dr. Seralini was later published in a less renowned journal, Environmental Sciences Europe, which reignited fears over the safety of GM foods.

France has recently implemented a ban on Monsanto produced maize (MON810) – a different variety of the Monsanto GM corn that was discussed in the study above (NK603) – citing environmental concerns as the reason for the ban. France joins a list of countries including Italy and Poland who have imposed bans on GM corn over the past few years. Additionally, Russian MPs have introduced a draft into parliament which could see GM producers punished as terrorists and criminally prosecuted if they are deemed to have harmed the environment or human health. In India, many of the GM seeds sold to Indian farmers under the pretext of greater harvests failed to deliver, which led to an estimated 200,000 Indian farmers committing suicide due to an inability to repay debts.

There is growing evidence to support the theory that bee colonies are collapsing due to GM crops being used in agriculture, with America seeing the largest fall in bee populations in recent years. Resistance to Monsanto and GM foods has been growing in recent years after the launch of the worldwide ‘March Against Monsanto’ in 2012, which organises global protests against the corporation and its toxic products within 52 countries. Monsanto was also voted the ‘most evil corporation’ of 2013 in a poll conducted by the website Natural News, beating the Federal Reserve and British Petroleum to take the top position.

Monsanto Produced and Supplied Toxic Agent Orange

Researching Monsanto’s past reveals a very dark history that has been well documented for years. During the Vietnam War, Monsanto was contracted to produce and supply the US government with a malevolent chemical for military application. Along with other chemical giants at the time, such as Dow Chemical, Monsanto produced the military herbicide Agent Orange, which contained high quantities of the deadly chemical dioxin. Between 1961 and 1971, the US Army sprayed between 50 and 80 million litres of Agent Orange across Vietnamese jungles, forests and strategically advantageous positions. It was deployed in order to destroy forests and fertile lands which provided cover and food for the opposing troops. The fallout was devastating: Vietnam estimates that 400,000 people died or were maimed by Agent Orange, 500,000 children were born with birth defects, and up to 2 million people suffer from cancer or other diseases. Millions of US veterans were also exposed and many have developed similar illnesses. The consequences are still felt and are expected to persist for a century, as cancer, birth defects and other diseases are passed down through the generations.

Today, deep connections exist between Monsanto, the ‘Military Industrial Complex’ and the US Government which have to be documented to understand the nature of the corporation. On Monsanto’s Board of Directors sits the former Chairman of the Board and CEO of the giant war contractor Lockheed Martin, Robert J. Stevens, who was also appointed in 2012 by Barack Obama to the Advisory Committee for Trade Policy and Negotiations. As well as epitomising the revolving door that exists between the US Government and private trans-national corporations, Stevens is a member of the parallel government in the US, the Council on Foreign Relations (CFR). A second board member at Monsanto is Gwendolyn S. King, who also sits on the board of Lockheed Martin, where she chairs the Orwellian ‘Ethics and Sustainability Committee’. Individuals who are veterans of the corporate war industry should not be allowed control over any population’s food supply! Additionally, Monsanto board member Dr. George H. Poste is a former member of the Defense Science Board and the Health Board of the U.S. Department of Defense, as well as a Fellow of the Royal Society and a member of the CFR.

Bill Gates made headlines in 2010 when The Bill and Melinda Gates Foundation bought 500,000 Monsanto shares worth a total of $23 million, raising questions as to why his foundation would invest in such a malign corporation. William H. Gates Sr. – Bill’s father – is the former head of Planned Parenthood and a strong advocate of eugenics – the philosophy that there are superior and inferior types of human beings, with the inferior type often sterilised or culled under the pretext of being a plague on society. During his 2010 TED speech, Bill Gates revealed his desire to reduce the population of the planet by “10 or 15 percent” in the coming years through such technologies as “vaccines”:

“The world today has 6.8 billion people. That’s heading up to about 9 billion. Now if we do a really good job on new vaccines, health care, reproductive health services, we could lower that by perhaps 10 or 15 percent” (4.37 into the video).

In 2006, Monsanto acquired a company that has developed – in partnership with the US Department of Agriculture – what is popularly termed terminator seeds, a future major trend in the GM industry. Terminator Seeds or suicide seeds are engineered to become sterile after the first harvest, destroying the ancient practice of saving seeds for future crops. This means farmers are forced to buy new seeds every year from Big-Agri, which produces high debts and a form of servitude for the farmers.

 

Source:  globalresearch.ca

FRANCIS CRICK high on LSD discovering structure of DNA


FRANCIS CRICK, the Nobel Prize-winning father of modern genetics, was under the influence of LSD when he first deduced the double-helix structure of DNA nearly 50 years ago.

The abrasive and unorthodox Crick and his brilliant American co-researcher James Watson famously celebrated their eureka moment in March 1953 by running from the now legendary Cavendish Laboratory in Cambridge to the nearby Eagle pub, where they announced over pints of bitter that they had discovered the secret of life.

Crick, who died ten days ago, aged 88, later told a fellow scientist that he often used small doses of LSD, then an experimental drug used in psychotherapy, to boost his powers of thought. He said it was LSD, not the Eagle’s warm beer, that helped him to unravel the structure of DNA, the discovery that won him the Nobel Prize.

Despite his Establishment image, Crick was a devotee of novelist Aldous Huxley, whose accounts of his experiments with LSD and another hallucinogen, mescaline, in the essays The Doors Of Perception and Heaven And Hell became cult texts for the hippies of the Sixties and Seventies. In the late Sixties, Crick was a founder member of Soma, a legalise-cannabis group named after the drug in Huxley’s novel Brave New World. He even put his name to a famous letter to The Times in 1967 calling for a reform in the drugs laws.

It was through his membership of Soma that Crick inadvertently became the inspiration for the biggest LSD manufacturing conspiracy the world has ever seen: the multimillion-pound drug factory in a remote farmhouse in Wales that was smashed by the Operation Julie raids of the late Seventies.

Crick’s involvement with the gang was fleeting but crucial. The revered scientist had been invited to the Cambridge home of freewheeling American writer David Solomon – a friend of hippie LSD guru Timothy Leary – who had come to Britain in 1967 on a quest to discover a method for manufacturing pure THC, the active ingredient of cannabis.

It was Crick’s presence in Solomon’s social circle that attracted a brilliant young biochemist, Richard Kemp, who soon became a convert to the attractions of both cannabis and LSD. Kemp was recruited to the THC project in 1968, but soon afterwards devised the world’s first foolproof method of producing cheap, pure LSD. Solomon and Kemp went into business, manufacturing acid in a succession of rented houses before setting up their laboratory in a cottage on a hillside near Tregaron, Carmarthenshire, in 1973. It is estimated that Kemp manufactured drugs worth £2.5 million – an astonishing amount in the Seventies – before police stormed the building in 1977 and seized enough pure LSD and its constituent chemicals to make two million LSD ‘tabs’.

The arrest and conviction of Solomon, Kemp and a string of co-conspirators dominated the headlines for months. I was covering the case as a reporter at the time, and it was then that I met Kemp’s close friend, Garrod Harker, whose home had been raided by police but who had not been arrested. Harker told me that Kemp and his girlfriend Christine Bott – by then in jail – were hippie idealists who were completely uninterested in the money they were making.

They gave away thousands to pet causes such as the Glastonbury pop festival and the drugs charity Release.

‘They have a philosophy,’ Harker told me at the time. ‘They believe industrial society will collapse when the oil runs out and that the answer is to change people’s mindsets using acid. They believe LSD can help people to see that a return to a natural society based on self-sufficiency is the only way to save themselves.

‘Dick Kemp told me he met Francis Crick at Cambridge. Crick had told him that some Cambridge academics used LSD in tiny amounts as a thinking tool, to liberate them from preconceptions and let their genius wander freely to new ideas. Crick told him he had perceived the double-helix shape while on LSD.

‘It was clear that Dick Kemp was highly impressed and probably bowled over by what Crick had told him. He told me that if a man like Crick, who had gone to the heart of human existence, had used LSD, then it was worth using. Crick was certainly Dick Kemp’s inspiration.’ Shortly afterwards I visited Crick at his home, Golden Helix, in Cambridge.

He listened with rapt, amused attention to what I told him about the role of LSD in his Nobel Prize-winning discovery. He gave no intimation of surprise. When I had finished, he said: ‘Print a word of it and I’ll sue.’

 

Source:  miqel.com

Cancer Now Leading Cause of Death in China

Cancer is now the leading cause of death in China. Chinese Ministry of Health data implicate cancer in close to a quarter of all deaths countrywide. As is common with many countries as they industrialize, the usual plagues of poverty — infectious diseases and high infant mortality — have given way to diseases more often associated with affluence, such as heart disease, stroke, and cancer.

While this might be expected in China’s richer cities, where bicycles are fast being traded in for cars and meat consumption is climbing, it also holds true in rural areas. In fact, reports from the countryside reveal a dangerous epidemic of “cancer villages” linked to pollution from some of the very industries propelling China’s explosive economy. By pursuing economic growth above all else, China is sacrificing the health of its people, ultimately risking future prosperity.

Lung cancer is the most common cancer in China. Deaths from this typically fatal disease have shot up nearly fivefold since the 1970s. In China’s rapidly growing cities, like Shanghai and Beijing, where particulates in the air are often four times higher than in New York City, nearly 30 percent of cancer deaths are from lung cancer.

Dirty air is associated with not only a number of cancers, but also heart disease, stroke, and respiratory disease, which together account for over 80 percent of deaths countrywide. According to the Chinese Centre for Disease Control and Prevention, the burning of coal is responsible for 70 percent of the emissions of soot that clouds out the sun in so much of China; 85 percent of sulfur dioxide, which causes acid rain and smog; and 67 percent of nitrogen oxide, a precursor to harmful ground level ozone. Coal burning is also a major emitter of carcinogens and mercury, a potent neurotoxin. Coal ash, which contains radioactive material and heavy metals, including chromium, arsenic, lead, cadmium, and mercury, is China’s number one source of solid industrial waste. The toxic ash that is not otherwise used in infrastructure or manufacturing is stored in impoundments, where it can be caught by air currents or leach contaminants into the groundwater.

Coal pollution combined with emissions from China’s burgeoning industries and the exhaust of a fast-growing national vehicle fleet are plenty enough to impair breathing and jeopardize health. But that does not stop over half the men in China from smoking tobacco. Smoking is far less common among women; less than 3 percent light up. Still, about 1 in 10 of the estimated 1 million Chinese who die from smoking-related diseases each year are nonsmokers exposed to carcinogenic secondhand smoke.

 

Source:  sustainablog.org

Too Slim at Midlife May Boost Dementia Risk

Being too thin in middle age might be bad for brain health later in life, a new study suggests.

Researchers found that people who were underweight in their 40s, 50s and 60s were 34 percent more likely to be diagnosed with dementia up to 15 years later, compared with similarly aged men and women who were a healthy weight.

Exactly why being underweight – defined as having a body mass index (BMI) of less than 20 – in middle age is linked with dementia is unclear and requires further investigation, said study co-author Dr. Nawab Qizilbash, a clinical epidemiologist and the head of OXON Epidemiology, a research organization in London. But he speculates that factors such as diet, exercise, frailty, weight changes and deficiencies in vitamins D and E might play a role.
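
For reference, BMI is simply weight in kilograms divided by height in metres squared; the two example people in this Python snippet are invented to show where the study’s under-20 cutoff falls.

    def bmi(weight_kg, height_m):
        """Body mass index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    print(f"58 kg at 1.75 m -> BMI {bmi(58, 1.75):.1f} (underweight by the study's cutoff)")
    print(f"75 kg at 1.75 m -> BMI {bmi(75, 1.75):.1f} (within the healthy range)")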

The study, published online April 10 in the journal The Lancet Diabetes & Endocrinology, analyzed data from nearly 2 million people ages 40 and older in the United Kingdom.

None of the people had dementia when the study began, but nearly 46,000 were diagnosed with it during the follow-up period of up to 20 years.

In a surprising finding that contradicts some previous studies, the researchers found that being overweight or obese in middle age actually appeared to protect brain health.

In fact, people who were the heaviest at midlife, with a BMI of 40 or higher, had a 29 percent lower risk of developing dementia than people whose weight fell into a healthy range, according to the study.

“Contrary to the prevailing — but not unanimous — view, people who are overweight or obese in middle age appear not to be at higher risk of dementia in old age,” Qizilbash said.

He said these findings were unexpected, and although the research team performed many different analyses to see if they could find an explanation for the results, so far they have not.

Qizilbash said some next steps in this research include understanding the influence of weight changes, such as recent weight loss in a person who may not have previously been underweight, on the risk of dementia.

He also wants to look into whether being overweight or obese has an overall positive effect on dementia risk, given that someone who weighs more may not live long enough to reap its possible brain-protective effects.

More research is also needed to determine how weight influences the risk of different types of dementia, such as Alzheimer’s disease, vascular disease and Lewy body disease, Qizilbash said.

 

Source:  livescience.com

500 Year Old Map Shatters Official Story

If conventional wisdom on the history of the human race is correct, then human civilization is not old enough, nor was it advanced enough, to account for many of the mysterious monolithic and archeological sites around the world. Places like Gobekli Tepe in Turkey, the Bosnian Pyramids, and Adam’s Calendar in South Africa, beg the same question: if human civilization is supposedly not old enough to have created all of these sites, then who, or what, had the capacity to create so many elaborate structures around the globe?

It is clear that our understanding of our own history is incomplete, and there is plenty of credible evidence pointing to the existence of intelligent and civilized cultures on Earth long before the first human cultures emerged from the Middle East around 4000BC. The Admiral Piri Reis world map of 1513 is part of the emerging more complete story of our history, one that challenges mainstream thinking in big ways.

Mapmaking is a complex and civilized task, thought to have emerged around 1000BC with the Babylonian clay tablets. Antarctica was officially first sighted by a Russian expedition in 1820 and is entirely covered in ice caps thought to have formed around 34-45 million years ago. Antarctica, therefore, should not appear on any map made before 1820, and any map that does show Antarctica should show its polar ice caps, which are supposedly millions of years old.

A world map made by Ottoman cartographer and military admiral, Piri Reis, casts some doubt on what we think we know about ancient civilization.

The Piri Reis map, which focuses on Western Africa, the East Coast of South America, and the North Coast of Antarctica, features the details of a coastline that many historians and geologists believe represents Queen Maud Land, that is, Antarctica. Remarkably, as represented in this map, the frigid continent was not covered in ice caps, but, rather, with dense vegetation. How could a map drawn in 1513 feature a continent that wasn’t discovered until 1820? And if the continent had in fact been discovered by one of the civilizations known to have emerged after 4000BC, why were the ice caps not on the map?

The paradoxes presented by the map were of little significance to the world until Charles Hapgood, a history professor from New Hampshire, USA, claimed that the information in the Piri Reis map supported a different view of geology and ancient history. Hapgood believed that the map verified his global geological theory, which explains how portions of Antarctica could have remained ice-free until 4000BC.

Hapgood’s presentation is so convincing that even famed theoretical physicist and philosopher Albert Einstein wrote the following supportive foreword to a book that Hapgood wrote in 1953:

“His idea is original, of great simplicity, and – if it continues to prove itself – of great importance to everything that is related to the history of the Earth’s surface.” -Albert Einstein

Unquestionably not a hoax, the map is certifiably authentic, but the information on the map is of mysterious origin. Piri Reis himself notes that the map was drawn from information sourced from other, older maps, charts and logs, many of which, Hapgood suggests, may have been copied and transcribed repeatedly since before the destruction of the Library of Alexandria in Egypt, which wiped out the literature of antiquity and vast cultural knowledge.

This hypothesis opens the door to the possibility that some forgotten ancient civilization had the capacity to voyage to the Antarctic, charting the earth, with the technology to make maps, sometime before the ice caps formed. A significant departure from our present understanding of our history.

The absence of the ice caps in the Piri Reis map is peculiar, and in 1960 Hapgood brought his theories on this to the attention of the United States Air Force. Hapgood asked, among other things, if the shape of the continent, as it appeared on the Piri Reis map, was at all similar to the shape of the continent under the ice, as revealed by recent Air Force testing of seismic data on the continent. Their answer was astonishing:

“…the geographical detail shown in the lower part of the map agrees very remarkably with the results of the seismic profile made across the top of the ice-cap by the Swedish-British Antarctic Expedition of 1949.

This indicates the coastline had been mapped before it was covered by the ice-cap.

The ice-cap in this region is now about a mile thick.

We have no idea how the data on this map can be reconciled with the supposed state of geographical knowledge in 1513.

Harold Z. Ohlmeyer
Lt. Colonel, USAF
Commander”

If Hapgood’s theory has merit, as even Einstein believed, then there was a period of time from around 13000BC to 6000BC when Antarctica was located more closely to the equator and was more tropical in climate, much like parts of South America. This was caused by a sudden shift of the earth’s entire lithosphere, he theorized, simultaneously moving all of the continents into their present position, a much different view than the widely accepted explanation offered by the plate tectonics theory.

If Antarctica had indeed been further North then than it presently is, and was not covered in ice only as recently as 6000BC, then who was around back then that could have mapped it, long before any known civilizations? And who could have done so long before the advent of the marine chronometer in the 18th century, which finally solved the problem of accurately tracking longitude on the high seas?

Had the entire Earth already been mapped by 4000BC, by a civilization that has been forgotten, as analysis of the Piri Reis map and the theories of Charles Hapgood suggest?

Source:  themindunleashed.org

AI already taking jobs from humans

FORGET Skynet. Hypothetical world-ending artificial intelligence makes headlines, but the hype ignores what’s happening right under our noses. Cheap, fast AI is already taking our jobs; we just haven’t noticed.

This isn’t dumb automation that can rapidly repeat identical tasks. It’s software that can learn about and adapt to its environment, allowing it to do work that used to be the exclusive domain of humans, from customer services to answering legal queries.

These systems don’t threaten to enslave humanity, but they do pose a challenge: if software that does the work of humans exists, what work will we do?

In the last three years, UK telecoms firm O2 has replaced 150 workers with a single piece of software. A large portion of O2’s customer service is now automatic, says Wayne Butterfield, who works on improving O2’s operations. “Sim swaps, porting mobile numbers, migrating from prepaid onto a contract, unlocking a phone from O2” – all are now automated, he says.

Humans used to manually move data between the relevant systems to complete these tasks, copying a phone number from one database to another, for instance. The user still has to call up and speak to a human, but now an AI does the actual work.

The AI is trained by watching humans do simple, repetitive database tasks. With enough training data, the AIs can then go to work on their own. “They navigate a virtual environment,” says Jason Kingdon, chairman of Blue Prism, the start-up which developed O2’s artificial workers. “They mimic a human. They do exactly what a human does. If you watch one of these things working it looks a bit mad. You see it typing. Screens pop up, you see it cutting and pasting.”
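
Blue Prism’s product is proprietary, but the pattern it automates, reading a value out of one system and typing it into another, is simple enough to sketch. The toy Python below is purely illustrative; the two “systems” are stand-in dictionaries, not O2’s real databases, and every name in it is invented.

```python
# Toy sketch of a robotic-process-automation task: copy a
# customer's phone number from a CRM record into a billing
# record, the kind of repetitive data shuffling the article
# says O2's software agents now handle. Both "systems" here
# are hypothetical stand-ins, not any real O2 infrastructure.

crm_system = {"cust_042": {"name": "A. Smith", "phone": "07700 900123"}}
billing_system = {"cust_042": {"name": "A. Smith", "phone": None}}

def port_phone_number(customer_id: str) -> None:
    """Mimic the human workflow: look up, copy, paste, verify."""
    phone = crm_system[customer_id]["phone"]               # "copy"
    billing_system[customer_id]["phone"] = phone           # "paste"
    assert billing_system[customer_id]["phone"] == phone   # "verify"

port_phone_number("cust_042")
print(billing_system["cust_042"])  # phone number now filled in
```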

One of the world’s largest banks, Barclays, has also dipped a toe into this specialised AI. It used Blue Prism to deal with the torrent of demands that poured in from its customers after UK regulators demanded that it pay back billions of pounds of mis-sold insurance. It would have been expensive to rely entirely on human labour to field the sudden flood of requests. Having software agents that could take some of the simpler claims meant Barclays could employ fewer people.

The back office work that Blue Prism automates is undeniably dull, but it’s not the limit for AI’s foray into office space. In January, Canadian start-up ROSS started using IBM’s Watson supercomputer to automate a whole chunk of the legal research normally carried out by entry-level paralegals.

Legal research tools already exist, but they don’t offer much more than keyword searches, which return a list of documents that may or may not be relevant. Combing through these for the argument a lawyer needs to make a case can take days.

ROSS returns precise answers to specific legal questions, along with a citation, just like a human researcher would. It also includes its level of confidence in its answer. For now, it is focused on questions about Canadian law, but CEO Andrew Arruda says he plans for ROSS to digest the law around the world.

Since its artificial intelligence is focused narrowly on the law, ROSS’s answers can be a little dry. Asked whether it’s OK for 20 per cent of the directors present at a directors’ meeting to be Canadian, it responds that no, that’s not enough. Under Canadian law, no directors’ meeting may go ahead with less than 25 per cent of the directors present being Canadian. ROSS’s source? The Canada Business Corporations Act, which it scanned and understood in an instant to find the answer.
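
The rule ROSS cites boils down to simple arithmetic. A hypothetical one-line check, using only the 25 per cent threshold quoted above (and certainly not legal advice):

```python
def directors_meeting_ok(present: int, canadian: int) -> bool:
    """True if at least 25% of the directors present are Canadian,
    per the threshold from the Canada Business Corporations Act
    quoted above. A toy check, not legal advice."""
    return canadian / present >= 0.25

print(directors_meeting_ok(present=10, canadian=2))  # 20% -> False
print(directors_meeting_ok(present=10, canadian=3))  # 30% -> True
```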

By eliminating legal drudge work, Arruda says that ROSS’s automation will open up the market for lawyers, reducing the time they need to spend on each case. People who need a lawyer but cannot afford one would suddenly find legal help within their means.

ROSS’s searches are faster and broader than any human’s. Arruda says this means it doesn’t just get answers that a human would have had difficulty finding, it can search in places no human would have thought to look. “Lawyers can start crafting very insightful arguments that wouldn’t have been achievable before,” he says. Eventually, ROSS may become so good at answering specific kinds of legal question that it could handle simple cases on its own.

Where Blue Prism learns and adapts to the various software interfaces designed for humans working within large corporations, ROSS learns and adapts to the legal language that human lawyers use in courts and firms. It repurposes the natural language-processing abilities of IBM’s Watson supercomputer to do this, scanning and analysing 10,000 pages of text every second before pulling out its best answers, ranked by confidence.
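
Watson’s internals are proprietary, but the final step described here, ranking candidate answers by confidence, can be caricatured with a crude keyword-overlap score. The sketch below is an assumption-laden stand-in, nothing like ROSS’s actual pipeline:

```python
# Toy confidence-ranked retrieval: score each passage by the
# fraction of the question's words it contains, then return
# matches sorted by that naive "confidence". A stand-in for
# Watson's far richer natural-language processing.

def rank_passages(question: str, passages: list[str]) -> list[tuple[float, str]]:
    q_words = set(question.lower().split())
    scored = []
    for passage in passages:
        p_words = set(passage.lower().split())
        confidence = len(q_words & p_words) / len(q_words)
        scored.append((confidence, passage))
    return sorted(scored, reverse=True)

statutes = [  # invented placeholder passages
    "no directors meeting may proceed unless 25 per cent of directors present are canadian",
    "a corporation must keep records at its registered office",
]
for conf, text in rank_passages("what per cent of directors present must be canadian", statutes):
    print(f"{conf:.2f}  {text}")
```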

Lawyers are giving it feedback too, says Jimoh Ovbiagele, ROSS’s chief technology officer. “ROSS is learning through experience.”

Massachusetts-based Nuance Communications is building AIs that solve some of the same language problems as ROSS, but in a different part of the economy: medicine. In the US, after doctors and nurses type up case notes, another person uses those notes to try to match the description with one of thousands of billing codes for insurance purposes.

Nuance’s language-focused AIs can now understand the typed notes, and figure out which billing code is a match. The system is already in use in a handful of US hospitals.
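
Nuance’s system rests on heavyweight language understanding, but the final matching step, mapping a free-text note to a billing code, can be caricatured in a few lines. The codes and descriptions below are invented placeholders, not real insurance codes:

```python
# Toy note-to-billing-code matcher: pick the code whose
# description shares the most words with the clinical note.
# Codes and descriptions are invented placeholders.

BILLING_CODES = {
    "X100": "fracture of the left wrist",
    "X200": "acute upper respiratory infection",
    "X300": "routine annual physical examination",
}

def match_billing_code(note: str) -> str:
    note_words = set(note.lower().split())
    def overlap(code: str) -> int:
        return len(note_words & set(BILLING_CODES[code].split()))
    return max(BILLING_CODES, key=overlap)

print(match_billing_code("Patient presents with a fracture of the wrist after a fall"))
# -> X100
```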

Kingdon doesn’t shy away from the implications of his work: “This is aimed at being a replacement for a human, an automated person who knows how to do a task in much the same way that a colleague would.”

But what will the world be like as we increasingly find ourselves working alongside AIs? David Autor, an economist at the Massachusetts Institute of Technology, says automation has tended to reduce drudgery in the past, and allowed people to do more interesting work.

“Old assembly line jobs were things like screwing caps on bottles,” Autor says. “A lot of that stuff has been eliminated and that’s good. Our working lives are safer and more interesting than they used to be.”

The potential problem with new kinds of automation like Blue Prism and ROSS is that they are starting to perform the kinds of jobs that form the first rung on the corporate ladder, which could deepen inequality.

Autor remains optimistic about humanity’s role in the future it is creating, but cautions that there’s nothing to stop us engineering our own obsolescence, or that of a large swathe of workers, deepening the split between rich and poor. “We’ve not seen widespread technological unemployment, but this time could be different,” he says. “There’s nothing that says it can’t happen.”

Kingdon says the changes are just beginning. “How far and fast? My prediction would be that in the next few years everyone will be familiar with this. It will be in every single office.”

Once it reaches that scale, narrow, specialised AIs may start to offer something more, as their computational roots allow them to call upon more knowledge than any human intelligence could.

“Right now ROSS has a year of experience,” says Ovbiagele. “If 10,000 lawyers use ROSS for a year, that’s 10,000 years of experience.”

Source:  newscientist.com

Scientists Closing in on Theory of Consciousness

Probably for as long as humans have been able to grasp the concept of consciousness, they have sought to understand the phenomenon.

Studying the mind was once the province of philosophers, some of whom still believe the subject is inherently unknowable. But neuroscientists are making strides in developing a true science of the self.

Here are some of the best contenders for a theory of consciousness.

Cogito ergo sum

Not an easy concept to define, consciousness has been described as the state of being awake and aware of what is happening around you, and of having a sense of self.

The 17th century French philosopher René Descartes proposed the notion of “cogito ergo sum” (“I think, therefore I am”), the idea that the mere act of thinking about one’s existence proves there is someone there to do the thinking.

Descartes also believed the mind was separate from the material body — a concept known as mind-body duality — and that these realms interact in the brain’s pineal gland. Scientists now reject the latter idea, but some thinkers still support the notion that the mind is somehow removed from the physical world.

But while philosophical approaches can be useful, they do not constitute testable theories of consciousness, scientists say.

“The only thing you know is, ‘I am conscious.’ Any theory has to start with that,” said Christof Koch, a neuroscientist and the chief scientific officer at the Allen Institute for Brain Science in Seattle.

Correlates of consciousness

In the last few decades, neuroscientists have begun to attack the problem of understanding consciousness from an evidence-based perspective. Many researchers have sought to discover specific neurons or behaviors that are linked to conscious experiences.

Recently, researchers discovered a brain region that seems to act as a kind of on-off switch for consciousness. When they electrically stimulated this region, called the claustrum, the patient became unconscious instantly. In fact, Koch and Francis Crick, the molecular biologist who famously helped discover the double-helix structure of DNA, had previously hypothesized that this region might integrate information across different parts of the brain, like the conductor of a symphony.

But looking for neural or behavioral connections to consciousness isn’t enough, Koch said. For example, such connections don’t explain why the cerebellum, the part of the brain at the back of the skull that coordinates muscle activity, doesn’t give rise to consciousness, while the cerebral cortex (the brain’s outermost layer) does. This is the case even though the cerebellum contains more neurons than the cerebral cortex.

Nor do these studies explain how to tell whether consciousness is present, such as in brain-damaged patients, other animals or even computers.

Neuroscience needs a theory of consciousness that explains what the phenomenon is and what kinds of entities possess it, Koch said. And currently, only two theories exist that the neuroscience community takes seriously, he said.

Integrated information

Neuroscientist Giulio Tononi of the University of Wisconsin-Madison developed one of the most promising theories for consciousness, known as integrated information theory.

Understanding how the material brain produces subjective experiences, such as the color green or the sound of ocean waves, is what Australian philosopher David Chalmers calls the “hard problem” of consciousness. Traditionally, scientists have tried to solve this problem with a bottom-up approach. As Koch put it, “You take a piece of the brain and try to press the juice of consciousness out of [it].” But this is almost impossible, he said.

In contrast, integrated information theory starts with consciousness itself, and tries to work backward to understand the physical processes that give rise to the phenomenon, said Koch, who has worked with Tononi on the theory.

The basic idea is that conscious experience represents the integration of a wide variety of information, and that this experience is irreducible. This means that when you open your eyes (assuming you have normal vision), you can’t simply choose to see everything in black and white, or to see only the left side of your field of view.

Instead, your brain seamlessly weaves together a complex web of information from sensory systems and cognitive processes. Several studies have shown that you can measure the extent of integration using brain stimulation and recording techniques.

The integrated information theory assigns a numerical value, “phi,” to the degree of irreducibility. If phi is zero, the system is reducible to its individual parts, but if phi is large, the system is more than just the sum of its parts.
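
Computing real phi requires analysing a system’s full cause-effect structure and is notoriously intractable, but the flavour of “more than the sum of its parts” can be shown with something far simpler: the mutual information between two halves of a toy system. The sketch below is only an intuition pump, not IIT’s actual phi:

```python
# Intuition pump for "integration": mutual information between
# two binary variables. Zero means the joint system reduces to
# its independent parts; positive means the whole carries
# structure the parts alone do not. This is NOT the real phi of
# integrated information theory, only a loose analogy.
from math import log2

def mutual_information(joint: dict[tuple[int, int], float]) -> float:
    px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
    py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
correlated = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}
print(mutual_information(independent))  # 0.0 -> reducible to its parts
print(mutual_information(correlated))   # 1.0 -> integrated
```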

This system explains how consciousness can exist to varying degrees among humans and other animals. The theory incorporates some elements of panpsychism, the philosophy that the mind is not only present in humans, but in all things.

An interesting corollary of integrated information theory is that no computer simulation, no matter how faithfully it replicates a human mind, could ever become conscious. Koch put it this way: “You can simulate weather in a computer, but it will never be ‘wet.'”

Global workspace

Another promising theory suggests that consciousness works a bit like computer memory, which can call up and retain an experience even after it has passed.

Bernard Baars, a neuroscientist at the Neurosciences Institute in La Jolla, California, developed the theory, which is known as the global workspace theory. This idea is based on an old concept from artificial intelligence called the blackboard, a memory bank that different computer programs could access.

Anything from the appearance of a person’s face to a memory of childhood can be loaded into the brain’s blackboard, where it can be sent to other brain areas that will process it.  According to Baars’ theory, the act of broadcasting information around the brain from this memory bank is what represents consciousness.
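
The blackboard architecture Baars borrowed is straightforward to sketch: specialist modules register with a shared workspace, and anything posted there is broadcast to all of them. Here is a minimal sketch with invented module names, echoing the idea that the broadcast itself is the conscious event:

```python
# Minimal blackboard / global-workspace sketch: modules
# subscribe to a shared workspace, and every item posted is
# broadcast to all subscribers, loosely mirroring Baars's
# claim that the broadcast is what constitutes consciousness.

class Blackboard:
    def __init__(self) -> None:
        self.subscribers = []

    def subscribe(self, module) -> None:
        self.subscribers.append(module)

    def post(self, item) -> None:
        for module in self.subscribers:  # the global broadcast
            module(item)

workspace = Blackboard()
workspace.subscribe(lambda item: print(f"memory module stores: {item}"))
workspace.subscribe(lambda item: print(f"language module names: {item}"))
workspace.post("a familiar face")
```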

Global workspace theory and integrated information theory are not mutually exclusive, Koch said. The first tries to explain in practical terms whether something is conscious or not, while the second seeks to explain how consciousness works more broadly.

“At this point, both could be true,” Koch said.

Source:  livescience.com

Free will is an illusion

Let’s say you’re approaching a fork in the road, and at the very last minute you decide to take the right fork. Common sense says that you made an active decision to take the right fork, a decision made more or less a split second before you shifted your body ever so slightly in the direction of said fork.

But recent research reveals that decisions such as these may have much deeper neurological roots — so deep, in fact, that scientists can observe patterns of brain activity that allow them to predict the outcome of decisions like these long before a person is even conscious of his own decision.
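
The experiments behind this claim decode an upcoming choice from patterns of brain activity using a statistical classifier. The simulation below shows the shape of that approach with fabricated data and a bare-bones least-squares decoder; the actual studies used fMRI recordings and far more careful methods:

```python
# Toy "predict the decision before the person knows it" demo:
# simulate trials in which a hidden choice (left/right fork)
# leaks weakly into noisy activity features, fit a linear
# decoder on some trials, and test it on the rest. Purely
# illustrative; not the real studies' actual analysis.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features = 200, 10
choices = rng.integers(0, 2, n_trials)  # 0 = left fork, 1 = right fork
pattern = rng.normal(size=n_features)   # how the choice shows up in "activity"
activity = np.outer(choices - 0.5, pattern) + 0.5 * rng.normal(size=(n_trials, n_features))

train, test = slice(0, 150), slice(150, None)
weights, *_ = np.linalg.lstsq(activity[train], choices[train] - 0.5, rcond=None)
predicted = (activity[test] @ weights > 0).astype(int)
print("decoding accuracy:", (predicted == choices[test]).mean())
```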

In other words, scientists have thrown a serious wrench in the works of the notion of free will.

Nature’s Kerri Smith writes:

“As humans, we like to think that our decisions are under our conscious control – that we have free will. Philosophers have debated that concept for centuries, and now [neuroscientist John-Dylan] Haynes and other experimental neuroscientists are raising a new challenge. They argue that consciousness of a decision may be a mere biochemical afterthought, with no influence whatsoever on a person’s actions. According to this logic, they say, free will is an illusion.”

In the words of Patrick Haggard, a neuroscientist at University College London: “We feel we choose, but we don’t.”

Source:   io9.com

Neglect Harms Brain Development

Childhood neglect leads to harmful changes in the brain, a new study says.

In new research published in the journal JAMA Pediatrics, researchers looked at brain differences between Romanian children who were abandoned and institutionalized, children sent to institutions and then placed with foster families, and children raised by their biological families.

Kids who were not raised in a family setting had noticeable alterations in the white matter of their brains later on, while the white matter in the brains of the children who were placed with a foster family looked pretty similar to the brains of the children who were raised with their biological families.

Researchers were interested in white matter, which is largely made up of myelinated nerve fibers, because it plays an important role in connecting brain regions and maintaining networks critical for cognition. Prior research has shown that children raised in institutional environments have limited access to language and cognitive stimulation, which could hinder development.

These findings suggest that even if a child is at risk of poor development because of early living circumstances, placing them in a new caregiving environment with more support could prevent white matter changes or perhaps even heal them.

More studies are needed, but the researchers believe their findings could help public health efforts aimed at children experiencing severe neglect, as well as efforts to build childhood resiliency.

Source:   time.com

Head transplants could be a reality by 2017

Transplanting a human head onto a donor body may sound like the stuff of science fiction comics, but not to Italian doctor Sergio Canavero. He has not only published a paper describing the operation in detail, but also believes that the surgery could be a reality as early as 2017.

Canavero, Director of the Turin Advanced Neuromodulation Group, initially highlighted the idea in 2013, stating his belief that the technology to successfully join two severed spinal cords existed. Since then he has worked out the details, describing the operation in his recent paper as the GEMINI spinal cord fusion protocol.

To carry out the transplant, a state of hypothermia is first induced in both the head to be transplanted and the donor body, to help the cells stay alive without oxygen. Surgeons would then cut into the neck tissue of both bodies and connect the blood vessels with tubes. The next step is to cut the spinal cords as neatly as possible with minimal trauma.

The severed head would then be placed on the donor body and the two spinal cords encouraged to fuse together with a sealant called polyethylene glycol, which, Canavero notes in his paper, has “the power to literally fuse together severed axons or seal injured leaky neurons.”

After suturing the blood vessels and the skin, the patient is kept in a comatose state for three to four weeks to discourage movement and give both spinal stumps time to fuse. The fusion point will also be electrically stimulated to encourage neural connections and accelerate the growth of a functional neural bridge. The patient will additionally be put on a regime of anti-rejection medications.

According to Canavero, with rehabilitation the patient should be able to speak in their own voice and walk within a year’s time. The goal is to help people who are paralyzed, or whose bodies are riddled with degenerative diseases and other complications. While the procedure sounds extremely complex and disturbing on multiple levels, Canavero tells us he’s already conducting interviews with volunteers who’ve stepped forward.

“Many are dystrophic,” Canavero says. “These people are in horrible pain.”

The most well-known example of a head transplant was when Dr. Robert White, a neurosurgeon, transplanted the head of one rhesus monkey onto another in 1970. The spinal cords, however, were not connected to each other, leaving the monkey unable to control its body. It subsequently died after the donor body rejected the head.

Current technology and recent advances hold out more promise. Canavero plans to garner support for the project when he presents it at the American Academy of Neurological and Orthopaedic Surgeons conference in Annapolis, Maryland, later this year. Understandably, his proposal has generated incredible controversy, with experts questioning the specifics and ethics of the procedure, some even going as far as calling it bad science.

Source:   gizmag.com