More than one-third of babies are tapping on smartphones and tablets even before they learn to walk or talk, and by 1 year of age, one in seven toddlers is using devices for at least an hour a day, according to a study to be presented Saturday, April 25 at the Pediatric Academic Societies (PAS) annual meeting in San Diego.
The American Academy of Pediatrics discourages the use of entertainment media such as televisions, computers, smartphones and tablets by children under age 2. Little is known, however, about when youngsters actually start using mobile devices.
Researchers developed a 20-item survey to find out when young children are first exposed to mobile media and how they use devices. The questionnaire was adapted from the “Zero to Eight” Common Sense Media national survey on media use in children.
Parents of children ages 6 months to 4 years old who were at a hospital-based pediatric clinic that serves a low-income, minority community were recruited to fill out the survey. Participants were asked about what types of media devices they have in their household, children’s age at initial exposure to mobile media, frequency of use, types of activities and if their pediatrician had discussed media use with them.
Results from 370 parents showed that 74 percent were African-American, 14 percent were Hispanic and 13 percent had less than a high school education. Media devices were ubiquitous, with 97 percent having TVs, 83 percent having tablets, 77 percent having smartphones and 59 percent having Internet access.
Children younger than 1 year of age were exposed to media devices in surprisingly large numbers: 52 percent had watched TV shows, 36 percent had touched or scrolled a screen, 24 percent had called someone, 15 percent had used apps and 12 percent had played video games.
By 2 years of age, most children were using mobile devices.
Lead author Hilda Kabali, MD, a third-year resident in the Pediatrics Department at Einstein Healthcare Network, said the results surprised her.
“We didn’t expect children were using the devices from the age of 6 months,” she said. “Some children were on the screen for as long as 30 minutes.”
Results also showed 73 percent of parents let their children play with mobile devices while doing household chores, 60 percent while running errands, 65 percent to calm a child and 29 percent to put a child to sleep.
Time spent on devices increased with age, with 26 percent of 2-year-olds and 38 percent of 4-year-olds using devices for at least an hour a day.
Finally, only 30 percent of parents said their child’s pediatrician had discussed media use with them.
Psilocybe cubensis, commonly referred to as magic mushrooms, has the potential to make a lasting change to one’s personality. This is a preliminary conclusion from a study conducted by Johns Hopkins researchers and published in the Journal of Psychopharmacology.
A single dose of ‘shrooms’ was enough to make a lasting impression on the personality in 30 of the 51 participants, or nearly 60%. Those who had a hallucinatory or mystical experience after consuming the mushrooms showed an increase in the personality trait ‘openness’, which is closely related to creativity and curiosity. This increase was still measurable 2 months and even 14 months after the last session, which suggests long-term effects.
Study leader Roland Griffiths, a professor of psychiatry, finds this lasting impact on a personality trait remarkable: “Normally, if anything, openness tends to decrease as people get older.” Openness is one of five traits that were tested and the only one that changed during the study. Along with the other factors (extroversion, neuroticism, agreeableness and conscientiousness), openness is one of the major personality traits generally considered stable throughout one’s lifetime.
According to the researchers, this study is the first finding of a short-term means by which long-term personality changes can be made. “There may be applications for this we can’t even imagine at this point,” says Griffiths. “It certainly deserves to be systematically studied.”
There is currently another study under way to determine whether or not psilocybin can help cancer patients deal with feelings of anxiety and depression.
The Telegraph has reported that the long extinct woolly mammoth could be brought back to life in as little as four years thanks to a breakthrough in cloning technology.
Previous efforts in the 1990s to recover nuclei in cells from the skin and muscle tissue from frozen woolly mammoths found in the Siberian permafrost failed because they had been too badly damaged by the extreme cold in which they had been encased for thousands of years.
But, in 2008, a technique pioneered by Dr. Teruhiko Wakayama — of the Riken Centre for Developmental Biology — was successful in cloning a mouse from the cells of another mouse that had been frozen for 16 years.
Now that this hurdle has been overcome, Professor Akira Iritani of Kyoto University is reactivating his campaign to resurrect the species, which died out around 5,000 years ago:
“Now the technical problems have been overcome, all we need is a good sample of soft tissue from a frozen mammoth.”
He intends to use Wakayama’s technique to identify the nuclei of viable mammoth cells before extracting the healthy ones. The nuclei will then be inserted into the egg cells of an African elephant, which will act as the surrogate mother for the mammoth, possibly bringing the formerly extinct creature back to life.
Iritani has announced plans to travel to Siberia in the summer to search for mammoths in the permafrost and to recover a sample of skin or tissue. If he is unsuccessful, he will ask Russian scientists to provide a sample from one of their finds:
“The success rate in the cloning of cattle was poor until recently but now stands at about 30 percent. I think we have a reasonable chance of success and a healthy mammoth could be born in four or five years.”
Test subjects taking part in an 8-week program of mindfulness meditation showed results that astonished even the most experienced neuroscientists at Harvard University. The study was led by a Harvard-affiliated team of researchers based at Massachusetts General Hospital, and the team’s MRI scans documented for the very first time in medical history how meditation produced massive changes inside the brain’s gray matter. “Although the practice of meditation is associated with a sense of peacefulness and physical relaxation, practitioners have long claimed that meditation also provides cognitive and psychological benefits that persist throughout the day,” says study senior author Sara Lazar of the MGH Psychiatric Neuroimaging Research Program and a Harvard Medical School instructor in psychology. “This study demonstrates that changes in brain structure may underlie some of these reported improvements and that people are not just feeling better because they are spending time relaxing.”
Sue McGreevey of MGH writes: “Previous studies from Lazar’s group and others found structural differences between the brains of experienced meditation practitioners and individuals with no history of meditation, observing thickening of the cerebral cortex in areas associated with attention and emotional integration. But those investigations could not document that those differences were actually produced by meditation.” Until now, that is. The participants spent an average of 27 minutes per day practicing mindfulness exercises, and this is all it took to stimulate a major increase in gray matter density in the hippocampus, the part of the brain associated with self-awareness, compassion, and introspection. McGreevey adds: “Participant-reported reductions in stress also were correlated with decreased gray-matter density in the amygdala, which is known to play an important role in anxiety and stress. None of these changes were seen in the control group, indicating that they had not resulted merely from the passage of time.”
“It is fascinating to see the brain’s plasticity and that, by practicing meditation, we can play an active role in changing the brain and can increase our well-being and quality of life,” says Britta Hölzel, first author of the paper and a research fellow at MGH and Giessen University in Germany.
A new study presented to the Royal Society meeting on ancient DNA in London last week has revealed a dramatic finding – the genome of one of our ancient ancestors, the Denisovans, contains a segment of DNA that seems to have come from another species that is currently unknown to science. The discovery suggests that there was rampant interbreeding between ancient human species in Europe and Asia more than 30,000 years ago. But, far more significant was the finding that they also mated with a mystery species from Asia – one that is neither human nor Neanderthal.
Scientists launched into a flurry of discussion and debate upon hearing the study results and immediately began speculating about what this unknown species could be. Some have suggested that a group may have branched off to Asia from Homo heidelbergensis, which resided in Africa about half a million years ago and is believed to be the ancestor of Europe’s Neanderthals.
However others, such as Chris Stringer, a paleoanthropologist at the London Natural History Museum, admitted that they “don’t have the faintest idea” what the mystery species could be.
Traces of the unknown genome were detected in two teeth and a finger bone of a Denisovan, which was discovered in a Siberian cave. Little is known about the appearance of Denisovans because so few of their fossils are available, but geneticists have nevertheless succeeded in sequencing their entire genome very precisely.
“What it begins to suggest is that we’re looking at a ‘Lord of the Rings’-type world – that there were many hominid populations,” said Mark Thomas, an evolutionary geneticist at University College London.
The question is now: who were these mystery people that the Denisovans were breeding with?
Alzheimer’s disease is the sixth leading cause of death in the United States, with over 1,200 individuals developing the disease every day. A new paper in the Journal of Neuroscience from lead author Dena Dubal of the University of California, San Francisco describes how manipulating levels of a protein associated with memory can stave off Alzheimer’s symptoms, even in the presence of the disease-causing toxins.
Klotho is a transmembrane protein associated with longevity. The body makes less of this protein over time, and low levels of klotho are connected to a number of diseases, including osteoporosis, heart disease, increased risk of stroke, and decreased cognitive function. These factors lead to diminished quality of life and even early death.
Previous research has shown that increasing klotho levels in healthy mice leads to increased cognitive function. This current paper from Dubal’s team builds on that research by increasing klotho in mice who are also expressing large amounts of amyloid-beta and tau, proteins that are associated with the onset of Alzheimer’s disease. Remarkably, even with high levels of these toxic, disease-causing proteins, the mice with elevated klotho levels were able to retain their cognitive function.
“It’s remarkable that we can improve cognition in a diseased brain despite the fact that it’s riddled with toxins,” Dubal said in a press release. “In addition to making healthy mice smarter, we can make the brain resistant to Alzheimer-related toxicity. Without having to target the complex disease itself, we can provide greater resilience and boost brain functions.”
The mechanism behind this cognitive preservation appears to be klotho interacting with a glutamate receptor called NMDA, which is critically important to synaptic transmission and thus influences learning, memory, and executive function. Alzheimer’s disease typically damages these receptors, but the mice with elevated klotho were able to retain both NMDA function and cognition. Part of the success also appears to be due to the preservation of the NMDA subunit GluN2B, which was present at significantly higher levels than in the control mice. The mechanism and the results of this study will need further investigation before they can be developed into a possible treatment for humans.
“The next step will be to identify and test drugs that can elevate klotho or mimic its effects on the brain,” added senior author Lennart Mucke from Gladstone Institutes. “We are encouraged in this regard by the strong similarities we found between klotho’s effects in humans and mice in our earlier study. We think this provides good support for pursuing klotho as a potential drug target to treat cognitive disorders in humans, including Alzheimer’s disease.”
University of Florida Health researchers have found that putting people on a feast-or-famine diet may mimic some of the benefits of fasting, and that adding antioxidant supplements may counteract those benefits.
Fasting has been shown in mice to extend lifespan and to improve age-related diseases. But fasting every day, which could entail skipping meals or simply reducing overall caloric intake, can be hard to maintain.
“People don’t want to just under-eat for their whole lives,” said Martin Wegman, an M.D.-Ph.D. student at the UF College of Medicine and co-author of the paper recently published in the journal Rejuvenation Research. “We started thinking about the concept of intermittent fasting.”
Michael Guo, a UF M.D.-Ph.D. student who is pursuing the Ph.D. portion of the program in genetics at Harvard Medical School, said the group measured the participants’ changes in weight, blood pressure, heart rate, glucose levels, cholesterol, markers of inflammation and genes involved in protective cell responses over 10 weeks.
“We found that intermittent fasting caused a slight increase in SIRT3, a well-known gene that promotes longevity and is involved in protective cell responses,” Guo said.
The SIRT3 gene encodes a protein also called SIRT3. The protein SIRT3 belongs to a class of proteins called sirtuins. Sirtuins, if increased in mice, can extend their lifespans, Guo said. Researchers think proteins such as SIRT3 are activated by oxidative stress, which is triggered when there are more free radicals produced in the body than the body can neutralize with antioxidants. However, small levels of free radicals can be beneficial: When the body undergoes stress — which happens during fasting — small levels of oxidative stress can trigger protective pathways, Guo said.
“The hypothesis is that if the body is intermittently exposed to low levels of oxidative stress, it can build a better response to it,” Wegman said.
The researchers found that the intermittent fasting decreased insulin levels in the participants, which means the diet could have an anti-diabetic effect as well.
The group recruited 24 study participants in the double-blinded, randomized clinical trial. During a three-week period, the participants alternated one day of eating 25 percent of their daily caloric intake with one day of eating 175 percent of their daily caloric intake. For the average man’s diet, a male participant would have eaten 650 calories on the fasting days and 4,550 calories on the feasting days. To test antioxidant supplements, the participants repeated the diet but also included vitamin C and vitamin E.
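The quoted figures imply a baseline of about 2,600 kcal per day for the average man (650 is 25 percent of 2,600; 4,550 is 175 percent of 2,600). A minimal sketch of the alternating schedule, with that baseline as an inference from the quoted numbers rather than a figure stated by the researchers:

```python
# Sketch of the alternate-day protocol described above.
# BASELINE_KCAL (2,600) is inferred from the quoted 650/4,550 figures,
# not stated directly by the researchers.
BASELINE_KCAL = 2600

def day_calories(day_index, baseline=BASELINE_KCAL):
    """Calorie target for a given day of the protocol:
    even-numbered days fast (25% of baseline),
    odd-numbered days feast (175% of baseline)."""
    factor = 0.25 if day_index % 2 == 0 else 1.75
    return baseline * factor

# First four days of the three-week schedule:
schedule = [day_calories(d) for d in range(4)]
print(schedule)  # [650.0, 4550.0, 650.0, 4550.0]
```

Note that over any fast/feast pair the average works out to exactly 100 percent of baseline, so the protocol stresses the body on alternating days without changing net caloric intake.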
At the end of the three weeks, the researchers tested the same health parameters. They found that the beneficial sirtuin proteins such as SIRT3 and another, SIRT1, tended to increase as a result of the diet. However, when antioxidants were supplemented on top of the diet, some of these increases disappeared. This is in line with some research that indicates flooding the system with supplemental antioxidants may counteract the effects of fasting or exercise, said Christiaan Leeuwenburgh, Ph.D., co-author of the paper and chief of the division of biology of aging in the department of aging and geriatric research.
“You need some pain, some inflammation, some oxidative stress for some regeneration or repair,” Leeuwenburgh said. “These young investigators were intrigued by the question of whether some antioxidants could blunt the healthy effects of normal fasting.”
On the study participants’ fasting days, they ate foods such as roast beef and gravy, mashed potatoes, Oreo cookies and orange sherbet — but they ate only one meal. On the feasting days, the participants ate bagels with cream cheese, oatmeal sweetened with honey and raisins, turkey sandwiches, apple sauce, spaghetti with chicken, yogurt and soda — and lemon pound cake, Snickers bars and vanilla ice cream.
“Most of the participants found that fasting was easier than the feasting day, which was a little bit surprising to me,” Guo said. “On the feasting days, we had some trouble giving them enough calories.”
Leeuwenburgh said future studies should examine a larger cohort of participants and should include studying a larger number of genes in the participants as well as examining muscle and fat tissue.
Viruses infect a wide range of plants and animals, and new research shows that they can even infect one another. If that seems surprising, no wonder: until a team of French researchers watched one virus invade another, hijacking its genetic machinery and making copies of its victim’s DNA, scientists didn’t even know this was possible.
The French team dubbed the virus’s virus Sputnik and called it a “virophage” to parallel “bacteriophage,” the name for a virus that infects bacteria. Sputnik is tiny, with only 18,000 genetic bases in its chromosome. Its victim, by contrast, is a large mamavirus that the scientists found in a Paris cooling tower and that contains about 1.2 million genetic bases. An infection by Sputnik sickens the mamavirus by interfering with its replication.
The discovery that even viruses can fall ill has reignited an old controversy—whether viruses are actually alive or simply rogue bits of DNA that depend upon other organisms to reproduce. “There’s no doubt this is a living organism,” says Jean-Michel Claverie, a virologist at the CNRS UPR laboratories in Marseilles, part of France’s basic-research agency. “The fact that it can get sick makes it more alive.”
And now that they know viruses can infect other viruses, the researchers say it could be possible to use virophages against the most harmful viruses, although they’re cautious about the idea. “It’s too early to say we could use Sputnik as a weapon against big viruses or to modify them,” says co-author Bernard La Scola, also at the University of the Mediterranean. “But phages are used to modify bacteria, so why not?”
Researchers at the University of Rochester may have answered one of neuroscience’s most vexing questions—how can it be that our neurons, which are responsible for our crystal-clear thoughts, seem to fire in utterly random ways?
“You’d think this is crazy because engineers are always fighting to reduce the noise in their circuits, and yet here’s the best computing machine in the universe—and it looks utterly random,” says Alex Pouget, associate professor of brain and cognitive sciences at the University of Rochester.
Pouget’s work for the first time connects two of the brain’s biggest mysteries: why it’s so noisy, and how it can perform such complex calculations. As counterintuitive as it sounds, the noise seems integral to making those calculations possible.
In the last decade, Pouget and his colleagues in the University of Rochester’s Department of Brain and Cognitive Sciences have blazed a new path to understanding our gray matter. The traditional approach has assumed the brain uses the same method computing in general used up until the mid-’80s: you see an image and you relate that image to one stored in your head. But the reality of the cranial world seems to be a confusing array of possibilities and probabilities, all of which are somehow, mysteriously, properly calculated.
The science of drawing answers from such a variety of probabilities is called Bayesian computing, after the minister Thomas Bayes, who founded this unusual branch of math in the 18th century. Pouget says that when we seem to be struck by an idea out of the blue, our brain has actually just resolved many probabilities it’s been fervently calculating.
“We’ve known for several years that at the behavioral level, we’re ‘Bayes optimal,’ meaning we are excellent at taking various bits of probability information, weighing their relative worth, and coming to a good conclusion quickly,” says Pouget. “But we’ve always been at a loss to explain how our brains are able to conduct such complex Bayesian computations so easily.”
Two years ago, while talking with a physics friend, some probabilities in Pouget’s own head suddenly resolved.
Bayesian computing can be done most efficiently when data is formatted in what’s called a “Poisson distribution.”
And the neural noise, Pouget noticed, looked suspiciously like this optimal distribution.
This idea set Pouget and his team to investigating whether our neurons’ noise really fits this Poisson distribution, and in his current Nature Neuroscience paper he reports that it fits extremely well.
“The cortex appears wired at its foundation to run Bayesian computations as efficiently as possible,” says Pouget. His paper says the uncertainty of the real world is represented by this noise, and the noise itself is in a format that reduces the resources needed to compute with it. Anyone familiar with log tables and slide rules knows that while multiplying large numbers is difficult, adding their logarithms is relatively undemanding.
The brain is apparently designed in a similar manner—”coding” the possibilities it encounters into a format that makes it tremendously easier to compute an answer.
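The log-table analogy can be made concrete: multiplying independent probabilities is equivalent to adding their logarithms, which turns a costly operation into a cheap one. A toy sketch of that identity (the probability values are made up for illustration, not taken from Pouget's paper):

```python
import math

# Two independent pieces of evidence for the same hypothesis
# (illustrative values only).
p_a, p_b = 0.30, 0.12

# Direct multiplication of the probabilities...
direct = p_a * p_b

# ...versus adding log-probabilities and exponentiating once at the end,
# the slide-rule/log-table trick the passage alludes to.
log_domain = math.exp(math.log(p_a) + math.log(p_b))

print(direct, log_domain)  # both are approximately 0.036
```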
Pouget now prefers to call the noise “variability.” Our neurons are responding to the light, sounds, and other sensory information from the world around us. But if we want to do something, such as jump over a stream, we need to extract data that is not inherently part of that information. We need to process all the variables we see, including how wide the stream appears, what the consequences of falling in might be, and how far we know we can jump. Each neuron responds to a particular variable and the brain will decide on a conclusion about the whole set of variables using Bayesian inference.
As you reach your decision, you’d have a lot of trouble articulating most of the variables your brain just processed for you. Similarly, intuition may be less a burst of insight than a rough consensus among your neurons.
Pouget and his team are now expanding their findings across the entire cortex, because every part of our highly developed cortex displays a similar underlying Bayes-optimal structure.
“If the structure is the same, that means there must be something fundamentally similar among vision, movement, reasoning, loving—anything that takes place in the human cortex,” says Pouget. “The way you learn language must be essentially the same as the way a doctor reasons out a diagnosis, and right now our lab is pushing hard to find out exactly how that noise makes all these different aspects of being human possible.”
Pouget’s work still has its skeptics, but this, his fourth paper in Nature Neuroscience on the topic, is starting to win converts.
“If you ask me, this is the coming revolution,” says Pouget. “It hit machine learning and cognitive science, and I think it’s just hitting neuroscience. In 10 or 20 years, I think the way everybody thinks about the brain is going to be in these terms.”
Not all of Pouget’s neurons are in agreement, however.
“…but I’ve been wrong before,” he shrugs.
Virginia Tech Carilion Research Institute scientists have developed a brain-imaging technique that may be able to identify children with autism spectrum disorder in just two minutes.
This test, while far from being used as the clinical standard of care, offers promising diagnostic potential once it undergoes more research and evaluation.
“Our brains have a perspective-tracking response that monitors, for example, whether it’s your turn or my turn,” said Read Montague, the Virginia Tech Carilion Research Institute professor who led the study.
“This response is removed from our emotional input, so it makes a great quantitative marker,” he said. “We can use it to measure differences between people with and without autism spectrum disorder.”
The finding, expected to be published online next week in Clinical Psychological Science, demonstrates that the perspective-tracking response can be used to determine whether someone has autism spectrum disorder.
Usually, diagnosis – an unquantifiable process based on clinical judgment – is time-consuming and trying on children and their families. That may change with this new diagnostic test.
The path to this discovery has been a long, iterative one. In a 2006 study by Montague and others, pairs of subjects had their brains scanned using functional magnetic resonance imaging, or fMRI, as they played a game requiring them to take turns.
From those images, researchers found that the middle cingulate cortex became more active when it was the subject’s turn.
“A response in that part of the brain is not an emotional response, and we found that intriguing,” said Montague, who also directs the Computational Psychiatry Unit at the Virginia Tech Carilion Research Institute and is a professor of physics at Virginia Tech. “We realized the middle cingulate cortex is responsible for distinguishing between self and others, and that’s how it was able to keep track of whose turn it was.”
That realization led the scientists to investigate how the middle cingulate cortex response differs in individuals at different developmental levels. In a 2008 study, Montague and his colleagues asked athletes to watch a brief clip of a physical action, such as kicking a ball or dancing, while undergoing functional MRI.
The athletes were then asked either to replay the clips in their mind, like watching a movie, or to imagine themselves as participants in the clips.
“The athletes had the same responses as the game participants from our earlier study,” Montague said. “The middle cingulate cortex was active when they imagined themselves dancing – in other words, when they needed to recognize themselves in the action.”
In the 2008 study, the researchers also found that in subjects with autism spectrum disorder, the more subdued the response, the more severe the symptoms.
Montague and his team hypothesized that a clear biomarker for self-perspective exists and that they could track it using functional MRI. They also speculated that the biomarker could be used as a tool in the clinical diagnosis of people with autism spectrum disorder.
In 2012, the scientists designed another study to see whether they could elicit a brain response to help them compute the unquantifiable. And they could: By presenting self-images while scanning the brains of adults, they elicited the self-perspective response they had previously observed in social interaction games.
In the current study, with children, subjects were shown 15 images of themselves and 15 images of a child matched for age and gender for four seconds per image in a random order.
Like the control adults, the control children had a high response in the middle cingulate cortex when viewing their own pictures. In contrast, children with autism spectrum disorder had a significantly diminished response.
Importantly, Montague’s team could detect this difference in individuals using only a single image.
Montague and his group realized they had developed a single-stimulus functional MRI diagnostic technique. The single-stimulus part is important, Montague points out, as it enables speed. Children with autism spectrum disorder cannot stay in the scanner for long, so the test must be quick.
“We went from a slow, average depiction of brain activity in a cognitive challenge to a quick test that is significantly easier for children to do than spend hours under observation,” Montague said. “The single-stimulus functional MRI could also open the door to developing MRI-based applications for screening of other cognitive disorders.”
By mapping psychological differences through brain scans, scientists are adding a critical component to the typical process of neuropsychiatric diagnosis – math.
Montague has been a pioneering figure in this field, which he has dubbed computational psychiatry. The idea is that scientists can link the function of mental disorders to the disrupted mechanisms of neural tissue through mathematical approaches. Doctors can then use measurable data for earlier diagnosis and treatment.
An earlier diagnosis can also have a tremendous impact on the children and their families, Montague said.
“The younger children are at the time of diagnosis,” Montague said, “the more they can benefit from a range of therapies that can transform their lives.”
Chaga is a non-toxic fungal parasite that grows on birch trees (as well as a few other types) in Northern climates. It is far from your typical soft and squishy mushroom; it actually looks and feels like burnt wood or charcoal. Chaga is known by the Siberians as the “Gift From God” and the “Mushroom of Immortality.” The Japanese call it “The Diamond of the Forest,” and the Chinese refer to it as the “King of Plants.” The Chinese also regard it as an amazing factor in achieving longevity. Chaga does grow in North America, but most Americans have no clue of its existence, let alone its amazing healing properties, which are listed below.
This mushroom of immortality is said to have the highest level of anti-oxidants of any food in the world and also the highest level of superoxide dismutase (one of the body’s primary internal anti-oxidant defenses) that can be detected in any food or herb. The active constituents of Chaga are a combination of: amino acids, beta glucans, betulinic acid, calcium, chloride, copper, dietary fiber, enzymes, flavonoids, germanium, iron, lanosterol, manganese, magnesium, melanin, pantothenic acid, phenols, phosphorus, polysaccharides, potassium, saponins, selenium, sodium, sterols, trametenolic acid, tripeptides, triterpenes, triterpenoids, vanillic acid, vitamin B1, vitamin B2, vitamin B3, vitamin D2, vitamin K and zinc. Phew.
Chaga is extremely powerful because it contains within it the actual life force of trees – the most powerful living beings on this Earth. Trees can live for as long as 10,000 years, with some even surpassing that. Chaga concentrates this power, and we can harvest it as well. One of the most important constituents of chaga is betulinic acid; however, for chaga to be beneficial, it has to be harvested from birch trees only, as they are the only trees that contain this amazing compound. Betulinic acid has a wide range of biological effects, including potent antitumor activity.
Some Other Medicinal Properties Of The Chaga Mushroom Include:
- Anti-HIV – a study published in The Pharmacological Potential of Mushrooms demonstrated chaga’s potential to lessen symptoms of HIV.
- Antibacterial – Chaga kills bacteria or inhibits their growth and replication.
- Anti-Inflammatory – Chaga is known to be a powerful anti-inflammatory and pain reliever, which makes it excellent for conditions such as arthritis.
- Anti-Candida – Because chaga promotes and protects the liver, candida toxins are processed efficiently.
- Adaptogen – Chaga is an Adaptogen. Its compounds can increase the body’s capability of adapting to stress, fatigue and anxiety. (Something most Americans can definitely benefit from.)
- Many other potential benefits include the treatment of asthma, hair loss, allergies, boosting the immune system, diabetes, Crohn’s Disease, psoriasis, anti-aging and literally hundreds of others.
How To Prepare Wild Chaga Mushroom Tea
Chaga mushrooms grow wild in forests in Northern climates on birch trees. If you are lucky enough to find one, you’ll want to harvest it, as chaga can be quite expensive to purchase. DO NOT cut into the tree to retrieve the chaga; doing so could kill the tree. If harvested correctly, the chaga will continue to grow and will be ready to harvest every four years or so, and the tree will continue to thrive.
***It is important to properly identify the chaga mushroom before consumption. To ensure you are getting the correct fungus, make sure that you are harvesting from birch trees only. Chaga has a texture similar to wood and looks a lot like burnt wood or charcoal; inside, it should be a golden-orange color. Be sure to look it up before consuming if you are unsure, just to be safe!***
To make the tea, cut a few small pieces off the chaga and place them in a pot. Pour in about 2 liters of filtered water and cover with a lid. Bring the pot to a boil for a minute or so, then reduce the heat to a simmer and take the lid off. Let this simmer for about an hour, then add another liter of water and continue to simmer with the lid on for another hour. This will make approximately 1 liter of chaga mushroom tea. It is a time-consuming process, but I think the amazing benefits justify it – plus it tastes great, like a nice vanilla-flavored black tea. You can add honey or another sweetener if you wish, but I think it is surprisingly delicious on its own.
“Diet” products containing the chemical sweetener aspartame can have multiple neurotoxic, metabolic, allergenic, fetal and carcinogenic effects. My database of 1,200 aspartame reactors–based on logical diagnostic criteria, including predictable recurrence on rechallenge–is reviewed.
The existence of aspartame disease continues to be denied by the FDA and powerful corporate entities. Its magnitude, however, warrants removal of this chemical as an “imminent public health threat.” The use of aspartame products by over two-thirds of the population, and inadequate evaluation by corporate-partial investigators underscore this opinion.
The FDA approved aspartame as a low-nutritive sweetener for use in solid form during 1981, and in soft drinks during 1983. It is a synthetic chemical consisting of two amino acids, phenylalanine (50 percent) and aspartic acid (40 percent), and a methyl ester (10 percent) that promptly becomes free methyl alcohol (methanol; wood alcohol). The latter is universally considered a severe poison.
Senior FDA scientists and consultants vigorously protested approving the release of aspartame products. Their objections related to disturbing findings in animal studies (especially the frequency of brain tumors), seemingly flawed experimental data, and the absence of extensive pre-marketing trials on humans using real-world products over prolonged periods.
Aspartame reactions may be caused by the compound itself, its three components, stereoisomers of the amino acids, toxic breakdown products (including formaldehyde), or combinations thereof. They often occur in conjunction with severe caloric restriction and excessive exercise to lose weight.
Various metabolic and physiologic disturbances explain the clinical complications. Only a few are listed:
- Damage to the retina or optic nerves is largely due to methyl alcohol exposure. Unlike most animals, humans cannot efficiently metabolize it.
- High concentrations of phenylalanine and aspartic acid occur in the brain after aspartame intake, unlike the modest levels of amino acids following conventional protein consumption.
- Aspartame alters the function of major amino acid-derived neurotransmitters, especially in obese persons and after carbohydrate intake.
- Phenylalanine stimulates the release of insulin and growth hormone.
- The ambiguous signals to the satiety center following aspartame intake may result either in increased food consumption or severe anorexia.
- Large amounts of the radioactive-carbon label from oral aspartame intake have been detected in DNA.
The current “acceptable daily intake” (ADI) of 50 mg aspartame/kg body weight makes no sense. It represents the projection of animal studies based on lifetime intake! This was clearly stated by previous FDA Commissioner Dr. Frank Young during a U.S. Senate hearing on November 3, 1987. Furthermore, it disregards the usual 100-fold safety factor used by the FDA as a guideline for regulated food additives. The maximum daily intake tolerated by most reactors in my series, based on the predictable recurrence of induced symptoms and signs, ranged from 10 to 18.3 mg/kg.
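As a rough illustration of the dose arithmetic above, the sketch below compares a hypothetical daily intake against the official ADI and the reactor thresholds reported here. The ~180 mg of aspartame per 12-oz diet soda can and the 60 kg body weight are assumptions chosen for illustration, not figures from the article.

```python
# Dose arithmetic sketch. ASPARTAME_MG_PER_CAN is an assumed, commonly
# cited figure for a 12-oz diet soda; the ADI and reactor range come
# from the text above.

ASPARTAME_MG_PER_CAN = 180.0             # assumption, not from the article
ADI_MG_PER_KG = 50.0                     # FDA "acceptable daily intake"
REACTOR_RANGE_MG_PER_KG = (10.0, 18.3)   # tolerated range reported in the series

def daily_dose_mg_per_kg(cans: float, body_weight_kg: float) -> float:
    """Daily aspartame dose in mg per kg of body weight."""
    return cans * ASPARTAME_MG_PER_CAN / body_weight_kg

dose = daily_dose_mg_per_kg(cans=4, body_weight_kg=60)
print(f"{dose:.1f} mg/kg")                 # 4 cans for a 60 kg adult -> 12.0 mg/kg
print(dose < ADI_MG_PER_KG)                # well under the official ADI...
print(dose >= REACTOR_RANGE_MG_PER_KG[0])  # ...yet above the lowest reactor threshold
```

The point of the sketch is simply that an everyday intake can sit far below the 50 mg/kg ADI while still exceeding the 10 mg/kg floor of the reactor range the author reports.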
- “We had better be sure that the questions that have been raised about the safety of this product are answered. I must say at the outset, this product was approved by the FDA in circumstances that can only be described as troubling.”
I have devoted more than two decades to analyzing aspartame disease, a widespread but largely ignored disorder. Its existence continues to be reflexively denied by the Food and Drug Administration (FDA), the American Medical Association (AMA), and many public health/regulatory organizations.
The medical profession and consumers have been assured by the Council on Scientific Affairs of the AMA [2] and the Centers for Disease Control (CDC) that aspartame is “completely safe.” Moreover, the impression is left that reports of serious reactions are a “health rumor” fabrication … notwithstanding the CDC report in 1984 of 649 aspartame reactors with many attributed disorders [3].
Many reactors consumed prodigious amounts of aspartame, especially during hot weather. Conversely, some experienced convulsions, headache, or other severe symptoms after exposure to small amounts (e.g., chewing aspartame gum; placing an aspartame strip on the tongue; babies while breast-feeding as the mother drank an aspartame beverage).
Interval Between Cessation and Improvement
Nearly two-thirds of aspartame reactors experienced symptomatic improvement within two days after avoiding aspartame. With continued abstinence, their complaints generally disappeared.
The causative role of aspartame products has been repeatedly shown by (a) the prompt improvement of symptoms (grand mal seizures, headache, itching, rashes, severe gastrointestinal reactions) after stopping aspartame products, and (b) their recurrence within minutes or hours after resuming them. The latter included self-testing on numerous occasions, inadvertent ingestion, and formal rechallenge.
Some aspartame reactors with convulsions purposefully rechallenged themselves on one or several occasions “to be absolutely certain.” This was notably the case among six pilots who had lost their licenses for unexplained seizures while consuming aspartame products. (All had been in otherwise excellent health.) They sought to have their licenses reinstated by such objective confirmation on rechallenge.
Certain groups are at special risk. These include pregnant and lactating women, young children, older persons, those at risk for phenylketonuria (PKU), the relatives of aspartame reactors (see above), and patients with liver disease, iron-deficiency anemia, kidney impairment, migraine, diabetes, hypoglycemia, and hypothyroidism.
Source: wnho.net, by H. J. Roberts
Test identifies nine blood markers tied to depression; predicts who will benefit from therapy
The first blood test to diagnose major depression in adults has been developed by Northwestern Medicine® scientists, a breakthrough approach that provides the first objective, scientific diagnosis for depression. The test identifies depression by measuring the levels of nine RNA blood markers. RNA molecules are the messengers that interpret the DNA genetic code and carry out its instructions.
The blood test also predicts who will benefit from cognitive behavioral therapy based on the behavior of some of the markers. This will provide the opportunity for more effective, individualized therapy for people with depression.
In addition, the test showed the biological effects of cognitive behavioral therapy, the first measurable, blood-based evidence of the therapy’s success. The levels of markers changed in patients who had the therapy for 18 weeks and were no longer depressed.
“This clearly indicates that you can have a blood-based laboratory test for depression, providing a scientific diagnosis in the same way someone is diagnosed with high blood pressure or high cholesterol,” said Eva Redei, who developed the test and is a professor of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine. “This test brings mental health diagnosis into the 21st century and offers the first personalized medicine approach to people suffering from depression.”
Redei is co-lead author of the study, which was published Sept. 16 in Translational Psychiatry.
Redei previously developed a blood test that diagnosed depression in adolescents. Most of the markers she identified in the adult depression panel are different from those in depressed adolescents.
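A marker-panel test of this kind is, at bottom, a weighted score over the nine measured levels. The sketch below is a purely hypothetical illustration of that idea – the marker names, weights, intercept, and threshold are all invented, not the study’s actual model.

```python
# Hypothetical sketch of a nine-marker diagnostic score. Every number
# here is invented for illustration; the published panel's real markers
# and model are not reproduced in this article.
import math

MARKERS = ["m1", "m2", "m3", "m4", "m5", "m6", "m7", "m8", "m9"]
WEIGHTS = [0.8, -0.5, 1.2, 0.3, -0.9, 0.6, 0.4, -0.2, 0.7]  # invented
INTERCEPT = -1.0                                             # invented

def depression_score(levels: dict) -> float:
    """Logistic score in (0, 1) from normalized marker levels."""
    z = INTERCEPT + sum(w * levels[m] for m, w in zip(MARKERS, WEIGHTS))
    return 1.0 / (1.0 + math.exp(-z))

patient = {m: 0.5 for m in MARKERS}  # toy normalized expression levels
print(round(depression_score(patient), 2))
```

A real panel would fit such weights to case/control data and validate the threshold, but the shape of the computation – measure nine levels, combine them, compare to a cutoff – is what makes the test objective in the way the article describes.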
New research shows that schizophrenia is not a single disease, but a group of eight distinct disorders, each caused by changes in clusters of genes that lead to different sets of symptoms.
The finding sets the stage for scientists to develop better ways to diagnose and treat schizophrenia, a mental illness that can be devastating when not adequately managed, says C. Robert Cloninger, co-author of the study published Monday in the American Journal of Psychiatry.
“We are really opening a new era of psychiatric diagnosis,” says Cloninger, professor of psychiatry and genetics at the Washington University School of Medicine in St. Louis. Cloninger says he hopes his work will “allow for the development of a personalized diagnosis, opening the door to treating the cause, rather than just the symptoms, of schizophrenia.”
Cloninger and colleagues found that certain genetic profiles matched particular symptoms. While people with one genetic cluster have odd and disorganized speech – what is sometimes called “word salad” – people with another genetic profile hear voices, according to the study, funded by the National Institutes of Health.
Some genetic clusters gave people higher risks of the disease than others, according to the study, which compared the DNA of 4,200 people with schizophrenia to that of 3,800 healthy people.
One set of genetic changes, for example, confers a 95% chance of developing schizophrenia. In the new study, researchers describe a woman with this genetic profile who developed signs of the disorder by age 5, when she taped over the mouths of her dolls to make them stop whispering to her and calling her name. Another patient – whose genetic profile gave her a 71% risk of schizophrenia – experienced a more typical disease course and began hearing voices at age 17.
The average person has less than a 1% risk of developing schizophrenia, Cloninger says.
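A quick back-of-envelope comparison of the risk figures quoted above, assuming the roughly 1 percent baseline Cloninger cites:

```python
# Relative-risk arithmetic from the article's figures: a ~1% baseline
# lifetime risk versus 95% and 71% for the two genetic profiles described.

BASELINE_RISK = 0.01  # "less than a 1% risk" for the average person

def relative_risk(profile_risk: float, baseline: float = BASELINE_RISK) -> float:
    """How many times the baseline risk a given profile confers."""
    return profile_risk / baseline

for name, risk in [("95% profile", 0.95), ("71% profile", 0.71)]:
    print(f"{name}: {relative_risk(risk):.0f}x the baseline risk")
```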
Psychiatrists such as Stephen Marder describe the study as a step forward. Today, doctors diagnose patients with mental illness with a process akin to a survey, asking about the person’s family history and symptoms, says Marder, a professor at the David Geffen School of Medicine at the University of California-Los Angeles.
“It underlines that the way we diagnose schizophrenia is relatively primitive,” Marder says.
Patients may wait years for an accurate diagnosis, and even longer to find treatments that help them without causing intolerable side effects.
Doctors have long known that schizophrenia can run in families, says Robert Freedman, editor in chief of the American Journal of Psychiatry and chair of psychiatry at the University of Colorado Anschutz Medical Campus. If one identical twin has schizophrenia, for example, there is an 80% chance that the other twin has the disease, as well.
In the past, doctors looked for single genes that might cause schizophrenia, without real success, Freedman says.
The new paper suggests that genes work together like a winning or losing combination of cards in poker, Freedman says. “This shows us that there are some very bad hands out there,” Freedman says.
Doctors don’t yet know why one person with a 70% risk of schizophrenia develops the disease and others don’t, Cloninger says. It’s possible that environment plays a key role, so that a child with a supportive family and good nutrition might escape the diagnosis, while someone who experiences great trauma or deprivation might become very ill.
The study also reflects how much has changed in the way that scientists think about the genetic causes of common diseases, Marder says. He notes that diseases caused by a single gene – such as sickle-cell anemia and cystic fibrosis – affect very few people. Most common diseases, such as cancer, are caused by combinations of genes. Even something as apparently simple as height is caused by combinations of genes, he says.
Doctors have known for years that breast cancer is not one disease, for example, but at least half a dozen diseases driven by different genes, says study co-author Igor Zwir, research associate in psychiatry at Washington University. Doctors today have tests to predict a woman’s risk of some types of breast cancer, and other tests that help them select the most effective drugs.
Those sorts of tests could be extremely helpful for people with schizophrenia, who often try two or three drugs before finding one that’s effective, Cloninger says.
“Most treatment today is trial and error,” Cloninger says.
If doctors could pinpoint which drugs could be the most effective, they might be able to use lower doses, producing fewer of the bothersome side effects that lead many patients to stop taking their medication, Cloninger says.
The hypothesis is so absurd it seems as though it popped right off the pages of a science-fiction novel. Some scientists in Palo Alto are offering a $1 million prize to anyone who can end aging. “Based on the rapid rate of biomedical breakthroughs, we believe the question is not if we can crack the aging code, but when will it happen,” says director of the Palo Alto Longevity Prize Keith Powers.
It’s a fantastical idea: curing the one thing we will all surely die of if nothing else gets us first. I sat down with Aubrey de Grey, the chief science officer of the SENS Research Foundation and co-author of “Ending Aging,” to discuss this very topic a few days back. According to him, ending aging comes with the promise not just to stop the hands of time, but to actually reverse the clock. We could, he says, actually choose the age we’d like to exist at for the rest of our (unnatural?) lives. But it is far from certain we will see this happen in our lifetime, says de Grey. “With sufficient funding we have a 50/50 chance of getting this all working within the next 25 years, but it could also happen in the next 100,” he says.
If you ask Ray Kurzweil, life extension expert, futurist and part-time adviser to Google’s somewhat stealth Calico project, we’re actually tip-toeing upon the cusp of living forever. “We’ll get to a point about 15 years from now where we’re adding more than a year every year to your life expectancy,” he told the New York Times in early 2013. He also wrote in the book he co-authored with Terry Grossman, M.D., that “Immortality is within our grasp.” That’s a bit optimistic to de Grey (the two are good friends), but he’s not surprised this prize is coming out of Silicon Valley. “Things are changing here first. We have a high density of visionaries who like to think high.”
And he believes much of what Kurzweil says is true with the right funding. “Give me large amounts of money to get the research to happen faster,” says de Grey. He then points out that Google’s Calico funds are virtually unlimited.
Whether it’s 15, 25 or even 100 years off, we need to spur a revolution in aging research, according to Joon Yun, one of the sponsors of the prize. “The aim of the prize is to catalyze that revolution,” says Yun. His personal assistant actually came up with the initial idea. She just happens to be an acquaintance of Wendy Schmidt, wife of Google’s Eric Schmidt. But it was the passing of Yun’s 68-year-old father-in-law and some conversations with his friends that got him thinking about how to take on aging as a whole.
The Palo Alto Prize is also working with a number of angel investors, venture capital firms, corporate venture arms, institutions and private foundations within Silicon Valley to create health-related incentive prize competitions in the future. This first $1 million prize comes from Yun’s own pockets.
The initial prize will be divided into two $500,000 awards. Half a million dollars will go to the first team to demonstrate that it can restore heart rate variability (HRV) to that of a young adult. The other half of the $1 million will be awarded to the first team that can extend lifespan by 50 percent. So far 11 teams from all over the world have signed up for the challenge.
Biologists (yet again) sound the alarm in this latest article via Stanford News Service, warning that 16 to 33 percent of vertebrates are now endangered. Larger animals such as elephants, rhinos and polar bears face the highest decline rates, which follows in the pattern of past extinction events. The loss of such creatures would mean devastating trickle-down effects on the human population and other species.
[P]revious experiments conducted in Kenya have isolated patches of land from megafauna such as zebras, giraffes and elephants, and observed how an ecosystem reacts to the removal of its largest species. Rather quickly, these areas become overwhelmed with rodents. Grass and shrubs increase and the rate of soil compaction decreases. Seeds and shelter become more easily available, and the risk of predation drops. Consequently, the number of rodents doubles – and so does the abundance of the disease-carrying ectoparasites that they harbor.
“Where human density is high, you get high rates of defaunation, high incidence of rodents, and thus high levels of pathogens, which increases the risks of disease transmission,” said Dirzo, who is also a senior fellow at the Stanford Woods Institute for the Environment. “Who would have thought that just defaunation would have all these dramatic consequences? But it can be a vicious circle.”
Vertebrates aren’t the only species on a troubling path of defaunation. Over the last 35 years, while the human population has doubled, invertebrate numbers have plummeted by 45 percent.
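For scale, those two 35-year changes imply roughly the following compound annual rates (simple arithmetic on the quoted totals, not figures from the article itself):

```python
# Compound annual rates implied by "doubled" and "down 45 percent"
# over the same 35-year window.

YEARS = 35

def annual_rate(total_factor: float, years: int = YEARS) -> float:
    """Compound annual growth rate implied by a total change factor."""
    return total_factor ** (1 / years) - 1

human = annual_rate(2.0)    # population doubled -> factor 2.0
invert = annual_rate(0.55)  # down 45% -> factor 0.55
print(f"humans: {human:+.2%} per year")          # roughly +2.0%
print(f"invertebrates: {invert:+.2%} per year")  # roughly -1.7%
```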
Past mass extinctions were driven by planetary transformations or catastrophes, but the extinction we face today can be attributed to man-made causes.
This is far from the first time that scientists have cautioned us of the next looming mass extinction. Last month, The Week delivered a similar warning that we could be headed the way of the dinosaurs if we don’t address these crises immediately. In February, NPR concluded the mass extinction is already underway.
In the words of Neil deGrasse Tyson:
We just can’t seem to break our addiction to the kinds of fuel that will bring back a climate last seen by the dinosaurs, a climate that will drown our coastal cities and wreak havoc on the environment and our ability to feed ourselves. All the while, the glorious sun pours immaculate, free energy down upon us, more than we will ever need. Why can’t we summon the ingenuity and courage of the generations that came before us? The dinosaurs never saw that asteroid coming. What’s our excuse?
A US peer-reviewed study conducted last year, published in the scientific journal Entropy, linked Monsanto’s herbicide Roundup – the most popular weed killer in the world – to infertility, cancers and Parkinson’s disease, among other ailments. The authors of the study were Stephanie Seneff, a research scientist at the Massachusetts Institute of Technology, and Anthony Samsel, a retired science consultant from Arthur D. Little, Inc. and a former private environmental government contractor. The main ingredient in Roundup is the “insidious” glyphosate, which the study found to be a deeply harmful chemical:
“Glyphosate enhances the damaging effects of other food borne chemical residues and environmental toxins. Negative impact on the body is insidious and manifests slowly over time as inflammation damages cellular systems throughout the body […] Consequences are most of the diseases and conditions associated with a Western diet, which include gastrointestinal disorders, obesity, diabetes, heart disease, depression, autism, infertility, cancer and Alzheimer’s disease” (Samsel and Seneff, 2013).
The Executive Director of the Institute for Responsible Technology (IRT), Jeffrey M. Smith, discovered a link between gluten disorders and GM foods in a study he conducted last year. Gluten disorders have risen sharply over the past two decades, which correlates with the introduction of GM foods into the food supply. Smith asserts that GM foods – including soy and corn – are possible “environmental triggers” that have contributed to the rapid increase of gluten disorders, which affect close to 20 million Americans today:
“Bt-toxin, glyphosate, and other components of GMOs, are linked to five conditions that may either initiate or exacerbate gluten-related disorders […] If glyphosate activates retinoic acid, and retinoic acid activates gluten sensitivity, eating GMOs soaked with glyphosate may play a role in the onset of gluten-related disorders” (Smith, 2013).
One of the more damning studies on the safety of GM foods was led by biologist Dr. Gilles-Eric Seralini of the University of Caen, and was the first to examine the long-term effects on rats of consuming Monsanto’s GM corn and its Roundup herbicide. The study was conducted over a 2-year period – the average life-span of a rat – as opposed to Monsanto’s usual period of 90 days. The peer-reviewed study found horrifying effects on the rats’ health, with a 200% to 300% increase in large tumours, severe organ damage to the kidney and liver, and premature death in 70% of the female rats. The first tumours only appeared 4 to 7 months into the research, highlighting the need for longer trials.
Initially the study was published in the September issue of Food and Chemical Toxicology, but it was later retracted after the publisher deemed the study “inconclusive”, although there was no suspicion of fraud or intentional deceit. Dr. Seralini strongly protested the decision and believed “economic interests” were behind it, as a former Monsanto employee had joined the journal. Monsanto is infamous for employing swaths of lobbyists to control the political, scientific and administrative decisions relating to the organisation, and this incident was a major whitewash by the GM producer to stop the barrage of negative media reports relating to the toxic effects of their products. The study led by Dr. Seralini was later published in a less renowned journal, Environmental Sciences Europe, which reignited fears over the safety of GM foods.
France has recently implemented a ban on Monsanto produced maize (MON810) – a different variety of the Monsanto GM corn that was discussed in the study above (NK603) – citing environmental concerns as the reason for the ban. France joins a list of countries including Italy and Poland who have imposed bans on GM corn over the past few years. Additionally, Russian MPs have introduced a draft into parliament which could see GM producers punished as terrorists and criminally prosecuted if they are deemed to have harmed the environment or human health. In India, many of the GM seeds sold to Indian farmers under the pretext of greater harvests failed to deliver, which led to an estimated 200,000 Indian farmers committing suicide due to an inability to repay debts.
There is growing evidence to support the theory that bee colonies are collapsing due to GM crops being used in agriculture, with America seeing the largest fall in bee populations in recent years. Resistance to Monsanto and GM foods has been growing in recent years after the launch of the worldwide ‘March Against Monsanto’ in 2012, which organises global protests against the corporation and its toxic products within 52 countries. Monsanto was also voted the ‘most evil corporation’ of 2013 in a poll conducted by the website Natural News, beating the Federal Reserve and British Petroleum to take the top position.
Monsanto Produced and Supplied Toxic Agent Orange
Researching Monsanto’s past reveals a very dark history that has been well documented for years. During the Vietnam War, Monsanto was contracted to produce and supply the US government with a malevolent chemical for military application. Along with other chemical giants of the time, such as Dow Chemical, Monsanto produced the military herbicide Agent Orange, which contained high quantities of the deadly chemical dioxin. Between 1961 and 1971, the US Army sprayed between 50 and 80 million litres of Agent Orange across Vietnamese jungles, forests and strategically advantageous positions. It was deployed to destroy forests and fertile lands that provided cover and food for the opposing troops. The fallout was devastating: Vietnam estimates that 400,000 people died or were maimed due to Agent Orange, 500,000 children were born with birth defects, and up to 2 million people suffer from cancer or other diseases. Millions of US veterans were also exposed, and many have developed similar illnesses. The consequences are still felt and are expected to continue for a century, as cancer, birth defects and other diseases are passed down through generations.
Today, deep connections exist between Monsanto, the ‘Military Industrial Complex’ and the US Government which have to be documented to understand the nature of the corporation. On Monsanto’s Board of Directors sits the former Chairman of the Board and CEO of the giant war contractor Lockheed Martin, Robert J. Stevens, who was also appointed in 2012 by Barack Obama to the Advisory Committee for Trade Policy and Negotiations. As well as epitomising the revolving door that exists between the US Government and private trans-national corporations, Stevens is a member of the parallel government in the US, the Council on Foreign Relations (CFR). A second board member at Monsanto is Gwendolyn S. King, who also sits on the board of Lockheed Martin, where she chairs the Orwellian ‘Ethics and Sustainability Committee’. Individuals who are veterans of the corporate war industry should not be allowed control over any population’s food supply! Additionally, Monsanto board member Dr. George H. Poste is a former member of the Defense Science Board and the Health Board of the U.S. Department of Defense, as well as a Fellow of the Royal Society and a member of the CFR.
Bill Gates made headlines in 2010 when The Bill and Melinda Gates Foundation bought 500,000 Monsanto shares worth a total of $23 million, raising questions as to why his foundation would invest in such a malign corporation. William H. Gates Sr. – Bill’s father – is the former head of Planned Parenthood and a strong advocate of eugenics – the philosophy that there are superior and inferior types of human beings, with the inferior type often sterilised or culled under the pretext of being a plague on society. During his 2010 TED speech, Bill Gates revealed his desire to reduce the population of the planet by “10 or 15 percent” in the coming years through such technologies as “vaccines”:
“The world today has 6.8 billion people. That’s heading up to about 9 billion. Now if we do a really good job on new vaccines, health care, reproductive health services, we could lower that by perhaps 10 or 15 percent” (4:37 into the video).
In 2006, Monsanto acquired a company that has developed – in partnership with the US Department of Agriculture – what is popularly termed terminator seeds, a future major trend in the GM industry. Terminator Seeds or suicide seeds are engineered to become sterile after the first harvest, destroying the ancient practice of saving seeds for future crops. This means farmers are forced to buy new seeds every year from Big-Agri, which produces high debts and a form of servitude for the farmers.
The news, which has been welcomed by scientists and leaders of the agriculture business alike as a move toward the industrial use of marijuana and hemp products, could bring a major shift in marijuana policy in the U.S.A. and, ultimately, the world.
Under present US federal law, it is illegal to possess, use, buy, sell, or cultivate marijuana, since the Controlled Substances Act of 1970 classifies it as a Schedule I drug, although it has been decriminalized to some extent in certain states. Monsanto’s interest in the field has been interpreted by experts as the precursor to “a major shift in marijuana policy in the US,” as it is believed the company would not have invested so much time and energy if it had not had “previous knowledge” of the Federal government’s “openness” towards the future legalization of marijuana.
Lawyer and marijuana law specialist, Edmund Groensch, of the Drug Policy Alliance, admits Monsanto’s involvement in marijuana projects could definitely help the pro-legalization activists.
“Currently, Federal law criminalizes marijuana and hemp derivatives because public opinion is still against it, and legal commercial production in the U.S. is currently handled by a patchwork of small farmers who are not trusted by investors. A major player such as Monsanto could bring confidence within government and towards investors in the market if it were to own a large part of the exploitable lands and commercial products”.
“There is presently no way to control the production of marijuana and the quality of the strains. A GM strain produced by a company with the credentials and prestige of Monsanto would definitely lend a massive hand to pro-legalization activists within certain spheres of government and within the business world” he explains.
Monsanto’s testing on cannabis is only at an experimental stage, and no plan has yet been released by the agriculture business firm as to what purposes the patented strain would be used for. Specialists believe answers should come this fall, as a controversial new bill that could “loosen up laws around medical marijuana” is reportedly scheduled to come before Congress.
Critics fear genetically modified cannabis will mix with other strains and could destroy their genetic diversity – a fear that, experts claim, is dismissed by most studies.
In 30 or 40 years, we’ll have microscopic machines traveling through our bodies, repairing damaged cells and organs, effectively wiping out diseases. The nanotechnology will also be used to back up our memories and personalities.
In an interview with Computerworld, author and futurist Ray Kurzweil said that anyone alive come 2040 or 2050 could be close to immortal. The quickening advance of nanotechnology means that the human condition will shift into more of a collaboration of man and machine, as nanobots flow through human blood streams and eventually even replace biological blood, he added.
That may sound like something out of a sci-fi movie, but Kurzweil, a member of the Inventor’s Hall of Fame and a recipient of the National Medal of Technology, says that research well underway today is leading to a time when a combination of nanotechnology and biotechnology will wipe out cancer, Alzheimer’s disease, obesity and diabetes.
It’ll also be a time when humans will augment their natural cognitive powers and add years to their lives, Kurzweil said.
“It’s radical life extension,” Kurzweil said. “The full realization of nanobots will basically eliminate biological disease and aging. I think we’ll see widespread use in 20 years of [nanotech] devices that perform certain functions for us. In 30 or 40 years, we will overcome disease and aging. The nanobots will scout out organs and cells that need repairs and simply fix them. It will lead to profound extensions of our health and longevity.”
Of course, people will still be struck by lightning or hit by a bus, but much more trauma will be repairable. If nanobots swim in, or even replace, biological blood, then wounds could be healed almost instantly. Limbs could be regrown. Backed up memories and personalities could be accessed after a head trauma.
Today, researchers at MIT already are using nanoparticles to deliver killer genes that battle late-stage cancer. The university reported just last month the nano-based treatment killed ovarian cancer, which is considered to be one of the most deadly cancers, in mice.
And earlier this year, scientists at the University of London reported using nanotechnology to blast cancer cells in mice with “tumor busting” genes, giving new hope to patients with inoperable tumors. So far, tests have shown that the new technique leaves healthy cells undamaged.
With this kind of work going on now, Kurzweil says that by 2024 we’ll be adding a year to our life expectancy with every year that passes. “The sense of time will be running in and not running out,” he added. “Within 15 years, we will reverse this loss of remaining life expectancy. We will be adding more time than is going by.”
And in 35 to 40 years, we basically will be immortal, according to the man who wrote The Age of Spiritual Machines and The Singularity Is Near: When Humans Transcend Biology.
Kurzweil also maintains that adding microscopic machines to our bodies won’t make us any less human than we are today or were 500 years ago.
“The definition of human is that we are the species that goes beyond our limitations and changes who we are,” he said. “If that wasn’t the case, you and I wouldn’t be around because at one point life expectancy was 23. We’ve extended ourselves in many ways. This is an extension of who we are. Ever since we picked up a stick to reach a higher branch, we’ve extended who we are through tools. It’s the nature of human beings to change who we are.”
But that doesn’t mean there aren’t parts of this future that worry him. Nanotechnology advanced enough to travel through our bodies and effect great change on them brings dangers as well as benefits.
The nanobots, he explained, will be self-replicating and engineers will have to harness and contain that replication.
“You could have some self-replicating nanobot that could create copies of itself… and ultimately, within 90 replications, it could devour the body it’s in or all humans if it becomes a non-biological plague,” said Kurzweil. “Technology is not a utopia. It’s a double-edged sword and always has been since we first had fire.”
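Kurzweil’s “90 replications” warning is just exponential arithmetic. A minimal sketch (with an invented simple doubling model and a rough consensus cell-count estimate, neither taken from the interview) shows how quickly unchecked self-replication outruns the number of cells in a human body:

```python
# Illustrative arithmetic only: how doubling replication explodes.
# Assumes each nanobot copies itself once per generation (a simplification).

CELLS_IN_HUMAN_BODY = 3.7e13  # rough consensus estimate

def nanobots_after(generations: int) -> int:
    """Number of nanobots after n doubling generations, starting from one."""
    return 2 ** generations

for n in (30, 60, 90):
    print(f"after {n} replications: {nanobots_after(n):.3e}")

# 2**90 is on the order of 1e27 -- vastly more nanobots than the ~3.7e13
# cells in a human body, which is the sense in which 90 unchecked
# replications could "devour" a host.
```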
When researchers examine violent assault numbers, historically the data has pointed to higher rates of female victimization in developing countries.
But a study by a West Virginia University sociology professor finds that women in developed countries — like the United States — are actually more likely to be physically assaulted than women in developing countries.
In “Individual and Structural Opportunities: A Cross-National Assessment of Females’ Physical and Sexual Assault Victimization,” Professor Rachel E. Stein examines how individuals’ daily routines and elements of country structure create opportunities prime for victimization.
“Research on developing countries will often lump sexual assault, physical assault and robbery together and sometimes studies expand to examine all types of victimization to increase the report record count,” Stein said.
Using data from the International Crime Victimization Survey from 45 countries, Stein reviewed physical and sexual assault victimization statistics at the national level to determine whether the societal structures around victims played a part in the frequency of attacks.
Sexual victimization is defined as incidents where, “people sometimes grab, touch, or assault others for sexual reasons in a really offensive way.” Physical victimization is defined as “being threatened or personally attacked by someone in a way that really frightened you.” The sample was limited to females only.
A variety of factors contribute to victimization risk, ranging from how often a woman goes out for leisure activities (to a bar, to a restaurant, to see friends) to whether she lives alone and her age.
“Because individuals’ routines matter for victimization risk, it is important to educate people so they can become more aware of how their everyday activities might increase their risk for certain types of victimization,” Stein said. “However, individual routines are not the only contributing factor to victimization.
A woman’s surrounding environment also poses a risk, Stein said.
“One example is the unequal distribution of resources, such as formal conflict resolution, in countries with high levels of inequality. If policies are to effectively reduce the risk of victimization, they need to consider not only the lifestyles of individuals, but the context in which these activities take place.”
The paper was featured in the December 2014 issue of the International Criminal Justice Review and was recognized with the 2014 Richard J. Terrill Paper of the Year Award.
This is the second time Stein has been recognized with this award, receiving it in 2010 for her paper “The Utility of Country Structure: A Cross-National Multi-Level Analysis of Property and Violent Victimization.”
FRANCIS CRICK, the Nobel Prize-winning father of modern genetics, was under the influence of LSD when he first deduced the double-helix structure of DNA nearly 50 years ago.
The abrasive and unorthodox Crick and his brilliant American co-researcher James Watson famously celebrated their eureka moment in March 1953 by running from the now legendary Cavendish Laboratory in Cambridge to the nearby Eagle pub, where they announced over pints of bitter that they had discovered the secret of life.
Crick, who died ten days ago, aged 88, later told a fellow scientist that he often used small doses of LSD, then an experimental drug used in psychotherapy, to boost his powers of thought. He said it was LSD, not the Eagle’s warm beer, that helped him to unravel the structure of DNA, the discovery that won him the Nobel Prize.
Despite his Establishment image, Crick was a devotee of novelist Aldous Huxley, whose accounts of his experiments with LSD and another hallucinogen, mescaline, in the short stories The Doors Of Perception and Heaven And Hell became cult texts for the hippies of the Sixties and Seventies. In the late Sixties, Crick was a founder member of Soma, a legalise-cannabis group named after the drug in Huxley’s novel Brave New World. He even put his name to a famous letter to The Times in 1967 calling for a reform in the drugs laws.
It was through his membership of Soma that Crick inadvertently became the inspiration for the biggest LSD manufacturing conspiracy the world has ever seen: the multimillion-pound drug factory in a remote farmhouse in Wales that was smashed by the Operation Julie raids of the late Seventies.
Crick’s involvement with the gang was fleeting but crucial. The revered scientist had been invited to the Cambridge home of freewheeling American writer David Solomon, a friend of hippie LSD guru Timothy Leary, who had come to Britain in 1967 on a quest to discover a method for manufacturing pure THC, the active ingredient of cannabis.
It was Crick’s presence in Solomon’s social circle that attracted a brilliant young biochemist, Richard Kemp, who soon became a convert to the attractions of both cannabis and LSD. Kemp was recruited to the THC project in 1968, but soon afterwards devised the world’s first foolproof method of producing cheap, pure LSD. Solomon and Kemp went into business, manufacturing acid in a succession of rented houses before setting up their laboratory in a cottage on a hillside near Tregaron, Carmarthenshire, in 1973. It is estimated that Kemp manufactured drugs worth £2.5 million, an astonishing amount in the Seventies, before police stormed the building in 1977 and seized enough pure LSD and its constituent chemicals to make two million LSD ‘tabs’.
The arrest and conviction of Solomon, Kemp and a string of co-conspirators dominated the headlines for months. I was covering the case as a reporter at the time and it was then that I met Kemp’s close friend, Garrod Harker, whose home had been raided by police but who had not been arrested. Harker told me that Kemp and his girlfriend Christine Bott, by then in jail, were hippie idealists who were completely uninterested in the money they were making.
They gave away thousands to pet causes such as the Glastonbury pop festival and the drugs charity Release.
‘They have a philosophy,’ Harker told me at the time. ‘They believe industrial society will collapse when the oil runs out and that the answer is to change people’s mindsets using acid. They believe LSD can help people to see that a return to a natural society based on self-sufficiency is the only way to save themselves.
‘Dick Kemp told me he met Francis Crick at Cambridge. Crick had told him that some Cambridge academics used LSD in tiny amounts as a thinking tool, to liberate them from preconceptions and let their genius wander freely to new ideas. Crick told him he had perceived the double-helix shape while on LSD.
‘It was clear that Dick Kemp was highly impressed and probably bowled over by what Crick had told him. He told me that if a man like Crick, who had gone to the heart of human existence, had used LSD, then it was worth using. Crick was certainly Dick Kemp’s inspiration.’ Shortly afterwards I visited Crick at his home, Golden Helix, in Cambridge.
He listened with rapt, amused attention to what I told him about the role of LSD in his Nobel Prize-winning discovery. He gave no intimation of surprise. When I had finished, he said: ‘Print a word of it and I’ll sue.’
Cancer is now the leading cause of death in China. Chinese Ministry of Health data implicate cancer in close to a quarter of all deaths countrywide. As is common with many countries as they industrialize, the usual plagues of poverty — infectious diseases and high infant mortality — have given way to diseases more often associated with affluence, such as heart disease, stroke, and cancer.
While this might be expected in China’s richer cities, where bicycles are fast being traded in for cars and meat consumption is climbing, it also holds true in rural areas. In fact, reports from the countryside reveal a dangerous epidemic of “cancer villages” linked to pollution from some of the very industries propelling China’s explosive economy. By pursuing economic growth above all else, China is sacrificing the health of its people, ultimately risking future prosperity.
Lung cancer is the most common cancer in China. Deaths from this typically fatal disease have shot up nearly fivefold since the 1970s. In China’s rapidly growing cities, like Shanghai and Beijing, where particulates in the air are often four times higher than in New York City, nearly 30 percent of cancer deaths are from lung cancer.
Dirty air is associated with not only a number of cancers, but also heart disease, stroke, and respiratory disease, which together account for over 80 percent of deaths countrywide. According to the Chinese Centre for Disease Control and Prevention, the burning of coal is responsible for 70 percent of the emissions of soot that clouds out the sun in so much of China; 85 percent of sulfur dioxide, which causes acid rain and smog; and 67 percent of nitrogen oxide, a precursor to harmful ground level ozone. Coal burning is also a major emitter of carcinogens and mercury, a potent neurotoxin. Coal ash, which contains radioactive material and heavy metals, including chromium, arsenic, lead, cadmium, and mercury, is China’s number one source of solid industrial waste. The toxic ash that is not otherwise used in infrastructure or manufacturing is stored in impoundments, where it can be caught by air currents or leach contaminants into the groundwater.
Coal pollution, combined with emissions from China’s burgeoning industries and the exhaust of a fast-growing national vehicle fleet, is plenty enough to impair breathing and jeopardize health. But that does not stop over half the men in China from smoking tobacco. Smoking is far less common among women; less than 3 percent light up. Still, about 1 in 10 of the estimated 1 million Chinese who die from smoking-related diseases each year are nonsmokers exposed to carcinogenic secondhand smoke.
Quantum systems can have several temperatures at once.
Temperature is a very useful physical quantity. It allows us to make a simple statistical statement about the energy of particles swirling around on complicated paths without having to know the specific details of the system. Scientists from the Vienna University of Technology, together with colleagues from Heidelberg University, have now investigated how quantum particles reach such a state where statistical statements are possible. The result is surprising: a cloud of atoms can have several temperatures at once. This is an important step towards a deeper understanding of large quantum systems and their exotic properties.
The air around us consists of countless molecules, moving around randomly. It would be utterly impossible to track them all and to describe all their trajectories. But for many purposes, this is not necessary. Properties of the gas can be found which describe the collective behaviour of all the molecules, such as the air pressure or the temperature, which results from the particles’ energy. On a hot summer’s day, the molecules move at about 430 meters per second; in winter, it is a bit less.
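The quoted speed can be checked with textbook kinetic theory. The formulas below are standard results, not taken from the article; note that different averages (most probable versus mean speed) give values roughly between 400 and 470 m/s for air at everyday temperatures:

```python
import math

R = 8.314          # gas constant, J/(mol*K)
M_AIR = 0.02897    # molar mass of dry air, kg/mol

def most_probable_speed(T: float) -> float:
    """Most probable molecular speed, v_p = sqrt(2RT/M)."""
    return math.sqrt(2 * R * T / M_AIR)

def mean_speed(T: float) -> float:
    """Mean molecular speed, v_mean = sqrt(8RT/(pi*M))."""
    return math.sqrt(8 * R * T / (math.pi * M_AIR))

for label, T in (("hot summer day (30 C)", 303.15), ("winter day (0 C)", 273.15)):
    print(f"{label}: v_p = {most_probable_speed(T):.0f} m/s, "
          f"v_mean = {mean_speed(T):.0f} m/s")
```

Both averages come out lower in winter than in summer, matching the article’s point that molecular speed tracks temperature.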
This statistical view (which was developed by the Viennese physicist Ludwig Boltzmann) has proved to be extremely successful and describes many different physical systems, from pots of boiling water to phase transitions in liquid crystals in LCD-displays. However, in spite of huge efforts, open questions have remained, especially with regard to quantum systems. How the well-known laws of statistical physics emerge from many small quantum parts of a system remains one of the big open questions in physics.
Hot and Cold at the Same Time
Scientists at the Vienna University of Technology have now succeeded in studying the behaviour of a quantum physical multi-particle system in order to understand the emergence of statistical properties. The team of Professor Jörg Schmiedmayer used a special kind of microchip to catch a cloud of several thousand atoms and cool them to just above absolute zero (-273.15°C), where their quantum properties become visible.
The experiment showed remarkable results: When the external conditions on the chip were changed abruptly, the quantum gas could take on different temperatures at once. It can be hot and cold at the same time. The number of temperatures depends on how exactly the scientists manipulate the gas. “With our microchip we can control the complex quantum systems very well and measure their behaviour,” says Tim Langen, leading author of the paper published in “Science.” There had already been theoretical calculations predicting this effect, but it has never been possible to observe it and to produce it in a controlled environment.
The experiment helps scientists to understand the fundamental laws of quantum physics and their relationship with the statistical laws of thermodynamics. This is relevant for many different quantum systems, maybe even for technological applications. Finally, the results shed some light on the way our classical macroscopic world emerges from the strange world of tiny quantum objects.
Being too thin in middle age might be bad for brain health later in life, a new study suggests.
Researchers found that people who were underweight in their 40s, 50s and 60s were 34 percent more likely to be diagnosed with dementia up to 15 years later, compared with similarly aged men and women who were a healthy weight.
Exactly why being underweight (defined as having a body mass index, or BMI, of less than 20) in middle age is linked with dementia is unclear and requires further investigation, said study co-author Dr. Nawab Qizilbash, a clinical epidemiologist and the head of OXON Epidemiology, a research organization in London. But he speculates that factors such as diet, exercise, frailty, weight changes and deficiencies in vitamins D and E might play a role.
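For readers unfamiliar with the measure, BMI is simply weight in kilograms divided by height in metres squared. A small illustrative sketch using the cut-offs mentioned in the study (underweight below 20, “heaviest” at 40 and above); the intermediate labels are standard bands added here purely for illustration:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def study_group(b: float) -> str:
    """Bucket a BMI using the thresholds mentioned in the study
    (underweight < 20; 'heaviest' >= 40); the middle labels are
    conventional bands, added only for illustration."""
    if b < 20:
        return "underweight"
    if b < 25:
        return "healthy"
    if b < 30:
        return "overweight"
    if b < 40:
        return "obese"
    return "heaviest (BMI 40+)"

print(study_group(bmi(55, 1.75)))   # 55 kg at 1.75 m -> BMI ~18, underweight
```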
The study, published online April 10 in the journal The Lancet Diabetes & Endocrinology, analyzed data from nearly 2 million people ages 40 and older in the United Kingdom.
None of the people had dementia when the study began, but nearly 46,000 were diagnosed with it during the follow-up period of up to 20 years.
In a surprising finding that contradicts some previous studies, the researchers found that being overweight or obese in middle age actually appeared to protect brain health.
In fact, people who were the heaviest at midlife, with a BMI of 40 or higher, had a 29 percent lower risk of developing dementia than people whose weight fell into a healthy range, according to the study.
“Contrary to the prevailing — but not unanimous — view, people who are overweight or obese in middle age appear not to be at higher risk of dementia in old age,” Qizilbash said.
He said these findings were unexpected, and although the research team performed many different analyses to see if they could find an explanation for the results, so far they have not.
Qizilbash said some next steps in this research include understanding the influence of weight changes, such as recent weight loss in a person who may not have previously been underweight, on the risk of dementia.
He also wants to look into whether being overweight or obese has an overall positive effect on dementia, because someone who weighs more may not live long enough to reap its possible brain-protective effects.
More research is also needed to determine how weight influences the risk of different types of dementia, such as Alzheimer’s disease, vascular disease and Lewy body disease, Qizilbash said.
Eighteen months ago, Shots first told readers about tumor paint, an experimental substance derived from scorpion venom. Inject tumor paint into a patient’s vein, and it will actually cross the blood-brain barrier and find its way to a brain tumor. Shine near-infrared light on a tumor coated with tumor paint, and the tumor will glow.
The main architect of the tumor paint idea is a pediatric oncologist named Dr. Jim Olson. As a physician who treats kids with brain cancer, Olson knows that removing a tumor is tricky.
“The surgeons right now use their eyes and their fingers and their thumbs to distinguish cancer from normal brain,” says Olson. But poking around in someone’s brain with only those tools, surgeons will inevitably sometimes miss bits of tumor or, just as bad, damage healthy brain cells.
So Olson and his colleagues at the Fred Hutchinson Cancer Center in Seattle came up with tumor paint. They handed off commercial development of the compound to Blaze Bioscience.
After initial studies in dogs showed promise, the company won approval to try tumor paint on human subjects. Those trials are taking place at the Cedars-Sinai Medical Center in Los Angeles.
Dr. Chirag Patil is one of the surgeons conducting those trials. He says it’s remarkable that you can inject tumor paint into a vein in a patient’s arm, have it go to the brain and attach to a tumor, and only a tumor. “That’s a concept that neurosurgeons have probably been dreaming about for 50 years,” he says.
Patil says they’ve now used tumor paint on about a half-dozen patients with brain tumors. They use a special camera to see if the tumor is glowing.
“The first case we did was a deep tumor,” says Patil. “So with the camera, we couldn’t really shine it into this deep small cavity. But when we took that first piece out and we put it on the table. And the question was, ‘Does it glow?’ And when we saw that it glows, it was just one of those moments …’Wow, this works.’ ”
In this first study of tumor paint in humans, the goal is just to prove that it’s reaching the tumor. Future studies will see if it actually helps surgeons remove tumors and, even more important, if it results in a better outcome for the patient.
That won’t be quick or easy. Just getting to this point has been a long slog, and there are bound to be hurdles ahead.
And even if tumor paint does exactly what it’s designed to do, Dr. Keith Black, who directs neurosurgery at Cedars-Sinai, says it probably isn’t the long-term solution to brain cancer. “Because surgery is still a very crude technique,” he says.
Even in the best of circumstances, Black says, surgery is traumatic for the patients, and tracking down every last cell of a tumor is probably impossible. Plus, it’s inevitable that some healthy brain tissue will be damaged in removing the tumor.
“Ultimately, we want to eliminate the need to do surgery,” says Black. A start in that direction will be to use a compound like tumor paint to deliver not just a dye, but an anti-cancer drug directly to a tumor. That’s a goal several research groups, including Jim Olson’s, are working on.
FORGET Skynet. Hypothetical world-ending artificial intelligence makes headlines, but the hype ignores what’s happening right under our noses. Cheap, fast AI is already taking our jobs; we just haven’t noticed.
This isn’t dumb automation that can rapidly repeat identical tasks. It’s software that can learn about and adapt to its environment, allowing it to do work that used to be the exclusive domain of humans, from customer services to answering legal queries.
These systems don’t threaten to enslave humanity, but they do pose a challenge: if software that does the work of humans exists, what work will we do?
In the last three years, UK telecoms firm O2 has replaced 150 workers with a single piece of software. A large portion of O2’s customer service is now automatic, says Wayne Butterfield, who works on improving O2’s operations. “Sim swaps, porting mobile numbers, migrating from prepaid onto a contract, unlocking a phone from O2” – all are now automated, he says.
Humans used to manually move data between the relevant systems to complete these tasks, copying a phone number from one database to another, for instance. The user still has to call up and speak to a human, but now an AI does the actual work.
The AI is trained by watching and learning while humans do simple, repetitive database tasks. With enough training data, the AIs can then go to work on their own. “They navigate a virtual environment,” says Jason Kingdon, chairman of Blue Prism, the start-up that developed O2’s artificial workers. “They mimic a human. They do exactly what a human does. If you watch one of these things working it looks a bit mad. You see it typing. Screens pop up, you see it cutting and pasting.”
One of the world’s largest banks, Barclays, has also dipped a toe into this specialised AI. It used Blue Prism to deal with the torrent of demands that poured in from its customers after UK regulators demanded that it pay back billions of pounds of mis-sold insurance. It would have been expensive to rely entirely on human labour to field the sudden flood of requests. Having software agents that could take some of the simpler claims meant Barclays could employ fewer people.
The back office work that Blue Prism automates is undeniably dull, but it’s not the limit for AI’s foray into office space. In January, Canadian start-up ROSS started using IBM’s Watson supercomputer to automate a whole chunk of the legal research normally carried out by entry-level paralegals.
Legal research tools already exist, but they don’t offer much more than keyword searches. This returns a list of documents that may or may not be relevant. Combing through these for the argument a lawyer needs to make a case can take days.
ROSS returns precise answers to specific legal questions, along with a citation, just like a human researcher would. It also includes its level of confidence in its answer. For now, it is focused on questions about Canadian law, but CEO Andrew Arruda says he plans for ROSS to digest the law around the world.
Since its artificial intelligence is focused narrowly on the law, ROSS’s answers can be a little dry. Asked whether it’s OK for 20 per cent of the directors present at a directors’ meeting to be Canadian, it responds that no, that’s not enough. Under Canadian law, no directors’ meeting may go ahead with less than 25 per cent of the directors present being Canadian. ROSS’s source? The Canada Business Corporations Act, which it scanned and understood in an instant to find the answer.
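The residency rule ROSS cites reduces to a one-line arithmetic check. The sketch below is an illustrative simplification of the rule as described in the article, not legal advice; the actual statute has exceptions this ignores:

```python
def meeting_can_proceed(directors_present: int, canadian_residents_present: int) -> bool:
    """Simplified check of the CBCA residency rule described above:
    at least 25% of the directors present must be resident Canadians.
    (The real statute has exceptions this sketch ignores.)"""
    if directors_present <= 0:
        return False
    return canadian_residents_present / directors_present >= 0.25

# The scenario put to ROSS: only 20% Canadian directors present -> not enough.
print(meeting_can_proceed(10, 2))  # False
print(meeting_can_proceed(10, 3))  # True
```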
By eliminating legal drudge work, Arruda says that ROSS’s automation will open up the market for lawyers, reducing the time they need to spend on each case. People who need a lawyer but cannot afford one would suddenly find legal help within their means.
ROSS’s searches are faster and broader than any human’s. Arruda says this means it doesn’t just get answers that a human would have had difficulty finding, it can search in places no human would have thought to look. “Lawyers can start crafting very insightful arguments that wouldn’t have been achievable before,” he says. Eventually, ROSS may become so good at answering specific kinds of legal question that it could handle simple cases on its own.
Where Blue Prism learns and adapts to the various software interfaces designed for humans working within large corporations, ROSS learns and adapts to the legal language that human lawyers use in courts and firms. It repurposes the natural language-processing abilities of IBM’s Watson supercomputer to do this, scanning and analysing 10,000 pages of text every second before pulling out its best answers, ranked by confidence.
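The “best answers, ranked by confidence” step can be sketched generically. The candidate answers and scores below are invented for illustration; this is not the actual Watson or ROSS API:

```python
# Generic sketch of confidence-ranked retrieval, in the spirit of the
# pipeline described above. Answers and scores are invented.

def rank_answers(candidates):
    """Sort (answer, confidence) pairs so the most confident comes first."""
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

candidates = [
    ("No: at least 25% of directors present must be resident Canadians.", 0.92),
    ("Possibly, if the articles of incorporation say otherwise.", 0.41),
    ("Yes, a simple majority suffices.", 0.07),
]

best_answer, confidence = rank_answers(candidates)[0]
print(f"{best_answer} (confidence {confidence:.0%})")
```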
Lawyers are giving it feedback too, says Jimoh Ovbiagele, ROSS’s chief technology officer. “ROSS is learning through experience.”
Massachusetts-based Nuance Communications is building AIs that solve some of the same language problems as ROSS, but in a different part of the economy: medicine. In the US, after doctors and nurses type up case notes, another person uses those notes to try to match the description with one of thousands of billing codes for insurance purposes.
Nuance’s language-focused AIs can now understand the typed notes, and figure out which billing code is a match. The system is already in use in a handful of US hospitals.
Kingdon doesn’t shy away from the implications of his work: “This is aimed at being a replacement for a human, an automated person who knows how to do a task in much the same way that a colleague would.”
But what will the world be like as we increasingly find ourselves working alongside AIs? David Autor, an economist at the Massachusetts Institute of Technology, says automation has tended to reduce drudgery in the past, and allowed people to do more interesting work.
“Old assembly line jobs were things like screwing caps on bottles,” Autor says. “A lot of that stuff has been eliminated and that’s good. Our working lives are safer and more interesting than they used to be.”
The potential problem with new kinds of automation like Blue Prism and ROSS is that they are starting to perform the kinds of jobs that can be the first rung on the corporate ladder, which could deepen inequality.
Autor remains optimistic about humanity’s role in the future it is creating, but cautions that there’s nothing to stop us engineering our own obsolescence, or that of a large swathe of workers that further splits rich from poor. “We’ve not seen widespread technological unemployment, but this time could be different,” he says. “There’s nothing that says it can’t happen.”
Kingdon says the changes are just beginning. “How far and fast? My prediction would be that in the next few years everyone will be familiar with this. It will be in every single office.”
Once it reaches that scale, narrow, specialised AIs may start to offer something more, as their computational roots allow them to call upon more knowledge than human intelligence could.
“Right now ROSS has a year of experience,” says Ovbiagele. “If 10,000 lawyers use ROSS for a year, that’s 10,000 years of experience.”
A 1,000-year-old treatment for eye infections could hold the key to killing antibiotic-resistant superbugs, experts have said.
Scientists recreated a 9th Century Anglo-Saxon remedy using onion, garlic and part of a cow’s stomach.
They were “astonished” to find it almost completely wiped out methicillin-resistant staphylococcus aureus, otherwise known as MRSA.
Their findings will be presented at a national microbiology conference.
The remedy was found in Bald’s Leechbook – an old English manuscript containing instructions on various treatments held in the British Library.
Anglo-Saxon expert Dr Christina Lee, from the University of Nottingham, translated the recipe for an “eye salve”, which includes garlic, onion or leeks, wine and cow bile.
Experts from the university’s microbiology team recreated the remedy and then tested it on large cultures of MRSA.
The leechbook is one of the earliest examples of what might loosely be called a medical textbook.
It seems Anglo-Saxon physicians may actually have practised something pretty close to the modern scientific method, with its emphasis on observation and experimentation.
Bald’s Leechbook could hold some important lessons for our modern day battle with anti-microbial resistance.
In each case, they tested the individual ingredients against the bacteria, as well as the remedy and a control solution.
They found the remedy killed up to 90% of MRSA bacteria and believe it is the effect of the recipe rather than one single ingredient.
Dr Freya Harrison said the team thought the eye salve might show a “small amount of antibiotic activity”.
“But we were absolutely blown away by just how effective the combination of ingredients was,” she said.
Dr Lee said there are many similar medieval books with treatments for what appear to be bacterial infections.
She said this could suggest people were carrying out detailed scientific studies centuries before bacteria were discovered.
The team’s findings will be presented at the Annual Conference of the Society for General Microbiology, in Birmingham.
Equal amounts of garlic and another allium (onion or leek), finely chopped and crushed in a mortar for two minutes.
Add 25ml (0.87 fl oz) of English wine – taken from a historic vineyard near Glastonbury.
Dissolve bovine salts in distilled water, add and then keep chilled for nine days at 4C.
In the first such analysis ever conducted, Swiss economic researchers have conducted a global network analysis of the most powerful transnational corporations (TNCs). Their results have revealed a core of 737 firms with control of 80% of this network, and a “super entity” comprised of 147 corporations that have a controlling interest in 40% of the network’s TNCs.
When we hear conspiracy theorists talk about this or that powerful group (or alliance of groups) “pulling strings” behind the scenes, we tend to dismiss or minimize such claims, even though, deep down, we may suspect there is some degree of truth to them, however distorted by the theorists’ slightly paranoid perception of the world. But perhaps our tendency to dismiss such claims as exaggerations (at best) comes from our inability to get even a slight grip on the complexity of global corporate ownership; it is all too vast and complicated to get any clear sense of the reality.
But now we have the results of a global network analysis (Vitali, Glattfelder, Battiston) that, for the first time, lays bare the “architecture” of the global ownership network. In the paper abstract, the authors state:
“We present the first investigation of the architecture of the international ownership network, along with the computation of the control held by each global player. We find that transnational corporations form a giant bow-tie structure and that a large portion of control flows to a small tightly-knit core of financial institutions. This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers.”
Data from previous studies neither fully supported nor completely disproved the idea that a small handful of powerful corporations dominate much or most of the world’s commerce. The researchers acknowledge previous attempts to analyze such networks, but note that these were limited in scope to national networks which “neglected the structure of control at a global level.”
What was needed, assert the researchers, was a complex network analysis.
“A quantitative investigation is not a trivial task because firms may exert control over other firms via a web of direct and indirect ownership relations which extends over many countries. Therefore, a complex network analysis is needed in order to uncover the structure of control and its implications.”
To start their analysis, the researchers began with a list of 43,060 TNCs taken from a sample of 30 million “economic actors” contained in the Orbis 2007 database [see end note]. TNCs were identified according to the Organisation for Economic Co-operation and Development (OECD) definition of a transnational corporation [see end note]. They then applied a recursive search algorithm that singled out the “network of all the ownership pathways originating from and pointing to these TNCs.”
The resulting TNC network includes 600,508 nodes and 1,006,987 ownership ties.
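The expansion step can be pictured with a short sketch: start from a handful of seed firms and follow ownership ties outward in both directions until no new firms appear. The firm names and edges below are invented placeholders, not data from the Orbis database, and the plain breadth-first search stands in for the paper’s recursive algorithm.

```python
from collections import deque

# Toy ownership edges: (owner, owned) pairs. These firm names are
# hypothetical placeholders, not entities from the study.
edges = [
    ("FundA", "TNC1"), ("TNC1", "SubA"), ("TNC2", "SubA"),
    ("FundB", "TNC2"), ("SubA", "SubB"), ("Retail", "FundA"),
]

def ownership_network(seeds, edges):
    """Expand outward from seed TNCs along ownership ties in both
    directions ("originating from and pointing to" the seeds)."""
    owned, owners = {}, {}
    for a, b in edges:
        owned.setdefault(a, set()).add(b)
        owners.setdefault(b, set()).add(a)
    seen, queue = set(seeds), deque(seeds)
    while queue:
        node = queue.popleft()
        for nxt in owned.get(node, set()) | owners.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(ownership_network({"TNC1", "TNC2"}, edges)))
```

On this toy graph, both seed firms pull in their owners, subsidiaries, and the owners of those owners, so the whole six-edge web ends up in one network.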
In terms of the connectivity of the network, the researchers found that it consists of many small connected components, but the largest one (encompassing 3/4 of all nodes) “contains all the top TNCs by economic value, accounting for 94.2% of the total TNC operating revenue.”
Two generalized characteristics were identified:
1] A strongly connected component (SCC), that is, a set of firms in which every member owns directly and/or indirectly shares in every other member. The emergence of such a structure can be explained as a means of preventing take-overs, reducing transaction costs, risk sharing and increasing trust between “groups of interest.”
The largest connected component contains only one dominant strongly connected component (comprised of 1,347 nodes). This network, like the WWW, has a bow-tie structure. What’s more, they found that this component, or core, is also very densely connected; on average, members of this core have ties to 20 other members. “Top actors” occupy the center of the bow-tie. In fact, a randomly chosen TNC in the core has about a 50% chance of also being among the top holders, compared to, for example, 6% for the “in” section. [emphasis added]
“As a result, about 3/4 of the ownership of firms in the core remains in the hands of firms of the core itself. In other words, this is a tightly-knit group of corporations that cumulatively hold the majority share of each other.”
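A strongly connected component of this kind can be found mechanically. The sketch below runs Kosaraju’s classic two-pass algorithm on a toy ownership graph in which three hypothetical “Core” firms hold shares in one another; none of the names come from the study.

```python
# Toy directed graph: an edge A -> B means firm A holds shares in firm B.
# The mutual shareholdings among CoreA/CoreB/CoreC form one SCC.
graph = {
    "CoreA": ["CoreB"], "CoreB": ["CoreC"], "CoreC": ["CoreA", "Sub1"],
    "FundX": ["CoreA"], "Sub1": [],
}

def strongly_connected_components(graph):
    """Kosaraju's algorithm: DFS finish order, then DFS on the
    reversed graph in reverse finish order."""
    # Pass 1: record finish order with an iterative DFS.
    order, seen = [], set()
    for start in graph:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(graph[nxt])))
                    break
            else:
                order.append(node)
                stack.pop()
    # Pass 2: explore the reversed graph in reverse finish order.
    rev = {n: [] for n in graph}
    for a, succs in graph.items():
        for b in succs:
            rev[b].append(a)
    seen, sccs = set(), []
    for start in reversed(order):
        if start in seen:
            continue
        seen.add(start)
        comp, stack = [], [start]
        while stack:
            node = stack.pop()
            comp.append(node)
            for nxt in rev[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        sccs.append(sorted(comp))
    return sccs

print(strongly_connected_components(graph))
```

The three mutually owning firms come back as a single component, while the outside fund and the subsidiary each sit in components of their own, mirroring the core-versus-periphery picture described above.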
In examining the details of this core, the analysis also showed that only 737 top holders accumulate 80% of the control over the value of all TNCs (in the analyzed network). Further,
“…despite its small size, the core holds collectively a large fraction of the total network control. In detail, nearly 4/10 of the control over the economic value of TNCs in the world is held, via a complicated web of ownership relations, by a group of 147 TNCs in the core, which has almost full control over itself. The top holders within the core can thus be thought of as an economic “super-entity” in the global network of corporations.” [emphasis added]
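The “few holders control most of the network” claim is, mechanically, just a cumulative sum over a sorted list: rank holders by control value and count how many are needed to reach a given fraction of the total. A minimal sketch with invented control values:

```python
def holders_for_share(control, target=0.8):
    """Count how many of the largest holders are needed to account
    for `target` of the total control. Toy numbers, not study data."""
    values = sorted(control.values(), reverse=True)
    total = sum(values)
    running, count = 0.0, 0
    for v in values:
        running += v
        count += 1
        if running >= target * total:
            break
    return count

# Hypothetical control values per holder (arbitrary units).
control = {"H1": 40, "H2": 25, "H3": 15, "H4": 10, "H5": 5, "H6": 5}
print(holders_for_share(control))
```

Here the top three of six holders already cover 80% of the total, a miniature version of the 737-out-of-43,060 concentration the study reports.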
Concerning the implications of this super entity, the researchers asked two fundamental questions: First, what are the implications for market competition, and, second, what are the implications for economic stability?
Regarding the first question, the authors assert that no matter the origin of the SCC, due to its high degree of TNC network control, “it weakens market competition”.
It is clear just from the history of anti-trust law in this country (the U.S.) that concentrated ownership stifles free-market competition and innovation, reduces overall employment, and leads to excessive pricing.
In regards to the second question, the researchers note that “the existence of such a core in the global market was never documented before and thus, so far, no scientific study demonstrates or excludes that this international ‘super-entity’ has ever acted as a bloc.”
However, there is historical data — such as within the airline, auto and steel industries — supporting this possibility.
“…top holders are at least in the position to exert considerable control, either formally (e.g., voting in shareholder and board meetings) or via informal negotiations.”
Additionally, recent studies (Stiglitz J.E., 2010; Battiston S. et al., 2009) have shown that densely connected financial networks are highly susceptible to systemic risk. Although such networks may seem robust in good economic times, in times of crisis member firms tend to enter ‘distress mode’ simultaneously. This was seen in the 2008 (“near”) financial collapse (note: 3/4 of the network core in this analysis are financial intermediaries).
Calling their findings “remarkable”, they suggest that because “international data sets as well as methods to handle large networks became available only very recently, [this] may explain how this finding could go unnoticed for so long.”
While the researchers acknowledge that verifying whether the implications of their findings “hold true for the global economy” is beyond the scope of their current research, they assert that their unprecedented attempt to uncover the structure of corporate control is “a necessary precondition for future investigations.”