Intelligent Evolution?

Thought-Provoking Challenges to the Powers That Be From the Atlantis Rising News Wires

For his celebrated theories of “morphic resonance” and “formative causation,” Cambridge-trained biologist Rupert Sheldrake has been roundly criticized and, some would say, crucified by the scientific establishment. Sheldrake’s well-evidenced suggestion that many of the so-called “laws” of nature would be more properly described as “habits” evolving over time—i.e., ‘learning’—in response to changing needs, has been ridiculed by the academic powers that be. Some have even proposed that Sheldrake’s books be burned. Now, though, new research from Britain’s University of Southampton suggests that evolution, itself, may be intelligent and that it can learn from experience.

Late in 2015, in an opinion paper published in Trends in Ecology and Evolution, Richard Watson, a professor of electronics and computer science, asked: “Is evolution more intelligent than we thought?”

According to the press release, Watson claimed that new research shows evolution is able to learn from previous experience, which could provide a better explanation of how natural selection produces such apparently intelligent designs.

By unifying the ‘theory of evolution,’ which, its adherents believe, shows how random variation and selection are enough to account for incremental adaptation, with ‘learning theory,’ which believers say explains how incremental adaptation is sufficient for a system to exhibit intelligent behavior, Watson seeks to demonstrate that it is possible for evolution to exhibit some of the same intelligent behaviors as learning systems (including neural networks).

In their paper, Watson and his colleague Eörs Szathmáry, of the Parmenides Foundation in Munich, explain how formal analogies can be used to transfer specific models and results between the two theories to solve several important evolutionary puzzles.

“Darwin’s theory of evolution,” says Watson, “describes the driving process, but learning theory is not just a different way of describing what Darwin already told us. It expands what we think evolution is capable of. It shows that natural selection is sufficient to produce significant features of intelligent problem-solving.”

“For example,” he continues, “a key feature of intelligence is an ability to anticipate behaviors that will lead to future benefits. Conventionally, evolution, being dependent on random variation, has been considered ‘blind’ or at least ‘myopic’—unable to exhibit such anticipation. But showing that evolving systems can learn from past experience means that evolution has the potential to anticipate what is needed to adapt to future environments in the same way that learning systems do.

“If evolution can learn from experience, and thus improve its own ability to evolve over time, this can demystify the awesomeness of the designs that evolution produces. Natural selection can accumulate knowledge that enables it to evolve smarter. That’s exciting because it explains why biological design appears to be so intelligent.” (http://www.ecs.soton.ac.uk/news/4826)
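To see what such a formal analogy can look like, consider a minimal sketch in Python. It is not Watson and Szathmáry’s model; the five genotypes and their fitness values below are invented for illustration. The sketch shows that updating genotype frequencies in proportion to fitness, generation after generation, is arithmetically the same operation as the ‘multiplicative weights’ update used in simple machine-learning algorithms, so in this narrow, formal sense a population under selection behaves like a learner.

# A toy sketch, not taken from Watson and Szathmary's paper: selection on
# genotype frequencies and a simple "multiplicative weights" learning update
# turn out to be the same arithmetic when the learner's payoff is log-fitness.

import numpy as np

rng = np.random.default_rng(0)

n_types = 5
fitness = rng.uniform(0.5, 1.5, n_types)   # invented per-genotype fitness values
freqs = np.full(n_types, 1.0 / n_types)    # population starts with equal frequencies
weights = np.full(n_types, 1.0 / n_types)  # learner starts with equal weights
eta = 1.0                                  # learning rate chosen to match selection exactly

for generation in range(25):
    # Natural selection: each genotype grows in proportion to its fitness.
    freqs = freqs * fitness
    freqs = freqs / freqs.sum()

    # Multiplicative-weights learning: reward each option by its log-fitness payoff.
    weights = weights * np.exp(eta * np.log(fitness))
    weights = weights / weights.sum()

print(np.allclose(freqs, weights))  # True: the two trajectories coincide
print(freqs.round(3))               # both concentrate on the fittest genotype

The point of the toy example is only the correspondence itself: the same numbers come out of the ‘evolution’ loop and the ‘learning’ loop. Bridges of that formal kind are what let results from learning theory be carried over to evolutionary questions.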

There are some who would say that Professor Watson is making an argument for God, or an intelligent designer, under another name. Lawyers might call that a distinction without a difference.

 

Heat—Global and Otherwise

In another very heated debate, a new study by Michigan State University environmental scientists suggests that climate-change foes appear to be winning the war of words. The research, funded by the National Science Foundation, finds that climate-change advocates are largely failing to influence public opinion. Climate-change foes, on the other hand, are successfully changing people’s minds, Republicans and Democrats alike, with messages denying the existence of global warming.

“This is the first experiment of its kind to examine the influence of the denial messages on American adults,” said Aaron M. McCright, a sociologist and lead investigator on the study. “Until now, most people just assumed climate change deniers were having an influence on public opinion. Our experiment confirms this.”

For the global-warmist community, this is not a happy result. They complain that arguments from skeptics of their agenda, whom they label “climate-change deniers” (an echo of the post-World War II term “Holocaust deniers”), have forced warmists to be more cautious in their statements. That caution, they fear, creates what they insist is the ‘false’ impression that there might be another side to the argument, a possibility they categorically deny. (http://msutoday.msu.edu/news/2015/climate-change-foes-winning-public-opinion-war/)

“Climate change denial” in public discourse, says a study from the University of Bristol in the UK, may encourage climate scientists to overemphasize scientific uncertainty; it is also affecting how they themselves speak, and perhaps even think, about their own research, making them, in other words, less dogmatic.

Professor Stephan Lewandowsky, from Bristol’s School of Experimental Psychology and the Cabot Institute, and colleagues from Harvard University and three institutions in Australia, show how the language used by people who oppose the “scientific consensus” on climate change has “seeped into scientists’ discussion of the alleged recent ‘hiatus’ or ‘pause’ in global warming, and has thereby unwittingly reinforced a misleading message.”

According to the paper, the idea that ‘global warming has stopped’ has been promoted in contrarian blogs and media articles for many years and ultimately the idea of a ‘pause’ or ‘hiatus’ has become ensconced in the scientific literature, including in the latest assessment report of the Intergovernmental Panel on Climate Change (IPCC). (http://www.bristol.ac.uk/news/2015/may/climate-change-denial-affects-scientists.html)

One reason, perhaps, for the loss of credibility bemoaned by warmists is that the so-called Climategate scandal of 2009 has not been forgotten. Late in November of that year, a hacker broke through the computer security system of the Climatic Research Unit (CRU) at the University of East Anglia in Norwich, England, and copied over 3,000 pages of e-mail and computer code, which were soon disseminated across the Internet. Revealed in the documents was what climate-warming skeptics soon trumpeted as a shocking picture of scientific fraud and deceit of enormous proportions. And while global-warming alarmists attempted to downplay the incident, it was soon dubbed “Climategate,” or the “CRU-tape Letters.”

 

The Immortal Struggle

As materialistic reductionist science continues the struggle to preserve its hold on power, there are many straws in the wind that do not bode well for the cause.

The largest medical study to date of “the human mind and consciousness at time of death,” recently completed, attempted to explain away the reality of the so-called Near-Death Experience but was left groping for an explanation of its own data. The four-year study of 2,060 cardiac-arrest cases across 15 hospitals ended up conceding that, “In some cases… memories of visual awareness compatible with so called out-of-body experiences may correspond with actual events.” In other words, the stories you hear are often true. “Themes relating to the experience of death appear far broader than what has been understood so far,” conceded the authors of the final report. The full implications of this and similar studies have yet to be fully taken in, but it seems clear that the familiar orthodoxy of the materialists may be on its way out the door. (https://www.sciencedaily.com/releases/2014/10/141007092108.htm)

The supremacy of matter over mind is being tested as never before. Now researchers at the Ohio Musculoskeletal and Neurological Institute (OMNI) at Ohio University have found that the mind is critical in maintaining muscle strength following a prolonged period of immobilization and that mental imagery may be key in reducing the associated muscle loss. (http://www.the-aps.org/mm/hp/Audiences/Public-Press/2014/30.html) Another study, from the University of Auckland in New Zealand, shows that the course of an illness is guided by the patient’s own perception of it. (http://www.the-aps.org/mm/hp/Audiences/Public-Press/2014/30.html) Still another study, from the University of Singapore, shows that the brain controls the body’s core temperature. There are many more such studies making news these days.

 

The Judgment of Their Peers

At the same time, we learn that the system we have depended upon to deliver the truth to us may not be what we thought. In fact, the mechanism used by the National Institutes of Health (NIH) to allocate government research funds to scientists whose grants receive its top scores works essentially no better than distributing those dollars at random, new research suggests.

The findings suggest that the expensive and time-consuming peer-review process is not necessarily funding the best science and that awarding grants by lottery could actually produce equally good, if not better, results. A report on the research, published online on February 16th of this year in the journal eLife, was written by Ferric Fang, MD, of the University of Washington; Anthony Bowen, MS, of the Albert Einstein College of Medicine; and Arturo Casadevall, MD, PhD, of the Johns Hopkins Bloomberg School of Public Health:

“The NIH claims that they are funding the best grants by the best scientists. While these data would argue that the NIH is funding a lot of very good science, they are also leaving a lot of very good science on the table,” says Casadevall, Professor and Chair of the W. Harry Feinstone Department of Molecular Microbiology and Immunology at the Bloomberg School. “The government can’t afford to fund every good grant proposal, but the problems with the current system make it worse than awarding grants through a lottery.” (http://www.jhsph.edu/news/news-releases/2016/researchers-peer-review-system-for-awarding-nih-grants-is-flawed.html)
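A toy simulation makes the logic easier to see. The numbers below are invented, and this is not the eLife team’s analysis, which examined the records of grants the NIH actually funded; the sketch simply assumes that reviewer scores and eventual research payoffs are both noisy readings of a proposal’s underlying merit, then asks how much signal is left in the scores once only the top-rated proposals are compared with one another.

# A toy simulation with made-up numbers (not the eLife paper's method): if
# review scores and research outcomes are both noisy readings of true merit,
# score differences among the grants that actually get funded carry little
# information, which is why a lottery among highly rated proposals could do
# about as well as ranking them.

import numpy as np

rng = np.random.default_rng(7)

n = 20_000                                  # hypothetical pool of proposals
merit = rng.normal(0, 1, n)                 # unobservable "true" quality
score = merit + rng.normal(0, 1.5, n)       # peer-review score: merit plus reviewer noise
payoff = merit + rng.normal(0, 2.0, n)      # eventual productivity: merit plus outcome noise

funded = np.argsort(score)[-n // 5:]        # fund the top 20 percent by score

r_all = np.corrcoef(score, payoff)[0, 1]
r_funded = np.corrcoef(score[funded], payoff[funded])[0, 1]

print(f"score vs. outcome, all proposals: r = {r_all:.2f}")
print(f"score vs. outcome, funded only:   r = {r_funded:.2f}")  # much weaker among the funded

Under these assumptions the score-outcome correlation, modest to begin with, drops sharply once only the funded, top-scored proposals are compared with one another, which is exactly the range in which percentile scores are asked to do their work; a lottery among similarly rated proposals would then give up very little.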

Of course the problems in the peer-review system are nothing new. Take the case of Jan Hendrik Schön, a young German scientist whose star, reportedly, had already risen. After being credited with a number of amazing discoveries—including plastic transistors, new superconductors, microscopic molecular switches, and more—the 32-year-old researcher at Bell Labs was the toast of big science around the world and considered one of the hottest researchers on the planet. But he was, it turns out, a fake. His peers and the scientific journals (including Science) that published his bogus work had been completely fooled.

The Schön case, it turns out, is not that unusual. South Korean cloning pioneer Hwang Woo-Suk, for example, recently made international headlines when it was discovered that he had been faking data. Now, according to the journal Public Library of Science, a review of 21 scientific-misconduct surveys covering the period from 1986 to 2005 shows that more than two-thirds of researchers said they knew of colleagues who had committed “questionable” practices, and one in seven said that included inventing findings. Of course, very few scientists, just two percent, admitted to having faked results themselves. The most common area of fraud appears to be medical research, which is seen as evidence for the effect of commercial pressure.

It is not, however, just the gross violations such as falsification, plagiarism, and fabrication that are of concern. According to a study authored by Raymond De Vries, an associate professor of medical education and a member of the Bioethics Program at the University of Michigan in Ann Arbor, scientific misbehavior seems to be endemic today. The study was published in April 2009 in the premier issue of the Journal of Empirical Research on Human Research Ethics.

De Vries says that intense competition between scientists these days is causing them to worry about things they shouldn’t be thinking about, such as how their data will be interpreted, not just whether the data are sound. In other words, they are thinking about whether their research will lead to conclusions their peers might not like. Other issues mentioned by the study include the increasing number of rules that scientists are supposed to follow and the question of how to cope with growing competition for rewards from a shrinking pie.

The study collected its data primarily from six focus groups, with a total of 51 researchers gathered from the top U.S. research universities. The groups were asked to discuss misconduct that the participants had either practiced or witnessed. “After the focus groups,” said De Vries, “we felt like we had been at a confessional. We didn’t intend this, but the focus groups became a place where people could unburden themselves.”

The same conflict, it seems, plays out over and over in scientific debate. Witness Galileo’s defense of the heliocentric solar system or the catastrophism of Immanuel Velikovsky. Some, like historian of science Thomas Kuhn (The Structure of Scientific Revolutions), have perceived a recurring pattern in which the ideas that are unthinkable to one generation become the orthodoxy of the next.

If, however, we want to be free right now from the tyranny that certain elites would seek to impose on the public mind, we will have to learn to do our own thinking.