Jul 27 2015
 

The Guardian – July 27th, 2015

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.  The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.  Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.  Toby Walsh, professor of AI at the University of New South Wales said: “We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”

Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”. But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making them like the “family pet and taken care of all the time”.

At a UN conference in Geneva in April discussing the future of weaponry, including so-called “killer robots”, the UK opposed a ban on the development of autonomous weapons, despite calls from various pressure groups, including the Campaign to Stop Killer Robots.

Jul 21 2015
 

Outer Places – July 21st, 2015

Are we on our way to bionic replacement organs? British scientists have just successfully implanted a bionic eye for the first time, allowing an 80-year-old legally blind man to see the world around him with a video camera.

Ray Flynn, 80, suffers from an age-related condition called macular degeneration, which caused him to lose the central vision in one eye. He can still see out of the periphery of that eye, but there’s a large black hole in the center of his vision that has grown over the years. There is no treatment for the condition, but using the bionic implant, he can now see obstacles in front of him using special video glasses.

The implant, called the Argus II, is attached directly to the eye and fitted with an array of electrodes. A miniature video camera mounted on special glasses transmits images, in the form of electrical pulses, to the electrodes, which in turn stimulate the remaining cells in the retina to send the information to the brain.
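The article describes this signal path only at a high level. As a rough illustration of the idea, the short Python sketch below downsamples a camera frame onto a small electrode grid and maps each cell’s brightness to a pulse amplitude; the grid size, the linear mapping, and the function itself are illustrative assumptions, not the Argus II’s actual processing.

    import numpy as np

    def frame_to_pulses(frame, grid_shape=(6, 10), max_amplitude=1.0):
        # Downsample a grayscale camera frame (values in [0, 1]) onto a small
        # electrode grid and map each cell's average brightness to a pulse
        # amplitude. The 6x10 grid and the linear mapping are guesses made
        # for illustration only.
        rows, cols = grid_shape
        h, w = frame.shape
        pulses = np.zeros(grid_shape)
        for r in range(rows):
            for c in range(cols):
                cell = frame[r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
                pulses[r, c] = cell.mean() * max_amplitude
        return pulses

    # A bright vertical bar in the camera image becomes a column of strong
    # pulses on the grid -- roughly the kind of pattern a patient would
    # perceive as a line of light.
    frame = np.zeros((60, 100))
    frame[:, 45:55] = 1.0
    print(np.round(frame_to_pulses(frame), 2))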

In a test two weeks after the surgery, Flynn could detect horizontal and vertical lines on a computer. The implant even allows him to see with his eyes closed, as the doctors had him close his eyes during the test in order to ensure the information was coming from the camera.

The Argus II is not yet widely available in the UK, although the researchers hope that once they run a few more successful tests, they will be able to get it onto the NHS. “Mr. Flynn’s progress is truly remarkable, he is seeing the outline of people and objects very effectively,” said lead surgeon Paulo Stanga, of the University of Manchester. “I think this could be the beginning of a new era for patients with sight loss.”

 

Flynn has already begun to return to activities that require his central sight, such as gardening and watching television.

“Your eyes are the most precious thing,” said Flynn. “My brain is still trying to catch up and work out what is going on, but I have been told it will continue to get better.”

Over time, Stanga claims, Flynn should get accustomed to the implant and be able to interpret the images sent to his brain more clearly, so he can see almost exactly what the video camera is seeing.

This technology could have an enormous impact on the quality of life of people with macular degeneration, and while it is specific to forms of sight loss that leave some healthy retinal cells, the successful implantation of a bionic eye, or of any bionic organ, is a major breakthrough.

“We hope these patients will develop some central visual function which they can work in alongside and complement their peripheral vision,” said Stanga. “We are very excited by this trial and hope that this technology might help people, including children with other forms of sight loss.”

For better and for worse (but mostly for better) this could be the beginning of a whole new landscape for medical treatment. Once we start fusing organic bodily functions with machines, we’ll be on the road towards turning the humans of the future into full-fledged cyborgs. If we could create other bionic organs, like bionic hearts, lungs, and kidneys, then we could extend the human lifespan by many, many years.

Jul 21 2015
 

Outer Places – July 21st, 2015

Did we just get one step closer to the singularity? Chinese researchers claim they’ve developed an AI that scored higher than the average human on the verbal portion of an intelligence test.  According to the authors, the verbal section of an IQ test is more difficult for an AI to complete accurately than the quantitative section because the verbal portion requires knowledge of the nuances of polysemous words (words with multiple meanings) and the relationships between them. Usually, computer programs utilize word embedding technologies that allow the AI to learn just one vector per word, which is insufficient to pass this sort of test.

 

In this study, the researchers claim to have invented a new “knowledge-powered” type of word embedding that allows the AI to adjust its strategy depending on what type of question is being asked and to take into account the relationships between different words. Using this new method, the AI was not only able to satisfactorily complete the IQ test, but was also able to score slightly higher than the average human.
The authors wrote in their paper: “The results are highly encouraging, indicating that with appropriate uses of the deep learning technologies, we could be a further small step closer to the human intelligence.”
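The paper’s actual model isn’t spelled out in the article, but its core idea, keeping several vectors per word and letting context pick the right one, can be shown with a toy Python sketch. The words, senses, and vectors below are invented for illustration; the real system learns its embeddings from large corpora and dictionary knowledge.

    import numpy as np

    # Toy multi-sense lexicon: each word maps to one vector per sense.
    # These hand-written vectors stand in for embeddings that would
    # normally be learned from text.
    SENSES = {
        "bank":  {"finance": np.array([0.9, 0.1, 0.0]),
                  "river":   np.array([0.0, 0.2, 0.9])},
        "money": {"finance": np.array([0.8, 0.2, 0.1])},
        "water": {"nature":  np.array([0.1, 0.1, 0.9])},
    }

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def best_sense(word, context_words):
        # Pick the sense of `word` whose vector is most similar to the mean
        # vector of the context words (context words here have one sense each).
        context = np.mean([next(iter(SENSES[w].values())) for w in context_words], axis=0)
        return max(SENSES[word].items(), key=lambda kv: cosine(kv[1], context))[0]

    # A single vector per word could not separate these two readings of "bank".
    print(best_sense("bank", ["money"]))   # -> finance
    print(best_sense("bank", ["water"]))   # -> river

A verbal analogy or synonym question can then be scored against the context-appropriate sense rather than a single blended vector, which is roughly the advantage the authors claim over standard word embeddings.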

 

These results are, in fact, groundbreaking: the most successful previous attempt at an AI IQ test measured the program as having the intelligence of a human four-year-old, while this AI scored roughly on par with the average human. That said, there are several qualifications to consider. First, many experts believe that the IQ test itself is flawed, as it only measures a normative type of intelligence and doesn’t take into account other kinds of mental ability, such as creativity or emotional intelligence. Some dispute it as a measure of intelligence altogether, since intelligence is a difficult concept to define and the test has been shown to be biased against certain marginalized groups. But even if it isn’t exactly a measure of intelligence, most experts agree that it is correlated with intelligence, so these results should still be taken very seriously.

This study was published to an online preprint database and has not yet been accepted by a scientific journal. Still, experts not involved in the study have praised the findings, even as they caution that this technology is in its infancy and we’re nowhere near HAL 9000 yet. Hannaneh Hajishirzi, an electrical engineer and computer scientist at the University of Washington in Seattle, told Business Insider that the researchers in this study “got interesting results in terms of comparison with humans on this test of verbal questions,” but that “we’re still far away from making a system that can reason like humans.”

Robert Sloan, a computer scientist at the University of Illinois at Chicago, similarly acknowledged that this AI was a small step forward, but claimed that there was no guarantee the program would be able to perform as well as a human on an open-ended test, as opposed to multiple-choice. In the field of artificial intelligence research, Sloan said, “the places where so far we’ve seen very little progress have to do with open dialogue and social understanding.” For example, a normal human child would be expected to understand that if he or she sees an adult lying in the street, then he or she should call for help. “Right now, there’s no way you could write a computer program to do that.”

Jul 21 2015
 

Outer Places – July 21st, 2015

Is immortality within our reach? Maybe not yet, but we are definitely trying. While the new film “Self/Less” features an interesting science fiction take on achieving immortality, various advances have been taking place in the very real scientific community. We may have a long way to go before we can transfer our consciousness into Ryan Reynolds’s body, but science is working pretty hard on some fascinating alternatives to the notion of immortality:

Anti-Aging Genetic Engineering

Maybe someday anti-aging treatments will really reverse aging and keep us young forever, but until that day, current anti-aging discoveries are at least helping to slow down specific aspects of the aging process. This spring, scientists at UC Berkeley identified a drug called an Alk5 kinase inhibitor that, in tests on mice, helped restore brain and muscle tissue to more youthful levels by acting on stem cells. The Alk5 kinase inhibitor limits the effect of TGF-beta1, a chemical that restricts a stem cell’s ability to repair the body and tends to become over-produced as people age. By restricting it, it is hoped that the drug can keep people healthier in old age by lessening the onset of aging-related diseases such as Alzheimer’s, increasing quality of life and cutting down medical costs.

The inhibitor is currently in trials as an anticancer agent, and the hope is that one day death will come not as the result of a prolonged, painful disease, but through quicker, more natural means such as cardiac arrest or stroke. Here’s what Irina Conboy, one of the scientists at UC Berkeley, said about the motivations behind the team’s efforts:

 

“The goals of my colleagues and I are not to live forever. Instead of becoming old and becoming a burden on society, we can age ourselves more with integrity.”

Regenerative Medicine

One of the main goals of regenerative medicine has been developing the ability to produce hematopoietic stem cells (HSCs) suitable for blood cell transplants, or bone marrow transplants. These transplants are limited by the challenge of finding a good match and the rarity of naturally occurring HSCs, but in 2014 researchers at the Harvard Stem Cell Institute reprogrammed mature blood cells in mice into HSC-like cells (iHSCs) by reversing the normal developmental progression from stem cells, to progenitors, to mature effector cells.

Tests have not yet been performed on human subjects, but the progress seen so far is enough to make Stuart Orkin, of the Harvard Stem Cell Institute, feel very confident about the future:

 

“This discovery could have a radical effect on transplantation. You could have gene-matched donors, you could use a patient’s own cells to create iHSCs. It’s a long way off, but this is a good step in the right direction.”

But that’s not the only advance in stem cell research. This year, scientists at the Salk Institute discovered a type of stem cell whose identity is tied to its location in a developing embryo rather than to its time-related stage of development. These region-selective pluripotent stem cells (rsPSCs) are easier to grow in the laboratory, offer advantages for gene editing, and, unlike conventional stem cells, have the ability to integrate into modified mouse embryos.

As postdoctoral researcher Jun Wu describes, understanding the spatial characteristics of the stem cells “could be crucial to generate functional and mature cell types for regenerative medicine.” It could well be that in the near future, parts of the body that have degenerated with age could be regenerated at will by the introduction of these fascinating stem cells.

Nanomedicine

We have previously featured nanobots in medicine, but there are many more theoretical uses of nanomedicine that could someday affect our lifespan. According to Frank Boehm, author of “Nanomedical Device and Systems Design: Challenges, Possibilities, Visions,” a conceptual Vascular Cartographic Scanning Nanodevice could scan the entire human vasculature down to the capillary level and transfer the image to a Pixel Matrix display, holograph, or virtual reality system, allowing for a detailed inspection of the system to find aneurysm risks, especially in the brain.

A nanodevice imbued with data on toxins and pathogens could be used to enhance the human immune system by recognizing and destroying invasive agents. Nanotechnology could also be used to remove lipofuscin, a waste product that accumulates in lysosomes, impairing cell function and contributing to age-related conditions. All of these technologies are speculative, but nanobots are already being tested in the fight against cancer, and many believe such technologies are truly the future of the medical industry.

Digital Immortality

At Transhuman Vision 2014, Randal Koene, a neuroscientist and neuro-engineer, described his plan to upload his brain to a computer by “mapping the brain, reducing its activity to computations, and reproducing these computations in code.” While it sounds remarkably like that awful Johnny Depp movie, Transcendence, Koene and many other neuroscientists believe that our memories, emotions, consciousness, and more are just the sum of electrochemical signals jumping from synapse to synapse.

Computer programmers have already created artificial neural networks that can form associations and learn through pattern recognition, but these networks don’t possess the complexity of the human brain. However, if our consciousness really is just brain activity, and if technology can record and analyze that activity, then it could conceivably be reduced to computations. Advances have already been made in animal tests: in 2011 a team from the University of Southern California and Wake Forest University created the first artificial neural implant, a device that produces electrical activity that causes a rat to react as though the signal came from its own brain.

Cyborgization

While it may sound the most sci-fi of all these scenarios, cyborg technology is already a part of our lives. We have artificial retinas, cochlear implants, pacemakers, and even deep-brain implants to alleviate the symptoms of Parkinson’s. In fact, the list of real world cyborg technologies is seemingly endless, so much so that we’ve had to reduce it to bullet form. Below you’ll find a few ways that humans and electronics have merged in beautiful harmony:

– Neurobridge technology reconnected a paralyzed man’s brain to his body

– The Eyeborg: Canadian filmmaker Rob Spence lost his right eye in a shotgun accident and replaced it with a video camera that transmits what he’s seeing to a computer.

– Programmer Amal Graafstra has inserted radio-frequency identification chips in his hands connected to scanners on his doors and laptop, eliminating the need for keys or passwords.

 

– “Transhumanists” advocate for cyborgization, genetic engineering, and synthetic biology to increase our intelligence, improve our health, and extend our lives, ultimately transforming humanity into a “post-human” stage.

Current advances in anti-aging, regenerative medicine, nanomedicine, digital immortality, and cyborgization may only be focusing on prolonging life at the moment. But these technologies have already improved our lives, and as the possibility of immortality is played out on the movie screen, we can see the world of fiction slowly melding with our own reality.

Jul 21 2015
 

Science Daily – July 20, 2015

A pioneering new technique to produce high-quality, low-cost graphene could pave the way for the development of the first truly flexible ‘electronic skin’ that could be used in robots.

Researchers from the University of Exeter have discovered an innovative new method of producing the wonder material graphene significantly more cheaply and easily than previously possible.

The research team, led by Professor Monica Craciun, have used this new technique to create the first transparent and flexible touch-sensor that could enable the development of artificial skin for use in robot manufacturing. Professor Craciun, from Exeter’s Engineering department, believes the new discovery could pave the way for “a graphene-driven industrial revolution” to take place.

She said: “The vision for a ‘graphene-driven industrial revolution’ is motivating intensive research on the synthesis of high quality and low cost graphene. Currently, industrial graphene is produced using a technique called Chemical Vapour Deposition (CVD). Although there have been significant advances in recent years in this technique, it is still an expensive and time consuming process.”

The Exeter researchers have now discovered a new technique, which grows graphene in an industrial cold wall CVD system, a state-of-the-art piece of equipment recently developed by UK graphene company Moorfield.

This so-called nanoCVD system is based on a concept already used for other manufacturing purposes in the semiconductor industry. It shows the semiconductor industry, for the very first time, a way to potentially mass-produce graphene using existing facilities rather than building new manufacturing plants. The new technique grows graphene 100 times faster than conventional methods, reduces costs by 99% and delivers enhanced electronic quality.

These research findings are published in the scientific journal, Advanced Materials.

Dr Jon Edgeworth, Technical Director at Moorfield, said: “We are very excited about the potential of this breakthrough using Moorfield’s technology and look forward to seeing where it can take the graphene industry in the future.”

Professor Seigo Tarucha of the University of Tokyo, coordinator of the university’s Global Center of Excellence for Physics and director of the Quantum Functional System Research Group at the RIKEN Center for Emergent Matter Science, said: “The ability to manufacture high quality, large area graphene (at a low cost) is essential for advancing this exciting material from pure science and proof-of-concept into the realm of conventional and quantum electronic applications. After starting the collaboration with Professor Craciun’s group, we are using Exeter CVD grown graphene instead of the exfoliated material in our graphene-based devices, whenever possible.”

The research team used this new technique to create the first graphene-based transparent and flexible touch sensor. The team believes that the sensors can be used not just to create more flexible electronics, but also a truly-flexible electronic skin that could be used to revolutionise robots of the future.

Dr Thomas Bointon, from Moorfield Nanotechnology and former PhD student in Professor Craciun’s team at Exeter added: “Emerging flexible and wearable technologies such as healthcare electronics and energy-harvesting devices could be transformed by the unique properties of graphene. The extremely cost efficient procedure that we have developed for preparing graphene is of vital importance for the quick industrial exploitation of graphene.”

At just one atom thick, graphene is the thinnest substance capable of conducting electricity. It is very flexible and is one of the strongest known materials. The race has been on for scientists and engineers to adapt graphene for flexible electronics.

Professor Saverio Russo, co-author and also from the University of Exeter, added: “This breakthrough will nurture the birth of new generations of flexible electronics and offers exciting new opportunities for the realization of graphene-based disruptive technologies.”

In 2012 the teams of Professor Craciun and Professor Russo, from the University of Exeter’s Centre for Graphene Science, discovered that sandwiching molecules of ferric chloride between two graphene layers creates a whole new system, GraphExeter, which is the best known transparent material able to conduct electricity. The same team recently discovered that GraphExeter is also more stable than many transparent conductors commonly used by, for example, the display industry.

Jul 21 2015
 

io9 – July 20th, 2015

The prospect of uploading your brain into a supercomputer is an exciting one — your mind can live on forever, and expand its capacity in ways that are hard to imagine. But it leaves out one crucial detail: your mind still needs a body to function properly, even in a virtual world. Here’s what we’ll have to do to emulate a body in cyberspace.

We are not just our brains. Conscious awareness arises from more than just raw calculations. As physical creatures who emerged from a material world, it’s our brains that allow us to survey and navigate through it; bodies are the fundamental medium for perception and action. Without an environment — along with the ability to perceive and work within it — there can be no subjective awareness. Brains need bodies, whether that brain resides in a vertebrate, a robot, or, in future, an uploaded mind.

In the case of an uploaded mind, however, the body doesn’t have to be real. It just needs to be an emulation of one. Or more specifically, it needs to be a virtual body that confers all the critical functions of a corporeal body, such that an uploaded or emulated mind can function optimally within its given virtual environment. It’s an open question whether uploading is possible, but if it is, the potential benefits are many. Knowing which particular features of the body need to be reconstructed in digital form, however, is not a simple task. So, to help me work through this futuristic thought experiment, I recruited the help of neuroscientist Anders Sandberg, a researcher at the University of Oxford’s Future of Humanity Institute and the co-author of Whole Brain Emulation: A Roadmap. Sandberg has spent a lot of time thinking about how to build an emulated brain, but for the purposes of this article, we exclusively looked at those features outside the brain that need to be digitally re-engineered.

Emulated Embodied Cognition

Traditionally, this area of research is called embodied cognition. But given that we’re speculating about the realm of 1s and 0s, it would be more accurate to call it virtual or emulated embodied cognition. Thankfully, many of the concepts that apply to embodied cognition apply to this discussion as well. Philosophers and scientists have known for some time that the brain needs a body. In his 1950 article, “Computing Machinery and Intelligence,” AI pioneer Alan Turing wrote:

It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. That process could follow the normal teaching of a child. Things would be pointed out and named, etc.

Turing was talking about robots, but his insights are applicable to the virtual realm as well. Similarly, roboticist Rodney Brooks has said that robots could be made more effective if they plan, process, and perceive as little as possible. He recognized that constraint in these areas would likewise constrain capacity, thus making the behavior of robots more controllable by their creators (i.e., where computational intelligence is governed by a bottom-up approach instead of superfluous and complicated internal algorithms and representations).

Indeed, though cognition happens primarily (if not exclusively) in the brain, the body transmits critical information to it, in order to fuel subjective awareness. A fundamental premise of embodied cognition is the idea that the motor system influences cognition, along with sensory perception, and chemical and microbial factors. We’ll take a look at each of these in turn as we build our virtual body.  More recently, AI theorist Ben Goertzel has tried to create a cognitive architecture for robot and virtual embodied cognition, which he calls OpenCog. His open source intelligence framework seeks to define the variables that will give rise to human-equivalent artificial general intelligence. Though Goertzel’s primary concern is in giving an AI a sense of embodiment and environment, his ideas fit in nicely with whole brain emulation as well.

The Means of Perception

A key aspect of the study of embodied cognition is the notion that physicality is a precondition to our intelligence. To a non-trivial extent, our subjective awareness is influenced by motor and sensory feedback from our physical bodies. Consequently, our virtual bodies will need to account for motor control in a virtual environment, while also providing for all the senses, namely sight, smell, hearing, touch, and taste. Obviously, the digital environment will have to produce these stimuli if they’re to be perceived by a virtual agent.

For example, we use our tactile senses a fair bit to interact with the world.  “If objects do not vibrate as a response to our actions, we lose much of our sense of what they are,” says Sandberg. “Similarly, we use the difference in sound due to the shape of our ears to tell what directions they come from.” So, in a virtual reality environment, this could be handled using clever sound processing, rather than simulating the mechanics of the outer ear. Sandberg says we’ll likely need some exceptionally high-resolution simulations of the parts of the world we interact with.  As an aside, he says this is also a concern when thinking about the treatment of virtual lab animals. Not giving virtual mice or dogs a good sense of smell would impair their virtual lives, since rodents are very smell-oriented creatures. While we know a bit about how to simulate them, we don’t know much about how things smell to rodents — and the rodent sense of smell can be tricky to measure.
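As a concrete example of the kind of sound-processing shortcut Sandberg has in mind, direction can be conveyed without modelling the outer ear at all. The Python sketch below uses the classic spherical-head (Woodworth) approximation for interaural time difference; the head radius and its use here are textbook assumptions chosen for illustration, not anything proposed in the article.

    import math

    def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
        # Woodworth's spherical-head approximation:
        #   ITD = (a / c) * (theta + sin(theta)),
        # where a is the head radius, c the speed of sound and theta the source azimuth.
        theta = math.radians(azimuth_deg)
        return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

    # A source 45 degrees to one side arrives at the nearer ear roughly 0.4 ms
    # earlier; delaying the far channel by this amount is often enough to place
    # the sound convincingly in a virtual scene.
    print(f"{interaural_time_difference(45) * 1e3:.2f} ms")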

Jul 20 2015
 

Defense Systems – July 20th 2015

Having funded the development of the first prosthetic arm that can function in a way close to the real thing, military researchers are now getting behind efforts to help it reach a wide range of users.

The Defense Advanced Research Projects Agency has awarded a contract worth just less than $7 million to the robotic arm’s developer, DEKA Innovative Solutions Corp., that ultimately aims to improve the DEKA Arm System’s functions and ensure that the “system can accommodate the broadest user community possible,” according to the contract award announcement.

DEKA—which is led by Segway inventor Dean Kamen—developed the arm over about eight years with $40 million in funding from DARPA, which started its Revolutionizing Prosthetics program in 2006 to assist service members who had lost limbs, particularly in the wars in Iraq and Afghanistan. The arm responds to muscle contractions and its wrist and fingers can perform six types of grips; the arm itself can perform 10 different movements. It’s been shown to be able to handle everything from a grape to a power tool.

The Food and Drug Administration in May 2014 approved the arm—nicknamed Luke after Luke Skywalker—after its own tests showed that it allowed users to perform tasks they couldn’t with their current prosthetics, such as handling keys or preparing food. FDA also said the arm’s software and other components proved to be durable in various environmental conditions and capably protected against unintended movements.

Under the new contract, DEKA and DARPA will work on improving the feel of the arm. The company will train personnel in two DARPA programs—Hand Proprioception & Touch Interfaces (HAPTIX) and Revolutionizing Prosthetics Follow-on Studies (RPFS)—on setting up, using, maintaining and repairing the system. The HAPTIX program also will work on developing new technology for this “dexterous mechatronic prosthesis” to give amputees “the feel and function of natural limbs,” according to the award announcement.

 

Adding a sense of touch to prosthetics has been a focus of the HAPTIX program. In February, DARPA awarded contracts for Phase 1 of the program, to research ideas for how existing and new technologies could be applied. HAPTIX will now seek to incorporate those technologies into the DEKA arm under the contract, which covers work over the next 57 months.

Meanwhile, the RPFS program will give the FDA an approved variant of the arm system for further studies, which will include validating the criteria for prescribing the arm to patients and informing product and reimbursement code submissions to the Centers for Medicare and Medicaid Services.

Jul 16 2015
 

MIT Tech Review – July 16th, 2015

Even if you lack the resources of Tony Stark, you can obtain a high-tech suit to enhance your natural abilities, or at least help you avoid a backache. Mechanical outfits, known as exoskeletons, are gaining a foothold in the real world.

The Japanese company Panasonic announced recently that it will start selling an exoskeleton designed to help workers lift and carry objects more easily and with less risk of injury. The suit was developed in collaboration with a subsidiary company called ActiveLink. It weighs just over 13 pounds and attaches to the back, thighs, and feet, enabling the wearer to carry 33 pounds of extra load. The device has been tested by warehouse handlers in Osaka, Japan, and is currently in trials with forestry workers in the region.

Panasonic’s device is among a small but growing number of exoskeletons available commercially—less fantastic and more cumbersome versions of a technology that’s been a staple of science fiction for some time. Though they have mainly been tested in medical and military settings, the technology is starting to move beyond these niches, and it could make a difference for many manual laborers, especially as the workforce ages.

 

“We expect that exoskeletons, or power-assist suits, will be widely used in people’s lives in 15 years,” says Panasonic spokesperson Mio Yamanaka, who is based in Osaka, Japan. “We expect that they will be used for tasks that require physical strength, such as moving things and making deliveries, public works, construction, agriculture, and forestry.”

The Panasonic suit includes a lightweight carbon-fiber motor; sensors activate the motor when the wearer is lifting or carrying an object. With ActiveLink, the company is testing another, much larger suit designed to help carry loads as heavy as 220 pounds.
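Panasonic hasn’t published its control logic, but the basic behaviour the article describes, sensors detecting a lift and the motor adding support, can be sketched as a simple threshold rule in Python. The threshold, gain, and torque cap below are made-up numbers for illustration only.

    def assist_torque(sensed_load_kg, threshold_kg=5.0, gain_nm_per_kg=2.0, max_torque_nm=40.0):
        # Stay passive for light loads; above the threshold, add torque in
        # proportion to the extra load, up to a safety cap. All constants here
        # are illustrative assumptions, not Panasonic's actual parameters.
        if sensed_load_kg < threshold_kg:
            return 0.0
        return min((sensed_load_kg - threshold_kg) * gain_nm_per_kg, max_torque_nm)

    for load_kg in (2.0, 10.0, 15.0):   # 15 kg is roughly the 33-pound extra load cited above
        print(f"{load_kg:.0f} kg -> {assist_torque(load_kg):.0f} Nm of assist")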

Some other companies are showing an interest in technology that can assist workers and help prevent injury. In collaboration with ergonomics researchers at the Technical University of Munich, the German carmaker BMW has given workers a custom-made, 3-D-printed orthotic device that fits over the thumb and helps them perform repetitive tasks. Another German carmaker, Audi, is testing a wearable device from a company called Noonee, which provides back support for workers who need to perform repetitive crouching motions.

Another Japanese company, Cyberdyne, already sells exoskeletons for medical and industrial use. The company’s technology, which was spun out of the University of Tsukuba, uses nerve signals to detect a wearer’s intention to move before applying assistive force. Earlier this year, Cyberdyne signed an agreement with the Japanese automation company Omron to develop assistive technology for use in factories.

Ekso Bionics, a company cofounded by Homayoon Kazerooni, a professor at the University of California, Berkeley, is also working to commercialize two exoskeletons—one for rehabilitation, which is currently being tested in Italy, and another for industrial use. These are designed to be very lightweight and conform well to a person’s normal motion. Kazerooni says the industrial model, which he demonstrated at Harvard University’s Wyss Institute last month, will be significantly lighter, cheaper, and more flexible. “The key is not just what the exoskeleton does in terms of lessening the load,” he says. “It’s also about preventing maneuvers the user could do without the device.”

Exoskeletons have found commercial traction for rehabilitation and as walking aids. Earlier this week, a company called ReWalk, based in Marlborough, Massachusetts, announced the latest version of its device for people with spinal-cord injuries. The system enables people who normally require a wheelchair to walk with the aid of crutches (see “Personal Exoskeletons for Paraplegics”). Powerful exoskeletons have also been tested by the U.S. military for some time.

Progress in the underlying technology could help make exoskeletons more common. Conor Walsh and Robert Wood, two professors at Harvard University, are developing exoskeletons using novel materials and methods of assisting a wearer’s motion, making them much lighter and more comfortable (see “Motorized Pants to Help Soldiers and Stroke Victims”). If this type of technology can be commercialized, it could make exoskeletons more appealing to workers and employers.


Jul 15 2015
 

MIT Tech Review – July 15th, 2015

Biodegradable, wood-based computer chips can perform just as well as chips commonly used for wireless communication, according to new research.

The inventors argue that the new chips could help address the global problem of rapidly accumulating electronic waste, some of which contains potentially toxic materials. The results also show that a transparent, wood-derived material called nanocellulose paper is an attractive alternative to plastic as a surface for flexible electronics.

In conventional chip manufacturing, electronic components like transistors are made on the surface of a rigid wafer made of a semiconducting material such as silicon. Researchers at the University of Wisconsin, led by Zhenqiang (Jack) Ma, a professor of electrical and computer engineering, made the electronic components in a similar way but then used a rubber stamp to lift them from the wafer and transfer them to a new surface made of nanocellulose. This reduced the amount of semiconducting material used by a factor of up to 5,000, without sacrificing performance.

In two recent demonstrations, Ma and his colleagues showed they can use nanocellulose as the support layer for radio frequency circuits that perform comparably to those commonly used in smartphones and tablets. They also showed that these chips can be broken down by a common fungus.

The vast majority of the semiconducting material in today’s chips makes up the support layer, and the active electronic components represent only a very tiny fraction. This is an expensive waste, says Ma, and in the case of some materials it can lead to dangerous pollution when a device is thrown out.

In recent years, researchers have demonstrated that nanocellulose, which is made by breaking wood fibers down to the nanoscale, can be a viable support material for a variety of electronic devices, including solar cells. However, the recent demonstrations are the first to reveal properties that make the material promising for use in efficient, high-performing radio frequency circuits, says Ma.

Ma says chips like those his group made are ready for commercialization. But he thinks it’s likely to take heightened environmental pressure, or a spike in the price of rare semiconductor materials like gallium, for the mainstream electronics industry to change its current practices and consider making chips from wood.

Techniques for manufacturing devices like those Ma and his colleagues have made are becoming more established in the electronics industry, says John Rogers, a professor of materials science at the University of Illinois at Urbana-Champaign. Rogers originally developed the method Ma’s group used to transfer small amounts of semiconducting material from a large wafer to the nanocellulose surface.

The military is very interested in “transient electronics” that would degrade in some way to prevent sensitive electronics from falling into the hands of adversaries, says Rogers. But perhaps the most important aspect of Ma’s recent demonstrations is the potential environmental benefit, he says. Devices of all shapes and sizes that can communicate wirelessly are proliferating quickly, and this trend shows no signs of slowing. People upgrade their devices often, and outdated devices are commonly thrown out. “What’s happening to all those waste streams? I think that’s a pretty legitimate question to ask,” he says.

Jul 09 2015
 

PBS Nova – July 7th, 2015

Neuroscientists have successfully linked three monkeys’ brains using implanted electrodes and coaxed them to cooperatively control a robotic arm. Oh, and they also performed a similar experiment that directly linked the brains of four rats to test their capacity for synchronicity. (I, for one, welcome our new hive-minded, mammalian overlords.) That monkeys can coordinate using nothing but brain waves isn’t particularly new. This new work builds on earlier experiments that linked animals’ brains, both one at a time and in tandem, to prosthetic limbs, but it is far more sophisticated. No one has ever yoked more than two brains in such a way to accomplish a task. What’s more, the trio of mind-melded monkeys frequently did a better job at controlling the robotic arm than one monkey working alone.

Miguel Nicolelis, director of the Center for Neuroengineering at Duke University and principal investigator for the study, calls the merged minds “brainets.” Nicolelis and his colleagues started with four electrode-implanted rats, linking them both in parallel and in series, to test whether brains could coordinate signals. For the parallel experiment, they sent two types of signals to the four linked rats. When one signal was sent, the rats were rewarded for synchronizing. When they received the other, they were rewarded for not coordinating their brain waves. Quickly, they were able to react appropriately a majority of the time. Then the neuroscientists linked the rats in series, training the first rat on the signals. Once that rat had properly learned them, they hooked up a second to learn from the first, and so on up to four. Again, the rats quickly passed the test.

Then came the monkeys. First, the team tested two monkeys each linked to a computer. The computer then translated their signals to control a robotic arm. The monkeys were rewarded when they successfully moved a ball. In a second experiment, they had each monkey specialize in a different degree of freedom—one vertical, the other horizontal. In a final test, they hooked up three monkeys to the computer that controlled the arm and let them loose. Needless to say, other scientists are impressed. Here’s Jessica Hamzelou, reporting for New Scientist:

“This is incredible,” says Andrea Stocco at the University of Washington in Seattle, who was not involved in the project. “We are sampling different neurons from different animals and putting them together to create a superorganism.”
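The shared-control idea behind the “brainet”, with each animal’s decoded output driving only part of the arm’s motion and the computer blending the contributions, can be sketched in a few lines of Python. The axis assignments, the plain averaging rule, and the numbers are illustrative assumptions, not the study’s actual decoding scheme.

    import numpy as np

    AXES = {"x": 0, "y": 1, "z": 2}

    def combine(commands):
        # commands: list of (decoded_values, assigned_axes) pairs, one per animal.
        # Each animal's decoded signal only affects its assigned axes; overlapping
        # contributions are averaged into a single 3-D velocity for the arm.
        total = np.zeros(3)
        counts = np.zeros(3)
        for decoded, axes in commands:
            for axis, value in zip(axes, decoded):
                i = AXES[axis]
                total[i] += value
                counts[i] += 1
        counts[counts == 0] = 1   # axes no one controls stay at zero
        return total / counts

    # Three agents, each assigned a pair of axes (the split and values are made up).
    arm_velocity = combine([
        (np.array([0.6, 0.2]), ("x", "y")),
        (np.array([0.4, 0.1]), ("y", "z")),
        (np.array([0.8, 0.3]), ("x", "z")),
    ])
    print(arm_velocity)   # averaged contribution per axis

Because each axis reflects an average over more than one animal, moment-to-moment noise in any single brain tends to cancel out, which is one plausible reason the trio often outperformed a monkey working alone.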

Neuroscientists are still many, many years away from linking human brains, but the research points to some tantalizing possibilities. First, such research requires sophisticated brain-computer interfaces, which, once perfected, could allow people to deftly control advanced prosthetic limbs. Further in the future, it could also allow a group to coordinate on a difficult task without using language and its inherent barriers. By that point, we may not even need implanted electrodes to tap into a massive brainet—we may only have to slip on a simple headset to contribute our mind’s computing power.
