
Dumbing Down Star Wars in The Force Awakens [Page 3.14]

I’m not going to argue with anyone who liked the new Star Wars movie; I liked it too. But it also infuriated me in a way that no film has since J.J. Abrams took over the direction of another beloved SF franchise—Star Trek, which has long offered spaceship fans a galaxy shaded by scientific utopianism rather than spirituality and melodrama. The Force Awakens is a movie with no soul and little intelligence and it fails to advance the mythos of the franchise.

The Plot that Wasn’t

It was high times for the Rebel Alliance at the end of Return of the Jedi (1983). Across the galaxy, crowds celebrate with fireworks and confetti, jubilant at the destruction of the second Death Star and the apparent defeat of Emperor Palpatine. Princess Leia Organa, who just two films earlier had witnessed her home planet blown up for sport, is united with a brother she never knew she had, becomes aware of her own Force adeptness, and falls in love with a swashbuckling hero who would later father her son. It is a resounding victory, and deservedly so, even if Ewoks had to help.

The Force Awakens begins thirty years later, yet reveals nothing about the consequences of the Rebellion’s victory. One might assume democracy was restored; the opening title scroll refers briefly to “THE REPUBLIC” and then never mentions it again. The original Republic, of course, existed in the time of the prequel trilogy and was transformed into the first Galactic Empire through the shrewd machinations of Palpatine, a dark lord of the Sith. But now Leia and the good guys are called “The Resistance” and the jerks with Star Destroyers are called “The First Order.” Where is the New Republic in all this? We never find out.

Instead the entire film propels itself in pursuit of a particularly foolish MacGuffin (an object that everyone wants to get their hands on). This is a common technique in action films and was also used in A New Hope (1977), where the Empire tries to recover the stolen plans for the Death Star. In The Force Awakens, the object everyone desires is a map to Luke Skywalker, who has gone into hiding because he failed as a Jedi Master and created a pitiful emo monster in the form of his nephew, Kylo Ren. The whole idea of following a map to find a planet is embarrassing: space is 3-D and wide open; all one needs are coordinates. Instead we are shown a meandering orange trail that stretches across the galaxy. What if you’re coming from another direction? I don’t know, fly casual.

Luke is only in the film for about a minute, and he has no dialogue. So, he contributes very little to the story aside from being a tease. Where else can we look? There is Leia, who is now a General with the Resistance. Her situation must be painfully tragic. Not only is she a woman without a home or a family, but the rebellion that she led so fiercely has failed to change anything in the thirty years since Return of the Jedi. Han Solo has abandoned her and cruises the galaxy with his Wookiee bro looking for their junky old spaceship. Leia and Han’s son, Kylo Ren, has run away to apprentice for an evil mastermind and wants to murder Leia’s brother. And yet the film doesn’t explore her potential pathos at all; instead it ignores her and passes her off as basically content. Although, for Leia, the worst is yet to come.

Han Solo’s Death Wish

Harrison Ford was ready for Han Solo to die in Return of the Jedi, although he didn’t get his wish. He was tired of the character, and probably of George Lucas as well, and it’s likely only because of the latter’s departure (and Disney’s deep pockets) that Ford reprised the role at all. Still, he was only in it for a last hurrah, and so Disney needed to kill off his character. Han Solo was always a cagey, wily, brave and lucky bastard; despite what George Lucas would later revise, Han did shoot first, because he knew if he didn’t, Greedo would fry his ass. Han Solo is nobody’s fool, and neither is his brother-in-arms Chewbacca, who hardly ever loses at chess.

Yet Han’s death in the film is hard to understand. After many years he has been reunited with the Millennium Falcon, he has seen Leia again, and they agree that he should ask Kylo Ren to come home. So Han flies to Starkiller Base, where Ren likes to brood, and confronts him. Han walks out onto the longest, narrowest, most railing-less, most useless catwalk in the galaxy, above an abyss that is undoubtedly bottomless. He says, kiddo, please, let me help you? And Kylo agrees by switching on his lightsaber. These two may be father and son, but could Han really be so credulous, so naive, so blind as to let himself be murdered, without even a contingency plan? To let down everyone who has ever loved him? While Han’s death is the heart of the film’s narrative, it’s also meaningless, because we know nothing about the relationship Han and Kylo once had.

Kylo Ren turns out to be a kind of metaphor for the whole movie: a clueless newcomer who idolizes the remains of Darth Vader and wants to kill all the characters we love.

Consider Chewbacca, who was once a threatening and temperamental (if warm-hearted) presence. In this film he only makes funny sounds and gestures for easy laughs. And the droids, whom George Lucas envisioned as the point of view for all of Star Wars, are likewise relegated to the sidelines and neglected; R2-D2 is asleep for most of the movie, and C-3P0 only gets in the way once.

Diverse New Idols

Of course, this film is supposed to be about the new characters, not the old ones. Disney made a very clear nod to gender and racial equality in casting their lead actors. The unfortunate thing is that neither of these characters is given any substantial backstory or character development. They demonstrate no internal conflict or struggle. They experience no defeat, and little growth. It seems to be within these two that the Force has “awakened,” since it gets them out of every jam with killer, invincible instinct. Rey, although she begins the film as a poor desert scavenger, is purely virtuous and physically adept from the beginning: she excels at hand-to-hand combat, won’t sell out a friend for money, magically flies a spaceship for the first time, magically wields a lightsaber for the first time, etc. Her basic attribute is that she kicks ass and while that’s always fun, she’s little more than an emblem, and therefore a stereotype. The film tells us nothing about her personal history or relationships, except that she has been waiting in the desert for someone to return.

Meanwhile Finn the black Stormtrooper begins the film by having a panic attack in battle. He witnesses a fellow Stormtrooper killed and bloodied, and refuses to fire on the enemy. He soon defects from the First Order and joins up with Rey for their mindless hijinks. Finn’s moment of truth is presented as a moral awakening: he realizes that killing is wrong and refuses to do so. And yet, once he joins the good guys, he has no problem turning around and shooting his former comrades. Finn reveals that he was kidnapped as a child and indoctrinated as a soldier all his life—presumably the other Stormtroopers were too. Finn ought to have immense sympathy for Stormtroopers; he should be deeply conflicted about his actions and his future. Instead, he’s a happy-go-lucky blaster jockey, another empty emblem. Both actors are partially wasted in this film because these roles are meaningless. And that is not what women or racial minorities (or anyone) need.

Moreover, the effortless finesse of these characters undermines everything the other films have taught us about the Force. These new heroes don’t have to learn anything; it just comes to them naturally. This seems to be the only Star Wars film without a line of dialogue spoken by a Jedi Master. Character development in Star Wars has always been about learning and discovering the difference between dark and light, but that sense of apprenticeship is wholly lacking here.

Um, That’s Not How Starkilling Works

Of course as someone interested in science I find it infuriating when films present blatantly unscientific cause and effect. Even if the technology in speculative fiction is more advanced than ours, the rules of physics still usually apply. Even magic such as the Force is plausible as long as it operates according to a set of rules. But when writers make lazy shortcuts, it’s hard to take their work seriously.

Take the case of the First Order’s headquarters, Starkiller Base. Now, although The Force Awakens is dead-set on recreating every iconic element of the original trilogy, someone in Hollywood must have thought that after two Death Stars with highly vulnerable shafts, it was time for the First Order to up the ante. The result is Starkiller Base, an entire planet that has been hollowed out and turned into a weapon that sucks up the mass of a star and fires it across the galaxy to incinerate distant planets. It does the same job as a Death Star, except from longer range. Honestly, a Death Star is much more economical, if only someone could design some good grates.

The first time Starkiller Base fires its weapon, we see a cinematic technique J.J. Abrams used previously in Star Trek (2009). Here, people on one planet look up in the sky just in time to see another planet destroyed. And they go, OMG! Now, the speed of light is not a limiting factor in the Star Wars universe; spacecraft can exceed it. But the beam fired by Starkiller Base was not traveling faster than light, and it appeared to consist of matter rather than radiation, meaning it was traveling much slower. So how many years would it take to reach its target, if ever? And once its target was destroyed, how many years before the light from that event would reach other solar systems? Not to mention, planets in other solar systems are too small and distant to be observed with the naked eye at all, much less in broad daylight. I don’t know why Abrams insists on making galaxies so tiny and convenient.
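
To put rough numbers on it, here is a back-of-the-envelope sketch. The distances and beam speed are my own assumptions for illustration, since the film never states any:

```python
# Back-of-the-envelope timing for Starkiller Base's beam.
# Assumed numbers, not canon: a target 1,000 light-years away
# and a beam travelling at 10% of the speed of light.
target_distance_ly = 1_000        # assumed distance to the target system
beam_speed_fraction_c = 0.10      # assumed sub-light beam speed

beam_travel_years = target_distance_ly / beam_speed_fraction_c
print(f"Beam reaches its target after ~{beam_travel_years:,.0f} years")

# Even after the target planets are destroyed, the light of the explosion
# crawls outward at exactly c, so a bystander 500 light-years away
# (again, an assumed figure) sees nothing for centuries.
bystander_distance_ly = 500
print(f"Bystanders see the explosion ~{bystander_distance_ly} years later")
```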

I Have a Bad Feeling About This…

In fact the whole film is an a-causal jumble of narrative serendipity, a steady stream of nostalgic, unconnected tropes that we can expect to see again and again as the franchise rolls forward. We know the Force works in mysterious ways, and so we can accept that Rey stumbles upon the Millennium Falcon sitting under a tarp, collecting dust in a junkyard. What is harder to believe is that on a planet full of scavengers, she is able to walk onto the ship, power it up, and fly it away without a key. Everything is right there when the characters need it. Like Luke’s lightsaber is in the basement, take it with you.

Repetition of themes also defines Star Wars; George Lucas said of the prequel trilogy that it was supposed to mirror the original. Disney obviously had no problem with this concept, but rather than crafting a variation on a theme, they regurgitated every element from the original trilogy that they could. In part two of the original trilogy, a great secret is revealed: Darth Vader is Luke Skywalker’s father and Leia’s as well. One can expect a similar bombshell will drop in Episode VIII, and it will almost certainly involve Rey and the mysterious figure she was waiting for in the desert. I wager that Rey is Luke’s daughter, or perhaps Kylo’s sister. She must be a Skywalker; she appears to be more gifted than even Anakin. Midichlorians, anyone?

Now, like everyone else who loved Star Wars and was excited for the prequel trilogy, I was let down by The Phantom Menace (1999). It’s a bewildering film: aliens argue about economics, the acting is horribly stilted, the dialogue is poorly written, and the plot is inscrutable. Jar-Jar Binks tries to coin a catchphrase. Everybody dies a little inside.

Attack of the Clones (2002) generates more narrative interest: Anakin is old enough to discover himself and his love for Padme, and he shows flashes of the lust and rage that would ultimately lead him into darkness. And Revenge of the Sith (2005) features some truly incredible moments, as Palpatine pulls his labyrinthine trap together and Obi-Wan tries to bring Anakin back to the light. I really do not enjoy watching the prequels. But I feel that, underneath their obscure, idiosyncratic presentation, there is a very interesting story about good and evil. The prequel trilogy burns brightly in my imagination if not onscreen. Meanwhile, The Force Awakens is just the opposite. It is an enjoyable movie to watch. But it has no compelling story, no character development, no morality. The film is obviously a set-up for larger plot elements to follow, but still, this is supposed to be cinema, not a TV pilot.

Also they made X-Wings a lot uglier.

Two stars.



from ScienceBlogs http://ift.tt/1SJVOjS


Singing in the brain: Songbirds sing like humans

"In terms of vocal control, the bird brain appears as complicated and wonderful as the human brain," says biologist Samuel Sober, shown in his lab with a pair of zebra finches. (Photo by Ofer Tchernichovski.)

By Carol Clark

A songbird’s vocal muscles work like those of human speakers and singers, finds a study published in the Journal of Neuroscience. The research on Bengalese finches showed that each of their vocal muscles can change its function to help produce different parameters of sound, in a manner similar to that of a trained opera singer.

“Our research suggests that producing really complex song relies on the ability of the songbirds’ brains to direct complicated changes in combinations of muscles,” says Samuel Sober, a biologist at Emory University and lead author of the study. “In terms of vocal control, the bird brain appears as complicated and wonderful as the human brain.”

Pitch, for example, is important to songbird vocalization, but there is no single muscle devoted to controlling it. “They don’t just contract one muscle to change pitch,” Sober says. “They have to activate a lot of different muscles in concert, and these changes are different for different vocalizations. Depending on what syllable the bird is singing, a particular muscle might increase pitch or decrease pitch.”

Previous research has revealed some of the vocal mechanisms within the human “voice box,” or larynx. The larynx houses the vocal cords and an array of muscles that help control pitch, amplitude and timbre.

Instead of a larynx, birds have a vocal organ called the syrinx, which holds their vocal cords deeper in their bodies. While humans have one pair of vocal cords, a songbird has two sets, enabling it to produce two different sounds simultaneously, in harmony with itself.

“Lots of studies look at brain activity and how it relates to behaviors, but muscles are what translates the brain’s output into behavior,” Sober says. “We wanted to understand the physics and biomechanics of what a songbird’s muscles are doing while singing.”

The researchers devised a method involving electromyography (EMG) to measure how the neural activity of the birds activates the production of a particular sound through the flexing of a particular vocal muscle.
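
For readers curious what that kind of analysis can look like, here is a minimal sketch (not the authors’ actual pipeline) of the usual first steps with EMG data: rectify the raw signal, smooth it into an activation envelope, and correlate that envelope with the pitch of the song. All of the data below is synthetic.

```python
import numpy as np

def emg_envelope(raw_emg, fs, window_ms=5.0):
    """Rectify an EMG trace and smooth it with a moving average.

    raw_emg: 1-D array of EMG samples; fs: sampling rate in Hz.
    This is a generic envelope estimate, not the study's method.
    """
    rectified = np.abs(raw_emg - np.mean(raw_emg))   # remove offset, full-wave rectify
    win = max(1, int(fs * window_ms / 1000.0))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def muscle_pitch_correlation(envelope, pitch_trace):
    """Pearson correlation between muscle activation and sung pitch."""
    return float(np.corrcoef(envelope, pitch_trace)[0, 1])

# Toy usage with synthetic stand-ins for real recordings:
fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
pitch = 400 + 50 * np.sin(2 * np.pi * 2 * t)                             # hypothetical pitch contour (Hz)
emg = np.random.randn(t.size) * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))    # noisy, amplitude-modulated EMG
print(muscle_pitch_correlation(emg_envelope(emg, fs), pitch))
```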

The results showed the complex redundancy of the songbird’s vocal muscles. “It tells us how complicated the neural computations are to control this really beautiful behavior,” Sober says, adding that songbirds have a network of brain regions that non-songbirds do not.

The study was co-authored by Kyle Srivastava, a graduate student of the Emory and Georgia Tech Biomedical Engineering Doctoral Program, and Coen Elemans, a biologist from the University of Southern Denmark and a former visiting professor at Emory, funded by the Emory Institute for Quantitative Theory and Methods and the National Institutes of Health.

Related:
Birdsong study pecks theory that music is uniquely human
How songbirds learn to sing
Birdsong study reveals how brain uses timing during motor activity

from eScienceCommons http://ift.tt/1K9k1ZP
"In terms of vocal control, the bird brain appears as complicated and wonderful as the human brain," says biologist Samuel Sober, shown in his lab with a pair of zebra finches. (Photo by Ofer Tchernichovski.)

By Carol Clark

A songbirds’ vocal muscles work like those of human speakers and singers, finds a study published in the Journal of Neuroscience. The research on Bengalese finches showed that each of their vocal muscles can change its function to help produce different parameters of sounds, in a manner similar to that of a trained opera singer.

“Our research suggests that producing really complex song relies on the ability of the songbirds’ brains to direct complicated changes in combinations of muscles,” says Samuel Sober, a biologist at Emory University and lead author of the study. “In terms of vocal control, the bird brain appears as complicated and wonderful as the human brain.”

Pitch, for example, is important to songbird vocalization, but there is no single muscle devoted to controlling it. “They don’t just contract one muscle to change pitch,” Sober says. “They have to activate a lot of different muscles in concert, and these changes are different for different vocalizations. Depending on what syllable the bird is singing, a particular muscle might increase pitch or decrease pitch.”

Previous research has revealed some of the vocal mechanisms within the human “voice box,” or larynx. The larynx houses the vocal cords and an array of muscles that help control pitch, amplitude and timbre.

Instead of a larynx, birds have a vocal organ called the syrinx, which holds their vocal cords deeper in their bodies. While humans have one pair of vocal cords, a songbird has two sets, enabling it to produce two different sounds simultaneously, in harmony with itself.

“Lots of studies look at brain activity and how it relates to behaviors, but muscles are what translates the brain’s output into behavior,” Sober says. “We wanted to understand the physics and biomechanics of what a songbird’s muscles are doing while singing.”

The researchers devised a method involving electromyography (EMG) to measure how the neural activity of the birds activates the production of a particular sound through the flexing of a particular vocal muscle.

The results showed the complex redundancy of the songbird’s vocal muscles. “It tells us how complicated the neural computations are to control this really beautiful behavior,” Sober says, adding that songbirds have a network of brain regions that non-songbirds do not.

The study was co-authored by Kyle Srivastava, a graduate student of the Emory and Georgia Tech Biomedical Engineering Doctoral Program, and Coen Elemans, a biologist from the University of Southern Denmark and a former visiting professor at Emory, funded by the Emory Institute for Quantitative Theory and Methods and the National Institutes of Health.

Related:
Birdsong study pecks theory that music is uniquely human
How songbirds learn to sing
Birdsong study reveals how brain uses timing during motor activity

from eScienceCommons http://ift.tt/1K9k1ZP

Congress (finally) allows use of federal funds for needle exchanges [The Pump Handle]

USA TODAY’s Laura Ungar highlights an important measure in the omnibus spending bill Congress passed last month: It lifts the ban on the use of federal funds for needle-exchange programs. State and local needle-exchange programs still can’t use federal money to purchase needles, Ungar explains, but they can use it for staff, vans, outreach, and other expenses that typically cost far more than the syringes themselves.

Needle exchanges — also called syringe service programs (SSPs) — allow injection drug users to avoid sharing needles, a practice that can spread HIV, hepatitis C, and other diseases. CDC explains that most of these programs do much more than hand out clean needles:

SSPs are an effective component of a comprehensive approach to HIV prevention among PWID and their sex partners, and most SSPs offer other prevention materials (e.g., alcohol swabs, vials of sterile water, condoms) and services, including education on safe injection practices, counseling and testing for HIV and hepatitis C infections, and screening for other sexually transmitted infections. Many SSPs also provide linkage to critical services and programs, thus promoting integration among drug treatment programs, HIV care and treatment services, hepatitis C treatment programs, and other medical, social and mental health services.

The World Health Organization commissioned a review of 200+ studies on needle exchanges and concluded:

There is compelling evidence that increasing the availability and utilization of sterile injecting equipment for both out-of-treatment and in-treatment injecting drug users contributes substantially to reductions in the rate of HIV transmission. … There is no convincing evidence of major unintended negative consequences of programmes providing sterile injecting equipment to injecting drug users, such as initiation of injecting among people who have not injected previously, or an increase in the duration or frequency of illicit drug use or drug injection.

Despite the compelling evidence that these programs are good for public health, Congress began prohibiting the use of federal funds for needle exchanges in 1988. They briefly lifted the ban in 2009 (when Democrats controlled both houses of Congress), then restored it in 2011. What’s changed since then? Laura Ungar reports:

The latest change came at the suggestion of U.S. Rep. Hal Rogers, R-Ky., and Sen. Mitch McConnell, R-Ky., ensured the language remained in the Senate version of the spending bill, their spokespeople say.

“The opioid epidemic is having a devastating effect on communities throughout Kentucky and the nation,” McConnell’s office said in a statement. “As more people inject drugs like heroin, rates of Hepatitis C and HIV have been on the rise.  To help address this issue, Senator McConnell worked with Chairman Rogers to pass legislation to provide flexibility so that certain counties in Kentucky may be able to access federal funds for their treatment and education efforts.”

“Congressman Rogers supports efforts in Kentucky and elsewhere to mitigate the spread of devastating diseases, like HIV and (hepatitis) C, and the associated health care costs,” says Danielle Smoot, communication director for Rogers. Though he still doesn’t want federal money going to the needles themselves, she says, “he believes federal resources can effectively be used for needle exchange programs that focus on education and treatment to help end the cycle of dependency and curb an outbreak of needle-related diseases.”

Ungar notes that more than 1,000 drug overdose deaths now occur in Kentucky each year, and nearby Indiana is home to a new HIV outbreak centered in the town of Austin, Indiana, which now has a higher HIV incidence rate than any country in sub-Saharan Africa.

The issue of Congressional bans on needle-exchange funding is especially salient in Washington, DC, where Congress also blocked the city’s government from using local tax dollars for needle-exchange work between 1998 and 2007. (Yes, Congress can veto the District’s decisions about how to spend our own local funds.) Finally, legislation lifted the ban and DC was able to launch a needle-exchange program in 2008.

In a recently published study, researchers at the George Washington University Milken Institute School of Public Health (where I also work) analyzed data from before and after the DC program began. They estimate that in its first two years, it averted 120 new cases of HIV and saved $44 million — compared to a cost of just $650,000 per year during the program’s first two years. The savings figure includes public money that would have been spent on care for injection drug users who would have been infected with HIV in the absence of the program.
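
As a rough back-of-the-envelope check, using only the figures reported above and ignoring discounting or any netting the study itself may have done:

```python
# Quick arithmetic on the DC study's reported figures (as quoted above).
savings = 44_000_000          # estimated savings over the first two years ($)
annual_cost = 650_000         # reported program cost per year ($)
cases_averted = 120           # estimated HIV infections averted

two_year_cost = 2 * annual_cost
print(f"Cost per infection averted: ~${two_year_cost / cases_averted:,.0f}")
print(f"Dollars saved per dollar spent: ~{savings / two_year_cost:.0f}")
```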

I hope the availability of federal funds will help more jurisdictions launch and strengthen their needle-exchange programs. The research clearly demonstrates that they reduce the spread of some of the worst diseases.



from ScienceBlogs http://ift.tt/1l0lkUa


R.I.P., Mr. Bowie [Respectful Insolence]

I couldn’t believe it when I woke up this morning to the news that David Bowie had passed away. Because there had been so many celebrity death hoaxes, I started checking other news outlets. I checked the official David Bowie Facebook page and Twitter feed.

Oh, no. No hoax:

And on his son Duncan Jones’ Twitter feed:



Aw, crap, I thought. It’s true. The New York Times and BBC are reporting that Bowie died of cancer, type unspecified. Damn. Given Bowie’s long smoking history my first guess is that it was probably lung cancer, but who knows? It’s devastating, whatever the cancer was. He was supposed to be immortal! Certainly, he seemed that way, as he’s been my favorite musician at least since the early 1980s, when I was in college and first dove into his music other than the obvious hits on ChangesOneBowie. His music has been a huge part of my life, so much so that I can’t remember a time before it. Even in his fallow years in the late 1980s when his music was just not up to its old grandeur (indeed, his Glass Spider Tour was as close to a Spinal Tap moment as Bowie ever got), there was always still something there worth listening to. Then, with his resurgence in the 1990s to the present day he proved his relevance time and time again.

Bowie’s death is all the more depressing given that his latest album, Blackstar, is one of the best I’ve heard from him at least since the 1980s. I had been hopeful after hearing it Friday that there would be more where that came from. Alas, it was not to be. Worse, knowing what I know now makes his two videos from the album even darker than I thought they were when I first saw them:

The video for Lazarus is particularly poignant in retrospect. Some interpret it as him saying goodbye.

Still, longtime collaborator and producer Tony Visconti said that Blackstar was meant to be Bowie’s parting gift to his fans, and that’s not a bad way to go at all:

He always did what he wanted to do. And he wanted to do it his way and he wanted to do it the best way. His death was…

Posted by Tony Visconti on Monday, January 11, 2016

It’ll be an all Bowie day at the old office and lab today. Good thing I don’t have any cases or clinic today. Farewell, and thank you Mr. Bowie.



from ScienceBlogs http://ift.tt/1VZLLa0


Maritime Autonomy–Reducing the Risk in a High-Risk Program

By David Antanitus

The fielding of independently deployed unmanned surface vessels, designed from the ground up so that no person steps aboard at any point in their operating cycle and operated under only sparse remote supervisory control, is the next necessary technology leap if we are to drastically reduce the number of personnel required to support our warfighting missions and platforms. The Defense Advanced Research Projects Agency (DARPA) undertook the challenge of developing an autonomy suite and building a ship to accomplish this goal with its vision and invitation in early 2010 for industry to design and build the Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV). This revolutionary concept for a maritime vessel, currently being built by an industry team led by Leidos, constitutes the first step in developing a ship with autonomous behaviors capable of extended at-sea operations. In order to meet all of the DARPA requirements for ACTUV, the Leidos team had to formulate and implement a robust risk-reduction plan.


Don’t Reinvent the Wheel

Building the first ship of a class carries numerous inherent risks. Construction of the vessel aside, the real science, and hence the majority of the program risk, is in developing an autonomy system that can (1) sense its environment and the health of its own systems, (2) make intelligent decisions to optimize machinery lineups and sensor employment, (3) avoid other ships and obstacles, and (4) execute the intended mission. So, when tasked with developing this maritime autonomy suite for ACTUV, where do you start, and how do you limit the risk in designing the autonomy architecture to meet such complex requirements?
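
Before looking at how the program attacked that risk, it helps to make the four requirements concrete. The skeleton below is purely my own illustration of such a sense-decide-avoid-execute loop, with hypothetical sensor, planner and helm objects; it is not the ACTUV software.

```python
import time

class AutonomyLoop:
    """Illustrative skeleton of the four requirements listed above.

    The sensors/health_monitor/planner/helm objects are hypothetical
    stand-ins; the real ACTUV autonomy suite is far more elaborate.
    """

    def __init__(self, sensors, health_monitor, planner, helm):
        self.sensors = sensors
        self.health_monitor = health_monitor
        self.planner = planner
        self.helm = helm

    def step(self, mission):
        world = self.sensors.perceive()                      # (1) sense the environment
        health = self.health_monitor.status()                # (1) ...and own-system health
        plan = self.planner.plan(mission, world, health)     # (2) optimize machinery/sensor employment
        safe_plan = self.planner.deconflict(plan, world)     # (3) avoid other ships and obstacles
        self.helm.execute(safe_plan)                         # (4) execute the intended mission

    def run(self, mission, period_s=1.0):
        while not mission.complete():
            self.step(mission)
            time.sleep(period_s)
```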

The Leidos team’s first step in risk reduction for ACTUV was to leverage code already written for less complex autonomous systems. In the 1990s, the NASA Jet Propulsion Laboratory (JPL) developed the Control Architecture for Robotic Agent Command and Sensing (CARACaS) for the Mars Rover Project. CARACaS already has been successfully adapted for several unmanned surface vessel programs—e.g., for the work done by DARPA in developing Grand Challenge I and II and for the Urban Challenge architecture for an autonomous ground vehicle. Leidos leveraged the work done by JPL in developing CARACaS and by DARPA in developing Urban Challenge (NREC Engine) to develop a maritime autonomy capability that uses open standards, libraries and tools.

FIGURE 1. AUTONOMY ARCHITECTURE WITH REMOTE SUPERVISORY CONTROL STATION (RSCS)


Employ a Truly Open Architecture

The ACTUV autonomy suite contains decision algorithms embedded as software modules using an object-oriented framework in which key interface definitions isolate algorithm implementations. It supports multiple, simultaneously executing decision engines and the arbitration logic to choose the best decisions for future actions. It implements a true open systems architecture (OSA) approach that allows for the autonomy capability to be modularly connected to other subsystems—within the same platform and external to the platform. This “plug-and-play” modularity minimizes life-cycle costs, enables reuse, and promotes healthy competition among capability vendors. It also reduces overall risk to the program. In addition, the autonomy capability implements the Service Availability Forum industry standards to achieve a high-availability solution that results in near-continuous uptime when the system is fully integrated.
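
The arbitration idea can be sketched very simply: several independent decision engines each propose an action with a utility score, and an arbiter picks the best admissible one. This toy version is my own illustration of the pattern, not the ACTUV implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Proposal:
    engine: str        # which decision engine produced this action
    action: dict       # e.g. {"course_deg": 95, "speed_kts": 12}
    utility: float     # the engine's own score for the action
    safe: bool         # did the engine's safety checks pass?

def arbitrate(proposals: List[Proposal]) -> Optional[Proposal]:
    """Choose the highest-utility proposal that passed its safety checks."""
    admissible = [p for p in proposals if p.safe]
    return max(admissible, key=lambda p: p.utility) if admissible else None

# Toy usage: a mission-following engine and a collision-avoidance engine disagree.
proposals = [
    Proposal("mission_planner", {"course_deg": 95, "speed_kts": 12}, utility=0.8, safe=False),
    Proposal("colregs_avoidance", {"course_deg": 130, "speed_kts": 10}, utility=0.6, safe=True),
]
best = arbitrate(proposals)
print(best.engine, best.action)   # the safe avoidance maneuver wins
```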

The OSA uses the Society of Automotive Engineers (SAE) AS4 Joint Architecture for Unmanned Systems (JAUS) messaging between major segments and the OMG Data Distribution Service (DDS) message protocol layer to achieve advanced quality of service. The autonomy engine is a set of algorithm-level specifications for the behaviors and capabilities of the autonomy platform. It lists all the important, high-level, mission-oriented tasks either planned or implemented in the context of the vehicle scenario. It employs a modular approach that supports a Distributed Hierarchical Autonomy (DHA) model and uses replaceable, modular and standard interfaces.
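
The “distributed hierarchical” part is easier to see in miniature. The sketch below illustrates the general DHA pattern (it does not reproduce the actual JAUS/DDS interfaces): a mission layer issues goals to a behavior layer, which issues setpoints to a control layer, and any layer can be swapped out as long as it honors the interface.

```python
from abc import ABC, abstractmethod

class BehaviorLayer(ABC):
    """Replaceable middle layer of a distributed hierarchical autonomy stack."""
    @abstractmethod
    def pursue(self, goal: dict) -> dict:
        """Turn a mission-level goal into a course/speed setpoint."""

class ControlLayer(ABC):
    @abstractmethod
    def apply(self, setpoint: dict) -> None:
        """Drive rudder and engines toward the setpoint."""

class TrailContactBehavior(BehaviorLayer):
    def pursue(self, goal):
        # Hypothetical behavior: steer toward the contact's reported bearing.
        return {"course_deg": goal["contact_bearing_deg"],
                "speed_kts": goal.get("speed_kts", 15)}

class SimpleAutopilot(ControlLayer):
    def apply(self, setpoint):
        print(f"helm: steer {setpoint['course_deg']:.0f}, make {setpoint['speed_kts']} kts")

# Mission layer wiring the hierarchy together; each layer is independently swappable.
behavior, control = TrailContactBehavior(), SimpleAutopilot()
control.apply(behavior.pursue({"contact_bearing_deg": 220.0}))
```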

Putting all of the components and modules together, we end up with an autonomous ship control system based on a DHA employing new advances such as self-learning and multi-model arbitration. However, before we take this system to sea, we must demonstrate that our ship can safely navigate and comply with the Convention on the International Regulations for Preventing Collisions at Sea (COLREGS)—basically, we must show that our vessel can operate safely at sea and not collide with another vessel or run aground with only sparse remote supervision. As the system and capability mature, we must also demonstrate that the ship can simultaneously execute its intended mission and comply with COLREGS.
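
The flavor of those checks can be shown with a drastically simplified encounter classifier based on COLREGS Rules 13 through 15 (overtaking, head-on, crossing). This is my own toy approximation; the real rules also weigh aspect, lights, vessel type, and restricted visibility.

```python
def classify_encounter(contact_rel_bearing_deg: float, heading_diff_deg: float) -> str:
    """Very rough COLREGS classification from own ship's point of view.

    contact_rel_bearing_deg: bearing to the contact relative to own bow (0-360).
    heading_diff_deg: contact's heading minus own heading, normalized to 0-360.
    A simplification of Rules 13-15 for power-driven vessels in sight of one another.
    """
    b = contact_rel_bearing_deg % 360.0
    h = heading_diff_deg % 360.0

    # Rule 13: a vessel coming up from more than 22.5 deg abaft our beam is overtaking us.
    if 112.5 < b < 247.5:
        return "stand-on (being overtaken)"
    # Rule 14: contact near dead ahead on a nearly reciprocal course -> head-on.
    if (b < 6 or b > 354) and 174 < h < 186:
        return "give-way (head-on: alter course to starboard)"
    # Rule 15: crossing; the vessel with the other on her starboard side keeps clear.
    if b <= 112.5:
        return "give-way (crossing, contact to starboard)"
    return "stand-on (crossing, contact to port)"

print(classify_encounter(45.0, 270.0))    # contact crossing from starboard -> give way
print(classify_encounter(300.0, 90.0))    # contact crossing from port -> stand on
```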

Maximize Modeling and Simulation

To cost-effectively mitigate the risk in our autonomy system performance at sea, we must verify quantitatively that the autonomy path-planner engines can navigate safely on the water. Our systematic approach to this quantitative verification is shown in the following assertions:

Assertion 1: Simulations

If the simulation can be demonstrated to correlate highly with on-water testing results in all relevant qualitative senses, we can be confident further simulation results are likely to reflect actual on-water behavior.

Assertion 2: Metrics

If metrics can be demonstrated to correlate highly with subject-matter experts’ understanding of safe navigation, we can be confident those metrics can be used for evaluation of the path planners.
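
One concrete metric of that kind is closest point of approach (CPA): over a simulated encounter, how near did the two tracks ever get, and did the planner keep that distance above a required minimum? The sketch below is a generic CPA check of my own devising, not the RACE evaluator listed below.

```python
import math

def min_cpa(own_track, contact_track):
    """Minimum separation between two tracks sampled at the same times.

    Each track is a list of (x, y) positions in meters.
    """
    return min(math.hypot(ox - cx, oy - cy)
               for (ox, oy), (cx, cy) in zip(own_track, contact_track))

def passes_safety_metric(own_track, contact_track, required_cpa_m=1000.0) -> bool:
    """Pass/fail: did the planner keep at least required_cpa_m of separation?"""
    return min_cpa(own_track, contact_track) >= required_cpa_m

# Toy example: two straight-line tracks sampled once per second.
own = [(t * 5.0, 0.0) for t in range(600)]                    # own ship heading east at 5 m/s
contact = [(3000.0 - t * 5.0, 800.0) for t in range(600)]     # contact heading west, 800 m north
print(min_cpa(own, contact), passes_safety_metric(own, contact))   # 800 m -> fails a 1 km requirement
```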

Assertion 3: Scenarios

If the set of scenarios can be demonstrated to provide good coverage of on-water situations, we can be confident that performing well in that set of scenarios will correlate with performing well in any on-water situation.

Assertion 4: Effective evaluation tools and methodology

If we have a good simulation (as per Assertion 1), good metrics (as per Assertion 2), and a good set of scenarios (as per Assertion 3) along with a path planner that performs well in that environment, we can be confident that the path planner really is capable of doing safe navigation.

These assertions resulted in three distinct categories of products being developed to support the safe navigation requirement analysis for the maritime autonomy program:

  • Simulations (Archivist Simulation Integration Framework, Distributed Simulation Environment)
  • Metrics (Real-time Autonomy COLREGS Evaluator [RACE])
  • Scenarios

Prior to at-sea testing, Leidos conducted more than 26,000 simulation runs modeling more than 750 different meeting, crossing and overtaking scenarios in its System Integration Laboratory (SIL) to demonstrate that the autonomy suite would direct actions in accordance with the COLREGS for avoiding collision. Scenarios were developed with the assistance of former U.S. Naval officers with Officer of the Deck and/or Command at Sea certifications, who used a design-of-experiments approach (levels and factors, bounded by the Taguchi method) and included stand-on and give-way behaviors. The approach used to generate and test scenarios is shown in Figure 2.
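
A full-factorial version of that scenario sweep is easy to sketch. The factors and levels below are illustrative guesses, and the program reportedly bounded the design with a Taguchi approach rather than running the entire grid:

```python
import itertools

# Hypothetical factors and levels for encounter scenarios; not the program's actual design.
factors = {
    "encounter": ["meeting", "crossing", "overtaking"],
    "contact_speed_kts": [5, 10, 20],
    "contact_bearing_deg": [15, 45, 90, 135],
    "intended_cpa_m": [250, 500, 1000],
    "own_role": ["give-way", "stand-on"],
}

scenarios = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
print(f"{len(scenarios)} scenarios in the full factorial")   # 3 * 3 * 4 * 3 * 2 = 216
print(scenarios[0])
```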

FIGURE 2. APPROACH USED TO GENERATE AND TEST SCENARIOS


Employ a Surrogate Vessel Early

After satisfactory completion of SIL testing, the autonomy suite was installed on a 42-foot test vessel, where frequency-modulated continuous-wave and “X”-band radars provided the sensor input to the autonomy suite, and commands from the autonomy suite were forwarded to the vessel’s autopilot for control of the rudder and engines. The test vessel acted as an ACTUV surrogate and allowed for testing of all the autonomy software and ACTUV sensor systems in parallel with the ACTUV ship construction. Before ACTUV ever goes to sea, the autonomy system and sensors will be proven at sea on the surrogate vessel, thereby reducing overall program risk and duration.

To date, more than 100 different scenarios have been executed at sea with the surrogate vessel. During these test scenarios, the autonomy system directed course and speed changes of the surrogate vessel to stay safely outside a 1-kilometer standoff distance from the interfering vessels. The test program clearly demonstrated the ability of the surrogate to maneuver and avoid collision with another vessel and paved the way for follow-on testing involving multiple interfering contacts and adversarial behaviors of interfering vessels.
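
The standoff check itself is the simplest piece of the puzzle. A toy monitor (again, my own sketch rather than the fielded software) just watches predicted range to each interfering vessel and asks for a maneuver whenever the 1-kilometer bubble would be violated.

```python
import math

STANDOFF_M = 1000.0   # the 1 km standoff distance used in the surrogate tests

def predicted_min_range(own, contact, horizon_s=60.0, step_s=5.0):
    """Minimum predicted range over the horizon, assuming constant course and speed.

    own/contact are dicts: {"x": m, "y": m, "vx": m/s, "vy": m/s}.
    """
    ranges = []
    t = 0.0
    while t <= horizon_s:
        dx = (own["x"] + own["vx"] * t) - (contact["x"] + contact["vx"] * t)
        dy = (own["y"] + own["vy"] * t) - (contact["y"] + contact["vy"] * t)
        ranges.append(math.hypot(dx, dy))
        t += step_s
    return min(ranges)

def needs_maneuver(own, contacts):
    """Return the contacts whose predicted range violates the standoff bubble."""
    return [c for c in contacts if predicted_min_range(own, c) < STANDOFF_M]

own = {"x": 0.0, "y": 0.0, "vx": 6.0, "vy": 0.0}
contacts = [{"x": 1500.0, "y": 100.0, "vx": -6.0, "vy": 0.0}]
print(needs_maneuver(own, contacts))   # closing nearly head-on inside 1 km -> maneuver required
```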

In addition to the structured test events, the surrogate vessel recently completed a voyage between Biloxi and Pascagoula, Mississippi, with only a navigational chart of the area loaded into its memory and inputs from its commercial off-the-shelf  radars. The surrogate vessel sailed the complicated, inshore environment of the Gulf Intracoastal Waterway, avoiding shoal water, aids and hazards to navigation, and other vessels in the area—all without preplanned waypoints or human direction or intervention. During the 35-nautical-mile voyage, the maritime autonomy system functioned flawlessly, avoiding all obstacles, buoys, land, and interfering vessels.

The Leidos team commenced construction of the first ACTUV vessel in 2014. Named Sea Hunter, this prototype vessel is to launch in early 2016 and embark on a 2-year test program co-sponsored by DARPA and the Office of Naval Research. While problems and issues undoubtedly will surface during this test program (they always do for the first vessel of a class), it is hoped that the number and severity of the issues will be minimized by the work, testing and risk-reduction efforts in the design and execution of the program.

In a program as complex and software-intensive as ACTUV, you have to look beyond the “build a little, test a little” approach and find innovative ways to mitigate as much of the program risk as possible, as early as possible. Ultimately, the success of the ACTUV program will have its roots in the risk-reduction efforts employed in building and testing the autonomy system in parallel with the construction of the vessel. Fielding a revolutionary concept such as ACTUV requires a blend of innovative program management, breakthrough technical skill and a tuned test program.

 



from Armed with Science http://ift.tt/1TPxE5O

By David Antanitus

Fielding independently deployed unmanned surface vessels, designed from the ground up so that no person steps aboard at any point in the operating cycle and controlled only through sparse remote supervision, is the next necessary technology leap if we are to drastically reduce the number of personnel required to support our warfighting missions and platforms. The Defense Advanced Research Projects Agency (DARPA) took up that challenge in early 2010, when it invited industry to design and build the Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV): an autonomy suite and a ship built around it. This revolutionary concept for a maritime vessel, currently being built by an industry team led by Leidos, constitutes the first step in developing a ship with autonomous behaviors capable of extended at-sea operations. To meet all of the DARPA requirements for ACTUV, the Leidos team had to formulate and implement a robust risk-reduction plan.


Don’t Reinvent the Wheel

Building the first ship of a class carries numerous inherent risks. Construction of the vessel aside, the real science, and hence the majority of the program risk, is in developing an autonomy system that can (1) sense its environment and the health of its own systems, (2) make intelligent decisions to optimize machinery lineups and sensor employment, (3) avoid other ships and obstacles, and (4) execute the intended mission. So, when tasked with developing this maritime autonomy suite for ACTUV, where do you start, and how do you limit the risk in designing the autonomy architecture to meet such complex requirements?

The Leidos team’s first step in risk reduction for ACTUV was to leverage code already written for less complex autonomous systems. In the 1990s, the NASA Jet Propulsion Laboratory (JPL) developed the Control Architecture for Robotic Agent Command and Sensing (CARACaS) for the Mars Rover Project, and CARACaS has since been successfully adapted for several unmanned surface vessel programs. DARPA’s Grand Challenge I and II and its Urban Challenge, in turn, produced proven autonomy architectures for autonomous ground vehicles. Leidos leveraged the work done by JPL in developing CARACaS and by DARPA in developing Urban Challenge (NREC Engine) to develop a maritime autonomy capability that uses open standards, libraries and tools.

FIGURE 1. AUTONOMY ARCHITECTURE WITH REMOTE SUPERVISORY CONTROL STATION (RSCS)


Employ a Truly Open Architecture

The ACTUV autonomy suite contains decision algorithms embedded as software modules using an object-oriented framework in which key interface definitions isolate algorithm implementations. It supports multiple, simultaneously executing decision engines and the arbitration logic to choose the best decisions for future actions. It implements a true open systems architecture (OSA) approach that allows for the autonomy capability to be modularly connected to other subsystems—within the same platform and external to the platform. This “plug-and-play” modularity minimizes life-cycle costs, enables reuse, and promotes healthy competition among capability vendors. It also reduces overall risk to the program. In addition, the autonomy capability implements the Service Availability Forum industry standards to achieve a high-availability solution that results in near-continuous uptime when the system is fully integrated.
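
To make the plug-and-play idea concrete, here is a minimal sketch, in Python, of what an arbitrated, modular decision-engine layout can look like. It is illustrative only: the class names, the world-state fields and the highest-confidence-wins arbitration rule are invented for this example and are not the ACTUV interfaces.

# Minimal sketch of a plug-and-play decision-engine layout. Illustrative only:
# these classes and the voting rule are invented, not the ACTUV implementation.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class Decision:
    heading_deg: float   # commanded course
    speed_kts: float     # commanded speed
    confidence: float    # 0..1 score supplied by the proposing engine

class DecisionEngine(ABC):
    """Common interface: any vendor's engine can be swapped in behind it."""
    @abstractmethod
    def propose(self, world_state: dict) -> Decision: ...

class CollisionAvoidanceEngine(DecisionEngine):
    def propose(self, world_state: dict) -> Decision:
        # placeholder behavior: come right when a contact closes inside 2 km
        if world_state.get("closest_contact_km", 999.0) < 2.0:
            return Decision((world_state["own_heading"] + 30.0) % 360.0, 8.0, 0.9)
        return Decision(world_state["own_heading"], 12.0, 0.2)

class MissionEngine(DecisionEngine):
    def propose(self, world_state: dict) -> Decision:
        # placeholder behavior: steer toward the next waypoint
        return Decision(world_state["waypoint_bearing"], 12.0, 0.5)

def arbitrate(proposals: List[Decision]) -> Decision:
    # toy arbitration: the highest-confidence proposal wins
    return max(proposals, key=lambda d: d.confidence)

engines: List[DecisionEngine] = [CollisionAvoidanceEngine(), MissionEngine()]
state = {"own_heading": 90.0, "waypoint_bearing": 75.0, "closest_contact_km": 1.4}
print(arbitrate([engine.propose(state) for engine in engines]))

The point of such an interface is that a new engine, or a competing vendor’s engine, can be dropped in behind DecisionEngine without touching the arbitration logic or the rest of the suite.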

The OSA uses the Society of Automotive Engineers (SAE) AS4 Joint Architecture for Unmanned Systems (JAUS) messaging between major segments and the OMG Data Distribution Service (DDS) message protocol layer to achieve advanced quality of service. The autonomy engine is a set of algorithm-level specifications for the behaviors and capabilities of the autonomy platform. It lists all the important, high-level, mission-oriented tasks either planned or implemented in the context of the vehicle scenario. It employs a modular approach that supports a Distributed Hierarchical Autonomy (DHA) model and uses replaceable, modular and standard interfaces.

Putting all of the components and modules together, we end up with an autonomous ship-control system based on a DHA that employs new advances such as self-learning and multi-model arbitration. However, before we take this system to sea, we must demonstrate that our ship can safely navigate and comply with the Convention on the International Regulations for Preventing Collisions at Sea (COLREGS): basically, we must show that our vessel can operate safely at sea, neither colliding with another vessel nor running aground, with only sparse remote supervision. As the system and its capabilities mature, we must also demonstrate that the ship can execute its intended mission and comply with COLREGS at the same time.

Maximize Modeling and Simulation

To cost-effectively mitigate the risk in our autonomy system performance at sea, we must verify quantitatively that the autonomy path-planner engines can navigate safely on the water. Our systematic approach to this quantitative verification is shown in the following assertions:

Assertion 1: Simulations

If the simulation can be demonstrated to correlate highly with on-water testing results in all relevant qualitative senses, we can be confident further simulation results are likely to reflect actual on-water behavior.

Assertion 2: Metrics

If metrics can be demonstrated to correlate highly with subject-matter experts’ understanding of safe navigation, we can be confident those metrics can be used for evaluation of the path planners.

Assertion 3: Scenarios

If the set of scenarios can be demonstrated to provide good coverage of on-water situations, we can be confident that performing well in that set of scenarios will correlate with performing well in any on-water situation.

Assertion 4: Effective evaluation tools and methodology

If we have a good simulation (as per Assertion 1), good metrics (as per Assertion 2), and a good set of scenarios (as per Assertion 3) along with a path planner that performs well in that environment, we can be confident that the path planner really is capable of doing safe navigation.

These assertions resulted in three distinct categories of products being developed to support the safe navigation requirement analysis for the maritime autonomy program:

  • Simulations (Archivist Simulation Integration Framework, Distributed Simulation Environment)
  • Metrics (Real-time Autonomy COLREGS Evaluator [RACE])
  • Scenarios
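
As a rough illustration of how Assertions 1 through 4 fit together, the sketch below wires a stand-in simulator, a toy safe-navigation metric and a batch of generated scenarios into a single evaluation loop. None of it reflects the Archivist, Distributed Simulation Environment or RACE interfaces; the function names, the closure model and the pass criterion are placeholders.

# Schematic verification harness in the spirit of Assertions 1-4. All names and
# models here are placeholders, not the Archivist or RACE products named above.
import random

def simulate(scenario: dict) -> list:
    """Stand-in simulator: returns the range (km) to the contact at each step.
    The toy closure model ignores the geometry field it is handed."""
    rng = random.Random(scenario["seed"])
    range_km = scenario["initial_range_km"]
    track = []
    for _ in range(60):
        range_km = max(0.0, range_km - rng.uniform(0.0, 0.2))
        track.append(range_km)
    return track

def safe_navigation_metric(track: list, standoff_km: float = 1.0) -> bool:
    """Toy metric: True if the standoff distance was never violated."""
    return min(track) >= standoff_km

geometries = ("meeting", "crossing", "overtaking")
scenarios = [{"seed": 3 * s + gi, "initial_range_km": 8.0, "geometry": g}
             for s in range(100) for gi, g in enumerate(geometries)]

results = [safe_navigation_metric(simulate(sc)) for sc in scenarios]
print(f"passed {sum(results)} of {len(results)} scenario runs")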

Prior to at-sea testing, Leidos conducted more than 26,000 simulation runs modeling more than 750 different meeting, crossing and overtaking scenarios in its System Integration Laboratory (SIL) to demonstrate that the autonomy suite would direct actions in accordance with the COLREGS for avoiding collision. Scenarios were developed with the assistance of former U.S. Naval officers with Officer of the Deck and/or Command at Sea certifications, who used a design-of-experiments approach (levels and factors, bounded by the Taguchi method) and included stand-on and give-way behaviors. The approach used to generate and test scenarios is shown in Figure 2.

FIGURE 2. APPROACH USED TO GENERATE AND TEST SCENARIOS

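The scenario-generation step can be sketched in a few lines: enumerate the factor levels, then bound the full factorial with an orthogonal array. The factors and levels below are invented for illustration (the article does not publish the real ones); the nine-row array is the standard Taguchi L9(3^4) design, which covers every pairwise combination of levels while running only 9 of the 81 possible combinations.

# Illustrative design-of-experiments scenario grid. The factors and levels are
# invented; only the idea (a full factorial bounded by a Taguchi array) follows
# the approach described above.
from itertools import product

factors = {
    "geometry":      ["meeting", "crossing", "overtaking"],
    "contact_speed": ["slow", "medium", "fast"],
    "cpa_side":      ["port", "bow-on", "starboard"],
    "own_role":      ["give-way", "stand-on", "ambiguous"],
}

full_factorial = list(product(*factors.values()))
print(len(full_factorial), "combinations in the full 3^4 factorial")  # 81

# Standard Taguchi L9(3^4) orthogonal array: 9 runs, with every pair of levels
# from any two factors appearing together exactly once.
L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]

names = list(factors)
for row in L9:
    scenario = {names[i]: factors[names[i]][level] for i, level in enumerate(row)}
    print(scenario)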

Employ a Surrogate Vessel Early

After satisfactory completion of SIL testing, the autonomy suite was installed on a 42-foot test vessel, where frequency-modulated continuous-wave and X-band radars provided the sensor input to the autonomy suite and commands from the autonomy suite were forwarded to the vessel’s autopilot for control of the rudder and engines. The test vessel acted as an ACTUV surrogate and allowed all of the autonomy software and ACTUV sensor systems to be tested in parallel with construction of the ACTUV ship. Before ACTUV ever goes to sea, the autonomy system and sensors will have been proven at sea on the surrogate vessel, thereby reducing overall program risk and duration.

To date, more than 100 different scenarios have been executed at sea with the surrogate vessel. During these test scenarios, the autonomy system directed course and speed changes of the surrogate vessel to stay safely outside a 1-kilometer standoff distance from the interfering vessels. The test program clearly demonstrated the ability of the surrogate to maneuver and avoid collision with another vessel and paved the way for follow-on testing involving multiple interfering contacts and adversarial behaviors of interfering vessels.
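
The standoff check itself is ordinary relative-motion kinematics: from the two tracks, compute the distance and time at the closest point of approach (CPA) and flag anything that would pass inside the limit. The sketch below uses the standard CPA equations; the vessel states are made-up numbers, while the 1-kilometer threshold matches the standoff distance described above.

# Closest-point-of-approach check against the 1 km standoff. The kinematics are
# standard; the own-ship and contact states below are invented for illustration.
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (dcpa_km, tcpa_hr) for two constant-velocity tracks (km, km/h)."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]   # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]   # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                                # no relative motion: range is fixed
        return math.hypot(rx, ry), 0.0
    tcpa = max(-(rx * vx + ry * vy) / v2, 0.0)   # only future approaches matter
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return dcpa, tcpa

dcpa_km, tcpa_hr = cpa((0.0, 0.0), (0.0, 22.0), (5.0, 8.0), (-18.0, 0.0))
if dcpa_km < 1.0:
    print(f"standoff violated in {tcpa_hr:.2f} h (CPA {dcpa_km:.2f} km): maneuver")
else:
    print(f"contact passes clear at {dcpa_km:.2f} km")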

In addition to the structured test events, the surrogate vessel recently completed a voyage between Biloxi and Pascagoula, Mississippi, with only a navigational chart of the area loaded into its memory and inputs from its commercial off-the-shelf radars. The surrogate vessel sailed the complicated inshore environment of the Gulf Intracoastal Waterway, avoiding shoal water, aids and hazards to navigation, and other vessels in the area, all without preplanned waypoints or human direction or intervention. During the 35-nautical-mile voyage, the maritime autonomy system functioned flawlessly, avoiding all obstacles, buoys, land and interfering vessels.

The Leidos team commenced construction of the first ACTUV vessel in 2014. Named Sea Hunter, this prototype is to launch in early 2016 and embark on a 2-year test program co-sponsored by DARPA and the Office of Naval Research. Problems and issues undoubtedly will surface during that test program (they always do for the first vessel of a class), but the testing and risk-reduction work built into the program’s design and execution should keep their number and severity to a minimum.

In a program as complex and software-intensive as ACTUV, you have to look beyond the “build a little, test a little” approach and find innovative ways to mitigate as much of the program risk as possible, as early as possible. Ultimately, the success of the ACTUV program will have its roots in the risk-reduction efforts employed in building and testing the autonomy system in parallel with the construction of the vessel. Fielding a revolutionary concept such as ACTUV requires a blend of innovative program management, breakthrough technical skill and a tuned test program.

 



from Armed with Science http://ift.tt/1TPxE5O

Everything existing in the universe is the fruit of chance and necessity [Pharyngula]

I’ve been informed that I’ve been at war for a while. I was surprised. Apparently, Perry Marshall thinks he’s been firing salvo after salvo at me…I just hadn’t noticed.

Oh, OK. I would just ignore him, but he’s presenting some fascinatingly common misconceptions. One of his boogeymen is chance, and I’ve noticed that a lot of people hate the idea of chance. Uncle Fred got hit by lightning? He must have done something very bad. It can’t just have been an accident. There are no accidents!

Yes, Virginia, there are accidents, chance events, and random happenings, and solid scientific explanations have to include chance variation as a component. Even consistently predictable events on a macro scale often have a strong stochastic element to their underlying mechanisms.

Marshall, unfortunately, has this wrong idea that invoking chance is a cop-out — that randomness is bad and unscientific. So one of his salvos is a whole page of synonyms for random, which actually do more to reveal his ignorance than expose any problems with chance.

  • I don’t know
  • I don’t care
  • Can we go to lunch now
  • POOF!
  • Flying Spaghetti Monster
  • Magic
  • It wasn’t God so it must have been something else
  • Vague un-testable assertion that excuses me from doing my science job

Oh, rubbish.

Whether a pattern of variation is judged to be random is determined by the actual properties of the data set. Randomness is an empirical conclusion.

Here’s a simple example. I have a six-sided die. I throw it 666 times and record the result of each throw. What do you expect?

You probably expect about 111 “1”s, 111 “2”s, 111 “3”s, 111 “4”s, 111 “5”s, and 111 “6”s. But not exactly 111 of each; you expect some deviation from that number. You might actually suspect a non-random result if you got exactly 111 of each, because that would suggest some regularity. If the data showed that the first 111 throws all produced “1”, the second 111 produced “2”, etc., you’d immediately recognize that as non-random. That we can identify non-random series implies that we have some properties that we can examine to determine randomness.

We can even quantitatively predict properties of random sets of data; there are statistical theories and tests that can estimate how much variation we might expect from a given number of trials, how often and how long a run of repeated results should occur, etc. We can use these parameters to test for faked data, for instance.
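
For the die example, one such test is the chi-square goodness-of-fit test, which compares the observed face counts against the 111-per-face expectation. A short sketch using NumPy and SciPy (a p-value far below about 0.05 would flag the counts as suspiciously non-uniform, while counts sitting exactly on 111 each would be suspicious in the other direction):

# Chi-square goodness-of-fit test on 666 simulated throws of a fair die.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(42)
throws = rng.integers(1, 7, size=666)              # faces 1..6, fair die
counts = np.bincount(throws, minlength=7)[1:]      # tallies for faces 1..6
stat, p = chisquare(counts)                        # expected counts default to uniform
print("counts:", counts, f" chi2={stat:.2f}, p={p:.3f}")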

To demonstrate this to beginning students of probability, I often ask them to do the following homework assignment the first day. They are either to flip a coin 200 times and record the results, or merely pretend to flip a coin and fake the results. The next day I amaze them by glancing at each student’s list and correctly separating nearly all the true from the faked data. The fact in this case is that in a truly random sequence of 200 tosses it is extremely likely that a run of six heads or six tails will occur (the exact probability is somewhat complicated to calculate), but the average person trying to fake such a sequence will rarely include runs of that length.
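
The run-length trick is easy to verify by simulation: generate many sequences of 200 fair tosses and count how often a run of six or more identical results turns up.

# Monte Carlo check of the coin-flip homework trick: how often does a sequence
# of 200 fair tosses contain a run of at least six heads or six tails?
import random
from itertools import groupby

def longest_run(seq):
    return max(len(list(group)) for _, group in groupby(seq))

def has_run_of_six(n_tosses=200):
    tosses = [random.choice("HT") for _ in range(n_tosses)]
    return longest_run(tosses) >= 6

trials = 10_000
hits = sum(has_run_of_six() for _ in range(trials))
print(f"P(run of >= 6 in 200 tosses) is roughly {hits / trials:.3f}")  # about 0.96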

I’ve read whole books on the mathematical properties of randomness. I got into it for a while because of a problem that bugged me: I was watching the formation of peripheral sensory networks in zebrafish, which has both random and non-random aspects. Random: neurons grow out and branch in unpredictable ways; they don’t form a methodical pattern, like a fishnet stocking covering the animal. The specifics of branching vary from animal to animal. Non-random: the dispersal of the branches has to demonstrate adequate spacing — you shouldn’t have clumping, or large gaps in the coverage. There were rules, but they played out on a game board where chance drove the particulars.

Or on a grander scale, read David Raup’s Extinction: Bad Genes or Bad Luck? The answer to the question is both, but luck is the best way to describe some large-scale events in the history of life.

That’s the important thing: many phenomena have an underlying basis in chance, and are subsequently shaped by non-chance processes: you can’t model enzyme kinetics without acknowledging random molecular interactions given a direction by the laws of thermodynamics, for instance, or regard evolution without seeing the importance of chance variation, winnowed by selection.

And contra Marshall and a thousand other creationists, chance isn’t simply the answer we give when we don’t know what is going on. There are criteria. We have statistical tests for randomness and non-randomness, and we also use chance as a tool.

For example, one of the things I’ve been doing over this winter break is prepping some fly lines for mapping crosses my students will be doing in genetics next term. We use chance events to peek at the structure of the chromosome. Here’s how it works.

We set up flies with pairs of traits (actually, we’re doing three at a time, but that’s more complicated to explain), and we generate heterozygotes that we are going to cross to homozygotes (or in this case, because these are X-linked traits, we can cross to hemizygous males…but see, it’s already getting complicated). To keep it simple, here’s an example of a fly that is heterozygous for two genes, one for body color and one for eye color. Wild type flies are gray-bodied, and we have a recessive mutant yellow that gives the body a yellowish cast. Wild type eyes are red, and we have another recessive mutant that has white eyes.

[Figure: the fly cross]

The female, on the left, has a gray body and red eyes, because those traits are dominant, but she’s heterozygous, or a carrier for both recessive traits. The male has only one X chromosome, so he can only pass on the yellow and white traits, and he is also yellow bodied and white eyed.

As I’ve drawn them, if there is no other process in play, the female can pass on either a chromosome carrying yellow and white, or a chromosome carrying the gray and red traits. (Note that I’ve faded out the male contribution: he’s donating only recessive alleles, so the female’s contribution is what shows up in the phenotype, and you can ignore him*.) So by default, we’d expect half the progeny of this cross to be gray-bodied and red-eyed and the other half to be yellow-bodied and white-eyed. Note that the half-and-half distribution is itself a product of chance: in most situations, which X chromosome gets passed on is random.

[Figure: parental types]

But there is another process in play! During meiosis, the female can swap around portions of her two X chromosomes in a process called recombination. This is a chance event. One chromosome is broken at a random point along its length, and the other chromosome is broken at the equivalent position (a non-random choice), and they’re re-stitched together to form an intact chromosome with a different arrangement of the alleles. That allows some of the progeny to express a different pair of traits.

[Figure: recombinants]

Every time you see a fly with red eyes and yellow body, or white eyes and gray body, in this cross, you are seeing the phenotypic expression of a recombination event between the two genes.

We can use this chance event to make a map of genes, an insight that came to Thomas Hunt Morgan and Alfred Sturtevant around 1911. Yes, genetics has been using chance to study genes for over a hundred years.

The way this works…imagine you have a barn. On the side of the barn, you paint two targets: one is a square 2 meters on a side, and the other is a smaller square, 1 meter on a side. You then blindfold a person, hand them a gun, and tell them to blaze away in the general direction of the barn. At the end of the afternoon, after they’ve gone through a few boxes of ammo, you tell them the results: most of the time they completely missed the targets, but they hit the first one 18 times and the second one 4 times.

Can they estimate the relative size of the two targets?

Of course they can. And the more shots they take, the more accurate their estimate will be.
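
A quick simulation makes the same point. The wall dimensions and target placement below are invented; only the 2-meter and 1-meter squares come from the example.

# Blindfolded-shooter analogy as a simulation: the ratio of hits on the two
# targets estimates the ratio of their areas (true ratio is 4:1).
import random

def shoot(n_shots, wall_w=10.0, wall_h=5.0):
    big = small = 0
    for _ in range(n_shots):
        x, y = random.uniform(0, wall_w), random.uniform(0, wall_h)
        if 1.0 <= x <= 3.0 and 1.0 <= y <= 3.0:      # the 2 m square
            big += 1
        elif 5.0 <= x <= 6.0 and 1.0 <= y <= 2.0:    # the 1 m square
            small += 1
    return big, small

big, small = shoot(100_000)
print(f"hit ratio big:small is about {big / small:.1f} (true area ratio 4.0)")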

That’s what Sturtevant and Morgan were doing. They couldn’t see genes, they could barely see the chromosome, but they could use recombination to take random shots at the arrangement of alleles on the chromosome, and they could see whether they hit that spot between two genes, like yellow and white, by looking for the rearrangement in the phenotype. The frequency of those rearrangements relative to misses also told them the relative size of the target — how far apart yellow and white are on the chromosome.

(FYI, yellow and white are fairly close together, and we see recombination between them in only about 1.5% of the progeny of the cross. This gets reported as a map distance of 1.5 map units, or centimorgans, between yellow and white.)
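
For anyone who wants to watch the numbers fall out, here is a small simulation of the cross. It is a deliberately simplified model, assuming a single potential crossover between the two loci occurring with the ~1.5% frequency quoted above, and it counts recombinant gametes directly.

# Simplified simulation of the yellow/white mapping cross: recombinant gametes
# from the heterozygous female occur at the assumed 1.5% frequency, and their
# observed fraction recovers the map distance.
import random

REC_FREQ = 0.015   # assumed chance of a crossover landing between yellow and white

def female_gamete():
    # parental X chromosomes: (yellow, white) or (gray(+), red(+))
    parental = random.choice([("y", "w"), ("+", "+")])
    if random.random() < REC_FREQ:                 # crossover between the loci
        other = ("+", "+") if parental == ("y", "w") else ("y", "w")
        return (parental[0], other[1])             # swap the eye-color allele
    return parental

n = 200_000
recombinant = {("y", "+"), ("+", "w")}
count = sum(female_gamete() in recombinant for _ in range(n))
print(f"recombinant frequency: {100 * count / n:.2f}%  (map distance of about 1.5)")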

You cannot predict ahead of time whether a specific individual produced in this cross will be wild type, or white-eyed and gray-bodied, or any particular possibility. It is a random process with a stochastic distribution of results that has some general predictability, just like an individual bullethole produced by the blindfolded shootist.

This seems to baffle creationists. They have a deep antipathy to randomness on principle, but even worse, they seem incapable of realizing that scientists can be simultaneously studying chance events that have statistically predictable outcomes — like genetics, or evolution, or the physics of sub-atomic particles. It’s a guaranteed way to blow their minds to point out that on one scale a phenomenon might be chance driven, but stepping back and looking at the whole reveals a regularity and pattern.

At which point they stagger back and declare that the small-scale events have to be determined and specified and predictable too, and therefore nothing is random. They just don’t get it.


*Misandry!



from ScienceBlogs http://ift.tt/1OKBOfp


Astronaut Chris Hadfield’s Space Oddity

This video was published on YouTube on May 12, 2013. We’re republishing it today to honor the inimitable David Bowie.

It’s a version of David Bowie’s Space Oddity, recorded by Commander Chris Hadfield on board the International Space Station.

Yes, it’s actually Hadfield on guitar and vocals …

Bottom line: Commander Chris Hadfield aboard ISS singing David Bowie’s Space Oddity, republished today to honor Bowie, who died Sunday at age 69.



from EarthSky http://ift.tt/1Kv4Zl1

