Welcome to the Weekly Geek, your weekly note on some of the most exciting, innovative, or just plain weird war technology people are talking about online, seasoned with just a dash of International Humanitarian Law (IHL).
From Roman gladiators to F-35 pilots, warfare has always been an act committed by humans, against humans. Even our most advanced UAVs require someone to “pull the trigger” and make a conscious decision to end a life. Advances in technology, however, are turning the specter of machines operating on the battlefield without any human interaction into a reality, and IHL is struggling to keep up.
An autonomous weapon can do three things without human intervention: (1) search for a target, (2) identify a target, and (3) attack the target. We already have machines that satisfy all three of these criteria, albeit in very specific circumstances. These “rule-based” or “dumb” autonomous weapons act within such narrow parameters that they are used exclusively in a defensive posture (where programmable conditions are more stable). For example, the Phalanx Close-In Weapon System (CIWS) can recognize and destroy an incoming anti-ship missile, but that’s pretty much all it can do.
Similarly, the Israeli Trophy system can recognize and destroy an incoming anti-tank projectile, but it has such specific parameters that each new anti-tank rocket must be programmed into the system as a potential threat. In short, these systems are more automated than autonomous, and they certainly aren’t flexible or capable of learning on their own. They are, however, able to react to a threat faster and more accurately than a human, and therein lies the desire for better, faster, and smarter autonomous systems.
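The “rule-based” behavior described above can be sketched, very loosely, as a lookup against a library of pre-programmed threat profiles. Every name and number below is invented for illustration; no real system works this simply:

```python
# Hypothetical sketch of a "rule-based" defensive system: it only engages
# contacts whose sensor readings match a pre-programmed threat profile.
# All profile names and values are illustrative, not from any real system.

KNOWN_THREATS = {
    "anti-ship-missile-A": {"speed_min": 250, "radar_cross_section_max": 0.5},
    "anti-tank-rocket-B": {"speed_min": 100, "radar_cross_section_max": 0.2},
}

def classify(contact):
    """Return the name of the matching threat profile, or None.

    A contact that matches no pre-programmed profile is simply ignored:
    the system cannot improvise or learn a new threat on its own.
    """
    for name, profile in KNOWN_THREATS.items():
        if (contact["speed"] >= profile["speed_min"]
                and contact["radar_cross_section"] <= profile["radar_cross_section_max"]):
            return name
    return None

# A contact matching a programmed profile is recognized as a threat...
incoming = {"speed": 300, "radar_cross_section": 0.3}
print(classify(incoming))   # "anti-ship-missile-A"

# ...while a brand-new weapon type isn't recognized at all.
novel = {"speed": 90, "radar_cross_section": 0.4}
print(classify(novel))      # None: it must be programmed in first
```

The second case is the point of the sketch: until someone adds a profile for the new weapon, the system is blind to it, which is exactly why each new anti-tank rocket has to be programmed into a system like Trophy.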
The true challenge comes with the development of fully autonomous “smart” machines that can learn and adapt. Advanced robotic systems can already learn continuously from trial and error and build up a rudimentary “knowledge.” This raises a number of fundamental concerns under IHL. At the top of any list is the question of individual responsibility when an autonomous weapon commits a war crime. Depending on whom you ask, it’s the engineer who designed the weapon, the commander who deployed it, or a host of other candidates, each answer struggling to assign blame when a machine misbehaves.
There’s a growing movement to ban these so-called “killer robots,” but given the combat potential of autonomous systems, it’s hard to imagine a world where states decline to field such potent tools. This debate has carried on as little more than a whisper for decades, but be prepared for it to roar to life in the coming years. That said, don’t expect Terminator-like drones to be kicking down doors any time soon, and if you’re a little freaked out by all this, here are two full minutes of DARPA robots falling over.
This week, IHL Legal Adviser Federico Barillas passed along a fascinating article on US military plans for cyborg soldiers. The article refers to implants that could aid the human body in a combat environment, known in the tech world as biohacking.
Biohacking is, in short, the blending of human physiology and machines to enhance mental or physical performance. This may sound rather outlandish, but it refers to more advanced versions of things you’re probably familiar with, like artificial hearts or pacemakers. With the rapid development of ever smaller and more autonomous “smart” technology, it was only a matter of time before these devices were implanted right into the body.
Believe it or not, this practice is real-world right now. Some biohacks are aesthetic: companies like Grindhouse Wetware will implant LED lights under your skin that let your body light up in response to your heart rate, movement, etc. Other biohacks are more industrious, such as those built by the startup Chaotic Moon Studios. Chaotic Moon has developed tattoos whose conductive ink forms a circuit within your skin. These circuits can hold gigabytes of personal and medical information and even transmit signals. Chaotic Moon’s CEO can unlock his iPhone simply by waving it over his forearm, because his tattoo is actually a miniature computer. These “tech tats” can also notify you if you have a fever or an abnormal heart rate.
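As a rough mental model, a tattoo-based “unlock” works the way any short-range tag does: the phone reads a stored credential when the tag is close enough, and stays locked otherwise. Here is a toy simulation of that idea; every class, credential, and distance threshold is invented for illustration, and no real tech tat, NFC protocol, or Chaotic Moon product is modeled:

```python
# Toy simulation of a proximity-credential unlock, loosely inspired by the
# "wave the phone over the forearm" scenario. Entirely illustrative.

class TechTat:
    """A skin-embedded tag storing a small payload of bytes."""
    def __init__(self, credential: bytes):
        self.credential = credential

    def transmit(self) -> bytes:
        # A real tag would modulate this over a short-range radio link.
        return self.credential

class Phone:
    def __init__(self, expected_credential: bytes):
        self.expected = expected_credential
        self.locked = True

    def scan(self, tag: TechTat, distance_cm: float) -> None:
        # Only reads the tag within a few centimeters, NFC-style.
        if distance_cm <= 4.0 and tag.transmit() == self.expected:
            self.locked = False

tat = TechTat(credential=b"owner-secret-123")
phone = Phone(expected_credential=b"owner-secret-123")

phone.scan(tat, distance_cm=30.0)   # too far away: stays locked
print(phone.locked)                 # True
phone.scan(tat, distance_cm=2.0)    # wave over the forearm: unlocks
print(phone.locked)                 # False
```

The same sketch hints at the IHL wrinkle discussed below: the credential travels with the body, so “disarming” the bearer isn’t as simple as confiscating a device.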
The US military has been actively engaged with the biohacking industry and has even coined the phrase “human-machine teaming.” Critical mission data could be carried physically within a soldier and aid with perception and judgment. Surgical implants would allow soldiers to see in the dark and respond to injuries. The TALOS suit, a feature of last week’s Geek and due to be released by the Army for trials next year, uses liquid armor to fill wounds.
From an IHL perspective, biohacking raises some interesting questions:
- Can information/data that is integrated into a combatant’s body be physically removed if they are captured?
- If a soldier’s body emits signals (like the iPhone-unlocking tattoo from Chaotic Moon) that could hack into enemy systems, can that soldier ever truly be removed from combat? Is the body itself a threat that can be legally eliminated?
- Several prototype systems can emit pulses that can disable electronics or overload the eardrums of an enemy. What are the consequences of weaponizing the human body?
This may all sound like science fiction, but these are technologies that already exist, so I would bet you’ll see miniaturized versions integrated into the biohacking field before long. Move over, FitBit.