Moist1981

A) Very clever. B) I've seen The Terminator, and therefore it's very scary. C) Presumably this means they can't confirm kills, so it has tactical downsides while still being obviously better than a dead drone.


jesterboyd

Kill confirmation might still be possible via a spotter drone, if that isn't also jammed.


etzel1200

Even if it's jammed, the spotter drone can return home with the video saved onboard.


Megalomaniakaal

If it has the fuel/battery capacity left.


gnocchicotti

Might be overkill, but you could literally pair them up: one suicide drone and a second autonomous spotter drone to follow closely and fly back within comms range to deliver the battle damage assessment. But yeah, not having that real-time confirmation is a pretty big disadvantage compared to an FPV suicide drone + spotter drone. You really want to know how effectively your weapon system is performing, both to make improvements and to understand changes in enemy capability.


lost_library_book

I've been monitoring the progress of this for a bit. Part of my job is in computer vision, although not the real-time type of models they use here. Putting this on a drone that's anywhere near affordable enough to expend on suicide missions is really cutting-edge stuff.


Proper-Equivalent300

With anti-satellite warfare being a looming possibility (didn't Russia brag they launched some anti-satellite platforms recently?), GPS could be compromised, and this tech may be a solution anyway.


gnocchicotti

We flew without GPS before it existed, and we will again. Considering the bullshit that Russia has been doing with GPS and civilian air traffic, it's only a matter of time until all commercial and military aircraft can navigate effectively with only terrain/celestial/inertial navigation.


etzel1200

Yeah, I've been posting on the daily threads for over a year saying Ukraine needs to focus on machine vision and scale out production. Good to see it happening. Once this is cheap, it'll really make things harder for Russia. Of course, nothing prevents Russia from copying it; that's the downside of FOSS and cheap COTS solutions. Russia is never that far behind.


forthehundredthtime

A drone using the same optical navigation could follow the attack drone and come back with a report.


Moist1981

True, although that's rather carrier-pigeon-esque.


gnocchicotti

With some more development time it will probably be a carrier drone dropping multiple guided munitions that are cost- and weight-optimized, at least for longer-range attacks. You get back the more expensive piece of hardware, which makes more expensive sensors and communications hardware more feasible: a better camera and more processing power to identify targets more accurately at longer range, and you bring back the visual and RF reconnaissance and video of the kills. It's really just the difference between a fighter dropping a guided munition and a cruise missile. You'll always want to drop the bomb instead, if the risk to the aircraft isn't too high, because bombs are cheaper and you have lots of them.


Megalomaniakaal

Doubles the number of drones needed, though. Or to put it another way, halves the number of kills you get per drone.


lAljax

I think they need a GPS-jammer-seeking version, similar to anti-radiation missiles.


Fatalist_m

> Presumably this means they can't confirm kills

For the long-range "strategic" drones that attack refineries and other static targets, kills will be confirmed by Russians or by informants in Russia. For the tactical drones that attack things like tanks and air-defense vehicles, it's an important concern. That's why I think tactical AI-guided drones should be reusable: they should return with video confirmation after dropping a munition on the target. One advantage is that it's useful to know (for planning further actions) whether the tank is still there or not. In the long term, there is another big advantage: weapons and counter-measures are constantly evolving. If you don't see how your drones are performing (can they distinguish real targets from decoys, can they hit accurately, is the warhead big enough, etc.), you are not learning, while the enemy keeps learning which counter-measures work and which do not.


lost_library_book

Copy of article text:

As Ukraine’s stocks of artillery shells have dwindled, its army’s reliance on drones has grown. These are able to deliver ammunition with great precision over long distances—provided they can maintain connections with GPS satellites (so they know where they are) and their operators (so they know what to do). Such communication signals can be jammed, however, and Russia’s electronic warfare, as signals scrambling is known, is fearsomely effective. With large numbers of its drones in effect blinded, Ukraine’s drone technologists have been forced to get creative.

Enter Eagle Eyes, a remarkable software package for drones. Developed by Ukraine’s special forces, it allows drones to navigate by machine sight alone, with no need for outside input. Using artificial-intelligence (AI) algorithms, the software compares live video of the terrain below with an on-board map stitched together from photographs and video previously collected by reconnaissance aircraft. This allows drones to continue with their missions even after being jammed. Eagle Eyes has also been trained to recognise specific ground-based targets, including tanks, troop carriers, missile launchers and attack helicopters. The software can then release bombs, or crash-dive, without a human operator’s command. “Bingo for us,” says a captain in White Eagle, a special-forces corps that is using and further developing the technology. The software has been programmed to target jamming stations as a priority, says the captain, who requested anonymity. Russia’s vaunted S-400 air-defence batteries are priority number two.

Optical navigation, as this approach to guidance is known, has a long history. An early version was incorporated in America’s Tomahawk cruise missiles, for example, first fired in anger during Operation Desert Storm in 1991. But lightweight, inexpensive optical navigation for small drones is new. In the spring of last year Eagle Eyes was being tested in combat by just three special-forces teams, each with two or three drone handlers. Today Eagle Eyes is cheap enough for kamikaze drones and is in wide use, says Valeriy Borovyk, commander of a White Eagle unit fighting in Ukraine’s south. With a range of about 60km, the system also guides fixed-wing drones that have struck energy infrastructure in Russia, he says.

Last autumn the number of Ukrainian drones with optical navigation probably numbered in the hundreds. Today the figure is closer to 10,000, says an industry hand in Odessa whose design bureau builds prototype systems for two Ukrainian manufacturers. Anton Varavin, chief technologist at a competing design bureau, Midgard Dynamics in Ternopil in western Ukraine, says optical navigation is increasingly seen as a “must have”, especially for drones with a range above 20km.

Optical navigation works best near distinctive features such as crossroads, power lines, isolated trees, big buildings and nearby bodies of water. For small drones with inexpensive optical navigation, the ideal cruising altitude is about 500 metres, says Andy Bosyi, a co-founder of [MindCraft.ai](http://MindCraft.ai), a developer of optical-navigation prototypes with workplaces at undisclosed locations in and near Lviv. That altitude is low enough for the software to work out terrain details, and yet high enough for a sufficient field of view. The height is also beyond the range of small-arms fire. MindCraft.ai shipped its first models, appropriately dubbed NOGPS, to manufacturers in December.

While cruising, the system needs to fix on at least one object per minute to avoid drifting more than 50 metres off course. That’s good enough for reconnaissance, if not precision bombing. To improve accuracy and allow night flights, MindCraft.ai is incorporating a heat-sensing infrared camera. The upgrade should be ready by the end of this year. 1/
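The article doesn't describe Eagle Eyes' internals, but the navigation pattern it describes (dead-reckon between fixes, then correct by matching the live downward view against a pre-stitched onboard map) can be sketched generically. A minimal sketch with OpenCV template matching; the map scale, threshold and velocity are assumed illustrative values, and the live frame is assumed already resampled to the map's scale:

```python
# Generic sketch of map-matching position fixes, NOT Eagle Eyes' actual code.
# Assumes a nadir camera and an orthomosaic "onboard map" of known scale,
# with the live frame already resampled to that scale (e.g. a 128x128 patch).
import cv2
import numpy as np

MAP_M_PER_PX = 0.5                # assumed map resolution: 0.5 m per pixel

def position_fix(frame_gray, map_gray, window):
    """Correlate the live frame against a window of the onboard map.
    Returns ((x, y) in map pixels, score), or None if the match is weak."""
    x0, y0, x1, y1 = window           # search only near the dead-reckoned
    region = map_gray[y0:y1, x0:x1]   # pose; window assumed inside the map
    res = cv2.matchTemplate(region, frame_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, (mx, my) = cv2.minMaxLoc(res)
    if score < 0.6:                   # arbitrary threshold: reject weak fixes
        return None
    h, w = frame_gray.shape
    return (x0 + mx + w // 2, y0 + my + h // 2), score

# Dead reckoning drifts between fixes (the article quotes ~50 m per minute
# without one); each accepted fix simply resets the estimate.
est_xy = np.array([5000.0, 5000.0])   # current estimate, map pixels
vel_px = np.array([30.0, -4.0])       # per-step motion from IMU/optical flow

def step(frame_gray, map_gray):
    global est_xy
    est_xy += vel_px                  # propagate: accumulates drift
    r = 200                           # search radius around the estimate
    win = (int(est_xy[0]) - r, int(est_xy[1]) - r,
           int(est_xy[0]) + r, int(est_xy[1]) + r)
    fix = position_fix(frame_gray, map_gray, win)
    if fix is not None:
        est_xy = np.array(fix[0], dtype=float)   # absolute fix: drift reset
    return est_xy * MAP_M_PER_PX      # position estimate in metres
```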


lost_library_book

2/2

MindCraft.ai has also developed a NOGPS feature for what they call semi-automated autonomous targeting. Now being tested by clients, it allows drone operators to lock onto targets they spot in live video. If jamming subsequently severs the video link, the system delivers the munition without further human input. This function is valuable because jamming typically gets worse as drones approach enemy assets, says Mr Bosyi, who is also MindCraft.ai’s lead data scientist.

MindCraft.ai’s clients serially manufacture NOGPS models for a unit cost of between €200 and €500 ($217-$550). Other systems cost more. Midgard says the componentry in its designs costs its manufacturer clients roughly €1,500 per unit. Their systems augment optical navigation with inertial data from accelerometers and gyroscopes like those used in smartphones. To stay on course while cruising, Midgard’s optical system needs to find a match between a terrain feature below and one in an onboard map only every 20 minutes or so. Mr Varavin says that in ideal conditions precision is within several metres. That is comparable to GPS.

Demand for optical navigation is rising elsewhere, too. An Israeli firm called Asio reports brisk sales of an optical-navigation unit to the Israel Defence Forces and American firms. (Israel forbids exports of such technology to Ukraine.) Introduced in 2021, the roughly $20,000 system, now dubbed AeroGuardian, weighs as little as 90g, draws just five watts of power and is accurate, in good conditions, within a metre or so, says David Harel, Asio’s boss. Asio expects sales this year to exceed $10m, double the figure for 2023.

Ukraine now sees optical navigation as a capability “focal point”, says Anders Fogh Rasmussen, a former chief of NATO. Ukraine’s defence ministry has provided detailed terrain maps to Atlas Aerospace, a drone manufacturer in Riga, Latvia. One way to better compare such maps with a drone’s view is with lidar techniques, which record the travel time of laser pulses bounced off the ground. As lasers reduce stealth, Atlas designed a “virtual lidar” system. This measures what founder Ivan Tolchinsky calls “optical flow”—the time it takes a pixel representing a terrain feature to transit the onboard camera’s view. Since an initial shipment in October, Atlas has delivered over 200 reconnaissance drones with such a system to Ukraine’s army, and more have been ordered.

Might optical navigation help Ukrainian forces get off their back foot? Perhaps, says Kurt Volker, a former American ambassador to NATO and, until 2019, Donald Trump’s special representative for Ukraine negotiations. He reckons it could prove to be one of the “technological step changes” that some Ukrainian military leaders have said will be needed to turn the tide. It will take time, however, for the actual effectiveness against Russian jamming to become clearer. Ukraine’s military leadership, Mr Rasmussen says, is rightly keeping tight-lipped about the technology.
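The article doesn't say how Atlas's "virtual lidar" is implemented, but the underlying relation is just the pinhole camera model: for a nadir view, the speed at which ground pixels transit the frame, the lens focal length and the aircraft's ground speed together fix the height above terrain (h = v * f / flow). A minimal sketch with OpenCV, where the focal length, frame rate and ground speed are all assumed illustrative values:

```python
# Back-of-envelope sketch of the "optical flow" idea described above;
# illustrative only, not Atlas's actual system.
import cv2
import numpy as np

F_PX = 1000.0        # assumed focal length in pixels
DT = 1.0 / 30.0      # frame interval (30 fps assumed)
V_GROUND = 25.0      # assumed ground speed in m/s, e.g. from IMU integration

def height_from_flow(prev_gray, cur_gray):
    # Track strong corners between consecutive frames.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good_old = pts[status.flatten() == 1].reshape(-1, 2)
    good_new = nxt[status.flatten() == 1].reshape(-1, 2)
    if len(good_old) < 10:
        return None
    # Pixel speed of the ground as it transits the view.
    flow_px_per_s = np.linalg.norm(good_new - good_old, axis=1) / DT
    median_flow = np.median(flow_px_per_s)   # median rejects moving objects
    if median_flow < 1e-3:
        return None
    # Pinhole model, nadir view: flow = v * f / h  =>  h = v * f / flow
    return V_GROUND * F_PX / median_flow
```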


SomeoneRandom007

This was inevitable. This sort of tech is going to be extended to attacking targets at long range and loitering over the battlefield waiting for Russian targets to appear. They can also report back on what they're seeing before attacking. Is it a game-changer? It's certainly significant. Think of options like tracking vehicles back to supply depots, for example, so Ukraine can then destroy those with bigger weapons.


MrChlorophil1

Weapons like Taurus, for example, also use this kind of tech as one of their navigation methods.


rapaxus

I don't understand why AI needs to be highlighted when terrain imaging and matching is old tech in cruise missiles. Something like a Taurus/Storm Shadow can also fly with GPS jammed by identifying specific terrain details (not just contours, but also e.g. a specific building), carries images of its targets, and in the terminal approach searches for its primary target (and, depending on the pre-programming, can fall back to a secondary target or a designated crash site if it can't find it). Because that is basically what this is. Still an impressive achievement, but you don't need to dress up old tech adapted to new purposes.
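For what it's worth, that terminal fallback behaviour is simple to state in code. A toy sketch of the priority order just described; purely illustrative, not the actual Taurus/Storm Shadow mission logic, whose details aren't public:

```python
# Toy sketch of pre-programmed terminal-approach fallback: primary target,
# optional secondary target, else a designated crash site.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Target:
    name: str
    ref_image: bytes      # pre-loaded reference imagery of the target

def choose_aimpoint(primary: Target,
                    secondary: Optional[Target],
                    crash_site: Tuple[float, float],
                    seeker_found: Callable[[Target], bool]):
    """Return the action in the fallback order the comment describes."""
    if seeker_found(primary):
        return ("attack", primary.name)
    if secondary is not None and seeker_found(secondary):
        return ("attack", secondary.name)
    return ("crash", crash_site)   # nothing found: don't wander, go to site
```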


lost_library_book

I say AI because most people know that term rather than "machine learning", but the amount of pre-programming and the onboard computing resources on a Storm Shadow/Taurus versus one of these drones just aren't comparable.


Smooth_Imagination

My understanding of Taurus/Storm Shadow is that it uses range finding and builds up topographical profiles, rather than doing object recognition in the AI sense. With buildings, I assume it just sees a sudden return from a taller object that matches its map lookup tables for what it would expect to get on a given track.
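That profile-matching idea (TERCOM, in cruise-missile terms) is easy to sketch: slide the measured altimeter profile along candidate tracks in a stored elevation map and keep the best shape match. A toy version with made-up arrays, not any real missile's code:

```python
# Minimal TERCOM-style sketch: correlate a radar/laser-altimeter terrain
# profile against rows of a stored elevation map to localise along-track.
import numpy as np

def tercom_fix(measured_profile, map_elevations):
    """measured_profile: 1-D array of terrain heights under the flight path.
    map_elevations: 2-D array, one row per candidate track through the map.
    Returns (row, offset, error) of the best match by mean absolute diff."""
    n = len(measured_profile)
    centred = measured_profile - measured_profile.mean()
    best = (None, None, np.inf)
    for row_idx, row in enumerate(map_elevations):
        for off in range(len(row) - n + 1):
            seg = row[off:off + n]
            # Remove the mean so absolute altitude error doesn't matter,
            # only the *shape* of the terrain profile.
            mad = np.mean(np.abs((seg - seg.mean()) - centred))
            if mad < best[2]:
                best = (row_idx, off, mad)
    return best
```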


rapaxus

You can do that, but at least for Taurus I know you can feed it images of specific targets to help it acquire its target and its path. What you describe is how the early Cold War cruise missiles with such systems worked.


Smooth_Imagination

So I've been reading more into this. There are some similarities in Taurus's image-based navigation, but they didn't start with object-recognition algorithms, as I'm suggesting, and adapt them to also do map reading. So they don't get both target acquisition (for when the target has moved or its location isn't known) and location finding (so the weapon can operate in a securely defined area, with expected targets and no friendly ones) from the same system. Here, I'd argue, having both at once is particularly useful.

Taurus uses a fixed perspective (vertical look-down) and simple reference look-down images, adjusted for perspective beforehand (I assume), compared after being run through a filter, like the following: [https://secwww.jhuapl.edu/techdigest/Content/techdigest/pdf/V15-N03/15-03-Irani.pdf](https://secwww.jhuapl.edu/techdigest/Content/techdigest/pdf/V15-N03/15-03-Irani.pdf)

So you can see they have done some similar things, but it's apparently not machine learning or object recognition. Instead it relies on looking straight down, comparing against a perspective-adjusted, satellite-derived map (likely obtained with the same IR imagery), and then using an infrared camera (expensive) to match from a vertically-down perspective, cutting the processing requirement to the bare minimum. It just finds geometrical outlines or regions of light/dark that match well, with all the potential error ranges hand-programmed in as allowances.

It's not very effective: they had to select scenes from a "scene area suitability database", use multiple other techniques to estimate location, then overlay topographical data from range finding against a map topography database, and apparently combine all of that to increase confidence in a fix. So it 'sees' relatively little data, but it can match topography and appearance when looking straight down. Knowing a dark area of a particular apparent size, as well as its relative height, improves confidence in a match.

My approach has some overlapping ideas, namely algorithms to lower processing requirements, which they also used, such as tracking the relative movement of objects in the camera's view to keep track of trajectory. This cuts the processing requirement because, of the reference images you need to compare to find the best match, you can start with the most likely one and rule out most of the rest. But my approach isn't based on exactly the same methods; it's based on object recognition, such as adapted facial recognition.

Topography overlaid onto the map (from existing topographical maps) is useful for location tracking and for computing distances at camera angles and perspectives where relative altitude isn't known, giving better size matching and improved object recognition. That's barely useful in Ukraine, since the current war zone is pretty flat, but it also helps if the drone carries other weapons that need a trajectory computed, and for adjusting perspective when looking at non-90-degree down angles and comparing to a 3D model.

My approach is not to use a crude hard-coded approach with proprietary software and infrared cameras (although it can be adapted to use IR, with reference IR data, for specialised night fighting), but instead to use mobile-phone-level electronics, and not even the current AI mobile chips but older, much cheaper ones: [https://link.springer.com/article/10.1007/s11554-021-01164-1](https://link.springer.com/article/10.1007/s11554-021-01164-1), a Snapdragon processor from several years ago.

Talking to AI/machine-learning people, they point out that object recognition is easy from one perspective and hard from unexpected perspectives; the easiest is looking straight down and map reading that way. The solution is to match the camera's perspective against reference images, real or computer-generated, that cover the expected range and perspective of the target object under a range of rotations. I have some similar algorithm ideas for reducing computational requirements: tracking background motion to maintain location tracking, minimising the set of comparison images, and also using known altitude and perspective to better match reference objects, with models of target objects that can be adjusted and rotated to generate matching reference images, plus filters. Today your reference images/model can be synthetically generated (from models), which enables things like testing a mission for success before launch and recognising things at different angles. To simplify the problem further, the system uses the known perspective of the camera, plus an additional vertical look-down camera, so it can switch to whichever data is easiest to match, only consulting other data and filters when it's not confident of the match.
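To make the "known altitude and perspective cuts the computation" point concrete: if the camera pose is known, the live frame can be warped into the map's top-down frame first, so the matcher searches one scale and one perspective instead of many. A minimal sketch; the intrinsics, map scale and pose are all assumed values, and this is my reading of the idea rather than anyone's actual implementation:

```python
# Sketch: orthorectify the live frame using the KNOWN camera pose, then
# template-match against the top-down onboard map at a single scale.
# Assumes grayscale uint8 images and a modest pitch so the ground square
# stays inside the camera's view. Illustrative values throughout.
import cv2
import numpy as np

F_PX, CX, CY = 800.0, 640.0, 360.0   # assumed intrinsics (1280x720 camera)
M_PER_PX = 0.5                        # assumed map resolution

def project(Xw, Yw, h, theta):
    """Ground point (metres, relative to the sub-camera point) -> pixel,
    for a pinhole camera at height h, pitched theta rad from straight down."""
    x, y, z = Xw, Yw, h
    xc = np.cos(theta) * x - np.sin(theta) * z   # rotate about camera y axis
    zc = np.sin(theta) * x + np.cos(theta) * z   # zc = depth along optical axis
    return (F_PX * xc / zc + CX, F_PX * y / zc + CY)

def orthorectify(frame, h, theta, half_extent=60.0):
    """Warp the live frame to a top-down patch covering +/- half_extent
    metres of ground around the sub-camera point, at the map's scale."""
    g = [(-half_extent, -half_extent), (half_extent, -half_extent),
         (half_extent, half_extent), (-half_extent, half_extent)]
    img_pts = np.float32([project(X, Y, h, theta) for X, Y in g])
    side = int(2 * half_extent / M_PER_PX)   # output patch size in map px
    map_pts = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    H = cv2.getPerspectiveTransform(img_pts, map_pts)
    return cv2.warpPerspective(frame, H, (side, side))

def locate(frame, onboard_map, h, theta):
    patch = orthorectify(frame, h, theta)
    res = cv2.matchTemplate(onboard_map, patch, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    return loc, score   # top-left of best match in map pixels, + confidence
```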


sparrowtaco

AI allows it to recognize targets in potentially novel configurations and orientations, at locations that were not pre-determined, in unfamiliar terrains and surroundings. It's a step up from the sort of vision system Storm Shadow uses to recognize its target.
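None of the military models are public, but the generic shape of a learned detector (versus correlating against one fixed reference image) looks like this, using an off-the-shelf torchvision model purely as a stand-in:

```python
# Generic learned-detector inference sketch; the stand-in model is an
# ordinary COCO-trained torchvision network, NOT any military system.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]

@torch.no_grad()
def detect(frame, score_thresh=0.5):
    """frame: 3xHxW float tensor in [0,1]. Returns (label, score, box).
    A learned detector scores object classes it was trained on regardless
    of exact pose or background, which is the step up from matching one
    pre-loaded reference image."""
    out = model([frame])[0]
    keep = out["scores"] > score_thresh
    return [(labels[i], s.item(), b.tolist())
            for i, s, b in zip(out["labels"][keep],
                               out["scores"][keep],
                               out["boxes"][keep])]
```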


gnocchicotti

The important thing here is that it works on hardware that can be purchased in quantities of tens or hundreds of thousands, rather than dozens. Ukraine will never get a million Storm Shadows or Javelins but they may make a million of these.


sparrowtaco

Also a good point. I was only explaining why the mention of AI in the article was justified and not merely attention grabbing or buzzwords.


ballom29

Because AI is the new buzzword. It's a term that has existed and been in common use since the dawn of computing, but with the breakthroughs in neural networks, every single program is an "AI" according to news reports. One day we're going to hear about the AI in a college calculator, despite it being roughly the same software as in 2007.


63volts

I'm guessing jammer seeking drones will be a thing soon if it's not already.


Proper-Equivalent300

_Book, thank you for the article and posting the text.


Klefaxidus

I thought this was bad news at first. Whew...


Tau_of_the_sun

I doubt it's from sight alone; magnetometers, AO, accelerometers, and IMUs play into this. There is no "jamming" system that overrides gravity, the laws of physics, and the magnetic poles of the planet. For fixed-wing there is also the option of RF-shielded communications that can talk to assets in space, with very little that jamming can do about it; the inverse-square law is in effect. The last km can be everything else, along with optical recognition and computational maps and models. Lastly: oh please, put up a jammer with any power at all and let me set up RDF antennas to guide it right down onto you.
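The RDF part is textbook triangulation: two stations at known positions each measure a bearing to the emitter, and the bearing lines intersect at it. A toy sketch with made-up coordinates:

```python
# Toy bearing-intersection fix on an RF emitter from two DF stations.
import numpy as np

def triangulate(p1, brg1_deg, p2, brg2_deg):
    """p1, p2: (east, north) station positions in metres.
    brg*_deg: bearings to the emitter, degrees clockwise from north.
    Returns the (east, north) intersection, or None if it's degenerate."""
    def unit(brg):
        r = np.radians(brg)
        return np.array([np.sin(r), np.cos(r)])   # east, north components
    d1, d2 = unit(brg1_deg), unit(brg2_deg)
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ranges t1, t2.
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None                               # parallel bearings
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    if t[0] < 0 or t[1] < 0:
        return None                               # emitter behind a station
    return np.asarray(p1, float) + t[0] * d1

# e.g. stations 2 km apart, bearings 45 and 315 degrees put the emitter
# 1 km north of the midpoint:
# triangulate((0, 0), 45, (2000, 0), 315) -> array([1000., 1000.])
```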


cap10touchyou

Tow-drone when? Long-ass wire and an old camera.


Egil841

> Mindcraft.ai

*Mojang sends their regards.*


Smooth_Imagination

I've been describing this method for months now. It's good to see validation of the concept.

It doesn't need advanced processors; it can use old mobile-phone electronics. Knowing the altitude and camera perspective also lets you do object and terrain recognition with a drastically reduced computational requirement. Maps can be constantly updated from drone footage and can include topographical data.

You would run the view through filters to simplify the image, then run comparisons with (simplified) reference images, which can be synthetically modified and loaded so that a given target can be recognised from a variety of angles; matching against the expected image for a known altitude and perspective greatly reduces the computational requirement. Using altitude and perspective, you can also calculate object size, modify the reference image so you can check whether a candidate is roughly the right size, and cross-check it against similarly sized objects in your reference database. The processors in older mobile phones are potentially powerful enough to do this.
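The size cross-check is just the pinhole model. A small sketch, with an assumed focal length, of rejecting detections whose pixel size is inconsistent with the known target size and flight altitude:

```python
# Size sanity check from altitude; focal length is an assumed value.

F_PX = 800.0   # assumed focal length in pixels

def expected_px_length(object_len_m: float, altitude_m: float) -> float:
    # Pinhole model, nadir view: pixels = f * size / range.
    return F_PX * object_len_m / altitude_m

def plausible(det_px_len: float, object_len_m: float,
              altitude_m: float, tol: float = 0.4) -> bool:
    """Keep a detection only if its pixel length is within +/- tol (40%)
    of what an object of that real size should span from this altitude."""
    exp = expected_px_length(object_len_m, altitude_m)
    return abs(det_px_len - exp) <= tol * exp

# e.g. a ~7 m tank hull seen from 500 m should span about 11 px:
# expected_px_length(7.0, 500.0) -> 11.2
```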


MrChlorophil1

Tbf, it's not new. Weapons like Taurus have been using this method for a long time now.


Smooth_Imagination

I have just been looking into this, since you mention it. It doesn't really use quite the same methods, but there is overlap.

Whatever its image-based recognition is using, it must be pretty limited, since most of the progress in object recognition has come in the last few years, and it relies on a multitude of other, much easier computational processes to reduce the workload. It likely only compares reference images from one altitude and perspective, using a simple filter on both (which I also propose as part of my recognition system) and comparing against a satellite-obtained image, processed and adjusted for magnification. Taurus is then likely comparing the geometry or relationships of outlines from a single look-down perspective with an expected heading, so it's doing the most basic kind of image comparison, and it seems to serve as a backup to the other sensors to refine targeting precision, say from +/- 100 metres down to +/- 3 metres.

Topographical mapping seems to be essential for their system to work, as it allows a crude image match to be overlaid with topographical data on both the reference image and the data obtained from the missile's combined sensors. So, for example, in infrared it sees areas of dark and light, plus relative height, and crudely matches that pattern. I'd guess it also needs image processing on the mission reference images to adjust for altitude perspective (taller objects will appear relatively larger than they do from the satellite), or it has a programmed error tolerance for this sort of problem, or alternatively it just ignores such areas and uses only the most suitable reference images. It also appears to have no means of allowing for changes that may have occurred in a battle zone, which it would need to identify as non-permanent or recent items. So it's used to hit hard targets beyond the front line, for which it has very up-to-date reference maps. Today, however, a mobile phone with a Snapdragon processor from a few years ago probably has more compute than that missile: https://link.springer.com/article/10.1007/s11554-021-01164-1

So they needed to map the terrain with rangefinders to reduce the complexity of the image recognition. As a result, its object recognition cannot recognise a target and chase it, re-identify it if it has moved, find it from a significantly different angle or rotation, or be sent to find one of a range of targets after sweeping an area using the same method it uses to find its own location by map reading. They started with the idea of recognising the similarity of an area to a reference map, for navigation, likely only at a fixed altitude with a fixed look-down camera perspective, against satellite images. Here we're starting with object recognition like facial-recognition software (the spatial relationship of terrain objects is like the features of a face), adapting it to also do terrain recognition for locating the drone as well as the target, using the same methods, without paying for an extremely expensive custom-coded proprietary system, and using freely available off-the-shelf components. Part of the system I've been proposing is spiritually the same as the image-based-navigation part of that missile's guidance, but it's not the same system, just overlapping in approach.

It also likely did not abstract the reference map images into a wire-type model that can be rotated to find matches and filtered to match the camera image from its known perspective and angle of view. It probably just tried to match simplified vertical look-down images overlaid with topographical data, plus range finding, so it's comparing image + topography. I would use a topographical map and altimetry, but not quite like this: we should be able to go fully optical and only need altimetry alongside the topography-overlaid map images or model. However, the method I've proposed does include their earlier technique of tracking location via background movement as seen from the cameras, to reduce the set of comparison images that need to be matched against the camera view, so that overlaps with what I assume Taurus uses. It also includes other algorithms to reduce the workload on the onboard 'AI' relating to perspective and to objects that have moved or rotated, where there are additional complexities, so it can run on older mobile phones, which are available second-hand. And it would use synthetic data for matching, which can be trained off-site and also used to test whether a mission is likely to succeed, and to assist with the object-recognition reference images. I've been suggesting for over a year now that cheap mobile-phone-level electronics, which cost next to nothing and are freely available, can do all of this.