Machine Vision – Scenario 2020
The ANS Are Coming
October 18, 2020. The City of New York is testing a new class of autonomous networked systems. The mobile devices learn by seeing. A meta-text (machine-language) report generated by the devices and automatically paraphrased into human language explains how they perceive the world around them.
New York City is testing a new class of maintenance workers – Autonomous Networked Systems (ANS). Their first mission: to upgrade surveillance cameras with advanced visual processors. Thanks to their ability to accurately fuse information from a variety of sensors, ANS not only see the world around them, but constantly learn from it while flexibly performing missions.
New York. If and when you see us, don’t be surprised. We may not look like municipal workers, but that’s what we are! We are mobile machines about 15 cm (6 inches) in length. We see the world around us through sensors, interpret what we see, and respond accordingly. "We" are what experts call Autonomous Networked Systems (ANS).
Our mission is to perform inspections and maintenance on hard-to-reach or dangerous equipment and infrastructures. We are also designed to provide mobile surveillance.
Specifically, we have been authorized to upgrade the processors in all the surveillance cameras throughout a two-square-mile area in central Manhattan. Upgraded cameras will be able to intelligently interpret what they see, trade information with other cameras and sensors, and decide when to alert security personnel. If we are successful, our numbers will be increased to provide the same service throughout all five boroughs. This report chronicles our initial deployment.
4:30 a.m.: The first of us were transferred from a development location in New Jersey to a municipal distribution center in lower Manhattan.
6:14 a.m.: ANS-1 (myself) and ANS-2 (trainee) were discharged on a sidewalk at the corner of 53rd St. and West End Avenue by an automated municipal transporter. We had not been to the area before. I had been pre-trained for interpretation of complex visual data. ANS-2 had not. My secondary mission was to train ANS-2 and determine the effectiveness of this process.
6:15 a.m.: I was able to confirm our exact location by GPS coordinates and sensor evidence. Verification included use of optical sensors to identify building numbers. I compared these (using augmented reality) with building and municipal databases, which I accessed through the wireless net.
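In present-day terms, that kind of verification amounts to cross-checking an optically read building number against a position stored in a municipal database. The sketch below is illustrative only; the database interface, coordinates, and distance tolerance are assumptions, not details of the actual system:

    # Illustrative sketch: confirm a GPS fix by checking that a building
    # number read by the optical sensors sits nearby according to a
    # municipal database. All names, coordinates, and the 25 m tolerance
    # are assumptions for illustration.
    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in meters."""
        r = 6_371_000  # mean Earth radius in meters
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * r * asin(sqrt(a))

    def confirm_location(gps_fix, building_number, building_db, tolerance_m=25.0):
        """True if the optically read building number lies within tolerance_m
        meters of the GPS fix, according to the database."""
        record = building_db.get(building_number)
        if record is None:
            return False  # unknown building number: cannot confirm
        return haversine_m(gps_fix[0], gps_fix[1], record["lat"], record["lon"]) <= tolerance_m

    # Hypothetical usage with a mock database entry
    db = {"100 Example Ave": {"lat": 40.7700, "lon": -73.9870}}
    print(confirm_location((40.7701, -73.9869), "100 Example Ave", db))  # True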
6:16 a.m.: I was able to match this with location data for all the nearest cameras and plan an optimized service schedule that did not conflict with those of other ANS teams.
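The scheduling step can be pictured as a simple nearest-first ordering of the cameras that no other team has claimed. A minimal sketch, in which the camera IDs, positions, and claim set are purely hypothetical:

    # Illustrative sketch: greedily order camera service stops by distance,
    # skipping cameras already claimed by other ANS teams.
    from math import hypot

    def plan_schedule(start, cameras, claimed_by_others):
        """Order unclaimed cameras by repeatedly visiting the nearest one.
        `cameras` maps camera IDs to (x, y) positions in meters."""
        todo = {cid: pos for cid, pos in cameras.items() if cid not in claimed_by_others}
        route, here = [], start
        while todo:
            cid = min(todo, key=lambda c: hypot(todo[c][0] - here[0], todo[c][1] - here[1]))
            route.append(cid)
            here = todo.pop(cid)
        return route

    # Hypothetical usage
    cams = {"cam-17": (120, 40), "cam-18": (30, 10), "cam-22": (200, 180)}
    print(plan_schedule((0, 0), cams, claimed_by_others={"cam-22"}))  # ['cam-18', 'cam-17']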
6:17 a.m.: Our mission was momentarily interrupted by a model "M6 Sidewalk Genie" automated cleaner, which had been alerted to our presence by an older security camera. Contact with the M6’s vacuum attachment was avoided by rapid upward movement over the external surface of the target building, a three-year-old, predominantly glass-covered, 64-story office tower. I led. ANS-2 followed. (Note: Upgraded security cameras will recognize ANS and inform M6s and other automated devices accordingly.)
6:37 a.m.: We serviced a small number of mini cams as we ascended. At first, several phenomena confused ANS-2. Each time ANS-2 was uncertain regarding interpretation of its sensor data, it transmitted an image to me in which the area of uncertainty was highlighted. I responded with definitions of the phenomena.
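That query-when-uncertain loop is easy to sketch. In the snippet below, the classifier interface, the confidence threshold, and the message format are assumptions chosen for illustration, not a description of the actual ANS software:

    # Illustrative sketch: a trainee unit classifies an image region; if its
    # confidence is low, it highlights the region, asks the trained unit for a
    # definition, and folds the answer back into its own model.
    def handle_region(region_pixels, classifier, ask_teacher, confidence_threshold=0.80):
        """Return a label for one image region, deferring to the teacher
        whenever the trainee's own confidence falls below the threshold."""
        label, confidence = classifier.predict(region_pixels)    # hypothetical interface
        if confidence >= confidence_threshold:
            return label
        answer = ask_teacher({"region": region_pixels,
                              "best_guess": label,
                              "confidence": confidence})          # highlighted query to ANS-1
        classifier.learn(region_pixels, answer)                   # adopt the correction
        return answer

    class _StubClassifier:                                        # stand-in so the sketch runs
        def predict(self, region): return ("unknown", 0.40)
        def learn(self, region, label): pass

    print(handle_region("pixel data...", _StubClassifier(), lambda query: "window-washing track"))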
8:20 a.m.: As we moved up the building, perspective changed and ANS-2 required additional image interpretation support. The outlines of buildings above us grew larger. Objects below seemed to move more slowly, became harder to interpret, and produced altered sounds.
8:21 a.m.: I instructed ANS-2 to use its radar to interpret image perspective and to use Doppler sound analysis to determine vehicle speeds and relate them to what it was seeing. This would help it calculate the distances of moving objects in real time.
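The physics behind the sound analysis is the ordinary Doppler effect: a vehicle is heard at a higher pitch while approaching than while receding, and the two pitches together give its speed. A minimal worked sketch, with the example frequencies chosen purely for illustration:

    # Illustrative sketch: estimate a vehicle's speed from the pitch of its
    # engine note heard well before and well after it passes. The numbers in
    # the example are hypothetical.
    SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

    def speed_from_doppler(f_approach_hz, f_recede_hz, c=SPEED_OF_SOUND):
        """Speed in m/s of a sound source, from
        f_approach / f_recede = (c + v) / (c - v)  =>  v = c * (fa - fr) / (fa + fr)."""
        return c * (f_approach_hz - f_recede_hz) / (f_approach_hz + f_recede_hz)

    v = speed_from_doppler(105.0, 95.0)   # engine tone: 105 Hz approaching, 95 Hz receding
    print(f"estimated speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")  # about 17 m/s, 62 km/h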
8:22 a.m.: Energy levels in ANS-1 and ANS-2 holding steady. Building motion, wind, sunlight, temperature variations, and vibrations in glass panels caused by ground vehicle engine activity and aircraft were being translated into sufficient energy to maintain full battery strength.
8:23 a.m.: When we reached the top of the building, an older model surveillance camera on a security pedestal well above us failed to recognize us and began transmitting a high definition sequence of our arrival, which it marked "intruder alert." I was able to read its meta-text messages over the city’s SecureNet and retrieve an image of our arrival for our report from its transmission. An intermediate processing node recognized us and discontinued the transmission before it could be forwarded to the city’s human-operated security command post. The camera’s processor was immediately upgraded to avoid further false alarms.
8:52 a.m.: We commenced upgrade of the camera we were photographed servicing. After removal of the camera’s earlier-generation processor and disposal in ANS-2’s receptacle, I conducted a non-destructive evaluation of the camera’s hardware using X-rays and structured light.
8:57 a.m.: I showed ANS-2 how to access the camera’s secure file and how to compare previous and current images to detect hairline cracks, abrasions, or signs of tampering, including changes in its RFID-marked parts inventory. I also indicated how to apply a database-guided expert system to support this comparison. A similar test was performed on the device’s electronics. Finally, I extracted a next-generation processor from my parts receptacle, snapped it into the camera, and conducted an operational test. Everything fine. The camera’s digital file was updated accordingly.
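The before-and-after comparison itself can be as simple as subtracting an archived reference image from the current one and counting how many pixels changed. A minimal sketch, assuming two aligned grayscale images; the thresholds and array sizes are illustrative:

    # Illustrative sketch: flag possible cracks, abrasions, or tampering by
    # differencing the archived and current images of the same part.
    import numpy as np

    def looks_changed(reference, current, pixel_threshold=30, min_pixels=25):
        """True if enough pixels differ between two aligned 8-bit grayscale
        images to warrant a closer look."""
        diff = np.abs(reference.astype(np.int16) - current.astype(np.int16))
        return int((diff > pixel_threshold).sum()) >= min_pixels

    # Hypothetical usage: a thin bright streak stands in for a hairline crack
    ref = np.zeros((64, 64), dtype=np.uint8)
    cur = ref.copy()
    cur[10, 5:40] = 255
    print(looks_changed(ref, cur))  # True: 35 pixels changed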
9:20 a.m.: Mission status summary: Camera successfully upgraded. Check. Building’s surveillance system successfully promoted. Check. ANS-2 now fully capable of interpreting fused sensor data. Check. Our own vision is now networked with that of the building’s cameras. Check. We can see what they see. Check. The first step in providing heightened security is complete. Check. City maintenance will never be the same. Check.
Arthur F. Pease
Machines See the Light
Machines equipped with image processing systems are becoming increasingly intelligent and reliable. Already, they can interpret much of what they see.
Meet the Digital Watchman
The latest video surveillance systems not only help observers monitor several areas simultaneously, but also detect meaningful changes. As a result, such systems can automatically track the movements of people who have entered a zone without authorization.
Unlimited Horizons
In an interview, Dr. Norbert Bauer, head of the Fraunhofer Allianz Vision, talks about the future of machine vision.
In-Depth Vision
The latest image processing systems can recognize objects in three dimensions. Potential applications include autonomous cranes, automatic package sorting systems, and digital casts of auditory canals.
Speed Readers
In milliseconds, state-of-the-art reader scanners can recognize the individual "fingerprint" of an envelope or the codes on machine components, such as those used to identify turbine blades.