Adversarial examples have torn into the robustness of machine-vision systems: changing even a single well-placed pixel can confound an otherwise reliable classifier, and with the right tricks an attacker can make a model reliably misclassify one object as another, or fail to notice it altogether. But even as vision systems were falling to adversarial examples, audio systems remained stubbornly hard to fool, until now.
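The core trick behind many of these attacks is surprisingly simple: compute the gradient of the model's loss with respect to the *input* (rather than the weights), then nudge the input in the direction that increases the loss. Here is a minimal sketch of that idea, the fast gradient sign method, on a toy logistic-regression "classifier"; the weights and inputs are hypothetical, chosen only to illustrate how a tiny, bounded perturbation can flip a confident prediction.

```python
import math

# Hypothetical toy model: logistic regression with fixed weights.
w = [1.0, -2.0, 0.5]   # assumed model weights (illustrative only)
b = 0.1

def predict(x):
    """Probability that input x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

x = [2.0, 0.5, 1.0]    # a clean input the model confidently calls class 1
y = 1.0                # true label

# For binary cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = [(p - y) * wi for wi in w]

# Fast gradient sign method: move each coordinate by epsilon in the sign
# of its gradient, which maximally increases the loss under an
# L-infinity perturbation budget.
epsilon = 0.5
x_adv = [xi + epsilon * math.copysign(1, g) for xi, g in zip(x, grad_x)]

print(round(predict(x), 3), round(predict(x_adv), 3))
```

With these particular numbers, the clean input scores well above 0.5 while the perturbed input drops below it, so the predicted class flips even though no coordinate moved by more than epsilon.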
Thursday, 11 January 2018