Wade Norris

Washington, D.C. | Santa Monica · wade.norris@ucla.edu · github.com/wnorris · linkedin.com/in/wadenorris
Hi, I'm Wade!
I'm a software engineer, computer vision researcher, and roboticist.
I studied Computer Science and Philosophy at UCLA. While there, I spent two years doing research with the Center for Domain-Specific Computing, supervised by Professor Glenn Reinman and Professor Luminita Vese. My research focused on accelerating algorithms by porting execution to the GPU using CUDA, with a particular emphasis on three-dimensional MRI image processing. I spent one summer at George Washington University supervised by Professor Rahul Simha, on work sponsored by a National Science Foundation grant and focused on creating educational robotics kits to teach students coding and computer vision. I also interned at Zynx Health and Blackbird Technologies, and served as President of the California Beta chapter of the computer science honor society, Upsilon Pi Epsilon.
I spent seven years working at Google. By the end of my tenure I was a technical lead on the Mobile Vision team, part of Google Research, where I did computer vision research and applied state-of-the-art deep neural networks to a range of problems. I helped launch the Cloud Vision API and Google Lens, and was in charge of the Vision Mining Toolkit [Internal Googler Link], Logo Recognition, Artwork Recognition, and one other confidential non-public project.
I now work at Perception Labs.
I am an alumnus of FIRST Robotics Team 611 at Langley High School, and am now a mentor of Team 702 at Culver City High School. I'm passionate about learning, building, tinkering, experimenting, and teaching.
I love meeting new people who are passionate about science and technology. Please feel free to reach out if you'd like to chat!
Take a look at the publication I co-authored with several former colleagues from Google Research on Distill.pub. Distill is a peer-reviewed publication with an emphasis on clear explanations of complex topics. In the article we provide mathematical derivations for computing the receptive fields of convnets, along with interactive figures to help visualize and explain the concepts along the way.
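To give a flavor of the kind of computation the article derives, here is a minimal sketch of the standard recurrence for the receptive field size of a sequential convnet. The function name and the example layer stack are illustrative, not taken from the article:

```python
# Sketch: closed-form receptive field size for a sequential convnet.
# Each layer is a (kernel_size, stride) pair; values below are made up.

def receptive_field(layers):
    """Return the input receptive field size of one final-output unit."""
    rf = 1    # a single output unit, before walking back through layers
    jump = 1  # cumulative stride: input-pixel distance between
              # adjacent units at the current depth
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Example: two 3x3 stride-1 convs followed by a 2x2 stride-2 pool.
print(receptive_field([(3, 1), (3, 1), (2, 2)]))  # -> 6
```

Each layer widens the receptive field by `(kernel - 1)` steps, where a step's size in input pixels is the product of all earlier strides.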
Extremely proud to have been a part of Team 702's first regional win in the 2019 season. They persevered through having an arm ripped off in the playoffs; when repair proved futile, they removed the arm in the pit and still won the next match playing defense only.
At the next regional they rebuilt the robot on a new frame from scratch in a matter of hours. Looking back at the timelapse, I'm excited to see how little I'm helping: mostly sitting back and making sure the kids were being safe. This was truly a student-run team.
In the 2018 season we had a LOT of fun with vision. We used OpenCV to autonomously track and line up on the yellow milk crates, and briefly experimented with using SIFT features on the FIRST logo for alignment as well.
Most excitingly, we turned our robot into a giant optical mouse by adding a ring light and an optical flow sensor to the bottom. Combined with a PID loop, our robot could rapidly follow precise paths in autonomous mode with no added complexity to account for wheel slippage. Occasionally, even when another robot got in our way, ours would fight its way back to the target position. Here's a good match highlighting a lot of our skills; the first 15 seconds are completely autonomous.