11Sight - Call Quality Test Automation

Automation of monitoring tests for 11Sight's Inbound Video Calls

The Outcome

Provided 11Sight with a reliable method for measuring and monitoring call connectivity. The tests are still in use two years later, and 11Sight's Quality Assurance team used them as a blueprint for building its automated testing system and processes.

The Beginning

After graduating from UCSD with a Cognitive & Behavioral Neuroscience degree in June 2020, I joined 11Sight as a manual quality assurance (QA) tester. My job was to systematically test the application for bugs and report them to the development team. After a month of repeating the same test of calling myself on 11Sight's platform, I had a burning desire to automate many of the repetitive tasks that the other testers and I performed every day. I asked the QA manager and the dev team for approval and kicked off my personal challenge.

The Process

I had no prior coding experience, so I quickly started researching which tools I needed to learn in order to automate tests. I chose Java as my base language and Selenium to drive the browser, since 11Sight's platform is browser-based. I set up my working environment and followed YouTube tutorials to learn the basics of automating browser actions through code. Once I understood the basics, I moved on to testing frameworks: I first tried TestNG and then added Maven, which makes it much easier to deploy the code to other environments. It took me close to a month, but at the end of it I had my first working call test. It started a call from one Chrome window, logged in and answered the call from another Chrome window, then checked whether the call was established and passed the test if it was.
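Roughly, that first test looked like the sketch below, written against Selenium 4 and TestNG. The URLs, element IDs, and credentials here are placeholders I use for illustration, not the real 11Sight ones.

import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.Assert;
import org.testng.annotations.Test;

public class BasicCallTest {

    // Placeholder URLs and locators for illustration; the real pages differ.
    private static final String LOGIN_URL = "https://example.11sight.com/login";
    private static final String CALL_URL  = "https://example.11sight.com/call/qa-line";

    @Test
    public void callIsEstablished() {
        WebDriver caller = new ChromeDriver();   // window that starts the call
        WebDriver callee = new ChromeDriver();   // window that logs in and answers
        try {
            // The callee logs in first so it is ready to receive the call.
            callee.get(LOGIN_URL);
            callee.findElement(By.id("email")).sendKeys("qa-tester@example.com");
            callee.findElement(By.id("password")).sendKeys("********");
            callee.findElement(By.id("login-button")).click();

            // The caller opens the call page and starts the call.
            caller.get(CALL_URL);
            caller.findElement(By.id("start-call")).click();

            // The callee waits for the incoming-call prompt and answers it.
            new WebDriverWait(callee, Duration.ofSeconds(30))
                    .until(ExpectedConditions.elementToBeClickable(By.id("answer-call")))
                    .click();

            // Pass the test only if the caller reaches the in-call state.
            boolean established = new WebDriverWait(caller, Duration.ofSeconds(30))
                    .until(ExpectedConditions.visibilityOfElementLocated(By.id("in-call-indicator")))
                    .isDisplayed();
            Assert.assertTrue(established, "Call was not established");
        } finally {
            caller.quit();
            callee.quit();
        }
    }
}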

In the following three months, I kept adding to my code and improving the capabilities of the tests. I continuously went back to the development team and my managers for feedback, which helped me improve the tests by (a sketch combining several of these follows the list):

  • uploading my own audio and video files and measuring how good the quality was on the other side
  • passing parameters to the tests, so they could log in as different users on each run
  • taking a screenshot when a test fails
  • capturing console logs to get more information when a test fails
  • adding try-catch blocks so the suite keeps running even if one test fails
  • learning enough Unix to use GitHub and Amazon servers to run my tests in a virtual environment
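A few of those improvements fit together roughly as in the sketch below (again, the file paths, parameter names, and locators are placeholders): Chrome's fake-media flags replace the camera and microphone with my own audio and video files, TestNG parameters choose which user logs in on each run, and a teardown hook saves a screenshot whenever a test fails.

import java.io.File;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class ImprovedCallTest {

    private WebDriver callee;

    // Chrome can replace the real camera and microphone with local files,
    // which lets the test send known media and judge the quality received.
    private static ChromeOptions fakeMediaOptions() {
        ChromeOptions options = new ChromeOptions();
        options.addArguments(
                "--use-fake-ui-for-media-stream",                     // auto-accept the mic/camera prompt
                "--use-fake-device-for-media-stream",                 // swap real devices for fake ones
                "--use-file-for-fake-video-capture=/tmp/sample.y4m",  // placeholder video file
                "--use-file-for-fake-audio-capture=/tmp/sample.wav"); // placeholder audio file
        return options;
    }

    // username and password come from <parameter> entries in testng.xml,
    // so each run can log in as a different user.
    @Test
    @Parameters({"username", "password"})
    public void callWithUploadedMedia(String username, String password) {
        callee = new ChromeDriver(fakeMediaOptions());
        // ... log in as `username`, start and answer the call, and assert it is
        // established, the same way as in the basic test above ...
    }

    // On failure, save a screenshot before closing the browser so I can see
    // what the page looked like when the test broke.
    @AfterMethod
    public void tearDown(ITestResult result) throws Exception {
        if (!result.isSuccess() && callee != null) {
            File shot = ((TakesScreenshot) callee).getScreenshotAs(OutputType.FILE);
            Files.copy(shot.toPath(),
                       new File(result.getName() + ".png").toPath(),
                       StandardCopyOption.REPLACE_EXISTING);
        }
        if (callee != null) {
            callee.quit();
        }
    }
}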