To learn about the current and future state of machine learning (ML) in software development, we gathered insights from IT professionals from 16 solution providers. We asked, “What are some real-world problems you, or your clients, are solving by using machine learning in the SDLC?” Here’s what we learned:
- Personalization is one area where it is especially important to use ML. Older systems called it personalization when a marketer was given the ability to deliver a certain experience to a segment of users. This is not ideal because the marketer may choose incorrectly, or the segment may be too broadly defined. A better approach is to tell a system which metrics to optimize, give it as much information as possible about each user, and let it figure out how to deliver the best, most personalized experience to each user in a 1-to-1 way.
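The "tell the system what metric to optimize" idea above is often prototyped as a multi-armed bandit. Here is a minimal, illustrative epsilon-greedy sketch; the variant names, reward signal, and conversion rates are all hypothetical:

```python
import random

class EpsilonGreedyPersonalizer:
    """Choose the experience variant that maximizes an observed reward
    (e.g. conversion), exploring occasionally. Variant names and reward
    rates below are hypothetical."""

    def __init__(self, variants, epsilon=0.1, seed=42):
        self.variants = list(variants)
        self.epsilon = epsilon
        self.counts = {v: 0 for v in self.variants}
        self.rewards = {v: 0.0 for v in self.variants}
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.variants)  # explore
        # exploit: variant with the best observed mean reward
        return max(self.variants,
                   key=lambda v: (self.rewards[v] / self.counts[v])
                   if self.counts[v] else float("inf"))

    def record(self, variant, reward):
        self.counts[variant] += 1
        self.rewards[variant] += reward

bandit = EpsilonGreedyPersonalizer(["hero_banner", "minimal", "video"])
# Simulated sessions with made-up per-variant conversion rates.
true_rate = {"hero_banner": 0.05, "minimal": 0.08, "video": 0.12}
user_rng = random.Random(0)
for _ in range(5000):
    v = bandit.choose()
    bandit.record(v, 1.0 if user_rng.random() < true_rate[v] else 0.0)
```

In a production system, per-user features would turn this into a contextual bandit, but the core loop of choosing, observing, and updating is the same.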
- We are particularly interested in using “population” level datasets to help give our customers additional context around the security and performance of their deployed applications. This could range from standard benchmarking (what is the average write performance of a given database?) to more sophisticated ML-based modeling approaches (for a given configuration, workload, and resources what are the range of expected performance values, or for a given security configuration, what are the risks of various threats?). Data and ML can also be used to assist in technology selection, for example by analyzing trends around the adoption of different tools or algorithmically identifying “stacks” of related technologies that are often used together.
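The benchmarking side of this is simple to prototype. Below is a minimal sketch of an "expected range" check against population data, assuming hypothetical latency numbers and a crude mean ± 2σ interval in place of a real model conditioned on workload and resources:

```python
import statistics

# Hypothetical population-level benchmark: write latency (ms) observed
# across many deployments of the same database configuration.
population_write_ms = [4.1, 3.8, 5.2, 4.7, 4.4, 6.0, 3.9, 4.9, 5.1, 4.6]

mean = statistics.mean(population_write_ms)
stdev = statistics.stdev(population_write_ms)

# A crude "expected range": mean +/- 2 standard deviations.
low, high = mean - 2 * stdev, mean + 2 * stdev

def flag_outlier(observed_ms):
    """Flag a customer's measurement that falls outside the population range."""
    return not (low <= observed_ms <= high)
```

A customer whose write latency lands outside the band is worth a closer look; the more sophisticated ML approaches mentioned above would replace the fixed band with a prediction conditioned on configuration.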
- Core problems we are solving with ML include data extraction, image recognition and classification, and natural language processing.
- ML helps with test automation: teams can be more agile, release faster, run tests automatically, automate faster, lower maintenance costs, and spend less time on testing.
- We mostly see ML on the release side to optimize performance and testing to automatically generate and suggest new test cases to increase coverage.
- Totovist has been able to speed test creation and maintenance by 600% through its skunkworks department, which was using Selenium to test cutting-edge applications. A large IT company looking to improve developer efficiency and code stability applied NLP to code commits, issues, bugs, and feature requirements to improve time to stability and optimize testing. With the models they built, they are able to predict how long a bug will take to fix and where it is occurring across tens of millions of lines of code and thousands of developers.
- We detect common objects of interest such as people and floors in buildings, but we also use ML to detect non-conventional objects like wires and escalators. We research the application of ML in classical robotic methods such as Simultaneous Localization and Mapping (SLAM) and motion control. For example, we learn the robot's motion model to better predict its positioning, understand the robot's parameters, and use reinforcement learning to solve certain difficult motion-control cases.
- ML can emulate the human eye looking at UI screens and detecting differences, make sure changes are applied across all platforms and browsers, and power self-healing locators.
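The "human eye" idea can be sketched as a tolerance-based pixel comparison. This is a toy illustration, not any vendor's actual algorithm; images are represented as flat lists of RGB tuples and both thresholds are made up:

```python
def perceptual_diff(img_a, img_b, tolerance=10, max_diff_ratio=0.01):
    """Compare two same-sized 'screenshots' (flat lists of (r, g, b) pixels).
    Per-channel differences under `tolerance` are ignored, the way a human
    eye overlooks anti-aliasing noise; the pair is flagged as different only
    when more than `max_diff_ratio` of pixels differ meaningfully."""
    assert len(img_a) == len(img_b), "screenshots must be the same size"
    differing = sum(
        1 for a, b in zip(img_a, img_b)
        if any(abs(ca - cb) > tolerance for ca, cb in zip(a, b))
    )
    return differing / len(img_a) <= max_diff_ratio  # True = visually same

# Baseline vs. slight rendering noise vs. a real change (a red element).
base = [(200, 200, 200)] * 1000
noisy = [(203, 198, 201)] * 1000
changed = [(255, 0, 0)] * 50 + [(200, 200, 200)] * 950
```

The noisy render passes while the genuinely changed one fails, which is the behavior a naive exact-pixel diff cannot give you.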
- Pipelining is a big part of modeling: data ingestion, prep, running the model and code to see what the variables look like, deciding whether it is sufficiently trained and how accurate it is, pushing to production, then sending an image to see if it tags correctly. Data comes from many places, and we help distribute models across environments. Providing a stateful data layer for the models to run on means that running a model 100 times on the same data set delivers greater performance. We develop models for hedge funds and marketing companies; more models mean more value.
- Customer 360 and real-time marketing; cross-sell and logistics optimization in retail and transportation; and autonomous driving, including the R&D cycle to achieve autonomy.
- I am passionate about focusing on the end-user and the value proposition we can offer that transforms everyday, mundane tasks into jobs they no longer need to think about; our products take care of this for them. Thinking more broadly about the SDLC is enabling us to do just that. The following use cases focus on where we are applying ML technologies. For example, we're using ML for timekeeping, allowing our end-user customers (EUCs) to keep more accurate records to help service professionals optimize their billing and project-costing work. Other areas include anomaly detection, collective intelligence, and fraud detection, as well as cash flow forecasting to help customers intelligently manage the health of their business based on where they are today and where they might be tomorrow, based on receivables, invoices, etc. We've also applied automation techniques to our own development processes, where bots take code releases to production very quickly; this has rapidly improved our deployment cycle, going from days to hours from code commit to deployment. Other areas where ML can be applied in the SDLC include component reuse, failure or defect detection, and testing and validation.
- ML within our SDLC drives our technology to assess risk and anomaly detection and improve the process of authentication and authorization of user identities. At the same time, this process enhances the user experience. By removing the need to authenticate at every touchpoint, we remove significant cognitive friction. Because the problem starts out highly unstructured (you don’t know what behavior for a given user looks like initially), you start with an unsupervised approach where you collect data and find ways of structuring it in a way that accurately represents a given identity’s behavior. Once you can model these patterns, you can start labeling these trends and then convert the problem into a supervised learning one. At this point, labels and rules can be developed to further support ML throughout the SDLC via a supervised learning approach.
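The unsupervised-to-supervised progression described above can be sketched in miniature: cluster the raw behavioral data first, then treat the labeled clusters as the training signal for scoring new sessions. The feature choice (requests per minute) and the tiny 1-D k-means are illustrative assumptions, not the actual pipeline:

```python
def kmeans_1d(values, iters=20):
    """Tiny two-cluster 1-D k-means: the unsupervised step that gives
    structure to raw behavioral data before any labels exist."""
    centroids = [min(values), max(values)]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            near = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            groups[near].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return sorted(centroids)

# Hypothetical per-session request rates: most sessions are quiet,
# a few hammer the API.
rates = [3, 4, 5, 4, 6, 3, 5, 120, 130, 125]
low_c, high_c = kmeans_1d(rates)

# Once an analyst labels the clusters, the problem becomes supervised:
# new sessions are scored against the labeled centroids.
labels = {"normal": low_c, "anomalous": high_c}

def classify(rate):
    return min(labels, key=lambda name: abs(rate - labels[name]))
```

Real identity behavior is high-dimensional and the models far richer, but the shape of the workflow is the same: structure first, label second, classify third.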
- Precision Innovation, a network of neurology clinics, pools a population database to provide research data to pharmaceutical companies and payers, and gives neurologists ML tools to build better expectations and predictions of the trajectory of disease, all in service of better patient outcomes and better research. An ML model can take multiple dimensions of data, recommend better-predicted actions, and show how drugs are working.
- Dynamic pricing for an insurance client. The pressure is exacerbated in insurance because they're competing against FinTech start-ups. They use sophisticated models to identify their best customers: they capture quotes, acceptances, and rejections, and the models are refined from that feedback. They're able to quickly recycle and rebase if a product is performing poorly, and they may have more than 100 models running at any one time, constantly updated to reflect customer interaction. There are also regulatory considerations; they cannot have a bias against gender or race. Another client's engineering department runs single models behind the different apps that touch the customer, such as a targeting model shared by their website and customer service. Having their models work as independent services makes this possible.
- Ensuring a project gets done on time and estimating project duration and cost with a reasonable level of accuracy are areas where we are successfully employing ML in the SDLC. For example, organizations that perform professional services engagements need to analyze effort, ROI, and related cost estimation (i.e., how long would it take for us to build a product, what is the level of effort, and how does that translate into the appropriate monetary equation?). Accuracy is incredibly important here, in both estimation and execution: if you estimate the product/solution will take two weeks to build but it ends up taking six, it becomes an expensive endeavor. By deploying ML, organizations can significantly improve the accuracy of project-cost estimations, find the most efficient way to get the work done by optimizing resource utilization, monitor the product/solution's development through its life cycle, and ensure the product/solution gets built in the most optimal fashion. As requirements or conditions change along the way, factoring change management into the SDLC is also best accomplished with efficient ML algorithms.
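At its simplest, duration estimation like this is a regression over historical engagements. A deliberately minimal sketch using ordinary least squares on hypothetical (scope, actual weeks) pairs; real estimators would use many more features than scope alone:

```python
# Hypothetical historical projects: (scope in story points, actual weeks).
history = [(20, 3), (45, 6), (80, 11), (120, 16), (60, 8), (100, 14)]

def fit_line(points):
    """Ordinary least squares for y = a*x + b: the simplest version of
    learning duration from past engagements."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_line(history)

def estimate_weeks(story_points):
    """Predict duration for a new engagement of the given scope."""
    return a * story_points + b
```

Comparing the prediction against the actual outcome as the project runs is what enables the monitoring and change-management feedback loop described above.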
Here’s who we heard from:
- Dipti Borkar, V.P. Products, Alluxio
- Adam Carmi, Co-founder & CTO, Applitools
- Dr. Oleg Sinyavskiy, Head of Research and Development, Brain Corp
- Eli Finkelshteyn, CEO & Co-founder, Constructor.io
- Senthil Kumar, VP of Software Engineering, FogHorn
- Ivaylo Bahtchevanov, Head of Data Science, ForgeRock
- John Seaton, Director of Data Science, Functionize
- Irina Farooq, Chief Product Officer, Kinetica
- Elif Tutuk, AVP Research, Qlik
- Shivani Govil, EVP Emerging Tech and Ecosystem, Sage
- Patrick Hubbard, Head Geek, SolarWinds
- Monte Zweben, CEO, Splice Machine
- Zach Bannor, Associate Consultant, SPR
- David Andrzejewski, Director of Engineering, Sumo Logic
- Oren Rubin, Founder & CEO, Testim.io
- Dan Rope, Director of Data Science, and Michael O’Connell, Chief Analytics Officer, TIBCO