For those who have not heard, ChatGPT is the language model that OpenAI released at the end of last year. Since then, the internet has been awash with people extolling its wonders, largely because of its often uncanny conversational responses. There are already examples of it being used to create scripts and programs, with varying degrees of success. As organisations increasingly consider and adopt this kind of layperson development, one question always comes to mind: can it be trusted?
The corporate world already relies on layperson development in the front office. A prime example of the value this can bring is the adoption of Microsoft Excel, but it can also create complications for a company. An individual can create an Excel spreadsheet, come to rely on it, build a process around it, and then leave the firm. This issue is not confined to Excel; it extends to algorithms and models of any kind.
When dealing with this kind of problem, part of the solution is understanding whether a process relies on a particular spreadsheet, what data feeds into it, and what downstream makes use of the information it produces. The same applies to a model.
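Those three questions (what relies on an asset, what data it uses, and what uses its output) amount to traversals over a dependency graph. The sketch below is purely illustrative, assuming nothing about any real product: the `LineageGraph` class and all asset names are made up for the example.

```python
from collections import defaultdict

class LineageGraph:
    """Minimal sketch of data lineage: assets (feeds, spreadsheets,
    models, reports) as nodes, 'reads data from' as directed edges."""

    def __init__(self):
        self.downstream = defaultdict(set)  # asset -> assets that consume it
        self.upstream = defaultdict(set)    # asset -> assets it reads from

    def add_dependency(self, source, consumer):
        """Record that `consumer` reads data produced by `source`."""
        self.downstream[source].add(consumer)
        self.upstream[consumer].add(source)

    def _walk(self, start, edges):
        # Depth-first traversal collecting every reachable asset.
        seen, stack = set(), [start]
        while stack:
            for nxt in edges[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def inputs_of(self, asset):
        """Everything the asset ultimately depends on (what data it uses)."""
        return self._walk(asset, self.upstream)

    def consumers_of(self, asset):
        """Everything that ultimately relies on the asset."""
        return self._walk(asset, self.downstream)

# Illustrative assets; all names are hypothetical.
g = LineageGraph()
g.add_dependency("market_data_feed", "pricing_sheet.xlsx")
g.add_dependency("pricing_sheet.xlsx", "risk_model")
g.add_dependency("risk_model", "daily_pnl_report")

print(g.inputs_of("risk_model"))             # data the model depends on
print(g.consumers_of("pricing_sheet.xlsx"))  # everything relying on the sheet
```

With a graph like this, the "key person leaves" scenario becomes answerable: before retiring their spreadsheet, query `consumers_of` to see which processes would break.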
Throughout my time working in Capital Markets, I have seen problems like these around Excel, quantitative algorithms, control processes and machine learning time and again. Streamlining the solution to these issues is one of the reasons we created Praevisum GALEN.