In reply to the discussion: College professor had students grade ChatGPT-generated essays. All 63 essays had hallucinated errors
oioioi (1,130 posts)
Given the huge amount of interest in Machine Learning and Artificial Intelligence within software engineering, perhaps it's more likely that the technology becomes relatively cheap and accessible.
The mechanics of the modeling that underpins LLMs like ChatGPT and the image generators are conceptually quite similar to those behind tasks like object detection and facial recognition. The model is trained on a large amount of pre-classified information and infers predictions based on that. Under the hood, of course, the neural networks are extremely complex, but essentially they are component-based: you provide the data, you use a specialized framework like TensorFlow or PyTorch to train a model, and then you assess new inputs and formulate an "intelligent" response.
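To make that concrete, here's a minimal sketch of that provide-data, train, then predict workflow using PyTorch. The data, the tiny network and the numbers are all invented purely for illustration; a real model is vastly larger, but the shape of the process is the same:

```python
# A toy version of the "provide data, train a model, assess new inputs" loop.
import torch
import torch.nn as nn

# Pre-classified ("labelled") data: 100 made-up examples with 4 features each,
# each assigned to one of 3 classes.
X = torch.randn(100, 4)
y = torch.randint(0, 3, (100,))

# A tiny neural network: the component that will hold what gets learned.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Training: repeatedly nudge the model so its predictions match the labels.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Inference: assess a new, unseen input and produce a prediction.
new_input = torch.randn(1, 4)
print(model(new_input).argmax(dim=1).item())
```

The image recognizers and the LLMs do the same three things, just with enormously more data, far bigger networks and a lot more compute.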
Presently the cost of assembling, classifying and training a large language model like ChatGPT is massive, simply due to the cost of aggregating and classifying the raw data, to say nothing of the cost of the huge clusters of compute and storage required to cover the breadth of information needed for such a universal tool, i.e. one that attempts to talk about any subject with any person.
If we limit the scope, though, to an assistive technology that covers only a specific topic or interest, and assemble the models around that, the complexity and cost are reduced accordingly. Wendy's just deployed an AI model that takes hamburger orders with a synthesized voice, for example; it should work pretty well, at least as far as the software goes.
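As a purely hypothetical illustration of how small a narrowly scoped model can be (this is not how Wendy's system actually works, just a toy intent classifier in the same spirit, with invented phrases and labels):

```python
# A toy "order taking" intent classifier for a deliberately narrow domain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training phrases, each labelled with the intent it expresses.
training_phrases = [
    "I'd like a cheeseburger and fries",
    "can I get a large chocolate shake",
    "no pickles on that please",
    "actually cancel the fries",
]
intents = ["add_item", "add_item", "modify_item", "remove_item"]

# Train a small text classifier on the labelled phrases.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_phrases, intents)

# Classify a new order as it comes in.
print(classifier.predict(["could I also get a baked potato"]))
```

Nothing about that needs a data center; the narrow scope is exactly what keeps the data and compute requirements small.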
Whilst there will always be competition among the Silicon Valley grand-standers for the most amazingly lucid AI chatbot, the ability of computer systems to learn from large datasets will be applied in far more granular and specialized ways, interwoven with the general software that does stuff today: software that can make decisions based on real-time interpretations. That's sort of an extension of the self-driving idea, which is a terrible application because of the overall complexity and safety risks involved, but the driving software uses the same fundamental ideas: the "decision-making" software deployed in the vehicles comes from having trained a huge model on a gazillion images and inputs, which are amalgamated and interpreted through the model at runtime. We are less likely to see AI sitting in front of us like ChatGPT; instead it will be built into the software we use, interpreting real-time inputs and responding in ways that would otherwise be far more complex and costly to develop. It's going to change the world, but it probably won't destroy it.
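In code, that kind of built-in, invisible AI usually just looks like ordinary software that consults a previously trained model at runtime. A rough sketch, assuming the little network from the earlier example had been saved to a file called decision_model.pt (the file name and the idea of "sensor features" are my own assumptions for illustration):

```python
# Ordinary application code that happens to consult a trained model.
import torch
import torch.nn as nn

# Rebuild the architecture that was trained offline, then load the weights
# that training produced (assumed saved with torch.save(model.state_dict(), ...)).
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
model.load_state_dict(torch.load("decision_model.pt"))
model.eval()

def handle_realtime_input(sensor_features):
    """Interpret one real-time input and return a decision the rest of the
    program can act on, like any other value. No chatbot in sight."""
    with torch.no_grad():
        scores = model(torch.tensor([sensor_features], dtype=torch.float32))
    return scores.argmax(dim=1).item()
```

That, far more than a chatbot in a browser tab, is where most of this technology is likely to end up.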