In 2023, industries around the globe have been turned on their heads as OpenAI and other platforms have changed the game. Advances in AI, specifically the release of GPT-3.5 and the upcoming GPT-4, have become a hot topic in many circles, along with what they mean for how we will work and live in the future. Like Pandora’s box, AI is open and accessible now.
I like to try new technologies, so I have been using these platforms over the last couple of months to see how they would impact my pipelines and industry. I have come to realise that AI has the potential to help us achieve more. However, there is an underlying need to address the ethics surrounding its use. It’s important to consider not only the potential consequences of AI systems, but also how we use them.
The question remains: will these AI systems change us for the worse or for the better?
It’s cheating!
As someone working in the education field, I am noticing an immediate impact of AI on submitted work. With the increasing use of AI-powered writing tools, it becomes challenging to determine whether the work submitted is truly the student’s own words. These tools can generate text that passes plagiarism checkers, which raises the question of whether the work is truly original.
To address this issue, people have developed AI checkers specifically designed to identify whether work has been created by AI platforms like ChatGPT. But is using AI actually cheating? The term “cheating” implies that there is an agreed-upon set of rules or expectations being violated. In the case of AI, it depends on how it is being used and the context in which it is applied. If an AI system is used to complete a test or exam a student was meant to do themselves, then it would be considered cheating. However, if an AI system is used to improve the efficiency of a manufacturing process, it would be innovation.
When I was marking written assessments, I would always give credit to strongly constructed arguments rather than just the words on the page. Both the words and the ideas are important. Having used ChatGPT in my own writing, I have found that AI-powered writing tools can create well-written paragraphs on a given subject, but they sometimes need a bit of support.
AI-generated text sometimes repeats itself, adds fluff and misses the point, especially if the subject matter is complex or the topic is so new that the AI system lacks the data to understand it deeply. The words used to convey an idea matter, but the idea is the core of the message and of the information you want to convey. An AI-generated text may be grammatically correct and well written while its ideas are unclear, inaccurate, or meaningless. To someone who isn’t knowledgeable in the subject matter, AI-generated content can appear correct; to a reader who understands it, the gaps are plain to see.
This can also be used as a teaching tool to show students that AI isn’t going to write their assessment for them. The teacher can present a prompt or question to ChatGPT, and then have students examine the response to determine its accuracy, completeness, and relevance. This allows students to collaborate and apply their critical thinking skills to assess the AI’s output and understand how it was generated. The teacher can then guide students through essay-writing practice and show how AI tools can be used to understand the subject matter, improve sentence structure, rephrase topics to grasp the content better, and check grammar and spelling.
As we all debate the use of AI in education, using ChatGPT’s direct output to cheat on a writing assignment can be identified, much like plagiarism, and would be unethical. However, incorporating it as a tool in the writing process can have significant benefits for capturing and organising ideas. By using ChatGPT to suggest content, generate outlines, check grammar, and simplify language, students can improve the quality of their writing. Teachers can use ChatGPT to support student learning and promote the development of writing skills, while also enhancing critical thinking and technology literacy. Limiting AI in the classroom isn’t going to make the problem go away. As long as students use their own creative and critical-thinking abilities to create original work, we as educators can guide the use of AI to support their journey.
Less Busy Work, More Meaningful Work
Working in game development, you will hear of the 90/90 rule: the first 90% of development takes 90% of the time, while the remaining 10% takes an additional 90% of the time to complete. The idea behind this rule is that the last 10% of development, which includes polishing, debugging, and fine-tuning, can take just as much time as the first 90%. This last 10% of polish is where a project can shine. With this mindset, the goal is to use tools and techniques to get to that last 10% of development as efficiently as possible.
When I transitioned my skills to engineering and visualisation, I was exposed to parametric modelling, a powerful technique that allows for the rapid generation of multiple design options by defining a set of parameters and equations that control the geometry of an asset. This enables engineers to easily explore a wide range of design options while keeping the underlying model structure consistent. The resulting parametric models can then be analysed and compared to determine which design options are optimal in terms of performance, cost, and other factors. This method allows engineers to quickly iterate and optimise the design, reducing the need for manual adjustments and allowing them to spend more time on that last 10%.
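To make the idea concrete, here is a minimal sketch in Python of what a parametric model can look like. The beam example, its parameters, and the scoring rule are all hypothetical; the point is only the pattern of generating and comparing design variants from a handful of controlling values instead of remodelling each one by hand.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical example: a rectangular beam described by a few driving parameters.
@dataclass
class BeamDesign:
    width_mm: float
    depth_mm: float
    length_mm: float

    def volume_mm3(self) -> float:
        # Material volume stands in for cost in this toy model.
        return self.width_mm * self.depth_mm * self.length_mm

    def stiffness_score(self) -> float:
        # Bending stiffness of a rectangular section scales with width * depth^3.
        return self.width_mm * self.depth_mm ** 3

# Parameter ranges replace manual remodelling of each variant.
widths_mm = [50, 75, 100]
depths_mm = [100, 150, 200]
designs = [BeamDesign(w, d, length_mm=3000) for w, d in product(widths_mm, depths_mm)]

# Rank variants by stiffness per unit of material, then review the top options.
ranked = sorted(designs, key=lambda b: b.stiffness_score() / b.volume_mm3(), reverse=True)
for beam in ranked[:3]:
    print(beam, round(beam.stiffness_score() / beam.volume_mm3(), 1))
```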
When we try to remove busy work, subject matter experts are the people to ask: they have been honing their craft for years and are well equipped to recognise processes that are time-consuming and tedious. Sometimes this grunt work has to be done, but by exploring and leveraging ChatGPT and other AI tools to automate these pipelines, we can find new ways to free up time to focus on what truly matters.
On a recent project I had an annoying texture scaling task in Unity3D: to be honest, I was not going to click through 100 textures and scale them manually just to bring the project size down. Some were more important than others. After a quick prompt describing what I needed, ChatGPT started generating away while I continued to build the scene. I was shocked that it not only generated the code, but understood how Unity worked and how to implement the code so it ran correctly. I just needed to make a few quick changes and add a shortcut, and a couple of hours of clicking became a single click.
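The actual script was Unity editor code in C#, so I won’t pretend to reproduce it here. As a rough illustration of the same batch idea, here is a small Python sketch using the Pillow imaging library; the folder path, size threshold, and scale factor are placeholder assumptions, and a Unity version would typically work through the editor’s scripting API rather than on the raw image files.

```python
from pathlib import Path
from PIL import Image  # Pillow

# Placeholder assumptions: folder, size threshold, and scale factor are invented.
TEXTURE_DIR = Path("Assets/Textures")
MAX_SIZE = 1024   # anything larger gets scaled down
SCALE = 0.5       # halve oversized textures

for path in TEXTURE_DIR.glob("*.png"):
    with Image.open(path) as img:
        width, height = img.size
        if max(width, height) <= MAX_SIZE:
            continue  # already small enough, leave it alone
        resized = img.resize((int(width * SCALE), int(height * SCALE)))
        resized.save(path)
        print(f"Scaled {path.name}: {width}x{height} -> {resized.size}")
```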
AI has the potential to greatly alleviate the burden on humans by automating repetitive and hazardous tasks. People understand where the busy work is, and AI lowers the barrier to creating bespoke solutions that help reach that last 10%. By improving efficiency and productivity, it frees up time for people to focus on what is truly important.
Diversity and Breaking the Information Barrier
Everyone at some point just wants a bit more information to understand a topic, like there is a key that goes click and everything else falls into place. However, when information is obscured unintentionally… or deliberately, it can become incredibly frustrating and feel as though the information is being withheld from you.
I had this feeling recently when I was working on a project in Unity3D and needed to understand a particularly obscure part of the API. Despite extensive research and deep-diving into forums, I was struggling to comprehend the concept. The posts I found were unhelpful, and some were outright distasteful. I was frustrated.
That’s when I decided to use ChatGPT. I fed it relevant information about the API and asked specific questions about the obscure part I was having trouble with. After a bit of loading time, ChatGPT generated a clear and concise explanation that just clicked, along with an example of how I could use the API for my needs.
This is when I discovered that ChatGPT does not gatekeep information… within some restrictions.
Gatekeeping is the act of controlling access to information, resources, or a particular community or group. This can be done through various means, such as censorship, manipulation of information, or setting strict requirements for entry or participation. As someone who works across a variety of creative fields, I have encountered individuals who, intentionally or unintentionally, discourage others from pursuing their passion. These individuals often use negative comments, such as “you’ll never make it in the industry,” or offer a barrage of destructive criticism and personal attacks. Their actions limit access to knowledge and opportunities, and can have a damaging impact on aspiring creatives. This kind of gatekeeping reinforces existing inequalities and hinders diversity and growth within the creative industries.
ChatGPT does not have the intention of promoting or preventing gatekeeping of information. Instead, its primary function is to generate text based on the input provided to it, without any bias towards certain information or viewpoints. By providing access to vast amounts of information and data, ChatGPT can serve as a tool for individuals to gather information and make informed decisions, rather than promoting or restricting the flow of information.
In some cases, gatekeeping may be necessary to protect sensitive information or to ensure safety. OpenAI has implemented measures to prevent its language models, including ChatGPT, from generating harmful or destructive content. This includes monitoring and filtering out topics related to hate speech, violence, harassment, and other harmful forms of expression. The company has also established ethical guidelines for the development and use of its AI technologies.
Gatekeeping information, whether intentional or not, can have negative effects on how we work and learn. When individuals or groups control access to information, it can lead to a lack of diversity in thought and decision-making. This can prevent new ideas and perspectives from being considered, leading to a lack of innovation and a failure to adapt to future changes. AI can play a significant role in knowledge acquisition. By providing individuals with the information they need to understand a new topic, AI can give them the understanding and language they can take further, or arm themselves with to find other creatives who will support them on their journey.
This can help individuals overcome these pitfalls and provide them with the tools they need to succeed. With AI and ChatGPT, learning and growth can become more accessible, and individuals can pursue their passions without being held back by the opinions and biases of others.
Blurring the Line of Human-Computer Interaction
Have you ever wished you could simply “tell” the computer to do the task?
When you are working and the user interface has that one tool you need buried five menus deep, you really start wondering who designed it, what their mental state was, and how much they need a stern talking-to. I am grateful for my ability to create automated pipelines, reducing the need for manual button clicks through programming. However, the advancement of AI natural language processing models has greatly altered the boundary between human and computer interaction.
With the advanced capabilities of AI systems like ChatGPT, computers can now understand, process, and respond to human language in a way that was previously thought impossible. This has created a new paradigm for communication between humans and machines, where information can be exchanged in a more natural, intuitive way.
The use of AI in conversational interfaces has made it possible for people to interact with technology in a way that feels more like speaking to another person, rather than typing commands into a machine or juggling multiple systems to do the task. By simply providing a command like “schedule a meeting for Jan tomorrow at 10 about the performance review”, the AI system can automate the various tasks involved: scheduling the meeting in your calendar, sending Jan an email with the details, creating a performance review template in your management software, and generating prompts from previously entered reviews for follow-up. This demonstrates the potential of AI to reduce an involved task to a simple prompt.
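As a toy sketch of how such a command might be broken into actions, here is a hypothetical Python example. The regular-expression parsing and the placeholder action functions are invented for illustration and are far simpler than a real natural-language system, but they show the shape of turning one sentence into several coordinated tasks.

```python
import re
from dataclasses import dataclass

@dataclass
class MeetingRequest:
    person: str
    time: str
    topic: str

def parse_command(command: str) -> MeetingRequest:
    # Toy parsing: a real assistant would use a language model, not a regex.
    match = re.match(
        r"schedule a meeting for (?P<person>\w+) (?P<time>.+?) about (?P<topic>.+)",
        command,
        re.IGNORECASE,
    )
    if not match:
        raise ValueError("Command not understood")
    return MeetingRequest(**match.groupdict())

def execute(request: MeetingRequest) -> None:
    # Placeholder actions standing in for calendar, email, and HR integrations.
    print(f"Calendar: meeting with {request.person} {request.time}")
    print(f"Email: invite sent to {request.person} about '{request.topic}'")
    print(f"Management software: '{request.topic}' template created")

execute(parse_command("schedule a meeting for Jan tomorrow at 10 about the performance review"))
```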
Our lives are also increasingly data-driven. From businesses to individuals, the amount of data we collect, store and analyse is staggering. It’s no secret that big data is the lifeblood of modern society. However, managing and processing this vast amount of information can be overwhelming, requiring huge amounts of storage and computing power.
AI is changing the way we process data, using advanced algorithms and machine learning models to quickly analyse vast amounts of information. Paired with these conversational interfaces, it has the potential to revolutionise decision-making by turning big data into real-time information that individuals can use to make informed choices.
Communication between individuals is crucial for building a common understanding and achieving shared goals; being able to communicate with machines in the same way will revolutionise how we approach tasks and processes. This shift in how we interact with AI and the Internet of Things is likely to have a profound impact on our lives in the coming years.
No One Left Behind
One common struggle faced by students is falling behind in their studies, often through no fault of their own. Sometimes they just need a bit more time with the material, or the information presented in a different way. Unfortunately, in traditional classroom settings, these students can easily slip through the cracks and continue to fall further behind. Personalised learning has been a growing trend to combat this, but it faces challenges in implementation and widespread adoption.
This is where ChatGPT’s ability to personalise learning fascinates me. Its ability to understand and respond to natural language offers a unique opportunity for students to have a customised learning experience. By prompting ChatGPT to explore a topic, students receive information tailored to their individual learning style and pace, helping them understand the material.
Additionally, by allowing students to engage with a topic that interests them and prompt ChatGPT to explore it, they build a stronger understanding of the subject matter through intrinsic motivation, not because they are told to. The use of AI in education has the potential to bridge the gap between students and teachers, providing a new and innovative approach to learning.
From an educator’s viewpoint, AI algorithms can analyse student behaviour, recognise strengths and weaknesses, and give educators targeted feedback to help each student. Furthermore, AI can help educators and instructional designers create more effective learning materials, making the learning process smoother and more engaging for students. This individualised approach can support students in overcoming difficulties and enhance their academic achievement.
Overall, AI has the potential to transform the education landscape and make learning more effective and accessible for all.
The Pandora’s Box
It’s open now, and with the massive investment in the technology, it’s not going to be closed. I hope I have highlighted the potential of what these AI language models can do for people; however, misuse can have negative impacts on society.
The use of ChatGPT for cheating can have serious negative impacts on society. By enabling students to easily obtain answers to tests or assignments, it undermines the educational system and devalues the hard work of those who have earned their grades through honest effort. Additionally, it can create an uneven playing field and give an unfair advantage to those who use it. This could lead to a loss of trust in the education system and harm the future prospects of those who cheat.
It’s important for everyone to be transparent about their use of AI: clearly communicating how the technology is being used, what data is being collected, and what measures are in place to ensure it is used in an ethical and responsible manner. This helps build trust and confidence among individuals and can also mitigate the potential risks and negative consequences associated with the use of AI. For example, I have been using ChatGPT to help block out this article and proofread it. I know the topics I want to discuss, but I can get a bit technical; ChatGPT helps me rephrase what I am trying to say simply and check that it is grammatically correct.
Going back to the 90/90 rule, I am definitely in the camp of supporting talent over simply employing workers: if I can give individuals more time, they can make a more meaningful impact and a better product at the same time. I am keeping them, because the world is changing quickly and we need talent to find solutions. However, I have been in discussions with people who see this as a great way to increase profits, since only a few workers are needed to produce the same output.
Automation technologies have the potential to supplant many tasks that were previously performed by humans. This can lead to job displacement and unemployment. Supporting talent is crucial in order to stay ahead in a rapidly advancing technological landscape. Embracing and developing the skills of current employees can help ensure the long-term success of a company and mitigate the negative effects of job displacement due to new and future technology.
Automation taken to the extreme by removing humans entirely can lead to uncontrolled outcomes with potentially damaging consequences. The absence of human oversight can result in systems making decisions based solely on algorithms, which can sometimes lead to unintended or undesirable results. This is particularly problematic in cases where the data used to inform these decisions is biased or incomplete. Furthermore, the lack of human intervention can cause automation to perpetuate itself, leading to a feedback loop of harmful outcomes that are difficult to reverse. That’s why I always program an Undo button, but there is no undo in the real world.
On the information front, I am concerned about systems like ChatGPT being used for content generation through simple copy-pasting. Regurgitating information onto blogs and other mediums without critical thinking could redistribute misleading or inaccurate information. Essentially, ChatGPT operates on the information contained within its dataset and cannot generate new information beyond that. Republishing it unchecked can erode the credibility of and trust in the source, and if the output is intentionally biased towards a group, it can reinforce closed information loops.
Closed information loops, similar to echo chambers, can have negative consequences as they limit exposure to diverse perspectives and can lead to misinformation and the reinforcing of biases. ChatGPT is a closed system that requires user input to generate content. However, with the rapid pace of interconnected technology, pairing ChatGPT with our personal digital profiles could separate us even more. AI-powered algorithms can be used to filter and prioritise information based on a user’s browsing history, search history, and other personal data. This can result in a filter bubble, where users are only exposed to information that confirms their existing beliefs and biases. The output of these systems can perpetuate and even amplify the biases present in the data they are trained on, leading to unfair and discriminatory outcomes.
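As a toy illustration of how a filter bubble forms, here is a hypothetical Python sketch. The articles, topics, and scoring rule are invented, and real recommendation systems are vastly more complex, but the feedback loop is the same: what you clicked before decides what you are shown next.

```python
from collections import Counter

# Hypothetical articles tagged by topic.
articles = [
    ("Local team wins final", "sport"),
    ("New climate report released", "science"),
    ("Indie game studio profile", "games"),
    ("Election coverage", "politics"),
    ("Engine optimisation tips", "games"),
]

# A user's click history stands in for browsing and search data.
click_history = ["games", "games", "sport"]

def ranked_feed(articles, history):
    # Score each article by how often its topic already appears in the history.
    counts = Counter(history)
    return sorted(articles, key=lambda item: counts[item[1]], reverse=True)

for title, topic in ranked_feed(articles, click_history):
    print(f"{topic:8} {title}")

# Each click on the top results feeds back into the history,
# narrowing the feed further: the filter bubble.
```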
Lastly, we should discuss who is in control of these systems. There is extensive legal discussion in the community and in government about the collection of the data that powers these AI systems and how that data is being used. I am not pointing fingers, but the potential for harm is significant if those in control of language models like ChatGPT are not proactively monitored and held accountable for their actions. The manipulation of these models can lead to the spread of misinformation, manipulation of public opinion, reinforcement of societal biases, and concentration of power in the hands of a few entities. Without appropriate regulations and oversight, these consequences can have a damaging impact on individuals and society as a whole.
It’s important for everyone to be aware of these potential risks and to develop regulations, guidelines, and oversight mechanisms to ensure that AI systems are used in a fair, ethical, and transparent manner. Additionally, it is crucial to develop methods to evaluate the reliability and credibility of the information obtained from these AI systems and to foster a culture of critical thinking and media literacy.
What Does AI Have To Say
Now for the big scoop, getting the hot take from the original source… Or machine.
I found ChatGPT’s response quite intriguing, especially the idea that human consciousness cannot be replaced or created. AI technology is going to have a significant impact on how we live and work. However, it is essential that people remain at the centre of the solution and that AI systems are designed to support, rather than replace, human decision-making and creativity. The widespread use of AI is still in its early stages, and its adoption has the potential to revolutionise many industries and processes.
AI has the ability to solve complex problems, but it is important to remember that the ultimate goal of AI should be to augment and enhance human capabilities, rather than replace them.
Thank you for reading. If you found a part of this useful, share it so it can help others.
Also, come check out my channel on YouTube.