Software development is being transformed by GenAI tools such as ChatGPT, OpenAI API, and GitHub Copilot, which are redefining how developers work. This book will help you become a power user of GenAI for Python code generation, enabling you to write better software faster. Written by an ML advisor with a thriving tech social media presence and a top AI leader who brings Harvard-level instruction to the table, this book combines practical industry insights with academic expertise.
With this book, you'll gain a deep understanding of large language models (LLMs) and develop a systematic approach to solving complex tasks with AI. Through real-world examples and practical exercises, you’ll master best practices for leveraging GenAI, including prompt engineering techniques like few-shot learning and Chain-of-Thought (CoT).
Going beyond simple code generation, this book teaches you how to automate debugging, refactoring, performance optimization, testing, and monitoring. By applying reusable prompt frameworks and AI-driven workflows, you’ll streamline your software development lifecycle (SDLC) and produce high-quality, well-structured code.
By the end of this book, you'll know how to select the right AI tool for each task, boost efficiency, and anticipate your next coding moves—helping you stay ahead in the AI-powered development era.
Supercharged Coding with GenAI
From vibe coding to best practices using GitHub Copilot, ChatGPT, and OpenAI
Hila Paz Herszfang
Peter V. Henstock
Supercharged Coding with GenAI
Copyright © 2025 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors nor Packt Publishing or its dealers and distributors will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Portfolio Director: Gebin George
Relationship Lead: Sonia Chauhan
Project Manager: Prajakta Naik
Content Engineer: Aditi Chatterjee
Technical Editor: Irfa Ansari
Copy Editor: Safis Editing
Indexer: Pratik Shirodkar
Proofreader: Aditi Chatterjee
Production Designer: Vijay Kamble
Growth Lead: Nimisha Dua
First published: August 2025
Production reference: 3040925
Published by Packt Publishing Ltd.
Grosvenor House
11 St Paul’s Square
Birmingham
B3 1RB, UK.
ISBN 978-1-83664-529-0
www.packtpub.com
To my husband, Dvir, my mother, Yifat, my father, Amos, my brother, Roy, and my dog, Panda—thank you for your support and encouragement throughout this journey.
– Hila
To my father, with special thanks to my mother, brother, and especially my wife for their support.
– Peter
When Hila told me she was working on a book about GenAI-powered software development, I smiled. Of course, she was. We’ve collaborated on papers where AI meets cybersecurity, so I’ve seen her thinking firsthand—rigorous, curious, never satisfied with surface-level insights. If anyone was going to map the future of coding with AI, it was Hila.
This book is not just another tour of ChatGPT or Copilot. It’s a builder’s manual for the age of AI-augmented engineering. It’s part workflow, part playbook, and part philosophical reflection on what it means to code when the machine is your collaborator. It goes from prompt engineering to system design, from small refactors to architectural guidance, from GitHub Copilot to OpenAI APIs, without losing the plot or pandering to hype.
What I appreciate most is how grounded it is. Hila and Peter don’t romanticize GenAI, and they don’t fear it either. They approach it as engineers: curious, skeptical, and practical. How do you evaluate GenAI output? How do you keep it reliable? When should you override it, or better yet, teach it? These aren’t abstract questions. They’re daily challenges, and this book meets them with clarity and grit.
For those of us who live at the intersection of AI, code, and security, this book feels like home. It speaks to the real problems developers face when integrating these tools into production environments, where correctness matters, hallucinations can be dangerous, and productivity means more than autocomplete. If you’re looking for a book that teaches you how to code faster, sure, you’ll get that. But if you’re looking for a book that teaches you how to think more clearly about coding in a world where machines also write code, then this is your book.
Congratulations, Hila and Peter. You’ve created something timely, honest, and actually useful.
Mike Erlihson, PhD
Head of AI, Stealth Cyber Startup
The book Supercharged Coding with GenAI is an essential guide for Python practitioners who want to work better and faster with LLM tools. While many resources stop at simple prompt-engineering tips, this book goes further by setting out frameworks for working with generated code. Using generative AI tools requires new ways of structuring workflows and validating output, and this book provides a systematic approach to turning generative AI into a true collaborator and to ensuring that systems remain trustworthy in production.
Supercharged Coding with GenAI strikes a rare balance between technical precision and engineering pragmatism, which is exactly what the fast-moving conversation around generative AI in software development needs. Instead of merely showing you how to prompt ChatGPT to optimize code, Hila and Peter explain the types of optimizations available, such as memory and runtime, how to use LLMs to detect bottlenecks, and how these tools can then be applied to handle larger inputs. They also demonstrate how prompts for GitHub Copilot can be adapted for ChatGPT to add flexibility, or for the OpenAI API to address pragmatic use cases, discussing the advantages and limitations of each tool. With practical examples and clear scenarios, they show how you can apply generative AI in everyday work, from updating documentation to identifying unit test mismatches.
For developers and teams navigating this new era, this is a must-have book. It is practical, rigorous, and will train you to be a supercharged coder.
Congratulations to Hila and Peter for creating a book that is not only timely but also lays a foundation for the future of engineering with AI.
Alice Fridberg
Data Science Team Lead, Arpeely
Hila Paz Herszfang, with seven years of building machine learning (ML) services and leading teams, holds a master’s degree in information management systems and is completing a second master’s in data science, both from Harvard Extension School. She developed a Python for MLOps Udemy course and runs a math and tech TikTok channel boasting 15K followers and 300K+ likes.
Peter V. Henstock is an AI expert with 25+ years of experience at Pfizer, Incyte, and MIT Lincoln Laboratory. He teaches graduate software engineering and AI/ML courses at Harvard Extension School. He holds a PhD in AI from Purdue and seven master’s degrees. Recognized as a top AI leader by DKA, Peter guides professionals in AI/ML, software, visualization, and statistics.
Mike Erlihson is a seasoned AI professional, leveraging his PhD in mathematics and extensive expertise in deep learning and data science. As a prolific scientific content creator and lecturer, he has reviewed approximately 500 deep learning papers and hosted more than 50 recorded podcasts in the field, building a substantial following of over 60,000 on LinkedIn. In addition to his professional work, Mike is committed to education and knowledge sharing in the AI community, making complex topics accessible through his various content platforms.
Alice Fridberg is a data science team lead with a master’s in applied statistics from Tel Aviv University. She specializes in innovative ML and deep learning methods for marketing optimization, forecasting, and user modeling. Her work earned her the Top Women in Media & Ad Tech – Data Demystifiers award. Alice is an active public speaker, delivering talks such as A Brief History of Data Science with the Women on Stage community. She also mentors students and early-career professionals through programs with DataHack, Women in Data Science, and Tel Aviv University.
New frameworks, evolving architectures, research drops, production breakdowns—AI_Distilled filters the noise into a weekly briefing for engineers and researchers working hands-on with LLMs and GenAI systems. Subscribe now and receive a free eBook, along with weekly insights that help you stay focused and informed.
Subscribe at https://packt.link/TRO5B or scan the QR code below:
In Part 1 of this book, we introduce the fundamentals of GenAI for coding and get you started with both OpenAI API and GitHub Copilot. The part begins with a discussion of how GenAI for coding has emerged from the intersection of a long evolution in software development tools and the recent rise of large language models (LLMs) in the AI space. This fusion of technologies has completely changed the programming landscape. Now is the perfect time to begin the journey, since applying these tools across software engineering tasks requires both training and practice.
The remainder of Part 1 provides hands-on guidance for getting started with OpenAI API and GitHub Copilot. After you set up these tools, it introduces best practices for prompting.
This part contains the following chapters:
Chapter 1, From Automation to Full Software Development Life Cycle: The Current Opportunity for GenAI
Chapter 2, Your Quickstart Guide to OpenAI API
Chapter 3, A Guide to GitHub Copilot with PyCharm, VS Code, and Jupyter Notebook
Chapter 4, Best Practices for Prompting with ChatGPT
Chapter 5, Best Practices for Prompting with OpenAI API and GitHub Copilot
If you are reading this book, you have probably heard some of the excitement, hype, concerns, and reality of Generative Artificial Intelligence (GenAI) for coding. You may have checked out some tutorials online and perhaps even explored using this technology for your own coding.
Learning to apply GenAI to software coding takes both practice and time. While there are many online demonstrations of the capabilities, there has not been a systematic approach for achieving functional, quality code with any consistency. There also aren’t many resources that guide developers to use GenAI beyond simple code completion or perhaps testing. GenAI can be particularly useful in expediting tasks such as standardizing coding style to improve readability, debugging, optimizing performance, and the many other tasks performed by software engineers.
In this chapter, we will explore the following topics:
Changing the software engineering field
Introducing the rise of large language models
Exploring the software development lifecycle
Embracing a GenAI toolkit
Is GenAI worth learning for software engineering?
What you will get from this book
Computer programming and software engineering, in general, contribute not only to the tech industry, but to many different sectors of the economy, including commerce, finance, health, transportation, and energy. Software drives the creation of many new products. It increases the productivity of companies through the automation and optimization of processes and enables cost reductions.
As software continues to deliver economic value, new paradigms and tools for software developers have increased the ability to write quality software at a faster pace. Over the last couple of years, GenAI has become one of these tools.
In software engineering, GenAI has suddenly advanced to reach an inflection point and is fundamentally changing the field. This recent technology allows everyone from novices to expert software developers to supercharge their productivity, not only in coding but, more generally, across the full software development lifecycle (SDLC).
Advanced technologies, including artificial intelligence, seem to be in the news every day lately. Despite this, many software engineers seem somewhat surprised that AI has progressed to the point that it can support their field and specific software development work. The current state of software engineering tools has resulted from the convergence of two separate trends. First, software development tools are not new but have progressed continuously over many decades. Second, GenAI technology has crossed over from the rapid emergence of large language models (LLMs), which trace back to neural networks and the origins of artificial intelligence.
The application of GenAI to software engineering is quite a recent development. Although AI has been discussed for many years as a promising set of tools for enhancing code development, the emergence of GenAI has ushered in a new era of capabilities.
Software development has experienced many new tools over the past decades that have transformed the field. It is easy to argue that software development is constantly evolving, with new tools that have streamlined the processes and enhanced productivity. This section provides an overview of some major technology revolutions that have aided software developers.
In the 1970s and 1980s, the Maestro I was developed as the first integrated development environment (IDE), although it would hardly be recognized as such by today’s standards. Its successors, such as Borland’s Turbo Pascal and Visual Studio, provided an easy integration of coding, file management, debugging, compilation, and execution. Today’s IDEs for Python, such as Visual Studio Code, PyCharm, and Spyder, offer global renaming of variables, code highlighting, syntax checking, and access to multiple tools.
Version control systems were a critical step in software engineering, enabling many developers to work on a single project. With a single code base, different versions of code can be tracked and managed. IBM’s IEBUPDTE in the 1960s was a forerunner of the technology, followed by the Revision Control System in 1982 and the Concurrent Versions System (CVS) in 1986. It wasn’t until 2005 that the now ubiquitous Git was developed, which enabled a distributed version control system.
Build tools and continuous integration and continuous deployment (CI/CD) systems speed up the delivery of software. Build tools such as Maven transform source code into executable code. CI/CD tools such as Jenkins are often triggered by the build, but continue further to automate the testing, execute linters or other code tools, and often deploy the updated version to users. The full deployment pipeline frees developers from many manual steps and enables a rapid and consistent way of providing users with the latest functionality.
Significant research has been poured into software testing. Apart from many specialized tools for different forms of testing, testing frameworks are now a standard part of virtually all software development suites. IDEs already speed up the process of creating skeleton tests from existing code by using method signatures and standard test naming conventions. Unit testing frameworks such as Python’s unittest run all the tests and report failures, significantly speeding up the process.
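As a simple illustration, here is the kind of skeleton test such tooling produces, fleshed out by hand for a hypothetical calculate_total function (the module and function names are invented for this example):

import unittest

from orders import calculate_total  # hypothetical module under test


class TestCalculateTotal(unittest.TestCase):
    def test_calculate_total(self):
        # Arrange: a small order with known prices and quantities
        items = [("apple", 2.0), ("bread", 3.5)]
        # Act: call the function under test
        result = calculate_total(items)
        # Assert: the expected total of the item prices
        self.assertEqual(result, 5.5)


if __name__ == "__main__":
    unittest.main()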
Code analysis and refactoring tools identify issues with code and can improve the overall quality. SonarQube is an example of a code analysis tool that performs static code analysis. It identifies potential problems with code, often referred to as code smell, but can also check for a range of potential issues, such as deviations in code style and poor security handling.
Some more advanced tools have been able to not only recognize coding problems but also fix them. For example, ReSharper actually refactors the code to improve its quality. Such tools save developers time and achieve this result through a combination of pattern matching and AI.
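As a small illustration of the kind of issue such tools flag and fix, consider duplicated logic, one of the most common code smells (a generic example, not the output of any particular tool):

# Before: the same discount logic is repeated in two functions
def price_with_member_discount(price):
    return price - price * 0.1


def price_with_coupon_discount(price):
    return price - price * 0.1


# After: the duplication is extracted into a single, named helper
def apply_discount(price, rate=0.1):
    return price - price * rate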
With continual changes in coding sources and packages, software development always seems to require new packages, platforms, or even languages. As a result, software developers require access to the latest manuals or other documentation, and many resort to searching for code examples on Stack Overflow or Reddit. Innovations in this space included Kite, AI-powered software that provided automated code completion and instant code documentation. Kite reduced keystrokes and improved code development speed, gaining a user base of an estimated 500,000 programmers. Unfortunately, the company ceased to exist in 2021 and donated its multi-language code tools to the open source community.
Next, we will introduce the turning point in AI research that has driven significant adoption across a variety of domains, including software engineering.
Over the past few short years, LLMs have emerged as the dominant AI resource for writing, research, and inference. They are currently transforming the tech industry, and their applications have a far-reaching impact across all fields. This section provides a brief overview of their unprecedented ascent.
The field of artificial intelligence was formally launched in 1956 at a famous Dartmouth College workshop of computer science experts. They coined the term artificial intelligence (AI) and set ambitious goals ranging from automated reasoning to natural language processing (NLP). Although the participants expected rapid progress toward these goals, the compute and technology limitations of the time thwarted their success. A publication in 1969 denounced the key technology and allegedly started the first well-documented AI winter, an extended period of sharply reduced funding and research.
In the 1980s, expert systems emerged as a workable solution where rules could be crafted by technologists to reproduce human-like reasoning over limited domains for a specific problem. Despite some early successes with the approach, it proved difficult to craft and manage the ordering for sets of rules. This hindered its adoption and eventually led to the second AI winter.
Machine learning (ML), a sub-field of AI, emerged as the only viable solution. Unlike the hand-crafted rules of expert systems, ML systems could learn to make predictions or decisions directly from data. Research has led to dozens of techniques within the sub-field, but neural networks have become the dominant approach over the past dozen years. Loosely inspired by biological neurons, neural networks have proven to be a powerful system for learning and modeling data. Researchers have shown that neural networks can generalize well and approximate any function. Deep learning, the use of neural networks with multiple layers of neurons, overcomes the limitations of more traditional machine learning techniques. Specifically, it can continue to learn when provided with ever larger training sets.
NLP is the application of machine learning to human language data. It applies to any text, such as articles, blogs, emails, or books. The field draws from computer science, AI, and linguistics. Earlier methods drew extensively from statistical methods and, later, traditional ML techniques. In recent years, deep learning methods have revolutionized the NLP field by introducing language models (LMs), which predict and generate text based on existing language data. LLMs are scaled-up versions of LMs, trained on massive datasets and containing billions of parameters, the internal weights tuned to reflect the patterns in the training data. We will discuss LLMs extensively in later chapters of the book.
Over the past several years, deep learning models have been trained on ever-increasing volumes of text and, with new techniques, can understand how the words within each sentence relate to each other. This class of LLMs includes OpenAI’s GPT, Meta’s Llama, Google’s Gemini, and Anthropic’s Claude, and newer models continue to be developed. These LLMs were initially designed to accurately predict the next word of a phrase. At scale and with recent technologies, they have enabled natural language generation (NLG) solutions that can write full texts for report writing, question answering, chatbots, and much more.
LLMs are typically trained on large sets of available online text sources, but the same models can also be trained on software code. These LLMs use publicly available code in Python, Java, and other programming languages, mostly drawn from GitHub repositories. The result is that LLMs can predict the next block of code, generate comments, write tests, and even refactor code. These are all parts of the overall SDLC that we will describe in the next section.
To deliver quality software, most software teams progress through a series of stages known as the software development lifecycle (SDLC). As shown in Figure 1.1, these steps are designed to be an efficient approach that minimizes the risk of failure. The process usually begins with the recognition of an unmet business need, and cycles through many stages to meet the need with a software system. Projects progress from analyzing the existing state to gathering requirements, designing the system, implementing and testing the code, delivering the solution, and often maintaining the software.
While most people associate software development with coding, actual programming makes up only 25-35% of the overall effort, depending on the type of software and its requirements. The remaining steps are needed to gather requirements, test and document the code, deploy the software, and support its continued functionality, as shown in Figure 1.1.
Figure 1.1: The SDLC – the continual process of developing or improving software systems from requirements through maintenance
The SDLC process begins with gathering requirements, followed by planning, feasibility, and risk analysis. A successful analysis leads to the creation of a high-level system design, and only after this step does an engineer continue on to software coding. The software will be formally tested before it is deployed, resulting in a live or production system. As the environment or business needs change, support and maintenance are always needed, and that can trigger the next development cycle.
Important note
While the SDLC is an industry-standard approach, individual organizations often introduce variations to tailor it for their software development processes. For instance, some organizations may choose to implement tests before writing the code, a practice known as test-driven development (TDD). Others may create a prototype system or introduce a proof of concept (POC) before conducting a feasibility analysis, a step that has become easier to perform with the help of LLMs.
There are an increasing number of books and videos that describe the use of GenAI for coding, but the technology can supercharge the entire process, not just the actual coding implementation. This book will explore several of these aspects, including testing, documenting, and monitoring software. These are critical for the success of software projects.
Next, we will see how we can embrace a comprehensive GenAI toolkit in our technological stack as software developers.
This book focuses on three separate tools for software development: ChatGPT, OpenAI API, and GitHub Copilot. In 2024, these three tools had roughly a $35 million combined market size for software engineering applications. The market is expected to grow 25% per year throughout the rest of the decade, according to a Research and Markets report. The following chapters of the book will provide instructions on how to subscribe to these services and how to get started. These tools provide distinct kinds of functionality, and knowing when to use which tool is part of the learning curve. Later chapters will highlight the features and use cases for each of the tools.
OpenAI has been a leader in LLMs since 2015. Led by CEO Sam Altman, the company has produced multiple versions of its Generative Pretrained Transformer (GPT) LLM. While these were well received, the release of ChatGPT in November 2022 transformed perspectives on AI worldwide.
ChatGPT is an AI-driven chatbot, an application designed for text conversations using natural language. Its release spurred widespread use, reaching an estimated 100 million users within two months. It continues to be one of the most visited websites in the world.
While natural language conversations with ChatGPT often succeed in eliciting answers to questions, prompt engineering has proven a more robust technique. It is the art of crafting an instruction to produce a more desirable output. The prompt typically consists of context, instructions, a history of the dialog, and sometimes examples of desired output. This book will provide structured formats that guide the reader to effectively perform prompt engineering for producing code, comments, tests, and other outputs.
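As a simple illustration of that structure (an invented example, not a template from later chapters), a code-focused prompt might look like this:

Context: We maintain a Python 3.11 data pipeline that relies on pandas.
Instruction: Refactor the function below so that it avoids looping over rows one by one, and keep its signature unchanged.
Desired output format: a single function definition with a one-line docstring and no surrounding explanation.
Code:
def add_totals(df):
    for i in range(len(df)):
        df.loc[i, "total"] = df.loc[i, "price"] * df.loc[i, "quantity"]
    return df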
ChatGPT is among the most popular tools for interacting with LLMs. However, prompt engineering through a chat interface lacks the simple control structures found in software, such as loops and conditions. OpenAI provides a developer platform for coding directly against the same OpenAI LLMs used by ChatGPT. Through its Application Programming Interface (API), OpenAI enables developers to combine software and prompt engineering. The API also provides specific added functionality that is useful for solving software engineering problems.
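As a brief preview of what this looks like in practice (a minimal sketch using the openai Python package; the model name and prompt are placeholders, and Chapter 2 walks through the setup in detail):

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Send a single prompt-engineered request to the chat completions endpoint
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever chat model you have access to
    messages=[
        {"role": "system", "content": "You are a senior Python code reviewer."},
        {"role": "user", "content": "Write a one-line docstring for: def slugify(title: str) -> str"},
    ],
)
print(response.choices[0].message.content)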
While GitHub is one of the most popular platforms for sharing code using Git distributed version control, the company released GitHub Copilot in 2021. Originally powered by OpenAI’s LLM, it provides intelligent code completion using GenAI’s programming capability. The functionality has been integrated into many IDEs, including Visual Studio Code and PyCharm—two of the most popular IDEs for Python.
Unlike the other OpenAI-based tools, Copilot functions as a pair programmer. This concept comes from the Extreme Programming (XP) agile methodology, where two developers work together to write code with a single keyboard. Although not yet a fully functioning pair programmer, Copilot can quickly find and display references for code syntax and even provide annotated examples or full code as requested by the user. It infers the intention from the function and variable names used. Together with the surrounding code as context, it can predict and suggest the next block of code.
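For example (a hypothetical snippet rather than a recorded Copilot session), typing a descriptive signature and docstring such as the following usually gives Copilot enough context to propose the body:

def count_words(path: str) -> dict[str, int]:
    """Return a mapping from each word in the text file at path to its frequency."""
    # A completion along these lines is what Copilot typically suggests:
    counts: dict[str, int] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            for word in line.lower().split():
                counts[word] = counts.get(word, 0) + 1
    return counts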
Next, we will review recent studies that assess the use of GenAI for software development.
A number of studies have assessed whether GenAI provides increased productivity in coding tasks. McKinsey reported increases ranging from minimal to 50%, depending on the complexity of the task. For code documentation and generation, the gains were much higher than for difficult tasks. They found it was particularly good for routine tasks and repetitive work, as well as initial dives into new code projects. Refactoring code to make changes and tackling new challenges were also improved through GenAI technology. Perhaps as important, their study showed that users of GenAI for software felt happier, were able to focus more on meaningful work, and achieved flow much more frequently. The study details can be found at https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/unleashing-developer-productivity-with-generative-ai.
A similar study by Exadel reported that half of the developers in their study used GitHub Copilot at least 50% of the time. Two-thirds of these developers completed tasks more quickly, saving 10-30% of their development time. Copilot made them more productive and fulfilled. See https://exadel.com/news/measuring-generative-ai-software-development/ for more details on the study.
Research by Colombatto and Rivadulla (https://aws.amazon.com/blogs/apn/transforming-the-software-development-lifecycle-sdlc-with-generative-ai/) found benefits of applying GenAI across the full SDLC. Examining data from AWS and IBM, they found that the benefits begin in the analysis phase with requirements engineering. Even in this early phase of the SDLC, the researchers observed up to a 60% reduction in time from using GenAI. They found a 30% reduction in development time and a 25% reduction in time for generating unit tests and test plans. Even though less time was spent, the code quality improved by 25%, which contributes to fewer bugs and lower software maintenance costs.
A study conducted by BlueOptima from 2022 to 2024 used code repositories to analyze productivity, quality, and cost across 77,338 developers. In contrast with the other studies that reported significant savings, the findings were much more modest. They found only a 3.99% increase in productivity for those with access to GenAI and a 5.12% decrease for those without. Quality still improved slightly, which is important since it proves that the productivity gains do not compromise quality, but the gains were not as significant. However, the study used access to these tools as an input variable without characterizing the training, familiarity, or integration of GenAI into their workflows. In addition, productivity is likely to increase as the predictive accuracy and overall performance of GenAI tools continue to rapidly improve. The details of the study can be found through this link: https://www.blueoptima.com/resource/llm-paper-1/.
Next, we will discuss our perspective on the benefits and downsides of using GenAI in software development.
We have been using code completion tools for over a decade, but current GenAI tools are different. We have used the full range of tools, such as keyboard shortcuts, Stack Overflow searches for help, code API search tools, and all the latest refactoring tools and templates available in the IDE. All these strategies have helped us be more efficient in our work, but there has always been a lot of mundane, repetitive work that has limited our coding speed and enjoyment.
GenAI tools have transformed our output. Within three months of using an earlier version of GitHub Copilot, we were writing code 15% faster. Now, after two years, the combination of GitHub Copilot, ChatGPT, and OpenAI API has supercharged our coding output more than anything else that we have used. We complete twice as much work as we did previously with multiple tools. The improvements in productivity were a combination of advances in the tools themselves as well as familiarity with how to use them, both of which are covered throughout this book.
Beyond the productivity of merely writing code, GenAI contributes to other aspects of software development. GenAI can help refactor code automatically, which helps make it more readable and hence maintainable. As shown in later chapters, code can also be improved by selecting better algorithms that execute faster. GenAI can also help write code documentation and automate the creation of tests. With GitHub Copilot, the pair programming approach to efficient coding brings built-in assistance that is useful for senior developers but invaluable to developers learning a new language or framework.
The technology behind GenAI for software development is still quite new. Early studies from 2022 showed that GitHub Copilot’s accuracy in producing correct code was below 50%. While advances and new versions of the underlying models continue to be released every few months, they are certainly not perfect.
In fact, GenAI has produced some of the worst development mistakes we have ever seen. To put that in perspective, we have seen a data scientist push their entire environment file to the corporate repository, which exposed secret tokens that had to be replaced. One software developer crashed a microservice after renaming a file to pandas.py, which shadowed the pandas library. One data engineer spent two weeks learning Cython to handle a Python DataFrame memory issue instead of just switching to Dask or PySpark. GenAI may not only supercharge your strengths, but may also supercharge your weaknesses. After all, it is still a developing technology, but it continues to improve arguably faster than anyone expected.
AI coding has made the headlines, but it may not be clear why it would fail. The underlying coding models are trained on available GitHub repository data and other code that is publicly available in various languages. For problems that are widely documented, such as the Fibonacci sequence calculation or the many code snippets used to pass LeetCode interview questions, the answers are nearly perfect. For this reason, YouTube is full of videos showing how GitHub Copilot can program a React web page in 3 minutes.
GenAI has far more difficulty solving more obscure coding tasks where there is far less training data. Even if LeetCode’s most famous problem, Two Sum, were changed slightly to involve Python threads, for example, the solution would be unpredictable.
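For reference, the canonical Two Sum solution is short and extremely well represented in public code, which is why generated answers for it are nearly always correct (a standard hash-map implementation, not taken from any particular model’s output):

def two_sum(nums, target):
    """Return the indices of the two numbers that add up to target."""
    seen = {}  # maps each value already visited to its index
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return [seen[complement], i]
        seen[value] = i
    return []  # no pair found


print(two_sum([2, 7, 11, 15], 9))  # prints [0, 1]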
A well-documented problem with LLMs for generating text is that they tend to hallucinate or fabricate information when the answers are not apparent. Significant research is ongoing to counter this poorly understood problem. However, hallucinations and other LLM issues do occur when GenAI is applied to software engineering.
Some developers worry that GenAI coding tools will turn them into less capable developers. They fear that relying on automatic code completion, suggestions, and examples will cause them to lose their programming edge or familiarity with the functions.
Recent research by Michael Gerlich (https://doi.org/10.3390/soc15010006) suggests that AI tools might decrease our critical thinking capability through a process known as cognitive offloading. However, similar arguments have been made about automated spelling checkers that produce better documents but perhaps reinforce our spelling crimes. It is true that it may take a bit longer to remember the exact syntax of adding tick marks to a matplotlib plot when the internet is down. However, if you can double your output with fewer keystrokes, you can focus on the more important problems that GenAI has yet to solve.
Recent blogs describe a new trend called vibe coding, where developers and even non-developers rely extensively on GenAI to design and build, over a weekend, full applications that would otherwise probably take months. It is remarkable that the technology has advanced to the point where rapid prototyping is this effective. However, prototypes are not production code.
In many tutorials where GenAI fails, the common wisdom is that you should verify the output you get, yet none offers a pragmatic method, or even a guiding mindset, for effectively evaluating the outputs and improving the code.
It is considered good practice to apply unit tests and other testing approaches to all code. However, using GenAI is neither about blindly trusting nor about fact-checking everything. GenAI failures do not mean we have to go overboard with fact-checking every piece of code it produces. Similarly, evidence of GenAI success does not imply you should push every memory optimization suggestion into production.
Leveraging GenAI is about developing a new set of skills to formalize the inputs and outputs obtained from LLMs. This will enable you to truly supercharge your coding tasks throughout the SDLC. It enables you to own the code whether you wrote it from scratch yourself or utilized LLMs. When you can assess the quality and risk of the output these tools generate, you will be able to transform your approach to software engineering.
This chapter highlighted that GenAI for coding emerged from the combination of software tool advancements with LLMs. This nascent technology applies not only to coding but can enhance many aspects of the SDLC. The combination of ChatGPT, OpenAI API, and GitHub Copilot provides a complementary set of tools that have been shown not only to improve productivity and enhance code quality but even to bring happiness to programmers.
Although the technology is new and still evolving, GenAI is already changing the software engineering field. This book was developed to provide a structured approach to effectively leverage the tools and achieve the best results across many aspects of the SDLC.
In the next chapter, we will introduce a quick-start guide to OpenAI API and use the chat service for coding tasks. We will build our own code completion program that takes a function’s signature as input and returns its implementation as output.
To learn more about the topics that were covered in this chapter, take a look at the following resources:
VS Code Plugin: https://github.com/kiteco/vscode-plugin
Begum Karaci Deniz, Chandra Gnanasambandam, Martin Harrysson, Alharith Hussin, and Shivam Srivastava. Unleashing developer productivity with Generative AI: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/unleashing-developer-productivity-with-generative-ai
Alexey Girzhadovich. Scientifically Measuring the True Impact of Generative AI Software Development: https://exadel.com/news/measuring-generative-ai-software-development/
Diego Colombatto and Jose Manuel Pose Rivadulla. Transforming the software development lifecycle (SDLC) with Generative AI: https://aws.amazon.com/blogs/apn/transforming-the-software-development-lifecycle-sdlc-with-generative-ai/
Research and Markets report. Generative Artificial Intelligence (AI) in Coding Market - Forecasts from 2024 to 2029: https://www.researchandmarkets.com/reports/6014321/generative-artificial-intelligence-ai-in
Michael Gerlich. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking: https://doi.org/10.3390/soc15010006
Scan this QR code or go to packtpub.com/unlock, then search for this book by name.
Note: Keep your purchase invoice ready before you start.