Empowering Developers with AI Tools

In this episode of the Open at Intel podcast, host Katherine Druckman, along with Ezequiel Lanza, an open source AI evangelist at Intel, chats with Codium Co-Founder Dedy Kredo about Codium’s AI coding platform, which helps developers with various tasks throughout the development lifecycle. In their chat, Kredo emphasizes the importance of security and highlights Codium's compliance with security standards and zero data retention policy. Kredo also discusses the benefits of using AI tools in software development, such as increased productivity and improved code quality.   

“Developers are split into two groups. The vast majority hate testing and do it because they have a certain code coverage threshold that they must hit. Then there are the developers who hate testing and don't do it because they don't have those needs. But there is the 1% of developers that like testing and maybe you are one of them.” — Dedy Kredo 

What is Codium?

Katherine Druckman: Thank you for joining me to nerd out about something called Codium, which we will let Dedy explain. What is Codium? 

Dedy Kredo: Codium is an AI coding platform that helps developers throughout the development lifecycle to test, review, analyze, and document their code, essentially all the tasks developers hate doing. We make those tasks more streamlined and efficient with two main tools. We have an IDE plugin for JetBrains and VS Code environments. 

Katherine Druckman: I was about to ask that, and you answered it. Which IDEs do you support? 

Dedy Kredo: VS Code and JetBrains environments, and we're adding additional IDEs that will come up. The plugin helps developers generate tests and analyze code; they can run a variety of commands right inside their IDE. Also, we have a Git plugin called PR-Agent that connects to their Git environment. We support a variety of Git platforms like GitHub, GitLab, and Bitbucket. It helps streamline the code review process, both for the reviewer and the developer who opened the PR. There is a set of tools and commands that help generate documentation, find issues, automatically label PRs based on semantic rules, and a variety of other capabilities that are surfaced inside the Git interface. 

Codium vs Other AI Coding Tools

Katherine Druckman: How does your solution differ from other things out there, like GitHub Copilot, for example, or similar AI coding tools? 

Dedy Kredo: Think about it as you would think about different types of watches, like an Apple Watch or a Garmin. It depends on what you're focused on and what is important to you. The actual code generation piece, the code completion, is becoming increasingly commoditized. Many tools provide that kind of capability. We provide it too, but we have a differentiated approach: our focus is on testing code, analyzing code, and reviewing code, and we're the best at that. We're the best at making sure that the code works as expected. 

We believe that solving the challenge of making sure that the code works as expected, which is the hardest challenge in the software development lifecycle, puts us in a good place in the market to provide additional capabilities. If you are concerned about code quality, testing, and enabling developers to easily test their code and easily review their code, we're the best tool for that. We also play with other tools like the ones you mentioned. I would say that our customers leverage both. They use Copilot, but they use us in tandem. We do have customers who have decided to solely use our product. It's more of a matter of choice, right?

Test-Driven Development and Codium

Katherine Druckman: I'm a fan of test-driven development. If I'm a developer who likes to work that way, how does that play into your tool? I get the impression that your tool will write a test for me after I have code and I'm getting it working. What if I want the test first? 

Dedy Kredo: Great point. We're flexible on that. I would say most developers don't do test-driven development (TDD). For most developers, testing is an afterthought. 

Katherine Druckman: Sadly. 

Dedy Kredo: Plus, they hate doing it, and always postpone it. Developers are split into two groups. The vast majority hate testing and do it because they have a certain code coverage threshold that they must hit. Then there are the developers who hate testing and don't do it because they don't have those needs. But there is the 1% of developers that like testing and maybe you are one of them. 

Katherine Druckman: I was. It comes down to insecurity. When you are doing the test first and leading with the test, it helps to have a little bit more reassurance that you're going in the right direction. 

Dedy Kredo: You can generate tests with CodiumAI even if you don't have the implementation fully built. For example, you can write just the signature of a function, without writing the implementation yet, and you'll start getting tests. You can give CodiumAI a natural language description and the AI will start generating tests for you. It will generate ideas for what you could be testing, such as happy path cases or edge cases. It can help you generate those initial tests even if you do TDD. 

Katherine Druckman: Sure. 

Ezequiel Lanza: Not going too deep on AI and the topic, what is the main difference? 

Katherine Druckman: No, we got to. It's the fun stuff. 

Ezequiel Lanza: We can, but is that a generic model? As developers, we would love to have a tool that learns how we code and makes recommendations based on how we code and not something structured. ChatGPT gives generic recommendations. What is the difference between your product and the others? 

Dedy Kredo: Our tools are highly configurable. They're built for enterprise use cases, and there is a set of best practices you can define for the IDE plugin. It's even more important in the Git plugin, where typically you want a dev lead, a manager, or an architect to define which best practices should be prioritized and surfaced, because you don't want all these issues showing up in the pull request when they don't matter for the organization. Everything is highly configurable. You can give it additional instructions, and you can even do it per repo or per team; different repos and different teams have different requirements they want to enforce, and you can customize it that way. We're now introducing multi-repo support for the RAG. We learn from your repos, and we use that in answering questions, in code suggestions, and across the board. 

Katherine Druckman: Since we threw RAG out there, can we define what that is for the listeners? 

Dedy Kredo: Sure. Do you want me to take that? 

Ezequiel Lanza: Yes. 

Dedy Kredo: RAG is retrieval augmented generation. It is the concept of using internal knowledge within the organization as additional context for the model. Before you prompt the model, you index your data sources and save an embedding representation of your data. Then when, for example, a free-text question is sent to the model, the question is first converted to an embedding, which is compared against your indexed data sources to pull the most relevant ones into the prompt. The answer the model provides will then consider those additional data sources, and the parts of them that are most relevant to the question asked. 
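
The retrieval flow Kredo describes (index sources as embeddings, embed the question, pull the closest matches into the prompt) can be sketched in a few lines. This is a toy illustration, not Codium's implementation: the bag-of-words `embed` function stands in for a real embedding model, and the document list is invented.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term counts. Real systems use a
    # learned embedding model; this stands in for the same idea.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Index internal data sources ahead of time (invented examples).
documents = [
    "payment service retries failed charges three times",
    "auth service issues JWT tokens valid for one hour",
    "billing runs nightly and emails invoices",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question, k=1):
    # 2. Embed the question and compare it to the indexed sources.
    q = embed(question)
    ranked = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question):
    # 3. Pull the most relevant sources into the prompt as context.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("how long are auth tokens valid"))
```

Running this retrieves the auth document as context, since it shares the most terms with the question; a production system would do the same comparison against learned embeddings of entire repositories.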

Katherine Druckman: Okay. Awesome. 

Ezequiel Lanza: The provision of context. 

Katherine Druckman: Thank you. I like to assume zero knowledge. I want to make sure we define everything. 

Dedy Kredo: Of course. 

Deployment Options and Security

Ezequiel Lanza: In terms of the code, or the privacy of the code, how can a developer use it? Is it something that I must download? Does the model run locally on my machine, or are you using an external one? How does it work? 

Dedy Kredo: We have several deployment options. The first is a SaaS offering. You install the plugin directly in your IDE with the click of a button. Developers find us and install the plugin either in their JetBrains IDE or in their VS Code IDE. When you do that, you also automatically enroll in a trial for two weeks for our premium version, which includes the pro version of our Git plugin, PR-Agent Pro. 

In a few clicks, you install it and integrate it with your GitHub, GitLab, or Bitbucket. Then it all runs in SaaS mode in the cloud; you don't run any models on your local machine. We also support other deployment options. You can deploy our product self-hosted or completely air-gapped, using a model that runs on graphics processing units (GPUs) in your cloud. This is a private model that we deploy in your environment, or your own endpoint for a model that we support. This would be GPT-4 or another powerful model that you've approved internally but are not running locally. Most models today that are small enough to run locally are not quite powerful enough, but maybe we'll get there. 

Ezequiel Lanza: I assume most people run models locally because they don't want to send the data to an external API, and they need to do cross-checks. How do you manage that part of the security of the prompt? Is it secure? 

Dedy Kredo: Security is important to us. Code is super sensitive. We're SOC 2 Type 2 compliant, and we have zero data retention. We only save your data for a couple of days for troubleshooting purposes, but that can be turned off completely so you get full zero data retention. Your data is not used for training or anything of that sort, and that applies to the premium, paid version. It's important to highlight that. And if you go completely air-gapped on-premises, it becomes a non-issue because you have complete control of the environment. 

Intel Ignite Program Experience

Katherine Druckman: Interesting that it can run that way. I feel I would be remiss if I did not point out that your company is an Intel Ignite startup, and I wondered if you could tell us a little bit about that experience. 

Dedy Kredo: Intel Ignite is a wonderful program. I highly recommend it. There's a nice ecosystem around it. It also typically comes at a point in your company journey where you're overwhelmed as a founder; there are all these things you need to do, and you need to build your go-to-market. I run product and engineering, so I have all the engineering, product, and team-building challenges, but we must also think about go-to-market, when we are going to start monetizing, and how we manage marketing. 

It's a comprehensive program that brings in thought leaders and people who have built successful companies to cover all these topics, in a way that doesn't stop your day-to-day work. We did it one day a week for a few weeks, so it was manageable from our perspective. The ecosystem and the introductions we got from it are also valuable. I couldn't recommend it more. 

Katherine Druckman: How was the selection process for you? 

Dedy Kredo: In terms of the Intel Ignite? 

Katherine Druckman: Yes.  

Dedy Kredo: It was straightforward. We've presented a couple of times. I don't even remember all the details. 

Katherine Druckman: How long ago was it that you went through that process? 

Dedy Kredo: It was a little over a year ago. 

Katherine Druckman: How old is the company? 

Dedy Kredo: The company was founded in July 2022. 

Impact of AI Hype on Business

Katherine Druckman: That's a meteoric rise. You went from founding to being part of the Intel Ignite program.  

Dedy Kredo: Yes, quickly. We also had substantial seed funding. We raised close to $11 million, and that helped us ramp up quickly. We had a product within six months; we already had an initial alpha version in the market and started getting feedback. 

Katherine Druckman: Cool. 

Ezequiel Lanza: It was in July 2022, which was before ChatGPT. 

Dedy Kredo: Pre-Chat... BC. 

Katherine Druckman: BC, before ChatGPT. 

Dedy Kredo: Before ChatGPT. 

Katherine Druckman: That's funny. Let's call it the hype cycle around AI and generative AI. What has that done for your business? 

Dedy Kredo: For us, we felt like we were in the eye of the storm, and we liked it. More people are realizing that one of the best use cases for generative AI is coding and software development. Code generation has been the focus, but I think that's only scratching the surface, because all the additional processes in the software development lifecycle can be much more automated and much more efficient, and they're now the bottlenecks. Generative AI is going to have an enormous impact on the software development industry. I think that's clear today, and anytime such a tectonic shift happens, there's a lot of opportunity around it.  

Katherine Druckman: Making the hard stuff easier. 

Dedy Kredo: Yes. 

Katherine Druckman: In terms of security and securing software, how does CodiumAI make that easier? 

Dedy Kredo: We are not a pure cyber company. We're finding out that we can provide value there, too. Because we are doing the code review, especially on the pull request side, we can surface security issues. 

Katherine Druckman: Sure. 

Dedy Kredo: Especially major security issues. We can also do more advanced levels of automation that weren't possible before, in many different contexts, but for security it's important. I'll give you an example. We have one customer where any time a new API endpoint is introduced, it goes to a security review. Identifying that within the code is not easy. You have a pull request with a lot of code, and it needs to be identified that a new endpoint was created. But models are good at that. You can give them semantic rules and tell them, "Identify any part in this code that has a new API endpoint." Then, automatically based on that, it will add a label to the PR that says, "Require security review." From that point, it's easy to route the PR to the right team. This is an opportunity to do more advanced security mitigation leveraging AI. 
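
The labeling workflow Kredo describes can be sketched as follows. In a real PR-Agent-style tool an LLM would evaluate the semantic rule against the diff; here a regex stands in for the model so the sketch runs on its own, and the rule text, function names, and diff are all hypothetical.

```python
import re

# Hypothetical semantic rule, stated in natural language for an LLM.
SECURITY_RULE = "Identify any part in this code that has a new API endpoint."

# Deterministic stand-in for the LLM check: flag common route
# declarations among the diff's added lines. A real tool would send
# SECURITY_RULE plus the diff to a model instead of using a pattern.
ENDPOINT_PATTERN = re.compile(r'@(app|router)\.(get|post|put|delete)\(')

def labels_for_diff(diff):
    # Keep only added lines (those starting with "+"), then check
    # whether any of them introduces a new endpoint.
    added = [line[1:] for line in diff.splitlines() if line.startswith("+")]
    if any(ENDPOINT_PATTERN.search(line) for line in added):
        return ["Require security review"]
    return []

diff = """\
+@app.post("/v1/transfers")
+def create_transfer(req):
+    return process(req)
 def existing_helper():
     pass
"""
print(labels_for_diff(diff))  # → ['Require security review']
```

Once the label is attached, routing the PR to the security team is a matter of ordinary Git-platform automation keyed on that label.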

AI-Assisted Development and Semi-Automation

Katherine Druckman: One of the things I talk about is the relationship between product health and best practices and security. You want to make sure that code is being reviewed at a basic level and ensure a certain level of human interaction. A tool like yours might make it easier for that to happen. What are your thoughts on how a tool like that could improve overall project health and help make human reviewers' lives easier? 

Dedy Kredo: Our focus is to give superpowers to the reviewers and developers. We believe in AI-assisted development, and we're about to introduce more advanced, agent-type capabilities. We still believe in semi-automation, where the human can review each step and interject to affect the outcome, versus letting the AI loose to do whatever it does. Our core belief is that through quality, you can improve productivity. If you streamline the process of generating higher-quality, better-tested code, make that easier, and produce a higher-quality code base, you will then gain efficiencies in terms of productivity. 

Katherine Druckman: Okay. 

Dedy Kredo: You create less tech debt, you'll have fewer issues that hit production, and fewer bugs eventually. Plus, you'll increase the overall productivity of your team. 

Ezequiel Lanza: Can it be a learning opportunity for the developers? 

Katherine Druckman: That's a good point. 

Dedy Kredo: Absolutely. 

Ezequiel Lanza: Even if they can't rely 100% on what the tool is saying, I think it's more educational, right? 

Challenges and Opportunities in AI for Software Development

Dedy Kredo: The times that we live in right now are exciting with everything that's happening around AI. Especially in the context of software development, think about where we are in terms of the level of automation. If you compare it to self-driving, we're in 2013 of self-driving. Remember when it was all the hype and people were saying, "Next year, by 2015, all the cars will be driving themselves?" And here we are, 2024, and that's not the case. 

Katherine Druckman: People are drowning. Did you hear that story? Somebody's car drove them into a lake in Texas. 

Dedy Kredo: No, I haven't heard that. That's horrible. 

Katherine Druckman: Scary. Anyway, go on. 

Dedy Kredo: You can think about our test generation as an example. Even a year or a year and a half ago, it wasn't possible to generate these tests that I just showed you in a demo a few minutes before the podcast was recorded. Some things can be highly automated, but especially for the big enterprises' use cases, you can think about it as driving in the city. There are still these complexities and different environments, and how do you automate what is in the core business of the enterprise where every mistake can be costly? It will take time. We will have increased automation in the next 10 years, but we are now focused on the city. We're focused on helping enterprises. This puts us in a position over time to be the best to automate these processes. 

Adopting AI Tools in Development Teams

Katherine Druckman: If I'm a developer and I want my team to adopt this tool because it makes my life easier as a human who makes code, what would you tell me about how to make the case for it? Because I know a lot of coding teams, and a lot of companies are heavily restricting the types of AI tools that they're using. 

Dedy Kredo: Specifically on our PR-Agent, our customers see a higher throughput of PRs because it's easier to review PRs. This can be tracked and seen. There are also eventually fewer issues and fewer bugs that make it into production. Those are areas where you can advocate.  

Katherine Druckman: Go ahead.  

Dedy Kredo: We have seen our customers report test coverage increases because of the mere fact that it's much easier for developers to generate tests. We will typically do a trial. It's easy to get started: start a free trial for a few weeks, have your team try it out, experience both the PR review side and the IDE side, see the impact, and then decide from there. The trial period is seamless, and the installation is not complex. Try it out and see quickly whether it contributes or not. 

Ezequiel Lanza: What is the reaction of developers?  

Dedy Kredo: Of what? 

Ezequiel Lanza: Of the developers. For instance, with ChatGPT, most people started to be scared: "This will be replacing me." But with developers, are they also scared, or do they see it as, "Hey, I just want to use that for my job"? 

Dedy Kredo: There might be a certain segment of developers that, like you said, are more deterred from it. The vast majority are realizing that they must use it to be competitive because they're not going to be as productive as others who are leveraging these tools. We are trying to help our customers leverage gen AI smartly and efficiently while maintaining quality standards and even pushing them higher. Most teams are realizing they need to leverage these tools to stay competitive. 

Ezequiel Lanza: Are there any programming languages that work better? 

Dedy Kredo: All the major programming languages are supported fairly well. I would say Java is the most used language, then JavaScript, TypeScript, and Python, and then there is Go, C, C#, C++, Ruby, and many others. Assembly languages are not as well supported because you must have data to train the models.  

Ezequiel Lanza: Do you have to have the data to train them? 

Dedy Kredo: We don't have good enough data to train them, but over time, that will also improve. 

Ezequiel Lanza: Nice. 

Open Source Projects and Community Engagement

Katherine Druckman: Being that we are the Open at Intel podcast and we're all open source nerds here, I wondered if you could tell us a little bit about your open core model. What is open? What can we use? What can anybody use? 

Dedy Kredo: Open source is important for us as a company. We have two main open source projects. One is the PR-Agent that I mentioned. This is our Git plugin that we started completely open source. As I mentioned, it's an open core model, we still have a lot of the main functionality open sourced, and it has quite a bit of traction and usage. 

We have thousands of teams using the open source PR-Agent, and we built premium features on top of that. Additionally, we handle all the hosting and management. That's the core value proposition for moving from the open source to the paid version. That's one tool. We also did a research project called AlphaCodium that we launched about two months ago. It's completely open source: a research project that is an AI competitor in coding competitions. We achieved state-of-the-art performance in coding contests. This AI application does better than the majority of human competitors in coding competitions. 

Katherine Druckman: It's scary, but I'm not surprised. 

Dedy Kredo: Amazingly, we achieved state-of-the-art. The previous state-of-the-art was achieved by Google DeepMind. We were inspired by AlphaCode, so we called it AlphaCodium. It leverages a flow engineering concept that we've coined: a series of steps we run while leveraging LLMs, where we first let the LLM reflect on the problem. We create intermediate representations, and then we have two main components, a testing agent and a coding agent, that work together and run in loops to reflect and generate additional test cases for each problem to cover additional edge cases. 

That way, we were able to achieve the state-of-the-art. Everything is open. We learned from it and have now incorporated it into our products. AlphaCodium will be increasingly embedded into our core offerings. It's the only open source AI coding competitor out there that you can feed a new problem and try out. We're excited about that. We even had Andrej Karpathy tweet about it, and we got quite a bit of traction from that project. 

Katherine Druckman: That's great. I'm glad to know there is an open version. 

Ezequiel Lanza: In addition to the foundation models for language, we have open models like Llama and Falcon alongside Generative Pre-Trained Transformer (GPT) models. Are those the open source models used for code generation? 

Dedy Kredo: Many of these models can be used for code generation. We've experimented with them, and we support a few of them. There are a few open source models that work in coding. It depends on the task, and which one works better. Typically, for the open source models, you do need to fine-tune them to your specific task for them to work versus a GPT-4, for example, that out-of-the-box would work for many coding tasks. 

Katherine Druckman: This has been fantastic. I've learned a lot about Codium. I'm inspired to go play with AI stuff more because I don't do it as much as Ezequiel. 

Dedy Kredo: Thank you so much. It was fun. 

Ezequiel Lanza: Thank you so much. 

Katherine Druckman: Yes, thank you. 

Katherine Druckman: You've been listening to Open at Intel. Be sure to check out more from the Open at Intel podcast at open.intel.com/podcast and on X and LinkedIn. We hope you join us again next time to geek out about open source. 

About the Guest 

Dedy Kredo is the co-founder and chief product officer of CodiumAI, leading the product and engineering teams to empower developers to build software faster and more accurately through the use of artificial and human intelligence. 

Before founding CodiumAI, he served as vice president of Customer Facing Data Science at Explorium, where he built and led a talented data science team and played a key role in the company's growth from seed to series C. 

Previously, Kredo was the founder of an online marketing startup, growing it from a bootstrapped venture to millions in revenue. Before that, he spent seven years in Colorado and California as a product line manager at VMware's management business unit. During this time, he worked closely with Fortune 500 companies and successfully launched several new products to market. 

About the Hosts 

Katherine Druckman, an Intel open source evangelist, hosts the podcasts Open at Intel, Reality 2.0, and FLOSS Weekly. A security and privacy advocate, software engineer, and former Digital Director of Linux Journal, she's a long-time champion of open-source and open standards. She is a software engineer and content creator with over a decade of experience in engineering, content strategy, product management, user experience, and technology evangelism. 

Ezequiel Lanza is an open source AI evangelist on Intel’s Open Ecosystem team, passionate about helping people discover the exciting world of AI. He’s also a frequent AI conference presenter and creator of use cases, tutorials, and guides to help developers adopt open source AI tools. He holds an MS in data science. Find him on X at @eze_lanza and LinkedIn at /eze_lanza