What is the AI Alliance?

Open at Intel host Katherine Druckman spoke with Ezequiel Lanza, her fellow Intel open source evangelist, and Dave Nielsen from IBM, about the AI Alliance, which was formed in 2023 by IBM and Meta as a community effort to promote openness, transparency, and security in AI development. They discussed what openness means in AI, the role of open source frameworks in AI development, and the potential benefits of openness in fostering innovation and addressing concerns about access. The discussion included a look at future events to be organized by the AI Alliance and how companies, community groups, and individuals can get involved. 

“Now everyone sees that AI is potentially going to have a transformative effect on all our lives. Everyone has a vested interest in this, and we believe that it’s important for it to be open and transparent, so that AI can be safe, secure, and trustworthy. Being open is really a great way to do that. So, that's what the Alliance is about, making sure AI is open for everyone.”—Dave Nielsen 

 

Katherine Druckman: Today we have a really exciting show and I will tell you why. One, it's because I am joined by two people. One of them is one of my wonderful fellow Intel open source evangelists, Ezequiel Lanza. You have heard him before, and I'm really excited to have him back helping me out here with the co-hosting duties. Thank you, Eze.

Ezequiel Lanza: Thank you for having me. I'm pretty excited for this talk.

Katherine Druckman: Yeah, this is a good one, because we also have Dave Nielsen from IBM, who collaborates with us in a group called the AI Alliance, and I will let Dave get into that. So, Eze and I participate with Dave in the AI Alliance on behalf of Intel. Dave, could you tell us a little bit about what the AI Alliance is and why it exists? 

What is the AI Alliance?

Dave Nielsen: Thank you for having me, it's a real pleasure. I've only been in the AI Alliance for a month, and I love getting out here and mixing it up with the community. The AI Alliance was formed last year in December 2023, basically by IBM and Meta. At the launch date, they had 55 member organizations, including companies like LangChain, LlamaIndex, Hugging Face, Anyscale, Together.ai, Lightning.ai, Zilliz, and of course, Intel. We also have communities that are members, such as MLCommons, and universities like UC Berkeley and some Ivy League schools, a whole bunch of schools, really. It's really growing; we're now at over 106 organizations. Why are we doing this? The reason is that AI is too important to be controlled by just a few organizations.

Historically, it has been fairly distributed, with a lot of different research organizations, governments, and companies all doing their own thing. What's different now is that with generative AI, there's a big focus on all these models, and the impact they’re having is like a step function, or a moment in time where things are changing fast. Now everyone sees that AI is potentially going to have a transformative effect on all our lives. Everyone has a vested interest in this, and we believe that it’s important for it to be open and transparent, so that AI can be safe, secure, and trustworthy. Being open is really a great way to do that. So that's what the Alliance is about, making sure AI is open for everyone. 

Katherine Druckman: I love it. It's obviously important to Intel and IBM. Could you tell us a little bit about who you are and how you got into this work with the AI Alliance? 

Dave Nielsen: Yeah, yeah. I've been in developer relations since 2003, when I joined PayPal in its early days. I've been doing this for a while, and whenever there's a big new technology that seems to have a transformative effect, I try to get involved. That's how I got my job at PayPal. It was when APIs were new, and I started the first user group in the world for APIs. Then when cloud computing came out, I worked with some of the same group of people to create the first conference. It was a cloud computing unconference called CloudCamp, and that was a big thing. 

Since then, I've been involved in big data as it evolved into AI, and my journey took me from the cloud to NoSQL databases. I've been involved with databases for many years, which were always tangential to AI. At Redis, we had an AI module, and recently, at MongoDB, there was a vector database. I've been involved, but this was just too exciting an opportunity to pass up. So, when I saw the opportunity for a community role in the AI Alliance, I jumped on it, and I'm very happy and thrilled that it worked out.

Katherine Druckman: Yeah, I am too. I'm very biased toward openness and community-based anything really, development and collaboration. I'm pretty excited about this. Here’s a question for both of you. The conversation around what openness means in AI is interesting and ongoing and complex, but I wondered if y'all could talk a little bit about what openness means in the AI space and why it's so important. 

AI for Good and Community Involvement 

Ezequiel Lanza: It has to be defined because there are different mindsets and opinions about what openness in AI means. In the past, for instance, let's say we have an application on GitHub. I can use the code there according to the license that’s also there, and that is considered open. In AI, it's a bit different because you have something that is behaving in some way, making predictions and generating content such as text and images. So, what is the concept of openness when we talk about a model? For AI, this comparison opens up conversations like, for example, does the data have to be open?

That raises questions: do I need to know the data used by Llama, ChatGPT, or other companies to train their models? Is the data open or not? Apart from conversations about data, if you have access to a model, of course, you would like to use it. For a model to be considered open, you need to be able to find out how to access it, to understand its architecture, and how to write the code to run inference, for example. But for me, it’s not just having access to download the model, it's a broader conversation. So, I have access to the model, but I also need to know how it behaves, how it was trained, what code they used for training, and so on. That opens up a lot of conversation.

From the openness perspective, my opinion is that we need some levels of information. The most important things you need are to see the model, use it, download it, have access to the weights, and test it. If we have that, we can start talking about the data they used for training and the multiple layers related to that. Conversations can be black and white or more like peeling back the layers of an onion in looking at the multiple aspects of a model’s openness. I think that's something that is very important for the community to enable innovation. That's my take. 

Dave Nielsen: I definitely have similar thoughts. I can add my own perspective. When cloud computing first came out, I was in the Cloud Computing Google Group, which I think was the first online group for cloud computing to gain traction. It was the early, early days when we were forming CloudCamp, and I remember people thinking that for cloud computing to be open, it wasn't just about open source, because it was a service. For a cloud to be open, you had to be able to take your code, deploy it on one cloud, and scale it across many clouds. And that was open. Some people thought that you should be able to take your code and easily move it to another cloud. And that was open. Then, of course, there was Eucalyptus, then CloudStack, then eventually OpenStack, and then Kubernetes came along. By then, they were open source, and that was open.

There were a lot of different ideas of what open was. And I see the same thing now. There are a lot of different ideas about what open is, and I think eventually we'll gravitate towards one or two. But what I think is unique about AI is that it's not just about being able to access or use multiple AI clouds; it's what's in the cloud that’s unique. It's software-as-a-service in some cases, which you can't move because it's their software-as-a-service. And a lot of companies can't use that. A company like the ones Eze was talking about needs to be able to make sure it meets their criteria. They need to be able to view it, test it, and run it themselves. Maybe they need to meet a data sovereignty requirement to run it in their own environment or country. There are a lot of requirements that mean companies need to be able to touch it, feel it, and run it themselves.

Katherine Druckman: My take on all of this is that with source code, it's cut and dried. You have a license, it's either open or it's not. But when you get into AI, we're breaking new ground. We're all pioneers, and the definitions have to be hashed out. And I think that's the interesting part of this conversation.

Dave Nielsen: Yeah. I agree, and where I was going with this is, in the NoSQL world, we got into a gray area where some people felt like there should be something protecting the innovators. Well, now what's protecting the innovators is the data. If you have the data, you're protected. It still costs you a lot of money, of course, but if nobody else has that data, it's hard for them to reproduce what you've done. So, there are extra inputs into the software that need to be available if it's truly open, like Eze was talking about. But that's all still being worked out. That's why some people, as I interpret it, instead of saying open source AI, are just saying it's open AI or AI open, something like that. They're not saying those two words, open source, together, because that does mean something very specific. But this is all still being fleshed out and it's complicated. The conversations can happen at the AI Alliance, where we can talk openly and work out these issues. There are companies all over the planet who have so many different requirements and opinions on this, and it needs to be discussed and worked through. That's why I think the AI Alliance is important: to provide a place for that discussion to occur.

Katherine Druckman: This conversation about what openness means in AI lays the groundwork for thinking about why openness is so important for the idea of AI for good. Why does keeping things open and ensuring interoperability and an open ecosystem help fend off some of the concerns we see about negative unforeseen consequences in AI development? Why is openness so important in this conversation?

Ezequiel Lanza: I think what is very important is if developers in the community are relying on your model and you make that model open, you are enabling the community to say, "Hey, this model is biased. This is how you can mitigate that. This is how you can improve your model. This is how you can make it more predictable or whatever." In terms of how the community can help, there is no way that you can do it if you're not working with the community. This is why openness is so good. It is a similar message to what we use when we talk about open software. You need to have the community to help find bugs or to do whatever, but you don't want to centralize everything in one company or one group of people. 

For instance, one of the things that I saw in a paper, Responding to the U.S. NTIA request for comment on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights, published by the AI Alliance a few months ago, was that if you look at all the development that’s done in AI, you’ll see that it's centralized in three or four countries. You don't have, for instance, a university from South Africa or a university from South America. This means that for a lot of countries, the models are probably not open, and access to the hardware is probably more expensive in different places, and so on. But I think that from the start, making the model more open will enable more people to contribute and find bugs. I also think that it's important because it's a powerful tool.

We are saying that AI will change the world in the next 5 or 10 years, and we've seen a lot of movies about AI and robots. We have this fear like, "Hey, we need to control that." And the only way to control it is if we are all part of it, including non-developers, developers, researchers, and anyone else who can be involved in the development phase of a model. In the AI Alliance paper I mentioned earlier, the author talked about the research and how making a model open can benefit the economy. For example, you can download an open source LLM and build your own startup. It also helps in the development of existing companies. 

Katherine Druckman: Yeah, I do love that association of openness with innovation. 

Dave Nielsen: What originally got us on this podcast was talking about the demo day that the AI Alliance sponsored with Cerebral Valley in San Francisco on August 8th. The whole point of that event was to celebrate how important it is for individuals, entrepreneurs, and companies to have access to open models and open source so they can create AI applications and innovate, and not just leave this to the big companies. Not that there's anything wrong with big companies, but they often don't have the nimbleness and the individual unique ideas that we need so much to build out our ecosystem. That's what we celebrated—open source frameworks using open models to develop AI applications. 

Katherine Druckman: That's very cool. I love the idea of a scrappy, community-focused effort to come together and really pursue the goal that is AI for good, because how else do you solve these complicated problems? The definition of openness aside, as you say, AI development is moving so quickly that it's mind-blowing. It's very difficult to keep up with. Spinning up these events is really how the community grows. And I think that's very cool. 

Dave Nielsen: These open source models inspire people. They feel like they can do so much more because they can use the open source themselves. They're not limited by whatever somebody else's company is going to build, and they have the freedom to create whatever they want. So that's pretty exciting. 

Katherine Druckman: Yeah. And just imagine all the things that have happened in the last year. It seems like just yesterday and yet so many things have happened. 

Upcoming Events and How to Get Involved 

Katherine Druckman: I wanted to ask you if there is any way for listeners to follow up on the innovations that came out of the recent demo day or get plugged into other AI Alliance events. 

Dave Nielsen: Yeah, there are two ways you can do that. Number one, go to our website, thealliance.ai, and click the Contribute link. Scroll to the bottom of that page where there's a “Keep in touch” form and fill that out. Then, we'll send you a regular newsletter that shares what's going on with the Alliance. The other way is to follow Cerebral Valley AI on X at the @CerebralValley handle. It's very popular, and they do a great job of sharing and even live-streaming the demos. And if there’s not a live stream, they record them. These demos get hundreds of thousands of views, even millions sometimes. So I would definitely follow them on X so you can tune in for an event, or go to the AI Alliance Contribute page and sign up there.

Katherine Druckman: And forgive me, I feel like I should know the answer to this question, but will there be more of these demo days or hackathons? 

Dave Nielsen: We were considering doing a hackathon for the recent demo day event. I'm glad you brought that up because we are going to do a hackathon to follow up on demo day. The reason we chose to do a demo day this time is that we wanted to highlight projects that were a little bit more polished, and had time to ripen on the vine, so to speak, so that when we're showing off why open source is important, we're showing things that people might be able to use today. Whereas like a hackathon, yes, some of those projects you could use today, but usually not. They're pretty rough. It's just the idea and a demo with some code, and it may not be ready for prime time, but we plan on doing one of those, probably in the next couple of months. 

Katherine Druckman: That's fantastic. 

Katherine Druckman: That's great. I'm excited about this. I love, again, I keep saying it, but I'm such a nerd for open communities. I love it when people come together to show off what they're working on. And especially when you're breaking new ground, it's so important because nobody has all the answers. We like to think we do, but nobody has all the answers. And that's why these exchanges of demos and ideas are how the next big thing comes out, and it's so exciting.

Dave Nielsen: I want to follow up on what Eze was saying earlier about different countries or people and their access to AI and its source code. One of the things that we talk about in our AI Alliance meetings is taking a model that already exists. It could be Llama 3 or Llama 4, or it could be IBM Granite Code. Granite is our model. You take an existing model that's speaking in English, let's say, and it's using mostly English language from the US, but maybe somebody wants to use generative AI for their language, their country, or their culture, and they don't have the resources to build a model from scratch. But using open models, they can then add their own data to it through retrieval-augmented generation (RAG) with a vector database.

They could take their data and fine-tune an existing model or even just start from scratch; create their own. But if they don't have the ability to do that, they can use an existing open model with a vector database or fine-tune it themselves. They can make that model more appropriate for their country, their language, their culture, and it's going to be more meaningful to them. And then, honestly, we're so spoiled here in the US to have so much, but that doesn't mean that a smaller country with fewer people speaking the language doesn't have something equally important to say or something new and novel that most likely we've never heard before. 
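The RAG workflow Dave describes, grounding an existing open model in your own local data instead of retraining it, can be sketched in a few lines. This is a hypothetical toy illustration, not AI Alliance or IBM code: it fakes embeddings with bag-of-words term counts (a real pipeline would use a sentence-embedding model and a vector database), retrieves the most similar local document, and prepends it as context to the prompt that would be sent to the model.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term frequencies.
    # A real RAG pipeline would use a sentence-embedding model
    # and store the vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank local documents by similarity to the query; keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend retrieved context so an open model can ground its answer
    # in data it was never trained on.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical local documents standing in for a country- or
# culture-specific corpus the base model has never seen.
local_docs = [
    "The harmattan is a dry, dusty wind that blows over West Africa in winter.",
    "Quinoa is a grain crop cultivated mainly in the Andes.",
]
print(build_prompt("What is the harmattan wind?", local_docs))
```

The design point mirrors Dave's: the base model stays untouched, and all the local knowledge lives in the retrieval layer, which is far cheaper than fine-tuning or training from scratch.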

Katherine Druckman: Yeah, that's very cool. That's a great point. I love the idea of this type of work making technology more accessible to other people to meet their needs. I do see a lot of that happening with some community efforts, which maybe we'll talk about in another episode. Being able to take something that exists, where the heavy lifting has been done for you, makes it possible for you to add on top of it and tweak it so that it becomes something incredibly useful. You benefit from all of this, the great minds that came before you. 

Katherine Druckman: We've covered the demo day and are looking forward to a future hackathon. How can individuals, aside from signing up for the newsletter and visiting @CerebralValley on X, get involved and plugged into the AI Alliance community? 

Dave Nielsen: So we just had our second community call this morning. We're going to be doing them every other week, at 6:00 a.m. and 4:00 p.m. Pacific time every other Tuesday. If you go to the AI Alliance Contribute page and fill out the form there, we'll invite you to join those calls. You can also join one of our working groups, which include Advocacy, Hardware Enablement, Skills & Education, Trust & Safety, Foundation Models, and Applications and Tools. Trust & Safety was one of our first to actually go viral, so to speak.

If you search for AI trust and safety, our guide already shows up on the first page of Google, which is awesome. Our Foundation Models working group is also very popular. If you want to build a model or use tools to build models, that working group is where you'd go. If you want to use tools involving AI and data, not the models themselves, but all the other enabling technologies, or if you want to build applications to use AI, you can participate in the Applications and Tools working group. We'll let you know how to join those groups if you fill out the form. You can also join as a member. If you want your organization to become one of the 106 and counting members, you can go to the site and apply to become a member of the AI Alliance. 

We’ll have a conversation with you to make sure there's a fit, because it's not just about signing up, it's about participating. So you have to share with us what you plan to do to participate. And then finally, the other thing I'll mention is that we're just getting started. We're probably going to do events in major cities around the world over the next year. An event might just be a user group meeting or it could be an unconference. If you live in a city that hosts one of the big AI conferences, we might do something there right before or after the conference. And some of the universities are holding their own events.  

We want to enable you by providing the resources you need to run your own event and get involved in the discussion. So you could either fill out the form mentioned earlier and say, "Look, I'd like to run an event in my town." Or, find one of the meetup groups we're involved with right now, and you can join a future event with one of them. For example, The Unstructured Data Meetup group in San Francisco is having an event on September 9th, the night before The AI Conference 2024, which we're sponsoring and participating in. I should also shout out that we get involved in other conferences where we're not a sponsor. So if you have an open source conference, especially if it covers AI and you want us to be involved, reach out to us and we'll try to rally our troops to be involved in your event.

Katherine Druckman: Cool. We're all on this really exciting journey. It's uncharted territory and it's an adventure. It's complicated. There are a lot of unknowns, but there's also a lot of innovation and a lot of work and a lot of things happening in a lot of different groups and organizations. And I'm excited to be a part of this, honestly. And I think we're all in really good company. 

If y'all have any parting thoughts, please feel free to add them. 

Dave Nielsen: Well, just to broaden out the invite, in addition to the working groups, we have our community group, which I talked about, but we also have a marketing group and a membership group. So that's another way you can be involved. And then personally, I have a real soft spot for user events where the attendees get to contribute and participate and not just listen. I will most likely be running some unconferences around open source and AI, so no matter where you are in the world, if you're interested in events like that, let me know and I'd be happy to work with you to put something on. 

Katherine Druckman: Very cool. We'll be sure to drop links in the description for people to find all of these things. I love a good unconference and I look forward to following up there. Thank you both so much for jumping in and sharing all this work, and again, exciting things are happening. I'm excited to follow it, and I'm excited for our listeners to follow it. So thank you both.

Dave Nielsen: Yeah, you bet. Thank you for having me. 

Ezequiel Lanza: Thank you so much. 

Resources 

The AI Alliance home page 

Contribute to the AI Alliance—Learn how to contribute and get news 

Responding to the U.S. NTIA request for comment on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights—AI Alliance 

Cerebral Valley AI on X 

The Open at Intel Podcast—Listen and subscribe on your favorite podcast player 

Follow Intel Tech on Medium for AI articles by Ezequiel Lanza 

Open.intel—For open ecosystem news, content, projects, and events

About the Guests 

Dave Nielsen 

Dave Nielsen represents IBM as the head of community at the AI Alliance, which brings together compute, data, tools, and talent to accelerate and advocate for open innovation in AI. Prior to IBM, Dave led community programs at companies like MongoDB, Harness, Redis, and PayPal. Dave is known for creating community events, such as CloudCamp, and for writing the book PayPal Hacks.

Ezequiel Lanza

Passionate about helping people discover the exciting world of artificial intelligence, Ezequiel Lanza is a frequent AI conference presenter and the creator of use cases, tutorials, and guides that help developers adopt open source AI tools. 

About the Host

Katherine Druckman, an Intel open source security evangelist, hosts the podcasts Open at Intel, Reality 2.0, and FLOSS Weekly. A security and privacy advocate, software engineer, and former digital director of Linux Journal, she's a long-time champion of open source and open standards. She is a content creator with over a decade of experience in engineering, content strategy, product management, user experience, and technology evangelism.