2024 Open Source Contributions: A Year in Review
Tuesday, October 14, 2025
Open source is a critical part of Google, with many upstream projects and communities contributing to our infrastructure, products, and services. Within the Open Source Programs Office (OSPO), we continue to focus on investing in the sustainability of open source communities and expanding access to open source opportunities for contributors around the world. As participants in this global ecosystem, our goal with this report is to provide transparency into our work within and around open source communities.
In 2024, roughly 10% of Alphabet's full-time workforce actively contributed to open source projects. This percentage has remained roughly consistent over the last five years, indicating that our open source contribution has remained proportional to the size of Alphabet over time. Over the last five years, Google has released more than 8,000 open source projects, features, libraries, SDKs, datasets, sample code, and more. In 2024 alone, we launched more than 700 projects across a wide range of domains: from the Earth Agent Dataset Explorer to Serberus to CQL.
Most open source projects we contribute to are outside of Alphabet
In 2024, employees from Alphabet interacted with more than 19,000 public repositories on GitHub. Over the last six years, more than 78% of the non-personal GitHub repositories receiving Alphabet contributions were outside of Google-managed organizations. Our top external projects (by number of unique contributors at Alphabet) include both Google-initiated projects as well as community-led projects.
In addition to Alphabet employees supporting external projects, in 2024 Alphabet-led projects received contributions from more than 150,000 contributors outside Alphabet (unique GitHub accounts not affiliated with Alphabet).
A year of open-source AI and Gemma
As part of the focus on AI in 2024, Google's OSPO supported and actively participated in multiple community efforts, such as the OSI's Open Source AI Definition initiative. We continued to release projects under open source licenses, including AI models and projects, and will continue to be precise in drawing clear distinctions between "open source" and "open models", as shown in DeepMind's blog posts about models.
Speaking of AI models: the Gemma team collaborated with the community on every launch. For instance, they shared model weights early with partners like Hugging Face, llama.cpp, and MLX. This collaborative approach helped increase Gemma's distribution across many frameworks.
This community spirit is also reflected in projects like GAIA, where a Gemma model was fine-tuned for Portuguese in collaboration with the University of Goiás. This collaboration enabled Brazilian government institutions to start using the model, demonstrating the real-world impact of open source AI. The success of projects like Gemma and GAIA underscores a key theme from our research efforts in 2024: the creation and curation of large, high-quality datasets and open source tools to accelerate innovation and empower researchers worldwide.
Open data is key to research
The Google Research team created and curated many large, high-quality open access datasets. These serve as the foundation for developing more accurate and equitable AI models. Many of the research projects discussed in their blog from 2024 are also committed to an open science framework, with a strong emphasis on releasing open-source tools, models, and datasets to the broader research community. This collaborative approach accelerates innovation and empowers researchers worldwide.
This commitment is demonstrated through the application of AI and machine learning to tackle complex challenges in various scientific domains. From mapping the human brain in neuroscience to advancing climate science with NeuralGCM and improving healthcare with open foundation models, AI is being used for social good. To foster collaboration and accelerate this research, many projects, including AutoBNN, and NeuralGCM, are made publicly available to the research community.
A key part of making data accessible is the development of new tools and standards. The Croissant metadata format, for example, makes datasets more accessible and usable for machine learning. By focusing on the creation of high-quality datasets, the development of open-source tools, and the application of AI to scientific research, Google Research is helping to build a more open and collaborative research ecosystem.
Investing in the next generation of open source contributors
As a longstanding consumer of and contributor to open source projects, we believe it is vital to continue funding established communities as well as investing in the next generation of contributors to ensure the sustainability of open source ecosystems. In 2024, OSPO provided $2.0M in sponsorships and membership fees to more than 40 open source projects and organizations. Note that this value only represents OSPO's financial contribution; other teams across Alphabet also directly fund open source work. In addition, we continue to support our longstanding programs:
- In its 20th year, Google Summer of Code (GSoC) enabled more than 1,200 individuals to contribute to 195 organizations. Over the lifetime of this program, more than 21,000 individuals from 123 countries have contributed to more than 1,000 open source organizations across the globe.
Timothy Jordan's keynote from All Things Open highlighting GSoC's amazing 20 years in open source.
- In its sixth and final year, Google Season of Docs provided direct grants to 11 open source projects to improve open source project documentation. Each organization also created a case study to help other open source projects learn from their experience.
- In its final year, the Google Open Source Peer Bonus Program gave awards to 130 non-Alphabet contributors from the broader open source community representing 35 different countries.
Our open source work will continue to grow and evolve to support the changing needs of our communities. Thank you to our colleagues and community members who continue to dedicate personal and professional time supporting the open source ecosystem. Follow our work at opensource.google.
Rising to Meet the Security Challenge
The integrity of the open source software supply chain is essential for the entire ecosystem. In 2024, attacks on the software supply chain continued to increase. In response to this growing threat, we have been working closely with the community and with package managers, who are rising to meet the challenge.
Our efforts have been focused on making it easier for developers to secure their software and for consumers to verify the integrity of the packages they use. A significant achievement in 2024, to which Googlers contributed, was the integration of Sigstore into PyPI, the Python Package Index: a major step forward in securing the Python ecosystem. This is part of a broader movement to adopt cryptographic signing for all public packages.
Alongside these initiatives, we continue to support and contribute to efforts like SLSA (Supply chain Levels for Software Artifacts) to establish a common framework for ensuring supply chain security. We also invested in credential scanning across public package registries to help prevent accidental credential leaks, another common vector for attack.
Beyond securing individual packages, we're also focused on providing visibility into the security of running workloads. This year, we introduced new software supply chain security insights into the Google Kubernetes Engine (GKE) Security Posture dashboard. By surfacing these potential risks directly in the GKE dashboard, we empower teams to take immediate, actionable steps to strengthen their security posture from development through to production.
Securing the open source supply chain requires a collective effort, and we are committed to continuing our work with the community to build a more secure future for everyone.
Appendix: About this data
This report features metrics provided by many teams and programs across Alphabet. Regarding the code and code-adjacent activity data, we want to share more detail about how those metrics were derived.
- Data sources: These data represent the activities of Alphabet employees on public repositories hosted on GitHub and our internal production Git service Git-on-Borg. These sources represent a subset of open source activity currently tracked by Google OSPO.
- GitHub: We continue to use GitHub Archive as the primary source for GitHub data, which is available as a public dataset on BigQuery. Alphabet activity within GitHub is identified by self-registered accounts, which we estimate underreports actual activity.
- Git-on-Borg: This is a Google-managed Git service that hosts some of our larger, long-running open source projects, such as Android and Chromium. While we continue to develop on this platform, most of our open source activity has moved to GitHub to increase exposure and encourage community growth.
- Business and personal: Activity on GitHub reflects a mixture of Alphabet projects, third-party projects, experimental efforts, and personal projects. Our metrics report on all of the above unless otherwise specified.
- Alphabet contributors: Please note that unless additional detail is specified, activity counts attributed to Alphabet open source contributors include our full-time employees as well as our extended Alphabet community (temps, vendors, contractors, and interns). In 2024, full-time employees at Alphabet represented more than 95% of our open source contributors.
- GitHub Accounts: For counts of GitHub accounts not affiliated with Alphabet, we cannot assume that one account is equivalent to one person, as multiple accounts could be tied to one individual or bot account.
- Active counts: Where possible, we show 'active users', defined by logged activity (excluding 'WatchEvent') within a specified timeframe (a month, year, etc.), and 'active repositories' and 'active projects' as those that have enough activity to meet our internal active-project criteria and have not been archived.
Special thanks
This post is a testament to the collaborative spirit across Google. We thank amanda casari, Anna Eilering, Erin McKean, Shane Glass, April Knassi, and Mary Radomile from the Open Source Programs Office; Andrew Helton and Christian Howard from Google Research; Gus Martins and Omar Sanseviero from the Gemma team; and Nicky Ringland for her contributions on open source security. Our gratitude also goes out to all open source contributors and maintainers, both inside and outside of Google.
Apache Iceberg 1.10: Maturing the V3 spec, the REST API and Google contributions
Wednesday, September 24, 2025
The Apache Iceberg 1.10.0 release just dropped. I've been scrolling through the release notes and community analysis, and it's a dense, significant release. You can (and should) read the full release notes, but I want to pull out the "gradients" I see—the directions the community is pushing that signal what's next for the data lakehouse.
Next-Gen Engines Have Arrived
Let's jump straight to the headline news: next-generation engine support. Version 1.10.0 delivers deep, native optimizations for both Apache Spark and Apache Flink, ensuring Iceberg is ready for the future.
For Apache Spark users, the biggest news is full compatibility with the forthcoming Spark 4.0. The release also gets much smarter about table maintenance. The compute_partition_stats procedure now supports incremental refresh, eliminating wasteful recalculations by reusing existing stats and saving massive amounts of compute. For streaming, a critical fix for Spark Structured Streaming converts the maxRecordPerMicrobatch limit to a "soft cap," resolving a common production issue where a single large file could stall an entire data stream.
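For instance, refreshing these stats is a single procedure call from Spark SQL. A minimal sketch, with placeholder catalog and table names:

```sql
-- Refresh partition statistics for a table; as of 1.10 this reuses
-- existing stats incrementally instead of recomputing from scratch.
CALL my_catalog.system.compute_partition_stats(table => 'db.events');
```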
Apache Flink users get an equally massive upgrade with full Flink 2.0 support. This is accompanied by a new dynamic sink, which is a huge quality-of-life improvement. This feature dramatically streamlines streaming ingestion by automatically handling schema evolution from the input stream and propagating those changes directly to the Iceberg table. It even supports "fan-out" capabilities, letting it create new tables on the fly as new data types appear in the stream, removing a huge layer of operational friction.
Hardening the Core for Speed and Stability
Beyond the big engine updates, 1.10.0 is all about hardening the core for stability and speed. A key part of this is the growing adoption of Deletion Vectors. This V3 feature is now ready for prime time and radically improves the performance of row-level updates by avoiding costly read-modify-write operations.
Speaking of core logic, the compaction code for Spark and Flink has been refactored to share the same underlying logic. This is a fantastic sign of health—it means less duplicated effort, fewer divergent bugs, and a more stable core for everyone, regardless of your engine.
With deletion vectors leading the charge, the rest of the V3 spec is also moving from "on the horizon" to "ready to use." The spec itself is now officially "closed," and we're seeing its most powerful features land, like row lineage for fine-grained traceability and the new variant type for flexibly handling semi-structured data.
The REST Catalog is Ready for Prime Time
For me, the most significant strategic shift in this release is the battle-hardening of the REST Catalog. For years, the de facto standard was the clunky, monolithic Hive Metastore. The REST Catalog spec is the future — a simple, open HTTP protocol that finally decouples compute from metadata.
The 1.10.0 notes are full of REST improvements, but one is critical: a fix for the client that prevents retrying commits after 5xx server errors. This sounds boring, but it's not. When a commit call fails, it's impossible for the client to know if the mutation actually committed or not before the error. Retrying in that ambiguous state could lead to conflicting operations and potential table corruption. This fix is about making the REST standard stable enough for mission-critical production.
Google Cloud and the Open Lakehouse
This industry-wide standardization on a stable REST API is foundational to Google Cloud's BigLake strategy, and it is where our new contributions come in. We're thrilled to have contributed two key features to the 1.10.0 release.
The first is native BigQuery Metastore Catalog support. This isn't just another Hive-compatible API; it's a native implementation that allows you to use the battle-tested, serverless, and globally-replicated BigQuery metadata service as your Iceberg catalog.
The second contribution is the new Google AuthManager. This plugs directly into the REST Catalog ecosystem, allowing Iceberg to authenticate using standard Google credentials. You can now point your open source Spark job (running on GKE, Dataproc, or anywhere) directly at your BigLake-managed tables via the open REST protocol, using standard Google auth.
This is the whole philosophy behind our BigLake REST Catalog. It's our fully-managed, open-standard implementation of the Iceberg REST protocol. This means you get a single source of truth, managing all your Iceberg tables with BigQuery's governance, fine-grained security, and metadata. It also means true interoperability, letting you use BigQuery to analyze your data, or open source Spark, Flink, and Trino to access the exact same tables via the open REST API. And critically, it means no lock-in—you're just talking to an open standard.
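To make the interoperability concrete, here is a rough sketch of pointing a Spark job at a BigLake-managed Iceberg catalog over REST. The catalog name, endpoint URI, and the rest.auth.type value are illustrative assumptions; check the Iceberg 1.10 and BigLake documentation for the exact packages and property names:

```sh
# Sketch only: endpoint and auth property values are assumptions.
spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.10.0,org.apache.iceberg:iceberg-gcp-bundle:1.10.0 \
  --conf spark.sql.catalog.lakehouse=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.lakehouse.type=rest \
  --conf spark.sql.catalog.lakehouse.uri=https://biglake.googleapis.com/iceberg/v1/restcatalog \
  --conf spark.sql.catalog.lakehouse.rest.auth.type=google
```

With that in place, the same tables are readable from Spark, Flink, or Trino, because every engine talks to the same open REST endpoint.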
You can read more about our managed BigLake REST Catalog service here.
This Week in Open Source #10
Friday, September 19, 2025
A look around the world of open source
As we enter the Autumn of 2025, AI is still on the top of everyone's mind in tech and the world of open source is no different. This week we delve into various facets of AI's impact on the open source world, from its presence in the Linux kernel and the need for official policy, to the discussion around copyright in AI and how it affects open source licenses. We also explore ways companies can actively support open source, the challenges federated networks like Mastodon face with age verification laws, and the emerging concept of spec-driven development with AI as a design tool.
Upcoming Events
- September 23 - 27: Nerdearla 2025 is happening in Buenos Aires. It is a 100% free, world-class event in Latin America with high-quality content in science and technology.
- September 29 - 30: Git Merge 2025 celebrates 20 years of Git in Sunnyvale, California.
- October 2 - 3: Monktoberfest is happening in Portland, Maine. The only conference focused on how craft, technology and social come together. It's one of the most unique events in the industry.
- October 12 - 14: All Things Open 2025 is happening in Raleigh, North Carolina. The largest open source / tech / web conference on the U.S. east coast will feature many talks, including some from four different Googlers on varying topics: from creating mentorship programs, to security, to Kubernetes, to how open source already has solutions to data problems you may be trying to solve with AI.
Open Source Reads and Links
- [Article] AI is creeping into the Linux kernel - and official policy is needed ASAP - Coverage and larger discussion about genAI tooling and development on the Linux kernel, kicked off by Sasha Levin's talk at OSSNA 2025.
- [Blog] The copyright crisis in AI that open source can't ignore - Many AI companies have been ignoring copyright to create their AI models. This sets a precedent that could lead to open source licenses being ignored as well. What can open source do? Some AI companies are still taking a principled stance. What should they do?
- [Blog] 4 ways your company can support open source right now - Your company not only uses, but relies upon, open source software. Supporting open source software is an investment in your company and can give you a voice in the future of those projects. So here are four lenses to look through for how you can support the software that is supporting your company.
- [Article] Mastodon says it doesn't 'have the means' to comply with age verification laws - Age verification laws are starting to affect social media use in different locales. A single entity that runs a social network can build one solution that fits the company's values and, hopefully, its users. But what about federated networks of varying sizes? How are they supposed to address these laws? This highlights both the challenges federated networks face and their importance in keeping the internet open.
- [Blog] Spec-driven development with AI: Get started with a new open source toolkit - We're moving from "code is the source of truth" to "intent is the source of truth." With AI the specification becomes the source of truth and determines what gets built. Is this a new best practice for using AI as a design tool?
What exciting open source events and news are you hearing about? Let us know on our @GoogleOSS X account.
Introducing Kotlin FHIR: A new library to bring FHIR to Multiplatform
Tuesday, September 16, 2025
Build once, deploy everywhere: multiplatform FHIR development on Android, iOS, and Web
The mission of Google's Open Health Stack team is to accelerate digital health innovation by providing developers everywhere with critical building blocks for next-generation healthcare applications. Expanding its existing components, the team has released Kotlin FHIR (currently in alpha), a new open-source library now available on GitHub. It implements the HL7® FHIR® data model on Kotlin Multiplatform (KMP), enabling developers to build FHIR apps and tools for Android, iOS, and Web simultaneously.
Tools to support health data exchange using a modern standard
HL7® FHIR® (Fast Healthcare Interoperability Resources) is a global interoperability standard for healthcare data exchange. It enables healthcare systems to exchange data freely and securely, improving efficiency and transparency while reducing integration costs. Over the years, it has seen rapidly growing adoption, and its use has been mandated by health regulations in an increasing number of countries.
Since March 2023, the Open Health Stack team at Google has introduced a number of tools to support FHIR development. For example, the Android FHIR SDK helps developers build offline-capable, FHIR-native apps that can help community health workers carry out data collection tasks in remote communities. With FHIR Data Pipes, developers can more easily build analytics solutions that generate critical insights for large healthcare programmes. Today, apps powered by these tools are used by health workers covering over 75 million people across Sub-Saharan Africa, South and Southeast Asia.
A leap forward to multiplatform development
In low-resource settings, it is imperative to develop apps that can reach as many patients as possible at a low development cost. However, a lack of infrastructure and tooling often hinders this goal. For example, Kotlin Multiplatform (KMP) is a new and exciting technology rapidly gaining traction, but existing FHIR libraries are not suitable for KMP development due to their platform-specific dependencies. Consequently, developing FHIR apps on KMP has not been possible, causing developers to miss out on a significant opportunity to scale their solutions.
Introducing Kotlin FHIR. It is a modern and lightweight implementation of FHIR data models designed for use on KMP with no platform-specific dependencies. With Kotlin FHIR, developers can build FHIR apps once, and deploy them to Android, iOS, Web, and other platforms.
"Any library that helps implementers use FHIR is my favourite, but I'm particularly thrilled to see a new library from the awesome Open Health Stack team.
– Grahame Grieve, Creator of FHIR, Product Director at HL7
Modern, lightweight, and sustainable
Kotlin FHIR uses KotlinPoet to generate the FHIR data model directly from the specification. This ensures that the library is complete and maintainable. The data model classes it generates are minimalist to provide the best usability for developers: everything you need, nothing less and nothing more. It uses modern Kotlin language features such as sealed interfaces to ensure type safety and give developers the best coding experience. It supports all current FHIR versions (R4, R4B, and R5) and will be updated when new FHIR versions are released.
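To give a feel for the developer experience, here is a minimal sketch of shared KMP code. The package path and constructor shapes below are illustrative assumptions rather than the library's confirmed API; see the repository for the actual surface:

```kotlin
// Illustrative sketch only: package names and field shapes are
// assumptions, not the library's confirmed API.
import com.google.fhir.model.r4.HumanName
import com.google.fhir.model.r4.Patient

// Shared code: the same function compiles for Android, iOS, and Web targets.
fun newPatient(family: String, given: String): Patient =
    Patient(
        name = listOf(
            HumanName(family = family, given = listOf(given))
        )
    )
```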
The library is currently in alpha, but has received positive feedback from the developer community. To try FHIR multiplatform development using the library, head to the repository.
Beyond data models: more on multiplatform
Our mission is to empower the digital health community, and the Kotlin FHIR library is our latest step in that effort. But handling the FHIR data model on KMP is just the beginning. Rich features provided by the Android FHIR SDK libraries will also be needed on KMP. This is a collaborative effort, and we invite the FHIR community to join us in defining and building the cross-platform tools you need most. To learn more about how you can get involved, head to the Open Health Stack developer site.
Kubernetes 1.34 is available on GKE!
Wednesday, September 10, 2025
Kubernetes 1.34 is now available in the Google Kubernetes Engine (GKE) Rapid Channel, just five days after the OSS release! For more information about the content of Kubernetes 1.34, read the official Kubernetes 1.34 Release Notes and the specific GKE 1.34 Release Notes.
Kubernetes Enhancements in 1.34:
The Kubernetes 1.34 release, themed 'Of Wind & Will', symbolizing the winds that shaped the platform, delivers a fresh gust of enhancements. These updates, shaped by both ambitious goals and the steady effort of contributors, continue to propel Kubernetes forward.
Below are some of the Kubernetes 1.34 features that you can use today in production GKE clusters.
DRA Goes GA
The Kubernetes Dynamic Resource Allocation (DRA) APIs are now GA. This is a huge step in Kubernetes' evolution to remain the undisputed platform for AI/ML workloads. DRA improves Kubernetes' ability to select, configure, allocate, and share GPUs, TPUs, NICs, and other specialized hardware. For more information about using DRA in GKE, see About dynamic resource allocation in GKE. You can use DRA now with self-installed drivers, and you can expect more improvements in upcoming releases.
The Prioritized list and Admin access features have been promoted to beta and will be enabled by default, and the kubelet API has been updated to report status on resources allocated through DRA.
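To give a sense of the shape of these APIs, here is a rough sketch of a claim and a Pod that consumes it. The device class name is a placeholder provided by whichever DRA driver you install, and exact field names may vary with the API version you run:

```yaml
# Sketch: a claim for one device, and a Pod that consumes it.
# "gpu.example.com" is a placeholder device class from a DRA driver.
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      exactly:
        deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: inference-pod
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: single-gpu
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
    resources:
      claims:
      - name: gpu   # the container consumes the claimed device
```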
KYAML
We've all been there: a stray space or an unquoted string in a YAML file leads to frustrating debugging sessions. The infamous "Norway Bug" is a classic example of how YAML's flexibility can sometimes be a double-edged sword. 1.34 introduces support for KYAML, a safer and less ambiguous subset of YAML designed specifically for Kubernetes, which helps avoid these common pitfalls.
KYAML is fully compatible with existing YAML parsers but enforces stricter rules, making your configurations more predictable and less prone to whitespace errors. This is a game-changer for anyone using templating tools like Helm, where managing indentation can be a headache.
To start using KYAML, simply update your client to 1.34+ and set the environment variable KUBECTL_KYAML=true to enable use of -o kyaml. For more details, check out KEP-5295.
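As an illustrative sketch of the style (not normative kubectl output), here is a ConfigMap rendered in KYAML, with explicit braces, mandatory double quotes, and trailing commas:

```yaml
{
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: {
    name: "country-codes",
  },
  data: {
    # In plain YAML, an unquoted NO parses as boolean false (the
    # "Norway Bug"); KYAML's mandatory quoting prevents that.
    country: "NO",
  },
}
```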
Pod-level resource requests and limits
With the promotion of Pod-level resource requests and limits to beta (and on-by-default), you can now define resource requests and limits at the pod level instead of the container level. This simplifies resource allocation, especially for multi-container Pods, by allowing you to set a total resource budget that all containers within the Pod share. When both pod-level and container-level resources are defined, the pod-level settings take precedence, giving you a clear and straightforward way to manage your Pod's resource consumption.
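A minimal manifest sketch (images and amounts are illustrative): with the beta feature enabled, the Pod-level resources block below sets one shared budget for both containers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-resources-demo
spec:
  # One shared budget for every container in the Pod; when both
  # levels are set, these Pod-level values take precedence.
  resources:
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
  - name: sidecar
    image: registry.k8s.io/pause:3.10
```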
Improved Traffic Distribution for Services
The existing PreferClose setting for traffic distribution in Services has been a source of ambiguity. To provide clearer and more precise control over how traffic is routed, KEP-3015 deprecates PreferClose and introduces two new, more explicit values:
- PreferSameZone is equivalent to the existing PreferClose.
- PreferSameNode prioritizes sending traffic to endpoints on the same node as the client. This is particularly useful for scenarios like node-local DNS caches, where you want to minimize latency by keeping traffic on the same node whenever possible.
This feature is now beta in 1.34, with its feature gate enabled by default.
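A minimal Service sketch using the new value might look like this (selector and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-local-dns
spec:
  selector:
    app: dns-cache
  ports:
  - protocol: UDP
    port: 53
  # Prefer endpoints on the same node as the client, falling back
  # to other endpoints when no local one is available.
  trafficDistribution: PreferSameNode
```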
Ordered Namespace Deletion for Enhanced Security
When a namespace is deleted, the order in which its resources are terminated has, until now, been unpredictable. This can lead to security flaws, such as a NetworkPolicy being removed before the Pods it was protecting, leaving them temporarily exposed. With this enhancement, Kubernetes introduces a structured deletion process for namespaces, ensuring secure and predictable resource removal by enforcing a deletion order that respects dependencies, removing Pods before other resources.
This feature was introduced in Kubernetes v1.33 and became stable in v1.34.
Graceful Shutdowns Made Easy
Ensuring a graceful shutdown for your applications is crucial for zero-downtime deployments. Kubernetes v1.29 introduced a "Sleep" for containers' PreStop and PostStart lifecycle hooks, offering a simple approach to managing graceful shutdowns. This feature allows a container to wait for the specified duration before it's terminated, giving it time to finish in-flight requests and ensuring a clean handoff during rolling updates.
Note: Specifying a negative or zero sleep duration will result in an immediate return, effectively acting as a no-op (added in v1.32).
This feature graduated to stable in v1.34.
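In a manifest, the hook is a single field; the durations below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: web
    image: nginx:1.27
    lifecycle:
      preStop:
        sleep:
          seconds: 5   # pause before SIGTERM so in-flight requests drain
```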
Streaming List Responses
Large Kubernetes clusters can push the API server to its limits when dealing with large LIST responses that can consume gigabytes of memory. Streaming list responses address this by changing how the API server handles these requests.
Instead of buffering the entire list in memory, it streams the response object by object, improving performance and substantially reducing memory pressure on the API server. This feature is now GA and is automatically enabled for JSON and Protobuf responses with no client-side changes.
Resilient Watch Cache Initialization
The watch caching layer in the Kubernetes apiserver maintains an eventually consistent cache of cluster state. However, if it needs to be re-initialized, it can potentially lead to a thundering herd of requests that can overload the entire control plane. The Resilient Watch Cache Initialization feature, now stable, ensures clients and controllers can reliably establish watches.
Previously, when the watch cache was initializing, incoming watch and list requests would hang, consuming resources and potentially starving the API server. With this enhancement, such requests are now intelligently handled: watches and most list requests are rejected with a 429, signaling clients to back off, while simpler get requests are delegated directly to etcd.
In-Place Pod Resize Gets Even Better
In-place pod resize, which allows you to change a Pod's resource allocation without a disruptive restart, remains in beta but continues to improve in v1.34. You can now decrease memory limits with best-effort protection against triggering the OOM killer. Additionally, resizes are now prioritized, and retrying deferred resizes is more responsive to resources being released. A ResizeCompleted event provides a clear signal when a resize completes, and includes a summary of the new resource requirements.
MutatingAdmissionPolicy Gets to Beta
MutatingAdmissionPolicy, a declarative, in-process alternative to mutating admission webhooks, graduates to beta in Kubernetes 1.34.
Mutating admission policies use the Common Expression Language (CEL) to declare mutations to resources. Mutations can be defined either with an apply configuration that is merged using the server side apply merge strategy, or a JSON patch. This feature is highly configurable, enabling policy authors to define policies that can be parameterized and scoped to resources as needed by cluster administrators.
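As a sketch of the policy shape (the match rules and label are illustrative), the following uses a CEL apply configuration to add a label to every newly created Pod:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicy
metadata:
  name: add-environment-label
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Fail
  reinvocationPolicy: IfNeeded
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      # CEL that produces an apply configuration, merged with the
      # incoming object using server-side-apply semantics.
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"environment": "staging"}
          }
        }
```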
Acknowledgements
As always, we want to thank all the Googlers that provide their time, passion, talent and leadership to keep making Kubernetes the best container orchestration platform. We would like to mention especially Googlers who helped drive some of the open source features mentioned in this blog: Tim Allclair, Natasha Sarkar, Jordan Liggitt, Marek Siarkowicz, Wojciech Tyczyński, Tim Hockin, Benjamin Elder, Antonio Ojea, Gaurav Ghildiyal, Rob Scott, John Belamaric, Morten Torkildsen, Yu Liao, Cici Huang, Joe Betz, and Dixita (Dixi) Narang.
And thanks to the many Googlers who helped bring 1.34 to GKE!