Bazel supports building Android apps with Java and C++ code out of the box through the
android_binary rule and related rules. Android binary builds need a lot of machinery--more than we can cover in a
blog post. However, one aspect that’s fairly important to Bazel’s Android support is scalability.
That’s because we build most of Google’s own Android apps with Bazel, and those apps are not only
comparatively large but also come with hundreds of engineers who want to build and test their changes.
For over a year now, Bazel has used a feature we call incremental dexing to speed up Android builds. As the name implies, incremental dexing is designed to minimize the work needed to rebuild an app after code changes, but it also parallelizes builds and lets them scale better to the needs of Google’s own apps. But how does it work and what is "dexing" anyway?
Dexing is what we call the build step that converts Java bytecode to Android's .dex file format. Traditionally, that’s been done for an entire app at once by a tool fittingly called "dx". Even if only a single class changed, dx would reprocess the entire app, which could take a while. But dx really has two jobs: compile bytecode to corresponding .dex code, and merge all the classes that are going into the app into as few .dex files as possible. The latter is needed because while Java bytecode uses a separate file per class, a single .dex file can contain thousands of classes. But because of the differences in instruction encoding, the compilation step is the more time-consuming one, while merging can be done separately and relatively quickly even for large apps.
Incremental dexing, as you might have guessed, separates bytecode compilation and dex merging. Specifically, it runs the compilation step separately in parallel for each .jar file that’s part of the app’s runtime classpath. To arrive at a final app, Bazel then merges the compilation results from each .jar.
How does that help? In a number of ways: dexing each .jar separately lets the work run in parallel (especially when used with the --experimental_spawn_scheduler flag), and it simplifies caching of past dexing results (for example, when only one class in a large .jar changes).
What we mean by scalability is that the total number of classes in the app matters much less for how long it takes to build the app. This is especially important when rebuilding the app after small changes: with incremental dexing, the time spent on dexing is proportional to the size of the change. Previously, dexing time was always proportional to the number of classes in the app, no matter the change. This scalability has been critical in keeping up with our ever-growing apps.
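As a sketch, incremental dexing can be toggled per target with the incremental_dexing attribute on android_binary (the target below is hypothetical):

```python
# Hypothetical target; incremental_dexing is the per-target knob
# (1 = force on; 0 = force off, e.g. for release builds where final
# .dex size matters more than build speed).
android_binary(
    name = "app",
    manifest = "AndroidManifest.xml",
    incremental_dexing = 1,
    deps = [":applib"],
)
```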
One prerequisite for taking full advantage of incremental dexing is to split up the app into
multiple, ideally small, .jars. Bazel naturally encourages and enables this with the
java_library and android_library rules, which build .jar files from a set of Java sources, conventionally for a single Java package
at a time. Third-party libraries are also often distributed as .jar or .aar files that Bazel
ingests with import rules such as java_import and aar_import.
Tooling-wise, it was reasonably straightforward to separate the merging step, because Android provides a tool called dexmerger for just this purpose, and compiling separately more or less just means running dx on one class at a time. One wrinkle for now is that dexmerger creates final .dex files that are larger than necessary. That means you want to turn off incremental dexing when you’re building a binary that you want to give to users, but it can speed up your development and test builds every day with no known adverse effects. Plus, we expect this to get better, since Android Studio has started to use a similar scheme to build Android apps in Gradle.
The Bazel team is happy to announce the release of version 0.11.0.
Incremental dexing can now be controlled per target with the attribute incremental_dexing = 1.
aar_import now handles the assets/ subdirectory of the AAR being imported.
Depend on @bazel_tools//tools/runfiles:java-runfiles to get a platform-independent runfiles library for Java. See the JavaDoc of Runfiles.java for usage information.
Here are some updates on what happened in the Bazel community over the past month.
Did we miss anything? Fill out the form to suggest content for a future blog post.
By Jingwen Chen
BUILD files in directories specify targets that can be built
from the contents of those directories.
Bazel goes through three steps when building targets: loading, analysis, and execution.
During loading, Bazel reads the BUILD file of the target being built and all
BUILD files that file transitively depends on.
Each Bazel target is defined by a rule, which specifies inputs, outputs, and
how to get from one to the other. Rules can specify things like creating an
executable binary or defining a library. In Bazel’s code, individual rules are
represented by instances of their implementation classes.
Users can also extend Bazel and create new rules with Starlark.
Rules create, in turn, any number of actions. Each action takes any number of artifacts as inputs and produces one or more artifacts as outputs. These artifacts represent files that may not yet be available. They can either be source artifacts, such as source code checked in to the repository, or generated artifacts, such as output of other actions. For example, an action to compile a piece of code might take in source artifacts representing the code to be compiled and generated artifacts representing compiled dependencies, even though those dependencies have not yet been compiled, and output a generated artifact representing the compiled result. Additionally, rules may expose any number of provider objects. These providers are the API rules provide to other rules. They provide read-only information about internal state.
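To make actions, artifacts, and providers concrete, here is a minimal hypothetical Starlark rule (all names are made up; this is a sketch of the concepts, not Bazel's internal Java representation):

```python
def _concat_impl(ctx):
    # Declare a generated artifact: a file that does not exist yet.
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    # Create one action: its inputs are source artifacts, its output is `out`.
    ctx.actions.run_shell(
        inputs = ctx.files.srcs,
        outputs = [out],
        command = "cat %s > %s" % (
            " ".join([f.path for f in ctx.files.srcs]),
            out.path,
        ),
    )
    # Expose read-only information to consuming rules via a provider.
    return [DefaultInfo(files = depset([out]))]

concat = rule(
    implementation = _concat_impl,
    attrs = {"srcs": attr.label_list(allow_files = True)},
)
```

Note that running the rule only records the action; whether the action actually executes is decided later, during execution.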
During analysis, Bazel runs the rules for each target being built and their transitive dependencies. Each rule generates and records all the actions it depends on. Bazel won’t necessarily run all of those actions; if an action doesn’t end up being required, Bazel will just ignore it. Skyframe is used to evaluate and cache the results of rules.
Information from the rules, including artifacts representing the future output
of actions, is made available to other rules through the rules’ providers. Each
rule has access to its direct dependencies’ providers. Because most information
passed between rules is actually transitive, providers make use of the NestedSet
class, a DAG-like data structure (it’s not actually a set!) made up of items and
pointers to other (nested) sets. NestedSets are specially
optimized to work efficiently for analysis. For a part of a provider that
represents some transitive state, for example, a trivial implementation might be
to build a new list that contains the items for the current rule and each
transitive dependency (for a chain of n transitive dependencies, that means we’d
add n + (n - 1) + … + 1 = O(n^2) items to some list), but building a nested
set containing the new item and a pointer to the previous nested set is much
more efficient (we’d add 2 + 2 + … + 2 = O(n) items to some nested set). This
introduces similar efficiency in memory usage as well.
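In Starlark, the same structure is exposed as depset; a sketch of the O(n) construction described above (file names are made up):

```python
# Each level adds its own item plus a pointer to the previous set,
# instead of copying every transitive item into a fresh list.
a = depset(["a.jar"])
b = depset(["b.jar"], transitive = [a])  # constant work at this level
c = depset(["c.jar"], transitive = [b])  # constant work again
# Flattening happens only when a consumer actually needs the full list,
# e.g. c.to_list() yields all three jars.
```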
Artifacts can each be added to any number of output groups. Each output
group represents a different group of outputs that a user might choose to build.
For example, the _source_jars
output group specifies that Bazel should also produce the JARs of the source for
a Java target and its transitive dependencies. The special default
output group holds output that is specifically built for a target - for example,
building a Java binary might produce a compiled
.jar file in the default output group.
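On the command line, output groups can be requested explicitly. A hedged sketch (the target label is made up, and the _source_jars group name is an assumption about the Java rules):

```
bazel build --output_groups=+_source_jars //java/com/example:app
```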
During execution, Bazel first looks at the artifacts in the requested output groups (plus, unless the user explicitly requested otherwise, the default output group). For each of those artifacts, it finds the actions that generate the artifact, then each of the artifacts each of those actions need, and so on until it finds all the actions and artifacts needed. If the action is not cached or the cache entry should be invalidated, Bazel follows this same process for the action’s dependencies, then runs the action. Once all of the actions in the requested output groups have been run or returned from cache, the build is complete.
There are a few important kinds of rules when building code for Android:
android_binary rules build Android packages (.apk files).
android_library rules build individual libraries that binaries and other libraries can consume.
android_local_test rules run tests on Android code in a JVM.
aar_import rules import .aar libraries built outside of Bazel into a Bazel target.
Android resource processing generates R.java files (as well as related
files) to contain references to available resources. These R files contain
integer resource IDs that developers can use to refer to their resources.
Within an app, each resource ID refers to one unique resource.
Developers can provide different versions of the same resource (to support, for example, different languages, regions, or screen sizes). Android makes references to the base resource available in the R files, and Android devices select the best available version of that resource at runtime.
Bazel supports processing resources using the original Android resource
processing tool, aapt, or the new version,
aapt2. Both methods are fundamentally
similar but have a few important differences.
Bazel goes through three steps to build resources.
First, Bazel serializes the files that define the resources. In the aapt
pipeline, the parse action serializes information about resources into
symbols.bin files. In the
aapt2 pipeline, an action calls into the aapt2
compile command, which serializes the information into a format used by aapt2.
Next, the serialized resources are merged with similarly serialized
resources inherited from dependencies. Conflicts between identically named
resources are identified and, if possible, resolved during this merging. The
values resource files are generally explicitly merged. For other
files, if resources from the target or its dependencies have the same name and
qualifiers, the contents of the files are compared and, if they are different, a
warning is produced and the resource that was provided last is chosen to be used.
Finally, Bazel checks that the resources for the target are reasonable and
packages them up. In the
aapt pipeline, the validate action calls into the
aapt package command, and in the
aapt2 pipeline, the aapt2 link
command is called. In both cases, any malformed resources or references to
unavailable resources cause a failure, and, if no failures are encountered,
R.txt files are produced with information about the validated
resources, and a Resource APK containing those resources is produced.
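Outside of Bazel, the aapt2 half of this pipeline looks roughly like the following (flags abbreviated; file paths are hypothetical):

```
# Serialize ("compile") a resource file into aapt2's intermediate format:
aapt2 compile -o compiled/ res/values/strings.xml

# Validate and package the compiled resources into a Resource APK:
aapt2 link -o resources.ap_ -I android.jar \
    --manifest AndroidManifest.xml compiled/values_strings.arsc.flat
```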
Using aapt2 rather than
aapt provides better and more efficient support for
a variety of cases. Additionally, more of the resource processing steps are
handled by aapt2 itself, as opposed to Bazel's custom resource processing tools.
Finally, since the serialized format can be understood as-is by future calls to
aapt2, Bazel no longer has to deserialize information about resources into a
format aapt2 can understand.
The resource ID values generated for
android_library targets are only
temporary, since higher-level targets might depend on multiple targets where
different resources were assigned the same ID. To ensure that resource IDs
aren’t persisted anywhere permanent, the R files record the IDs as nonfinal,
ensuring that compilation doesn’t inline them into other Java code.
An android_library's R files should be discarded after building.
(Although the R files of android_library targets are eventually discarded, we still need to
run resource processing to generate a temporary
R.class to allow compilation,
to merge resources so they can be inherited by consumers, and to validate that
the resources can be compiled correctly - otherwise, if a developer introduces a
bug in their resource definitions, it won’t be caught until they’re used in an
android_binary, resulting in a lot of wasted work done by Bazel.)
Code in Android libraries and binaries makes references to code in the R files,
so the R.class file must be generated before regular compilation can start.
For android_library targets, since all resource IDs are temporary anyway, we
can speed things up by generating an R.class file at the end of resource
merging. For android_binary targets, we need to wait for the output of
validation to get correct resource IDs. Validation does produce an R.java
file, but generating an
R.class file directly from the contents of the
R.txt file is much faster than compiling the
R.java file into an R.class file.
The android_binary rule includes optional densities and resource_configuration_filters
fields. These fields limit the types of devices that will be built for. For
example, if you only wanted to build for English-language devices with HDPI
displays, you could specify:
android_binary(
    # ...
    densities = ["hdpi"],
    resource_configuration_filters = ["en"],
)
Bazel will now be able to skip unneeded resources. As a result, the build will be faster and the resulting APK will be smaller. It won't support all kinds of devices and user preferences, but this speed improvement means developers can build and iterate faster.
The android_library rule is a pretty simple rule that builds and organizes an Android library for
use in another Android target. In the analysis phase, there are basically three
groups of actions generated:
First, Bazel processes the library's resources, as described above.
Next comes the actual compilation of the library. This mostly just uses the
regular Bazel Java compilation path. The biggest difference is that the
R.class file produced in resource processing is also included in the
compilation path (but is not inherited by consumers, since the R files need to
be regenerated for each target).
Finally, Bazel does some additional work on the compiled code:
.class files are desugared to replace bytecode only supported on Java 8 with Java 7 equivalents. Bazel does this so that Java 8 language features can be used for developing the app, even though the next tool,
dx, does not support Java 8 bytecode.
The desugared .class files are converted to
.dex files, executables for Android devices, by dx.
The .dex files are then packed into the
.jar file used at runtime. These incremental
.dex files, produced for each library, mean that, when some libraries from an app are changed, only those libraries, and not the entire app, need to be re-dexed.
The .java files for this library are also used by
hjar to generate a compile-time .jar of
.class files. Method bodies and private fields are removed from this compile-time
.jar, and targets that depend on this library are compiled against this smaller
.jar. Since these jars contain just the interface of the library, when private fields or method implementations change, dependent libraries do not need to be recompiled (they need to be recompiled only when the interface of the library changes), which results in faster builds.
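The desugar and dex steps above can be sketched as standalone commands (invocations are illustrative; Bazel drives its own bundled versions of these tools with different flags, and the file names are made up):

```
# Replace Java 8-only bytecode with Java 7 equivalents:
desugar --input libfoo.jar --output libfoo_desugared.jar

# Compile the desugared bytecode to per-library .dex files:
dx --dex --output=libfoo.dex.zip libfoo_desugared.jar
```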
The android_binary rule packages the entire target and its dependencies into an APK. On a high
level, binaries are built similarly to libraries. However, there are a few key differences:
For binaries, the three main resource processing actions (parse, merge, and validate), are all combined into a single large action. In libraries, Java compilation can get started while validation is still ongoing, but in binaries, since we need the final resource IDs from validation, we can't take advantage of similar parallelization. Since creating more actions always introduces a small cost, and there's no parallelization available to make up for it, having a single resource processing action is actually more efficient.
In binaries, the Java code is compiled, desugared, and dexed, just like in
libraries. However, afterwards, the
.dex files from the binary are merged
together with the
.dex files from dependencies.
Bazel also links together compiled
C++ native code from dependencies
into a single
.so file for each CPU architecture being built.
The .dex files, the
.so files, and the resource APK are all combined
to build an initial binary APK, which is then further processed to
produce an unsigned APK. Finally, the unsigned APK is signed with the binary's
debug key to produce a signed APK.
Bazel supports running ProGuard on
android_binary targets to optimize them and reduce their size.
ProGuard substantially changes elements of the build process. In particular, the
build process does not use incremental
.dex files at all, as ProGuard can only process
.class files, not .dex files.
ProGuarding uses a
deploy.jar file, a single
.jar file with all of the
binary's Java bytecode, created from the binary's desugared (but not dexed)
.class files as well as the binary's transitive runtime
.jar files. (This
deploy.jar file is an output of all
android_binary targets, but it doesn't
play a substantial role in builds without ProGuarding.)
Based on information from a series of ProGuard specifications (from both the
binary and its transitive dependencies), ProGuard makes several passes through
the deploy.jar in order to optimize the code, remove unused methods and
fields, and shorten and obfuscate the names of the methods and fields that
remain. In addition to the resulting proguarded
.jar file, ProGuard also
outputs a mapping from old to new names of methods and fields.
ProGuard’s output is not dexed, so when building with ProGuard, the entire
.jar must be re-dexed (even code from dependencies that were dexed
incrementally). The dexed code is then built into the APK as usual.
ProGuard will also remove references to unused resources from the class files.
When resource shrinking is enabled, the resource shrinker uses the ProGuard output to figure out what
resources are no longer used, and then uses
aapt2 to create a new,
smaller resource APK with those resources removed. The shrunk resource APK and
the dexed APK are then fed into the APK building process, which operates the
same as it would without ProGuard.
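ProGuarding and resource shrinking are enabled from the BUILD file; a hypothetical target (attribute names follow the android_binary rule; the target and file names are made up):

```python
android_binary(
    name = "app",
    manifest = "AndroidManifest.xml",
    proguard_specs = ["proguard.cfg"],  # turns on ProGuard for this binary
    shrink_resources = 1,               # turns on the resource shrinker
    deps = [":applib"],
)
```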
mobile-install is a way of rapidly building and deploying Android applications iteratively.
It’s based off of
android_binary, but has some additional functionality to
make builds and deployments more incremental.
We're proud to announce the release of Bazel 0.10. The 400+ commits since the last release include performance optimizations, bug fixes, and various improvements.
There is a new Android test rule, android_local_test, for testing Android
code using Robolectric, a unit test framework designed for test-driven
development without the need for an emulator or device. See the documentation for
setup instructions and examples.
The depset type has evolved. To merge multiple depsets or add new elements, do
not use the operators
+, |, or the
.union method. They are
deprecated and will be removed in the future. Instead use the new depset
constructor, which has better performance. For example, instead of
d1 + d2 + d3, use
depset(transitive = [d1, d2, d3]).
See the documentation
for more information and examples.
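Side by side, the migration looks like this (d1, d2, and d3 are pre-existing depsets):

```python
# Deprecated: operator- and method-based merging
# merged = d1 + d2 + d3
# merged = d1.union(d2).union(d3)

# Preferred: a single constructor call
merged = depset(transitive = [d1, d2, d3])

# Adding new direct elements at the same time:
merged2 = depset(direct = ["new_item"], transitive = [d1, d2, d3])
```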
In addition to this new release, the Bazel community has been very active. See below what happened recently.
Languages & Rules
Did you know?
Did we miss anything? Fill out the form to suggest content for a future blog post.
The --config expansion order is changing, in order to make it better align with user expectations, and to make layering of configs work as intended. Please test this change with Bazel 0.10, triggered by the startup option
--expand_configs_in_place. The change is mostly live with Bazel 0.9, but the newest release adds an additional warning if explicit flags are overridden, which should be helpful when debugging differences. The new expansion order will become the default behavior soon, probably in Bazel 0.11 or 0.12, and will no longer be configurable after another release.
The Bazel User Manual contains the official documentation for bazelrcs and will not be repeated here.
A Bazel build command line generally looks something like this:
bazel <startup options> build <command options> //some/targets
For the rest of the doc, command options are the focus. Startup options can affect which bazelrc's are loaded, and the new behavior is gated by a startup option, but the config mechanisms are only relevant to command options.
The bazelrcs allow users to set command options by default. These options can either be provided unconditionally or through a config expansion:
build --foo           # applies "--foo" to build, test, etc.
build:foobar --foo    # applies "--foo" to build, test, etc. when --config=foobar is set
The current semantics of --config expansion break last-flag-wins expectations. In broad strokes, under the current option order,
--config options are expanded in a "fixed-point" expansion:
Bazel tracks where each --config option initially appeared (in an rc file, on the command line, or in another
--config expansion), and will parse a single
--config value at most once. Use --announce_rc to see the order used!
Bazel claims to have a last-flag-wins command line, and this is usually true, but the fixed-point expansion of configs makes it difficult to rely on ordering where
--config options are concerned.
See the Boolean option example below.
Everywhere else, the last mention of a single-valued option has "priority" and overrides a previous value. The same will now be true of
--config expansion. Like other expansion options,
--config will now expand to its rc-defined expansion "in-place," so that the options it expands to have the same precedence.
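A small bazelrc sketch of the difference (config name and option values are made up):

```
# .bazelrc
build:dbg --compilation_mode=dbg

# Command line:
#   bazel build --config=dbg --compilation_mode=opt //pkg:target
#
# In-place expansion: the line is effectively
#   --compilation_mode=dbg --compilation_mode=opt
# so the explicit opt wins (last flag wins).
#
# Fixed-point expansion would expand --config=dbg after the explicit
# options, letting dbg silently override the explicit opt.
```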
Since this is no longer a fixed-point expansion, there are a few other changes:
--config=foo --config=foo will be expanded twice. If this is undesirable, more care will need to be taken to avoid redundancy. Double occurrences will cause a warning.
Other rc ordering semantics remain. "common" options are expanded first, followed by the command hierarchy. This means that an option added on the line
"build:foo --buildopt" will get added to
--config=foo's expansion for bazel build, test, coverage, etc.
"test:foo --testopt" will add
--testopt after the (less specific and therefore lower priority) build expansion of
--config=foo. If this is confusing, avoid alternating command types in the rc file, and group them in order, with general options at the top. This way, the order of the file is close to the interpretation order.
Check your usual
--config values' expansions by running your usual bazel command line with
--announce_rc. The order in which the configs are listed, with the options they expand to, is the order in which they are interpreted.
Spend some time understanding the applicable configs, and check if any configs expand to the same option. If they do, you may need to move rc lines around to make sure the same value has priority with the new ordering. See "Suggestions for config definers."
Flip on the startup option
--expand_configs_in_place and debug any differences using --announce_rc.
If you have a shared bazelrc for your project, note that changing it will propagate to other users who might be importing this bazelrc into a personal rc. Proceed with caution as needed.
You might be in a situation where you own some
--config definitions that are shared between different people, even different teams, so it might be that the users of your config are using both
--expand_configs_in_place behavior and the old, default behavior.
In order to minimize differences between old and new behavior, here are some tips.
1. Avoid expanding to option values that another config also sets.
2. If a config's expansion includes another --config, put that --config at the END of the config expansion.
Suggestion #1 is especially important if the config expands to another config. The behavior will be more predictable with
--expand_configs_in_place, but without it, the expansion of a single
--config depends on previous
Suggestion #2 helps mitigate differences if #1 is violated, since the fixed-point expansion will expand all explicit options, and then expand any newly-found config values that were mentioned in the original config expansions. This is equivalent to expanding it at the end of the list, so use this order if you wish to preserve old behavior.
The following example violates both #1 and #2, to help motivate why #2 makes things slightly better when #1 is impossible.
build:misalteredfoo --config=foo # Violation of #2!
build:misalteredfoo --cpu=arm64 # Violation of #1!
bazel build --config=misalteredfoo
effectively x86 in fixed-point expansion, and arm64 with in-place expansion
The following example still violates #1, but follows suggestion #2:
build:misalteredfoo --cpu=arm64 # Violation of #1!
bazel build --config=misalteredfoo
effectively x86 in both expansions, so this does not diverge and appears fine at first glance. (thanks, suggestion #2!)
bazel build --config=foo --config=misalteredfoo
effectively arm64 in fixed-point expansion, x86 with in-place, since misalteredfoo's expansion is independent of the previous config mention.
Lay users of
--config might also see some surprising changes depending on usage patterns. The following suggestions are to avoid those differences. Both of the following will cause warnings if missed.
A. Avoid including the same --config more than once.
B. Put --config options FIRST, so that explicit options continue to have precedence over the expansions of the configs.
Multiple mentions of a single
--config, when combined with violations of #1, may cause surprising results, as shown in #1's motivating examples. In the new expansion, multiple expansions of the same config will warn. Multi-valued options will receive duplicate values, which may be surprising.
bazelrc contents:
build:foo --cpu=x86
bazel build --config=foo --cpu=arm64 # Fine
effectively arm64 in both expansion cases
bazel build --cpu=arm64 --config=foo # Violates B
The explicit value arm64 has precedence with fixed-point expansion, but the config value x86 wins in in-place expansion. With in-place expansion, this will print a warning.
There are 2 boolean options,
--foo and --bar. Each accepts only one value (as opposed to accumulating multiple values).
In the following examples, the two options
--foo and --bar have the same apparent order (and will have the same behavior with the new expansion logic). What changes from one example to the next is where the options are specified.
Example 1 - everything on the command line:
bazel build --nofoo --foo --bar --nobar
Current final value: --foo --nobar. New final value: --foo --nobar (unchanged: last flag wins).

Example 2 - config definitions in the bazelrc:
build:all --foo
build:all --bar
with the command line --nofoo --config=all --nobar.
Current final value: --foo --bar (the config is expanded last, overriding the explicit flags). New final value: --foo --nobar (the config expands in place).

Example 3 - options set for every build in the bazelrc:
build --nofoo
build --config=all
build --nobar
with no extra command-line options.
Current final value: --foo --bar. New final value: --foo --nobar.
Now to make this more complicated, what if a config includes another config?

# Config definitions
build:combo --nofoo
build:combo --config=all
build:combo --nobar
build:all --foo
build:all --bar

With the command line --config=combo: current final value --foo --bar (the nested --config=all is only expanded in a later pass, so its values land last); new final value --foo --nobar (everything expands in place).

With the command line --config=all --config=combo: current final value --nofoo --nobar (since --config=all was already parsed, combo's nested mention of it is skipped); new final value --foo --nobar.
Here, counterintuitively, including
--config=all explicitly makes its values effectively disappear. This is why it is basically impossible to create an automatic migration script to run on your rc - there's no real way to know what the intended behavior is.
Unfortunately, it gets worse, especially if you have the same config for different commands, such as build and test, or if you defined these in different files. It frankly isn't worth going into further detail on the ordering semantics as they have existed up until now; this should suffice to demonstrate why they need to change.
To understand the order of your configs specifically, run Bazel as you normally would (remove targets for speed) with the option
--announce_rc. The order in which the config expansions are output to the terminal is the order in which they are currently interpreted (again, between rc and command line).
We are always looking for new ways to improve the experience of contributing to Bazel and helping users understand how Bazel works. Today, we’re excited to share a preview of Bazel Code Search, a major upgrade to Bazel’s code search site. This new site features a refreshed user interface for code browsing, cross-repository semantic search with regular expression support, and a navigable semantic index of all definitions and references for the Bazel codebase. We’ve also updated the “Contribute” page on the Bazel website with documentation for this tool.
You can try Bazel Code Search right now by visiting https://source.bazel.build.
Select the repository you want to browse from the list on the main screen, or search across all Bazel repositories on the site using the search box at the top of the page.
Bazel Code Search has a semantic understanding of the Bazel codebase and allows you to search for either files or code within files. This semantic understanding of the code means that the search index identifies which parts of your code are entities such as classes, functions, and fields. Since the search index has classified these entities, your queries can include filters to scope the search to classes or functions and allows for improved search relevance by ranking important parts of code like classes, functions, and fields higher. By default, all searches use RE2 regular expressions though you can escape individual special characters with a backslash, or an entire string by enclosing it in quotes.
If you don’t see the result you want in the suggestions, you can submit your search and find all matches on the search result page. From the results page, you can select a matching line or file to view.
Here’s a sampling of different search examples to try out on your own:
Note that all searches are case insensitive unless you specify “case:yes” in the query.
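For instance, queries along these lines can be tried (only the case:yes atom and the class/function filters are described above; the exact query syntax shown here is an assumption, and the identifiers are just examples):

```
Nested.et rebuild            # RE2 regular expression search
class:RuleContext            # scope the search to class definitions
function:main lang:java      # scope to functions, within Java files
"foo(bar)"                   # quote a string to escape regex metacharacters
case:yes NestedSet           # case-sensitive search
```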
Another way to understand the Bazel repository is through the use of cross references. If you’ve ever wondered how to properly use a method, cross references can help by displaying all references to that method so you can see how it is used in other parts of the codebase. Alternatively, if you see a method being used but don’t understand what that method actually does, cross references enables you to click the method to view the definition or see how it’s used elsewhere.
Cross references aren’t only available for methods, they’re also generated for classes, fields, imports, and enums. Bazel Code Search uses the Kythe open source project to generate a semantic index of cross references for the Bazel codebase. These cross references appear automatically as hyperlinks within source files. To make cross references easier to identify, click the Cross References button to underline all cross references in a file.
Once you’ve clicked on a cross reference, the cross references pane will be displayed, where you can view all the definitions and references organized by file. Within the cross references pane, you can navigate through multiple levels of cross references while continuing to view the original file in the File pane, allowing you to maintain the context of your original task.
Selecting a repository from the main screen will take you to a view of the chosen repository with search scoped to its contents. The breadcrumb toolbar at the top allows you to quickly navigate to other repositories, refs, or folders.
We hope you’ll try Bazel Code Search and provide feedback through the “!” button in the top right of any page on the Bazel Code Search site. We would love to hear whether this tool helps you work with Bazel and what else you’d like to see Bazel Code Search offer.
Keep in mind that this project is still experimental and is subject to change.
By Russell Wolf
Your feedback reflected a high level of satisfaction, and there was something of interest for everyone:
BazelCon2017 by the Numbers:
What we heard from you:
What we can do next:
What you can do next:
We look forward to working with you and growing our Bazel user community in 2018!
Bazel on Windows is no longer experimental. We think Bazel on Windows is now stable and usable enough to qualify for the Beta status we have given Bazel on other platforms.
Over the last year, we put a lot of work into improving Bazel's Windows support:
You only need an MSYS installation (to provide /usr/bin) if your dependency graph includes
sh_* rules (similarly to requiring Python for
py_* rules), but you can use any MSYS version and flavour you like, including Git Bash.
With the android_* rules, Bazel on Windows can now build and deploy Android applications.
You no longer need to set BAZEL_SH and BAZEL_PYTHON -- Bazel attempts to autodetect your Bash and Python installation paths.
You no longer need to set JAVA_HOME -- we release Bazel with an embedded JDK. (We also release binaries without an embedded JDK if you want to use a different one.)
Bazel now uses a CROSSTOOL definition for Visual C++ and drives the compiler directly. This means Bazel creates fewer processes to run the compiler. By removing the wrapper script, we have eliminated one more point of failure.
Bazel now builds native Windows executables for py_binary rules. Unlike the
.cmd files that Bazel used to build for these rules, the new
.exe files no longer dispatch to a shell script to launch the
xx_binaries, resulting in faster launch times. (If you see errors, you can use the
--[no]windows_exe_launcher flag to fall back to the old behavior; if you do, please file a bug. We'd like to remove this flag and only support the new behavior.)
We are also working on bringing the following to Bazel on Windows:
Looking ahead, we aim to maintain feature parity between Windows and other platforms. We aim to maximize portability between host systems, so you get the same fast, correct builds on your developer OS of choice. If you run into any problems, please don't hesitate to file a bug.
The Bazel team is pleased to announce our first annual Bazel Conference, focused on the needs of our community. The conference will feature user stories and feedback, migration talks, a roadmap, and hands-on and break-out tech sessions with Bazel engineers, contributors, and users.
Dates: November 6 and 7, 2017 Location: Sunnyvale, California
We are humbled by the commitment to make Bazel even better, and are seeing engineers develop advanced features like Remote Execution, and sharing migration tips, tricks and tools. You will hear user stories and tips about iOS migration, and TensorFlow and Kubernetes experience with Bazel. We will also discuss as a community different tools out there that could be open sourced.
We are excited to see all of you at this first annual Bazel Conference in Sunnyvale, California on November 6 and 7, 2017!
Register by October 15, as seating is limited and we won't allow walk-ins. A detailed schedule will be published in mid-October, and location details will be sent out to registered attendees.
Bazel was open-sourced exactly 2.5 years ago. It continues to be quite a journey, and we are very happy to have acquired some fellow travellers: many projects, organizations, and companies that we all know and love rely on Bazel every day.
As our community grows, we owe it a transparent and predictable release process. So we are taking some steps to bring more clarity and order to the world of Bazel releases:
Our website has more details on release policy.
As a result of this change, we now issue one minor release per month.
Our roadmap reflects our vision for Bazel 1.0 and beyond. We will annotate features on the roadmap with the release versions as those features get shipped.
By Dmitry Lomov