tsuru server 1.0.0 is out!

    tsuru server 1.0.0, along with tsuru client 1.0.1 and tsuru-admin 1.0.0, was released last week!

    This release includes some awesome features and fixes. Please refer to the release notes for the full list of features.

    Some features worth highlighting are listed below:

    • Deploy applications using Docker images (#1314). It’s now possible to deploy a Docker image to tsuru using the command tsuru app-deploy -i. The image must be in a registry and accessible by the tsuru API, and it must have either an Entrypoint or a Procfile in one of the following paths: /, /app/user/ or /home/application/current. See more in the tsuru-client app-deploy reference.
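
    For example, a deployment from an image might look like the line below. This is just a sketch, assuming an app named myapp already exists and that the image (a placeholder name here) was pushed to a registry reachable by the Docker nodes:

    $ tsuru app-deploy -a myapp -i registry.company.com/myteam/myapp:v1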

    • Improved application log handling. Besides several performance improvements in log handling, it’s now possible to configure tsuru to forward container logs directly to an external log server. Please check Managing Application Logs for more details.

    • API versioning. Now all API calls to tsuru may include a version prefix in the format /1.0/<request path>. Further changes to the API will be versioned accordingly.

    • EC2 IaaS is now feature complete, supporting parameters such as IAM roles, extra volumes and multiple network interfaces. Since these parameters are composed of multiple values, users must provide them as JSON. It also supports using private DNS names, as long as the user specifies the subnet-id and the index of the network interface they want to use. For example, with IAM instance profiles, block devices and running on a private network:

    $ tsuru-admin docker-node-add iaas=ec2 'iaminstanceprofile={"name":"docker-instances"}' 'blockdevicemappings=[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100}}]' subnetid=subnet-1234 network-index=0 ...
    

    Backward incompatible changes (action needed)

    • The way the bs container is managed has changed. If you have any configuration setting for bs that was added using tsuru-admin bs-env-set, you must run tsurud migrate to ensure every config env has been copied to the new structure.

    bs containers should now be managed using tsuru-admin node-container-update big-sibling [options...]. See the node containers reference for more information.
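
    As a sketch, overriding one of big-sibling’s configuration environment variables might look like the command below. The --env flag and the variable name are assumptions based on later client versions, so check tsuru-admin node-container-update --help for the exact options:

    $ tsuru-admin node-container-update big-sibling --env "LOG_BACKENDS=syslog"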

    Contributors

    Besides the core team, this release was also powered by external contributors. And we want to thank them for helping get tsuru server 1.0.0 out. Here is the list of contributors who helped in this version:

    • Diego Araujo and USP
    • Guilherme Garnier
    • James Pic
    • Paulo Alem
    • Pedro Medeiros
    • Piotr Komborski
    • Renan Mendes Carvalho
    • Rodrigo Oliveira

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source.

    tsuru server 0.13.0 is out!

    tsuru server 0.13.0, along with tsuru client 0.18.1 and tsuru-admin 0.12.1, was released last week!

    This release includes some awesome features and fixes. Please refer to the release notes for the full list of features.

    Some features worth highlighting are listed below:

    • New authorization system: tsuru now supports a more granular authorization system, with roles and permissions. Roles group permissions and are associated with users.
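
    As an illustrative sketch of the new model, the commands below create a role scoped to teams, attach a permission to it and assign it to a user for a given team. The command names and the permission identifier follow the permissions documentation, but treat the exact arguments as assumptions and check the reference for your client version:

    $ tsuru role-add deployer team
    $ tsuru role-permission-add deployer app.deploy
    $ tsuru role-assign deployer user@example.com myteam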

    • New IaaS available: tsuru now supports DigitalOcean, along with Amazon EC2 and CloudStack. Privileged users are able to spawn droplets on DigitalOcean and use them as managed nodes with tsuru.

    • Platforms can now be enabled and disabled by privileged users. Disabled platforms can be used by privileged users, who can also upgrade the platform and enable it back later, making it available to all users.

    • New router: support for the new version of Galeb, which is now fully open source. Galeb is a very fast router, written in Java, with WebSocket support. It was also born at Globo.com. Users from the community can now choose to use Galeb, along with Vulcand and Hipache.

    Backward incompatible changes

    • The new authorization system requires roles to be associated with permissions and users to be associated with roles. So, after upgrading, users do not have any permissions; roles and permissions must be created and associated with users. There’s an optional migration that can be used to keep tsuru compatible with the previous behavior. Please refer to the migrating section in the Permissions documentation.

    • The post-receive hook is no longer supported; please use one of the available pre-receive hooks.

    • If you’re using the archive-server, we recommend upgrading it and also the pre-receive hook.

    The new release process

    Starting with this release, we’re running a new release process, which included 12 release candidates for tsurud 0.13.0. This means that we’ve been running tsuru 0.13.0 in production since November 6th, and so we’re confident that this release is stable enough to be deployed in production by other companies using tsuru. Feel safe to take this release to your production environment, and please let us know if you find any issues with it!

    Contributors

    Besides the core team, this release was also powered by external contributors. And we want to thank them for helping get tsuru server 0.13.0 out. Here is the list of contributors who helped in this version:

    • Giuseppe Ciotta
    • Guilherme Garnier
    • Hugo Seixas Antunes
    • Manoel Domingues Junior

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source.

    tsuru server 0.12.2 is out!

    tsuru server 0.12.2, along with tsuru client 0.17.1 and tsuru-admin 0.11.0, was released this week!

    This release includes some awesome features and fixes. Please refer to the release notes for the full list of features (the notes for 0.12.1 and 0.12.2 include some bug fixes).

    Some features worth highlighting are listed below:

    • Lean containers: this is definitely the big feature of this release. With lean containers, we’ve dropped Circus, making application images smaller and containers faster. It also improves resource usage, because application containers no longer run tsuru-unit-agent either. The agent is still used during the deployment process, but it no longer competes with the application process.

    • Pool management improvements. There are now three kinds of pools:
      • team pools: these pools are segregated by teams, and cloud administrators can associate teams with pools. When creating an application, this pool must be chosen explicitly
      • public pools: these pools are available to all teams, and must be chosen explicitly as well
      • default pool: this is a single pool that replaces the old fallback pool. It’s the implicit pool used for applications when no pool is specified
    • New router available: Vulcand. Vulcand is a powerful reverse proxy, with SNI-based TLS support. This is the first step toward being able to configure TLS on applications (see issue #1206).

    Backward incompatible changes

    • As this version creates containers per process, whenever an application has more than one process, tsuru will forward requests to the process named “web”. So, in a Procfile like the one below, “api” should be replaced with “web”:
    api: ./start-api
    worker1: ./start-worker1
    worker2: ./start-worker2
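
    After the rename, only the process name changes; the commands stay the same:

    web: ./start-api
    worker1: ./start-worker1
    worker2: ./start-worker2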
    
    • tsr has been renamed to tsurud. Please update any procedures and workflows (including upstart and other init scripts).

    • The fallback pool should be changed to a default pool. Doing that is as simple as running the command tsuru-admin pool-update pool_name --default=true.

    A new release policy

    As you might have noticed, we released tsurud 0.12.0 on August 27th, then released 0.12.1 on September 2nd, and then 0.12.2 on September 3rd. This is definitely not a good way of managing releases.

    We already have a flow for constant testing on nightly packages, ensuring that everything works, but this testing doesn’t reach a good scale yet. As we work to improve this flow, we’re also committing to provide more stable releases, by defining a clearer release cycle:

    Whenever we reach the stable stage of development, we’re going to provide a release candidate (rc) package and run the rc version on our production environment. If we catch any bad behavior, we fix it and issue another rc version. After some days of running the rc release, as soon as we feel comfortable with it (possibly one week later), a new release will be tagged and safely announced as stable by the tsuru core team.

    Contributors

    Besides the core team, this release was also powered by external contributors. And we want to thank them for helping get tsuru server 0.12.2 out. Here is the list of contributors who helped in this version:

    • Dan Carley
    • Dan Hilton
    • Jonathan Prates
    • Leandro Souza
    • Richard Knop

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source.

    Using Packer to speed-up your first experience with tsuru

    If you have ever used tsuru-bootstrap before, you know how slow it can get. If you have never used it, you should know that running a command and then waiting 30 minutes to get tsuru up and running is not ideal. What if we could do it in 2 minutes? Time to get faster!

    In order to achieve this goal, we are creating all-in-one tsuru images with Packer, as AWS AMIs and VirtualBox Vagrant boxes. These images are based on the stable and nightly releases.

    Packer + tsuru now

    Tsuru all-in-one images are built as Amazon AMIs and as Vagrant VirtualBox boxes. We’re using tsuru-now to build the images. They come bundled with the Python platform, and with tsuru configured to use the pre-receive hook on top of archive-server.

    Amazon AMIs are available in the us-east-1 region, and you can find them in the “Community AMIs” tab when launching a new instance. It’s also possible to get the latest AMIs programmatically by downloading two files from S3:

    $ curl https://s3.amazonaws.com/tsuru-images/nightly-ami-id
    ami-1350d678
    
    $ curl https://s3.amazonaws.com/tsuru-images/stable-ami-id
    ami-a98527c2
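
    As a minimal sketch, those files can feed the AWS CLI directly. The instance type and key pair name below are only examples, and the command assumes the AWS CLI is already configured with credentials for us-east-1:

    $ aws ec2 run-instances --image-id $(curl -s https://s3.amazonaws.com/tsuru-images/stable-ami-id) --instance-type m3.medium --key-name my-key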
    

    There are Vagrant boxes for the VirtualBox provider as well. They come in the same two flavors as the ones provided in EC2: stable and nightly.

    The URLs for the boxes appear in the commands below. Users may write the URL directly in the Vagrantfile, or pass it to vagrant init:

    $ vagrant init tsuru-stable https://s3.amazonaws.com/tsuru-images/tsuru-stable-virtualbox.box
    
    $ vagrant init tsuru-nightly https://s3.amazonaws.com/tsuru-images/tsuru-nightly-virtualbox.box
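
    Once vagrant init has written the Vagrantfile, bringing the box up and logging into it is the usual Vagrant workflow:

    $ vagrant up
    $ vagrant ssh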
    

    Building the images

    If you want to build these images yourself, you can get our configuration files here. After checking out the repository, you have to run:

    $ make setup
    
    $ AWS_ACCESS_KEY=<your-access-key> AWS_SECRET_KEY=<your-secret-key> packer build tsuru-{nightly,stable}.json
    

    If you want to talk more about this project or any other tsuru-related project, feel free to reach us by opening an issue in our GitHub repository, chatting directly in our Gitter room or posting a message to the tsuru-users group.

    tsuru server 0.11.0 is out!

    tsuru server 0.11.0, along with tsuru client 0.16.0 and tsuru-admin 0.10.0, is out today!

    This release includes some awesome features and fixes. Please refer to the release notes for the full list of features.

    Some features worth highlighting are listed below:

    • Pool management overhaul. Pools are now a concept independent of the Docker provisioner. Users can now have multiple pools associated with each team. If that’s the case, when creating a new application, users will be able to choose which pool they want to use to deploy it;
    • Node auto scaling. It’s now possible to enable automatic scaling of Docker nodes, which adds or removes nodes according to rules specified in your tsuru.conf file. There’s a dedicated page in our documentation for node autoscaling.

    Backward incompatible changes

    • There are two migrations that must run before deploying applications with tsr 0.11.0. They concern pools and can be run with tsr migrate, as shown after this list. The way pools are handled has changed: it’s now possible for a team to have access to more than one pool, and if that’s the case the pool name will have to be specified during application creation;
    • Queue configuration is necessary for creating and removing machines using an IaaS provider. This can be done simply by indicating a MongoDB database configuration that tsuru will use for managing the queue. No external process is necessary. See the configuration reference for more details;
    • Previously it was possible for more than one machine to have the same address, which could cause a number of inconsistencies when trying to remove said machine using tsuru docker-node-remove --destroy. To solve this problem, tsuru will now raise an error if the IaaS provider returns the same address as an already registered machine.
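
    Running the migrations is a single command, which should be executed before starting the new version of the API server:

    $ tsr migrate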

    Contributors

    Besides the core team, this release was also powered by external contributors. And we want to thank them for helping get tsuru server 0.11.0 out. Here is the list of contributors who helped in this version:

    • Anna Shipman
    • Diogo Munaro Vieira
    • Felippe da Motta Raposo
    • Gustavo Pantuza
    • Lucas Weiblen
    • Marc Abramowitz
    • Martin Jackson
    • Pablo Aguiar
    • Samuel ROZE
    • Wilson Júnior

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source.

    tsuru at London PaaS User Group (LoPUG)

    This is a guest post by Colin Saliceti.

    On Feb 26th, tsuru was invited to the London PaaS User Group.

    [Photo: Colin Saliceti presenting tsuru at LoPUG in the OpenCredo offices]

    Tsuru, the enterprise-grade PaaS with Brazilian roots

    Tsuru was born and is actively developed at globo.com, the web arm of Globo, the Brazilian media giant. It is now gaining traction across the world and already has many international contributors.

    You may think of Tsuru as “yet another Docker-based PaaS”, but it was actually started before Docker-based PaaS were cool. Its great maturity brings unique features: it is fast, secure, flexible, pluggable and multi-tenant, and it deploys with zero downtime, auto scales apps, auto scales IaaS…

    Thanks to the audience for the great interest, thanks to LoPUG for the organisation (and the beers).

    Register for the next meetup: LoPUG.

    tsuru server 0.10.1 is out

    Today we’re releasing tsuru-server 0.10.1, two days after the 0.10.0 release, to fix some bugs:

    • In the last release, we changed the way tsuru names Docker images, and the API daemon included an automatic migration routine that runs during start-up. We expected this routine to slow down only the first start-up after upgrading, but it ended up slowing down every start-up. In order to fix this issue, we now ensure that the migration routine does not try to migrate applications that have already been migrated;
    • There was an old issue with the healing of Docker nodes: tsuru detects and tracks failures when interacting with Docker, and after some configurable threshold, the API triggers the healing (replacing the nodes). The 0.10.1 release reduces the probability of false positives in failure detection by properly handling image failures in Docker;
    • Fixed a security flaw in the tsuru API that allowed users to deploy code to apps they do not have access to.

    We apologize for any inconvenience that these issues may have caused to our users. As usual, you can upgrade tsuru server using our PPA or building it from source.

    tsuru server 0.10.0 is out!

    tsuru server 0.10.0, along with tsuru client 0.15.0 and tsuru-admin 0.9.0, is out today!

    This release includes some awesome features and fixes. Please refer to the release notes for the full list of features. In order to take advantage of the new features, please make sure that you have the latest version of tsuru’s components, especially Docker (at least version 1.4; check the upgrading Docker page for proper instructions on the upgrade process) and Gandalf (at least version 0.6.0).

    Some features worth highlighting are listed below:

    • Gandalf is now optional. When Gandalf is not configured, users will be able to deploy code only using app-deploy. In order to use git push deployment, Gandalf must be enabled and configured. Please refer to managing Git repositories and keys for more details;
    • tsuru now supports rollbacks! It will store multiple image versions of an application (one for each deployment) and allow the user to roll back to a specific version. For more details, check the app-deploy-rollback command;
    • tsuru now has an improved healthcheck that allows users to better diagnose failures in the server. Sending a request to the URL /healthcheck?check=all will print the status of tsuru’s components, including EC2, CloudStack, Gandalf, MongoDB and Docker (see the example after this list). For more details, see issue #967;
    • more powerful and flexible platforms: the new PHP and Ruby platforms are more flexible now and are configurable, like the Java platform
      • Thanks to Samuel ROZE, the PHP platform supports multiple interpreters (FPM and mod_php) and frontends (Apache or nginx).
      • The Ruby platform supports switching between Ruby versions by specifying the desired version in the Gemfile or .ruby-version file.
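
    Checking the components from the command line is a single request against the API host (the hostname below is a placeholder for your own tsuru API address):

    $ curl "http://tsuru-api.example.com/healthcheck?check=all"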

    Backward incompatible changes

    • Drop support for Docker images that do not run tsuru-unit-agent. Starting with tsuru-server 0.10.0, tsuru-unit-agent is no longer optional.

    Contributors

    Besides the core team, this release was also powered by external contributors. And we want to thank them for helping get tsuru server 0.10.0 out. Here is the list of contributors who helped in this version:

    • Alessandro Corbelli
    • Lucas Weiblen
    • Marc Abramowitz
    • Mateus Del Bianco
    • Rogério Yokomizo
    • Samuel ROZE

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source.

    tsuru server 0.9.1 is out!

    In 0.9.0 and 0.9.1, a lot of features were added. The features worth highlighting are:

    • Experimental support for auto scaling applications. If you are using the metric system with Statsd/Graphite, it is possible to scale the number of units of your application automatically. We will create a dedicated blog post about this feature in the future.
    • API key that enables authentication without interaction. The key can be regenerated using the command tsuru token-regenerate. To view the current key, just use the command tsuru token-show (see the examples after the unit-flow diagram below). You can see more information about the new commands in the client reference.
    • templates to create machines in the IaaS provider with docker-node-add. See the machine-template-add command for more details.
    • TSURU_SERVICES environment variable: this environment variable lists all service instances that the application is bound to. This enables binding an application to multiple instances of a service. For more details, check the TSURU_SERVICES documentation.
    • Improvements to the EC2 IaaS provider: it now accepts user-data config through iaas:ec2:user-data and a timeout for machine creation with the iaas:ec2:wait-timeout config.
    • A new debug route is available in the API: /debug/goroutines. It can only be hit with admin credentials and will dump a trace of each running goroutine.
    • The unit flow was changed to use correct statuses during build. The unused statuses (unreachable and down) were removed, and the Created status was added. The unit flow is now:
    +----------+                           Start          +---------+
    | Building |                   +---------------------+| Stopped |
    +----------+                   |                      +---------+
          ^                        |                           ^
          |                        |                           |
     deploy unit                   |                         Stop
          |                        |                           |
          +                        v       RegisterUnit        +
     +---------+  app unit   +----------+  SetUnitStatus  +---------+
     | Created | +---------> | Starting | +-------------> | Started |
     +---------+             +----------+                 +---------+
                                   +                         ^ +
                                   |                         | |
                             SetUnitStatus                   | |
                                   |                         | |
                                   v                         | |
                               +-------+     SetUnitStatus   | |
                               | Error | +-------------------+ |
                               +-------+ <---------------------+
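
    The API-key commands mentioned above are plain, non-interactive client calls:

    $ tsuru token-show
    $ tsuru token-regenerate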
    

    Backward incompatible changes

    • Service API flow: the service API flow has changed, splitting the bind process into two steps: binding/unbinding the application and binding/unbinding the units. The old flow is now deprecated.

    You can see the complete list of features in the release notes of tsuru server 0.9.0 and 0.9.1.

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source.

    tsuru at CloudStack Collaboration Conference Europe

    Good news: we are going to present a poster about tsuru at CloudStack Collaboration Conference Europe!

    The goal of the presentation is to introduce tsuru and its components, demonstrating how they work together and diving as deep as possible into their architectures.

    The audience will have the opportunity to better know what’s behind tsuru, for example:

    • what the architecture looks like
    • what the production environment looks like with Docker
    • how to run tsuru in your own infrastructure
    • how to provide and use services in tsuru applications
    • the integration between tsuru and Apache CloudStack

    See you in Budapest!

    tsuru server 0.8.0 is out!

    tsuru server 0.8.0, along with tsuru client 0.13.0, tsuru-admin 0.7.0 and crane 0.6.0, is out today!

    The main feature of this release is the new application plan support: apps can now be associated with plans, which define the amount of memory and CPU shares that the units of the application will have. The container scheduler has been adjusted to respect these limits when scheduling new containers, so containers will be created on the hosts with the largest amount of available resources.
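
    As an illustrative sketch, creating a small plan could look like the command below. The flag names (-c for CPU shares, -m for memory in bytes) are assumptions based on later tsuru-admin versions, so check tsuru-admin plan-create --help for the exact syntax:

    $ tsuru-admin plan-create small -c 512 -m 536870912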

    Other features worth highlighting:

    • support for multiple CloudStack and Amazon EC2 regions, via custom IaaS configuration
    • actual support for removing platforms from tsuru; there was a bug in the previous version that prevented the platform from being removed from the database
    • changes in the behavior of "app-restart", "env-set" and "env-get". Now these commands log their progress, as they go through some steps, like adding new units, waiting for them to become responsive and removing the old ones
    • major rename of commands in the tsuru client. We've standardized the commands in the pattern <subject>-<action>, so "restart" became "app-restart", "log" became "app-log", "add-cname" became "cname-add", and so on. Check tsuru 0.13.0 release notes for more details

    You can see the complete list of features in the release notes of tsuru server 0.8.0.

    Contributors

    Besides the core team, this release was also powered by some contributors. And we want to thank them for helping get tsuru server 0.8.0 out. Here is the list of contributors who helped in this version:

    • Flavia Missi
    • Josh Blancett

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source. The clients are available on Homebrew, GitHub and in the PPA; for more details, check the client installation docs.

tsuru server 0.7.0 is out!

    tsuru server 0.7.0, along with tsuru client 0.12.0 and tsuru-admin 0.6.0, is out today!

    The main feature in this release is the new command tsuru deploy, which allows you to deploy a set of files, a directory and/or a binary file to tsuru directly, without using git.

    It's useful when you need to build a lot of things: you can build locally and deploy the result to tsuru directly, making your deploy faster and easier. For instance, you can build a Java project and just deploy your .war file (tsuru deploy yourproject.war), or deploy an entire directory without git (tsuru deploy .).
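
    A minimal sketch of both forms, assuming an app named myapp already exists (the -a flag for selecting the target app is an assumption for this client version):

    $ tsuru deploy -a myapp target/yourproject.war
    $ tsuru deploy -a myapp .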

    Other new features worth highlighting are:

    • It’s now possible for an app to have multiple cnames. The tsuru set-cname and tsuru unset-cname commands have been removed, and tsuru add-cname and tsuru remove-cname were added.
    • tsuru is now able to heal failing nodes and containers automatically; this is disabled by default. Instructions can be found in the config reference.
    • It’s possible to configure a health check request path to be called during the deployment process of an application. See the health check docs for more details.

    You can see other cool features in this release here.

    Bonus Feature

    Now, tsuru has responsive tables!!! \o/

    Contributors

    Besides the core team, this release was also powered by some contributors. And we want to thank them for helping get tsuru server 0.7.0 out. Here is the list of contributors who helped in this version:

    • Diego Toral
    • Marc Abramowitz
    • Thinh Nguyen

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source. The clients are available both on Homebrew and in the PPA; for more details, check the client installation docs.

    A new release of tsuru server is out!

    tsuru server 0.6.0, along with tsuru client 0.11.0 and tsuru-admin 0.5.0, is out today!

    What is tsuru?

    tsuru is an extensible and open source Platform as a Service software. tsr is the binary of the tsuru server API, while tsuru and tsuru-admin are clients of the server, used by application developers and cloud administrators, respectively.

    What’s new in tsr 0.6.0

    The main feature in this release is better integration with infrastructure providers. tsuru is now able to provision Docker hosts on CloudStack and EC2, which means that users can start a new cluster just by installing the server package and running a few tsuru-admin commands to add new nodes to the cluster.

    Users upgrading their servers to tsr 0.6.0 should also update client versions of tsuru-admin and tsuru to 0.5.0 and 0.11.0, respectively.

    Relevant news

    • The ssh-agent is no more! Now tsuru generates an RSA key pair per container and connects directly via SSH to the container using the generated private key
    • tsuru administrators are able to take advantage of this new SSH approach and connect directly via SSH to a specific container, by running the command tsuru-admin ssh [container-id]
    • It's now possible to talk to IaaS providers and add new Docker nodes to tsuru from scratch. tsuru administrators can run commands like tsuru-admin docker-node-add and tsuru-admin docker-node-remove, as sketched after this list (please refer to the tsuru-admin usage guide for more details)
    • beanstalkd support has been removed; the API server will refuse to start if it's configured to use beanstalkd. Users should switch to Redis
    • Now service instances also have team owners. In order to get this working, users should run a migration script in the database
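
    As a sketch, registering an existing Docker host could look like the command below. The --register flag and the address parameter are assumptions based on the documented form in later versions; creating the machine through an IaaS instead takes iaas=<provider> plus provider-specific key=value parameters:

    $ tsuru-admin docker-node-add --register address=http://node1.company.com:4243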

    For a full list of changes, check the release notes of tsuru server 0.6.0.

    Contributors

    Besides the core team, this release was also powered by some contributors. And we want to thank them for helping get tsuru server 0.6.0 out. Here is the list of contributors who helped in this version:

    • Dan van Wijk
    • Pablo Aguiar

    Grab it today!

    You can install or upgrade tsuru server using our PPA or building it from source. The clients are available both on Homebrew and in the PPA; for more details, check the client installation docs.

    tsuru server 0.5.2 is out!

    We've just released tsuru server 0.5.2.

    This release fixes some bugs introduced by the 0.5 release:

    • applications are now properly locked on unit-remove
    • fixed a race condition on unit-remove that prevented it from removing more than one unit at once
    • errors are now properly reported when removing a Docker node that does not exist in the cluster

    It also includes some minor improvements; for more details, take a look at the release notes for tsuru 0.5.2.

    tsuru server 0.5.0 released!

    We are pleased to announce tsr 0.5.0 and the equivalent client versions, tsuru cli 0.10.0 and tsuru-admin 0.4.0.

    What is tsuru?

    tsuru is an extensible and open source Platform as a Service software. And `tsr` is the binary of the tsuru server API.

    What’s new in tsr 0.5.0

    One of the main goals of this release was to improve the stability and consistency of the tsuru API:

    • prevent inconsistency caused by problems on deploy (#803) / (#804)
    • unit information is no longer updated by the collector (#806)
    • fixed log listener on multiple API hosts (#762)
    • prevent inconsistency caused by simultaneous operations in an application (#789)
    • prevent inconsistency caused by simultaneous env-set calls (#820)
    • store information about errors and identify flawed application deployments (#816)

    Buildpack

    tsuru now supports deploying applications using Heroku Buildpacks.

    Buildpacks are useful if you’re interested in following Heroku’s best practices for building applications or if you are deploying an application that already runs on Heroku.

    tsuru uses the Buildstep Docker image to deploy applications using buildpacks. For more information, take a look at the buildpacks documentation page: http://docs.tsuru.io/en/stable/using/buildpacks.html.

    Other features

    • filter application logs by unit (#375)
    • support for deployments with archives, which enables the use of the pre-receive Git hook, and also deployments without Git (#458, #442 and #701)
    • stop and start commands (#606)
    • oauth support (#752)
    • platform update command (#780)
    • support services with https endpoint (#812) / (#821)
    • grouping nodes by pool in the segregate scheduler. For more information, see the segregate scheduler documentation.

    Platforms

    • deployment hooks support for static and PHP applications (#607)
    • new platform: buildpack (used for buildpack support)

    Backwards incompatible changes

    • Juju provisioner was removed. This provisioner was not being maintained. A possible idea is to use Juju in the future to provision the tsuru nodes instead of units
    • ELB router was removed. This router was used only by Juju.
    • tsr admin was removed.
    • The field units was removed from the apps collection. Information about units is now available in the provisioner, and the unit state is now controlled by the provisioner. If you are upgrading tsuru from 0.4.0 or an older version, you should run the MongoDB script below, where the docker collection name is the name configured by docker:collection in tsuru.conf:
    var migration = function(doc) {
        doc.units.forEach(function(unit){
            db.docker.update({"id": unit.name}, {$set: {"status": unit.state}});
        });
    };
    
    db.apps.find().forEach(migration);
    • The scheduler collection has changed to group nodes by pool. If you are using this scheduler, you should run the MongoDB script below:
    function idGenerator(id) {
        return id.replace(/\d+/g, "")
    }
    
    var migration = function(doc) {
        var id = idGenerator(doc._id);
        db.temp_scheduler_collection.update(
            {teams: doc.teams},
            {$push: {nodes: doc.address},
             $set: {teams: doc.teams, _id: id}},
            {upsert: true});
    }
    db.docker_scheduler.find().forEach(migration);
    db.temp_scheduler_collection.renameCollection("docker_scheduler", true);

    You can implement your own idGenerator to return the name for the new pools. In our case, the idGenerator generates an id based on the node name. That makes sense because we use the node name to identify a node group.

    Features deprecated in 0.5.0

    Beanstalkd queue backend will be removed in 0.6.0.

    Installing and updating

    You can install tsr and the tsuru clients from our PPA (deb packages) or using Homebrew (clients only).

    tsuru server 0.4.0 released!

    We are pleased to announce tsr 0.4.0 and the equivalent client versions, tsuru cli 0.9.0 and tsuru-admin 0.3.0.

    What is tsuru?

    tsuru is an extensible and open source Platform as a Service software. And `tsr` is the binary of the tsuru server API.

    What’s new in tsr 0.4.0

    There are a lot of changes since the 0.3.0 version:

    New Redis queue backend.

    Now you can use Redis instead of beanstalkd for the work queue. In order to use Redis, you need to change the configuration file:

    queue: redis
    redis-queue:
      host: "localhost"
      port: 6379
      db: 4
      password: "your-password"

    All settings are optional (the queue still defaults to “beanstalkd”); refer to the configuration docs for more details.

    Docker

    API

    • Added app team owner - #619
    • Exposed public url in create-app - #724
    • Improved feedback for duplicated users - #693
    • Login exposes is_admin info
    • Improved output of the get environment variables endpoint
    • Exposed deploys of the app in the app-info API
    • Improved administrative API for the Docker provisioner
    • Stored deploy metadata
    • Improved healthcheck (ping MongoDB before reporting the API as OK)
    • Exposed owner of the app in the app-info API

    Services

    Platforms

    • New Python 3 platform
    • New Go platform
    • Support to syslog
    • When a process restarts too often, we say that it is flapping. We now keep track of worker restarts and stop the corresponding process in case it is flapping

    tsuru client changes

    • Fixed output when service doesn’t export environment variables (#772)
    • Swap address and cname on apps when running swap
    • App owner team is configurable - #620
    • New plugin system - #737 - Now it is possible to customize the tsuru client by installing and creating plugins. See the docs for more info
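
    As a sketch, installing a plugin is a single command; the plugin name and URL below are placeholders, and the exact syntax should be confirmed in the plugin docs:

    $ tsuru plugin-install myplugin http://example.com/myplugin.sh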

    Backwards incompatible changes

    The S3 integration on app creation was removed. The config properties bucket-support, aws:iam and aws:s3 were removed too.

    All existing apps have no team owner. You can run the MongoDB script below to automatically set the first team associated with each app as its team owner.

    db.apps.find({ teamowner: { $exists: false }}).forEach(
        function(app) {
            app.teamowner = app.teams[0];
            db.apps.save(app);
        }
    );

    Installing and updating

    You can install tsr and the tsuru clients from our PPA (deb packages) or using Homebrew (clients only).

    tsuru at OSCON 2014

    Good news: we are going to talk about tsuru at OSCON 2014! The talk, entitled "Tsuru: Open Source Cloud Application Platform", will cover the history, evolution and architecture of tsuru, including its context and adoption at Globo.com.

    The goal of the presentation is to introduce tsuru and its components, demonstrating how they work together and diving as deep as possible into their architectures.

    The audience will have the opportunity to better know what's behind tsuru, for example:

    1. how we use the Go programming language to build tsuru
    2. how to provide and use services in tsuru applications
    3. what the architecture looks like
    4. what the production environment looks like with Juju and Docker
    5. how to run tsuru in your own infrastructure

    We will cover each of these topics in future posts in this blog (there's already a post about Docker in production).

    See you at OSCON 2014!

    Running tsuru in production: scaling and segregating Docker containers

    tsuru is an open source PaaS, born in early 2012 at Globo.com, a Brazilian media company. At first, tsuru hit production using Juju to provision virtual machines on demand. Juju is an amazing tool, but provisioning virtual machines on demand is slow. Later, we started integrating with Linux Containers (lxc), and Docker was born.

    Docker brought some light to our problems, and soon enough we started using it, abandoning our integration with lxc. After some months integrating with Docker, we were able to switch our production environment and start scaling it to run Docker on more than 20 hosts simultaneously. Docker and tsuru enabled Globo.com to run more than 1000 deployments in three months for some of its projects. But how does it work? What does the architecture look like?

    There were two issues with running Docker in production: we needed to scale it, and we needed to make sure that we could isolate the applications that had to be isolated. We needed to be able to run containers across multiple Docker nodes and make sure that containers from some apps did not mess with resources from other applications.

    The picture below presents the most important parts of tsuru's architecture related to Docker:

    [Diagram: tsuru architecture, highlighting the Docker-related components]

    In future posts, we will dig deeper into other components of this architecture. Let's focus now on Docker.

    tsuru uses docker-cluster for distributing containers across multiple Docker nodes. A Docker node is simply a machine running the Docker daemon. docker-cluster is a Go package that enables Go programs to run Docker containers across clusters of Docker nodes. It contains an interesting component: the scheduler. The scheduler is a Go interface, and users of the package are able to replace the default scheduler (that uses a round robin approach).

    While docker-cluster by itself solves the issue of scaling, tsuru needed to provide a scheduler with the proper rules for segregating the containers of certain applications. And this is the segregated scheduler: it allocates a pool of Docker nodes to one or more teams.

    Imagine there are 6 nodes in the cloud, and we want to split them between 3 teams: each team will get 2 nodes. There will be an association in tsuru:

    +-----------+--------------------------------+--------+
    | Node name | Node host                      | Teams  |
    +-----------+--------------------------------+--------+
    | node01    | http://node01.company.com:4243 | team a |
    | node02    | http://node02.company.com:4243 | team a |
    | node03    | http://node03.company.com:4243 | team b |
    | node04    | http://node04.company.com:4243 | team b |
    | node05    | http://node05.company.com:4243 | team c |
    | node06    | http://node06.company.com:4243 | team c |
    +-----------+--------------------------------+--------+

    Based on this mapping, whenever a user of team a runs a deployment of one of their applications, tsuru will create the container on node01 or node02, never on any of the other nodes.

    So, by using docker-cluster with a customized scheduler, tsuru is able to scale and segregate Docker containers, and handle Globo.com's production environment.