Coffee and Croissants
It’s a lovely morning in the data center and you are a horrible ~goose~ devop. What sorts of trouble can you cause? For years, configuration management has been touted as a way of guarding against the fragility that comes from having humans configure things manually, but it isn’t a complete safeguard against things going wrong. In this talk I’ll cover the evolution of configuration management strategies used by operators to increase system robustness and will share stories about the ways that even mature management practices can fail. I’ll also discuss the underlying problem from a human factors and resilience engineering perspective, ways to further increase the reliability of our tools, and how those tools may keep evolving.
In this talk, Paul will demonstrate why TypeScript is a great choice of language for infrastructure management. Pulumi is an open source tool that allows users to write their infrastructure code in TypeScript, Python, or Go.
TypeScript gives infrastructure code integrated testing and compile-time checks, as well as the ability to create infrastructure APIs. This shows why a real language is better suited to infrastructure management than DSLs, JSON, or YAML. In addition, he will cover how to build infrastructure that manages Serverless, PaaS, and IaaS systems across multiple cloud providers.
Coffee and Snacks
What role does configuration management have in containerized and cloud-native infrastructure? What tools and practices have evolved to work with modern cloud platforms like Kubernetes? Is there a way out of the maze of YAML we've trapped ourselves in? In this session, Eric will share perspectives on the evolution of infrastructure platforms and the changes necessary to adapt to the landscape of today – and tomorrow.
The importance of automation no longer needs demonstrating. It is one of DevOps' CALMS pillars. However, automation serves objectives, and among them, compliance. Puppet Remediate, Chef InSpec, SaltStack SecOps, RUDDER… we are all now developing towards compliance.
Why? How has compliance become essential in configuration management? Let's open the debate and talk about it in this 5-minute quick talk.
The Augeasproviders project aims to ease the use of Augeas by providing native Ruby types and providers for Puppet, powered by the Augeas Ruby bindings under the hood. These resource types make it easy to edit configuration files in a clean and idempotent way with Puppet.
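To give a feel for the difference, here is a minimal sketch (assuming the `sshd_config` type shipped by the augeasproviders_ssh module; attribute names may vary between releases) compared with a raw `augeas` resource:

```puppet
# Typed provider: manage one sshd_config setting idempotently,
# without templating the whole file.
sshd_config { 'PermitRootLogin':
  ensure => present,
  value  => 'no',
}

# Roughly equivalent raw Augeas resource, which the typed
# provider abstracts away:
augeas { 'sshd-permit-root-login':
  context => '/files/etc/ssh/sshd_config',
  changes => 'set PermitRootLogin no',
}
```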
This talk recounts the journey of developing a Linux platform to require very little in the way of configuration management, and how to virtually eliminate the need to modify code to change configuration. From configuration via scripts and evolving through a couple of configuration management products, we have used the idea of matching actions to timescales to transform how we do configuration management. We now do very little of it, and we have dramatically reduced its complexity.
You think you know YAML? This talk will show you that you don't.
A look at the changing landscape for Operations. With SRE and Kubernetes both on the rise we’re seeing drastic changes in the way we build and operate infrastructure. At the same time Serverless has exploded onto the scene and confused things even further.
The last several years have brought explosive growth to the realm of open source. Many new projects have started, and many have gone on to become foundational components of running applications at scale. Cloud providers have focused on a strategy of embracing open source, not only to help build value-added services but also to make it easy to use open source on their compute platforms. Open source companies have reacted by changing their software licenses in an attempt to cut out the cloud providers.
So what does this mean for the future of open source? In this talk we’ll revisit some of the foundational tenets of open source, and compare these ideas to where open source has evolved. We’ll also talk about the pros and cons, and maybe unintended consequences, of Cloud based computing.
Lunch (on your own)
Intelligent Automation meets Intelligent Monitoring. Enable synergies between Salt & checkmk
Learn how to:
- Quickly set up a fully functional monitoring environment
- Add your Salt-Minions automatically to checkmk
- Install checkmk Monitoring Agents via Salt
- Use Salt Grains within checkmk to define Rules
- Send checkmk notifications via the Salt Event Bus
- React on checkmk events with the Salt Reactor System
Application delivery pipelines can make it a lot easier to quickly iterate on applications, but what about infrastructure? There’s toil hiding everywhere in infrastructure management, including processes like scaling up or down, patching, and more. On top of that, security requirements are often a huge consideration.
In this talk, we’ll explore some strategies for continuously delivering your infrastructure using an end-to-end CI/CD pipeline. We will also make use of infrastructure as code tools to manage the full infrastructure lifecycle with testing and feature flags.
Attendees will learn how this use case can help build patterns for others, and some ideas for improving their infrastructure provisioning and management!
Infrastructure as Code (IaC) is considered the predominant approach to managing cloud infrastructure at large scale. Terraform is the market-leading tool implementing this approach, with support for all the big cloud providers. It is extremely convenient to start new projects from scratch and automate your infrastructure right away. But what if you didn't start that way? What if you want to manage large amounts of pre-existing cloud resources with Terraform?
Terraform's import command is one building block, but using it manually for many resources is tedious and error-prone. A complete import mechanism has been announced by HashiCorp, but it is unclear when it will be implemented. In this talk I will show you how to work smart, not hard: we will automate the import into the statefile, generate the required Terraform code, and engineer the correctness of the result with automated tests. All hands-on and with practical examples that you can reuse to migrate your own infrastructure.
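The manual building blocks look roughly like this (resource names are illustrative; the automation presented in the talk wraps these steps):

```shell
# Pull an existing resource into the Terraform statefile
terraform import aws_s3_bucket.legacy my-existing-bucket

# Inspect the imported attributes to help write matching HCL
terraform state show aws_s3_bucket.legacy

# Exit code 0 (no diff) indicates code and reality converge
terraform plan -detailed-exitcode
```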
Infrastructure-as-code has been one of the key concepts within DevOps, bringing the benefits of a full development cycle to infrastructure and allowing better visibility of the operations process.
However, when we're writing and applying this IaC, we're often interacting with disparate systems, often geographically dispersed and with very different APIs and responses. To further complicate things, different teams also have different concerns: Will this break prod? Will this cost too much? Will this comply with our policies?
We'll be discussing the different kinds of testing that organisations are doing, what tools are right for each job, and how to keep the various teams happy. To do that, we'll be giving some examples with most of the popular IaC tools, where policy fits in and even covering where testing blurs the lines with observability.
The Foreman community maintains a collection of over 40 Ansible modules for interaction with the Foreman API and the various plugin APIs. This all started with two modules in ansible/ansible in 2016 and escalated from there.
Today we want to share the lessons learned from these three years of module development and maintenance.
- efficient abstraction -- all modules talk to the same base API, receive the same credentials and execute similar actions, let's abstract that away!
- good tests -- nobody wants to break stuff, but you have to
- migrating to new API libraries -- should be fairly easy with an abstraction layer, right?
- onboarding new contributors -- (un)surprisingly the hardest part after you've built something for your own needs.
We also want to talk about what's next: How can we further improve and ease the interaction? Which challenges do we see in the future?
You know the drill: DevOps is using tool(s) X. So obviously, observability can be solved by throwing some tools together as well; generally logs, metrics, and traces, often called the trifecta of observability.
But observability is not a tool — it is a property of a system: moving from many small black boxes to a more holistic view of your system. It involves tools, but not exactly three distinct features (especially if your solution happens to support those three). For example, if half your user base cannot access your service because of some bad DNS settings, and external health checks are not part of your trifecta, you are none the wiser.
This is not (just) a rant, but a look at the actual value to be added and some approaches to it. Like turning your logs into richer events that align with your business. Which is not solved by fancy tools alone.
Another year, another CfgMgmt community update. I'll be going over what's new, what changed, and what our plans might look like for the future
In Kubernetes, operators allow the API to be extended to your heart's content. If one task requires too much YAML, it's easy to create an operator to take care of the repetitive cruft and require only a minimal amount of YAML.
On the other hand, since its beginnings, the Go language has been advertised as being closer to the hardware, and is now ubiquitous in low-level programming. Kubernetes has been rewritten from Java to Go, and its whole ecosystem revolves around Go. For that reason, it's only natural that Kubernetes provides a Go-based framework to create your own operator. While this makes sense, it requires organizations willing to go down this road to have Go developers and/or train their teams in Go. While perfectly acceptable, this is not the only option. In fact, since Kubernetes is based on REST, why settle for Go and not use your own favorite language?
In this talk, I’ll describe what an operator is, how operators work, how to design one, and finally demo a Java-based operator that is as good as a Go-based one.
The Foreman project is 10 years old, but there's still plenty of things to change. In this presentation we'll go over the current Foreman architecture as well as Katello before looking to the future with Foreman 2.0 and Katello 4.0.
Although some excellent Puppet modules are provided for deploying Icinga 2, gluing everything together into a cluster with multiple satellite zones and redundancy can still be challenging, as you still need to provide the right configuration, endpoints, ... to each additional node.
This talk will introduce some techniques we use to automate this process as much as possible. By leveraging PuppetDB queries, custom Puppet functions, etc. we can achieve a level of automation where new nodes can automatically discover the endpoints they need, and bootstrapping additional satellites is as easy as flipping a feature toggle.
Additionally, we introduce techniques for leveraging Icinga 2 apply rules as much as possible, in order to define thousands of services without the need for superfluous exported resources. This reduces the stress on PuppetDB, resulting in a high-performance, highly scalable setup overall.
Ansible is an incredible tool for personal and team productivity, but sharing Ansible Role and Collection content between parts of your organization, or across teams, is hard. Sharing content publicly using galaxy.ansible.com is an option, but everything you post is now public. Alternatively, private git repos are viable, but it can be difficult to scale technical git knowledge and access within organizations.
Learn to enable Ansible collaboration within your organization using pulp_ansible - https://pulp-ansible.readthedocs.io/. You’ll learn how to upload, organize, and download Role and Collection Ansible content using the regular ansible-galaxy command line. Pulp_ansible is easy to install using Ansible itself, which we will also do. Once set up, it provides a directory that individuals and teams can use to share and consume Role and Collection content with others securely and privately. You’ll also learn about the quality checks pulp_ansible performs as it analyzes the content it hosts.
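As a rough sketch of what the client side can look like (server URL and repository path are illustrative; check the pulp_ansible docs for the exact galaxy endpoint layout):

```ini
# ansible.cfg — point ansible-galaxy at a private pulp_ansible server
[galaxy]
server_list = internal

[galaxy_server.internal]
url = https://pulp.example.com/pulp_ansible/galaxy/internal/
token = <api-token>
```

With that in place, the usual commands work against the private server, e.g. `ansible-galaxy collection publish my_ns-my_collection-1.0.0.tar.gz` and `ansible-galaxy collection install my_ns.my_collection`.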
From TOIL to continuous delivery of infrastructure: our tale of migrating our existing infrastructure-as-code tools & wrappers so that they can be used in a CD system, but with all of the control grey-beards, enterprises & governments expect.
If you’re seeking an open-source solution for managing your physical or virtual servers’ software content, then Katello could be for you! Through Katello, magnitudes of servers’ content can be easily and quickly managed via web browser. Files and software packages can be remotely synced or uploaded and then grouped into Lifecycle Environments at a per-content-unit level. Synced content in Katello can be easily deployed onto servers and stepped through Lifecycle Environments such as Development, Testing, and Production, or as many as are needed. Katello can also keep administrators aware of CentOS/RHEL errata -- bug fixes, enhancements, and security patches.
Already an avid user of Katello? Features new to Katello within the past few years will be highlighted, such as dependency solving and Composite Content View auto-publishing. Also, to inspire new ways to use Katello, we’ll explore content-enabled Smart Proxies and see a demonstration of an automated web server deployment. Whether you’re new to Katello or a pro, you're invited to join this community presentation and discussion!
At SUSE we love Salt for configuration management and infrastructure orchestration. We actively develop and integrate Salt as a core component of some of our products. At times we work with customers and users who chose Ansible as their configuration management engine. They invested time and effort designing Ansible playbooks to define their infrastructure and they don't want to waste the effort. But often they then realize that with Salt they can do even more to configure and control their infrastructure with real-time, persistent monitoring, event-driven orchestration, extreme modularity, and more.
The Fluorine release of Salt comes with a new module called ansiblegate which was created by SUSE and allows Ansible to be run from within Salt, offering the best of both worlds. You can execute any Ansible module directly using Salt and you can even reuse your own Ansible playbooks and apply them using Salt.
This session will show how Salt is able to run Ansible using ansiblegate providing users with the flexibility and optionality required to manage and secure diverse infrastructure at scale. SUSE loves openness, and this project gives Ansible and Salt users alike the ability to protect existing investments while leveraging the best infrastructure automation and configuration management for the job.
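In rough strokes, running Ansible through ansiblegate looks like this (exact function names vary between Salt releases, and the minion target and playbook path are illustrative):

```shell
# Ask a minion about an Ansible module exposed via ansiblegate
salt 'web*' ansible.help ping

# Call an Ansible module directly from Salt
salt 'web*' ansible.ping

# Reuse an existing playbook, applied through Salt
salt 'web*' ansible.playbooks playbook=/srv/playbooks/site.yml
```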
One of the most frequent complaints about Terraform is the state that configuration gets itself into after a repository has been living for a few years. The root cause is often that teams treat Terraform as configuration instead of code, and throw basic software engineering principles out the window as a result. In this talk, we'll look at proven patterns for writing Terraform configuration which ages well and remains an asset instead of a liability.
Most cloud providers now offer "easy to use" managed Kubernetes clusters, allowing us to get up and running in no time. Then our teams can just deploy containerized apps to them and life is good… Or is it? The truth is, this apparent simplicity fades quickly. The difficulties of adopting Kubernetes really hit hard on day two and beyond, when you need to integrate with existing infrastructure technologies and techniques. This includes IAM, networking, load balancing, DNS, monitoring, and logging, in addition to practices like continuous delivery, zero-downtime upgrades, and the principle of least authority. Most of us underestimate how difficult these production concerns will be — even though it's getting easier by the day, it's still no casual walk in the park. In this talk, we'll discuss common challenges we see with end users and how we have approached addressing them. You will gain a broad awareness of these challenges so you know what to be on the lookout for and, by being proactive and going in with eyes wide open, will significantly increase the odds of success in your own team's journey to containers and Kubernetes.
In the networking world, configuration management is as much a hot topic as it is in the systems world. In contrast to the systems world, the networking world is full of proprietary devices, each with their own configuration "language". The IETF has standardized (and many vendors have implemented) a protocol to configure network equipment (NETCONF) and a data modeling language (YANG) to represent configuration and state data of the devices.
This talk will give an introduction to YANG and NETCONF, discuss how they relate to concepts in systems configuration management and how it could be used to configure traditional systems.
Once the infrastructure is designed you should be able to deploy it effortlessly. This has long been the goal and can now become a reality!
Coffee and Snacks
This talk will go over adding an API and CLI for a new Compute Resource in Foreman.
We will start with a quick introduction to the Kubevirt Compute Resource.
Then, we will dive into the code needed for adding API endpoints and a Hammer plugin that uses the API to allow provisioning automation and management of a Compute Resource.
The main takeaway from the talk will be seeing how easy it is to add API and CLI support for a new Compute Resource.
It may be hard to imagine, but some sysadmins do not operate in ideal, tightly controlled circumstances. Apparently, not every developer, application or organization is ready for Kubernetes…
In this presentation we will share a real world use case: deploying and configuring a brand new natural history museum. We’ll show how we built the museum with open source software and config management tools, dealing with a broad set of technologies, a tight schedule, a sector dominated by traditional organizations fixated on proprietary solutions and a whole bunch of actual fossils. We’ll show how far we’ve come, and what choices we made along the way.
Specifically, in this talk we will showcase in detail some of the automation code we developed in the process. For example, we will elaborate on how we use Ansible and AWX to configure switch ports, (re)deploy computers from scratch with MAAS, configure those computers with the relevant role, and provision relevant content, all in one automated workflow.
This is the story of RLC, the Roche Linux Client, deployed globally across 13 sites and fully integrated into our corporate environment. This talk is about how open source tools like Ansible, Foreman and Aptly made it all possible, ultimately changing minds about how automation can bring value to our organisation.
This talk demonstrates technologies for automating Grafana dashboard generation and deployment.
Depending on the viewpoint, we can call Kubernetes a cloud, a scheduler, or a configuration management tool. Kubernetes is a configuration management tool for the container platform itself: for the deployment of the application containers, the routing and load balancing within the container network, and the provisioning of persistent storage for important container data.
Using the principle of Infrastructure as Code and a declarative model, we can define the desired state in files containing so-called Kubernetes resource configurations. By applying these configurations in a fully declarative way, we can tell Kubernetes the desired state. We can make use of version control, separation of code and data, and idempotence. By doing so, we achieve automation, standardization and reproducibility.
This talk focuses on the combination of Git and Kubernetes for the administration of the container platform. We use Git for version control of our Kubernetes resource configurations and perform a fully declarative application of these configurations to Kubernetes.
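A minimal sketch of the pattern: a resource configuration kept under Git version control and applied declaratively (image and names are illustrative):

```yaml
# deployment.yaml — desired state, version-controlled in Git
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17
```

Applying the repository with `kubectl apply -f ./manifests/` is idempotent: Kubernetes reconciles whatever differs between the declared and the actual state.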
Virtual machines are live things, but what if I want to manage them just like configuration? Salt helps you do it by defining the VMs using states. This talk will show off how to leverage this feature, quickly walking through the basics of Salt states before exploring the virt state. Then we will see how Salt uses libvirt to get this done.
Since this is also used by Uyuni, the session will provide insight into a real-life use case.
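As a teaser, a VM declared this way can look like the following (a sketch using the `virt.running` state; available attributes depend on your Salt release):

```yaml
# /srv/salt/vms.sls — declare a libvirt VM as a Salt state
web-vm:
  virt.running:
    - cpu: 2
    - mem: 2048
```

Applying the state (`salt 'hypervisor1' state.apply vms`) defines and starts the VM through libvirt if it is not already running.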
For years, there has been a shift to "Infrastructure as Code (IaC)". The code we write daily is not just the application itself, but also the definition of whatever cloud infrastructure the application needs. Tools like Terraform, Pulumi or Cloud APIs support this approach. The code base we start with is often simple and clear, the resulting infrastructure predictable. Growing code bases, entangled components and more advanced language features such as conditional configuration make it increasingly hard to foretell whether everything works as expected.
In this talk, you will learn ways to test your infrastructure code. We will cover a variety of tools and approaches that allow you to engineer the reliability of your production infrastructure and make you confident enough to roll out more infrastructure changes in less time. This is not a theoretical lesson. We will walk through real-life examples with visible benefits that you can apply yourself right away.
There are countless articles and blogs warning about the pitfalls in Git submodule usage, in effect resulting in an "avoid at all costs" recommendation. By contrast, this talk examines when and how to use Git submodules from a neutral point of view. Legitimate use cases, managing pitfalls, and alternatives will all receive their fair due.
Foreman is a well-known infrastructure management Swiss army knife. Recently it got a new reporting engine that can be used to gather interesting data about managed hosts. In this talk I'll show how to do that, discuss possible gotchas and explain best practices.
Ansible as a deployment tool, and Rudder as a compliance tool: how can you move your Ansible tasks to Rudder in order to get the best of both worlds?
Kubernetes provides a declarative API, so you can describe the desired state of the system. And then it is the role of the control plane to operate the cluster (make the actual state match the desired state).
But we still need configuration management for these API objects up to the point when they are applied to the cluster.
Helm helps to organize these configs into charts, template them, and manage releases. And GitOps lets you use a git repo as a single source of truth for the desired state of the whole system. Then all changes to this state are delivered as git commits instead of using kubectl apply or helm upgrade.
In this talk I will introduce the GitOps model for operating cloud native environments and give a short demo.
This talk introduces Dynflow, the dynamic workflow engine. It starts with a high level description of what the project does and what benefits it brings to its users. Next it describes the building blocks and commonly used action modules with which users can create complex workflows, with examples of where each module could be useful. The final part of the talk delves into Dynflow's internals, describing both the monolithic and the new split deployment models.
This talk will be about getting data for troubleshooting out of Foreman. I will look at multiple solutions, examine how easy they are to set up, and see which data they provide.
When using any sort of automation system for either remote execution or configuration management, one of the major advantages is the ability to reduce repetitive tasks. Often tasks in these scenarios involve using sensitive information such as passwords. In this talk we’ll look at how the SaltStack Pillar system can be used to store secrets and securely provide them to only the Salt minions that should have access to them. We'll look at how we can take advantage of external systems to store our Pillar data.
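A common shape for this (a sketch; the GPG renderer must be configured on the master, and the targeting patterns are illustrative) is to scope encrypted pillar data to just the minions that need it:

```yaml
# /srv/pillar/top.sls — only 'db*' minions ever receive these secrets
base:
  'db*':
    - dbsecrets
```

```yaml
# /srv/pillar/dbsecrets.sls — values encrypted at rest via the
# gpg renderer; decrypted on the master, sent only to matched minions
#!yaml|gpg
db_password: |
  -----BEGIN PGP MESSAGE-----
  ...
  -----END PGP MESSAGE-----
```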
Everybody loves Prometheus. Many exporters are available to gather specific data. You can download the binaries from GitHub and start them, and they will expose data via plain HTTP, without any firewalling or authentication. That would just complicate the whole setup!
A secure and automated rollout of exporters isn't easy. An authenticated connection from the Prometheus server to the exporters also requires some preparation. This talk will cover a proper concept and all the details needed to roll out multiple exporters to many systems, completely automated with Puppet.
Have you ever committed a personal token, password or SSH keys to GitHub, or to any public source code repository? The DevOps culture is rapidly being adopted, and we often hear of instances where important private data was pushed to GitHub. Considering the adoption of Ansible, this session covers:
- What is Ansible? Ansible basics.
- What is Ansible Vault?
- How to use Ansible Vault?
- Use cases of Ansible Vault.
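The core workflow fits in a few commands (file names are illustrative):

```shell
# Create a new encrypted file (prompts for a vault password)
ansible-vault create secrets.yml

# Encrypt an existing variables file in place
ansible-vault encrypt group_vars/all/vault.yml

# Run a playbook that uses vaulted variables
ansible-playbook site.yml --ask-vault-pass
```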
Using a Terraform Module and standing up one instance of a module is very common. And spinning up one Vault cluster is fairly straightforward. But what happens when you need to go from 1 instance to 4? This presentation covers how to develop and organize a Terraform project to manage multiple HA Vault Clusters for deployment. As a Senior Implementation Services Engineer for HashiCorp, I've been working with customers large and small to help them put Vault into production, and I will talk about the different strategies and patterns I've seen in the field.
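One pattern for going from one instance to four is iterating over a module (a sketch; module-level `for_each` requires a sufficiently recent Terraform, and the module path and variables are illustrative):

```hcl
# One module definition, four cluster instances
module "vault" {
  source   = "./modules/vault-cluster"
  for_each = toset(["dev", "staging", "prod", "dr"])

  cluster_name = "vault-${each.key}"
  node_count   = 3
}
```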
Come hear about what's new in CFEngine with the latest long-term supported release and share perspectives about future work to prioritize.
Versioning and keeping track of your DNS record changes, and automating all the things via Travis CI.
If you want a working Kubernetes on your laptop as a lab environment, then k3s, a lightweight Kubernetes distribution, is your friend. K3s is also an ideal way to get acquainted with Kubernetes and to test your own containerised applications before moving to a full Kubernetes cluster. This talk will introduce you to k3s, guide you through setting it up, and show some practical usages with a demo.
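Getting a single-node lab cluster running is a one-liner with the official install script:

```shell
# Install k3s and start it as a systemd service
curl -sfL https://get.k3s.io | sh -

# The bundled kubectl talks to the local cluster
sudo k3s kubectl get nodes
```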
Coffee and Croissants
I got 99 problems and a bash DSL ain't one of them.
Serverless (or Functions as a Service) tends to get thrown in the "paradigms nice for developers" bucket, but Serverless can provide meaningful benefits to Operations, DevOps, and SRE teams. In a world where everything is presented or controlled via an API, Serverless' event-driven, API-first philosophy can help these teams create new levels of automation that were typically the realm of runbook tooling.
In this talk we'll cover the various open source Serverless frameworks and platforms available. We'll show how to automate basic day to day operational task with Serverless functions. Finally, we will show how to build an open source, automated, Serverless based, event driven pipeline to automatically secure and protect a Kubernetes cluster. Attendees will walk away with fresh ideas on how to leverage Serverless based automation in their operational roles.
All the technical freedom and diversity we enjoy in our industry is the result of internal, grassroots evangelism. Over the last couple of decades, thought leaders have strongly opposed manufacturer-centric strategies and argued the case for Open Source and Open Standards. This ultimately led to the success of Linux and open source that we have today.
But now, two decades later, the IT industry is in upheaval again: All three major cloud providers have been pushing their serverless solutions in order to lure customers into a new form of vendor lock-in. And they succeeded: The number of serverless deployments has already surpassed those of container based ones.
“So this is how liberty dies … with thunderous applause”
I think there is no time to waste in reminding ourselves about Open Standards, their value to our industry, and why they are worth fighting for. Open Standards go beyond the boundaries of development and operations. They are the foundation of barrier-free interoperability and independent communications. The lecture aims to inspire the connection between both worlds and paradigms for a modern and flexible application infrastructure.
Coffee and Snacks
We have created a Rudder policy that covers every OS that we support at our customers, or that will be coming soon (e.g. the beta of a new version).
For our managed systems, it covers distro-/OS-specific settings with a generic rule that “what makes sense everywhere, will be applied everywhere”.
For human eyes, it needs to have a clear design that eases understanding and maintenance.
Nomad is a container orchestrator which is cross-platform, scalable, stable, and easy to operate. In this session, I will demonstrate how to create a Nomad cluster and show how its architecture and configuration differ from Kubernetes, making it easier and cheaper to operate!
We will then deploy a dotnet core application and some services into the cluster, and show how integration with other infrastructure and services can be done in a way which can reduce the complexity of your applications.
By the end of this session, you should have a good understanding of how most of your deployment problems can be solved without having to resort to the operational complexity and overheads of using Kubernetes.
Web Application Firewalls (WAF) often raise concerns about false positives, latency and other potential production problems. In addition, it is often said that DevOps and WAFs do not fit together. That is a pity, since a WAF helps protect us from web application attacks, like those described by the OWASP Top Ten. But what if you could ensure that introducing and using a WAF goes smoothly?
I will show how to integrate a WAF, with WAF testing automation, into a continuous integration (CI) pipeline. This pipeline ensures that developers receive early and frequent feedback about their WAF, saving them time and headaches down the line. In fact, DevOps, testing and automation only make sense if all components are part of the process.
Needless to say, as an OWASP Core Rule Set (CRS) developer and enthusiast, I introduced the CRS to Puzzle ITC when I joined them in 2019!
By providing YAML templates, we want to make it easy for developers to introduce WAFs into projects.
An overview of an actual bare metal provisioning scheme powered by Ansible and Cobbler, with support for several Linux flavors and virtual machines.
The Terraform project has grown a lot in popularity since its inception in 2015. Many resources that were not automated as code yet can now be managed this way.
The Terraboard project aims to provide a web interface to visualize and query Terraform states.
CRI-O is considered safer than Docker, as it lacks the latter’s privileged central daemon; it also produces less overhead because it does not include techniques already provided by, for instance, Kubernetes clusters. That’s why it is gaining popularity as an alternative to Docker. The large distributions have already switched to CRI-O as the default backend in their container platforms (OpenShift and CaaS). But I, personally, had to overcome a lot of uncertainties before actually starting to use it.
There were many open questions regarding the effort needed for a successful transition, change of habits or procedures:
- Is the transition irreversible?
- Where do I get images from? Are the runtimes compatible?
- Which new commands do I have to learn?
- Can I continue using my CI/CD system?
- Do I have to change my workflow?
Similar questions arose in my work environment, so I started to collect these questions – and the answers.
We may have too many good options to choose from, don't we? "Terraform is going to be replaced with Pulumi", I was told. Well, I suppose that Pulumi will be replaced with whatever users actually WANT to use… My observations on infrastructure management tooling, in 5 minutes.
Why should you allow all possible system calls from your application when you know that you only need some of them? If you have ever wondered the same, then this is the right talk for you. We will cover:
- What seccomp is in a nutshell, and where you could use it.
- Practical example with Elasticsearch and Beats.
- How to collect seccomp violations with Auditd.
Because your security approach can always use an additional layer of protection.
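As a taste of the auditd part, here is a minimal sketch (not from the talk) of pulling SECCOMP violations out of audit log lines; the sample log line is made up, and real field layouts vary by kernel and audit version:

```python
import re

# Minimal sketch: extract the offending process and syscall number from
# auditd SECCOMP records (field layout varies by kernel/audit version).
SECCOMP_RE = re.compile(r'type=SECCOMP .*?comm="(?P<comm>[^"]+)".*?syscall=(?P<syscall>\d+)')

def seccomp_violations(lines):
    """Yield (process_name, syscall_number) for each SECCOMP audit record."""
    for line in lines:
        m = SECCOMP_RE.search(line)
        if m:
            yield m.group("comm"), int(m.group("syscall"))

log = [
    'type=SECCOMP msg=audit(1578314234.061:3489): auid=1000 uid=0 '
    'comm="filebeat" exe="/usr/share/filebeat/bin/filebeat" '
    'sig=31 arch=c000003e syscall=41 compat=0',
    'type=SYSCALL msg=audit(...): arch=c000003e syscall=59 success=yes',
]
# Only the SECCOMP record matches; the SYSCALL record is ignored.
print(list(seccomp_violations(log)))
```

Mapping the syscall number back to a name is architecture-dependent and left out here.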
On your own
This talk is designed for people wanting to discover or learn more about how things are done on a day-to-day basis with Rudder. Based on our experience helping Rudder users achieve their automation and compliance goals, this session will detail real-world examples, and describe and explain step-by-step the development and application of policies.
We will cover:
- policy development workflow (advanced reporting analysis, debugging methods)
- best practices in policy organization
- various productivity tips
- reporting- and compliance-focused policies (component names, rules organization, etc.)
- commonly encountered problems
- API usage to collect information (Excel export, ...)
- audit vs. configuration
Using an automation system such as SaltStack is a great way to ensure that traditional servers and desktops are kept in a consistent state. Commonly run tasks such as software updates and system configuration can be done in a way that the results are always consistent. But what about network devices? Or devices where security restrictions prevent a Salt minion from running? The solution is the SaltStack Proxy minion system and Salt SSH.
Are you one of the many people migrating their projects to Kubernetes? Have you found setting up and maintaining various app and cluster configurations an ordeal? Enter Helm, the package manager for Kubernetes.
What does a package manager have to do with this? This session has the answer! We will walk through some of the lessons learned about stability and migration with the recently released major version of Helm – Helm 3. We will cover how to avoid common mistakes and pitfalls. We will also introduce the improvements to the Helm SDK which aid the automation of your deployments in code. To wrap things up, there will be working examples of how to automate deployments and configurations.
Vox Pupuli maintains a huge amount of puppet modules and utilizes GitHub heavily for maintenance and daily tasks. We've built an app to support all the module maintainers in their daily work.
A look at designing a git-driven CD system for software packages, used to deploy software to hundreds of thousands of nodes continuously.
A Tale of Upgrading From MongoDB-based Pulp 2 to PostgreSQL-based Pulp 3 in Katello
Whether you are a developer, system administrator, or simply a consumer of software, upgrades can be a painful experience. When was the last time your prescribed hour-long upgrade turned into a full-day endeavor? Have you been procrastinating upgrading your project’s EOL’d backend service for fear of breaking APIs? In the Katello project, we certainly have experienced these issues ourselves. After learning lessons the hard way when upgrading our backend service Pulp years ago, we’ve created a plan and are currently executing our latest upgrade to Pulp 3. The upgrade could be painful if we’re not careful. How can we switch to an in-development service with completely new concepts and APIs in a way that is easy for developers but still produces stable releases?
In this presentation, I will share a development story that covers prior mistakes we made and the lessons learned, the planning and architecture of incrementally introducing Pulp 3, and how we’ve maintained clear communication across the Katello and Pulp 3 teams to keep our efforts aligned.
This talk’s audience isn’t limited to developers alone; anyone in configuration management, system administration, or management fields should find it relatable as well.
A large application landscape, handling 96,000 requests per minute, has been successfully migrated to the cloud.
That migration was not only about the application itself.
While we applied a lift-and-shift approach to the application, managing the target infrastructure became crucial.
We needed to make sure that a team of 40 people was able to reproduce environments consistently across many geographies. Introducing Infrastructure as code was one of the best decisions we made.
This talk is about our journey from a client's datacenter to a fully customized cloud platform on Azure.
You will see how we used Terraform and Azure DevOps to create a platform for a connected vehicle backend.
If we say that infrastructure is code, then we should reuse development practices for infrastructure, e.g.:
- S.O.L.I.D. for Ansible.
- Pair devopsing as part of XP practices.
- Infrastructure Testing Pyramid: static/unit/integration/e2e tests.
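As an illustration of the bottom of that pyramid, a minimal unit-level sketch: testing a config render in isolation, with no hosts or cloud APIs involved. The template, names and port check are invented for the example:

```python
from string import Template

# Unit level of the testing pyramid: render a config template in isolation.
# Template and values are made up for illustration.
NGINX_TMPL = Template("server { listen ${port}; server_name ${name}; }")

def render(port, name):
    """Render a tiny vhost snippet, validating inputs first."""
    if not (0 < port < 65536):
        raise ValueError("port out of range")
    return NGINX_TMPL.substitute(port=port, name=name)

def test_render_defaults():
    out = render(8080, "example.internal")
    assert "listen 8080;" in out
    assert "server_name example.internal;" in out

test_render_defaults()
print("unit tests passed")
```

Integration and e2e layers would then exercise real hosts, which is exactly why you want most checks down here where they are cheap.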
Designing the perfect v2 of your API is never enough: look at Perl6 or Python3. In the case of Puppet, look at the cleverly named "modern 4.x function API". The original function API was kind of a mess. It allowed global object pollution, was slow, leaked across environments, and in general contributed to Puppet master instability. But it was "good enough" and so building a new function API that addressed these issues didn't mean that people updated their modules. So I did it for them. I built a tool that ports functions from the old API to the new API and submitted a few hundred auto-generated pull requests to Puppet modules all over the Internet (maybe even to one of yours…)
I'll show you how the tool itself works, and the pull request workflow, and then talk about what this means for raising the bar on ecosystem engagement.
The Foreman community maintains a collection of over 40 Ansible modules for interaction with the Foreman API and the various plugin APIs. This all started with two modules in ansible/ansible in 2016 and escalated from there.
Today we want to show how development of our modules works:
1. setting up a test environment (Foreman/Katello VM)
2. preparing a development environment (Python virtualenv with Ansible and dependencies)
3. understanding and adjusting the test playbooks
4. recording test fixtures (it's 2020 and we still ♥ VCR)
5. extending and fixing existing modules
6. writing completely new modules
If you are considering a lift-and-shift from on-prem to public cloud, this talk is for you. Our team runs a centralized build farm for Nokia's software division. The build farm consists of a fleet of Jenkins Masters, a Kubernetes cluster, artifact storage, and various back-office services for monitoring and automation. This talk gives an overview of how we migrated our build farm from an on-prem OpenStack-based datacenter to AWS. It was a successful migration, but we made several mistakes along the way and would like to share what we learned.
Stack: Ansible, AWX, Terraform, Jenkins, Artifactory, Prometheus, Grafana, Elasticsearch, Kubernetes, AWS EKS
In this talk, I will demonstrate the use of Chef Inspec for testing all your infrastructure, no matter how you build it.
I will cover traditional testing as well as compliance testing on servers, plus how you can verify the state of other types of infrastructure using APIs.
Plugin Oriented Programming, also known as POP, is a new programming paradigm and open source project developed by SaltStack. Like any programming paradigm, learning POP means thinking about programming differently. Using POP to create a plugin-oriented project is easy. This introduction will help you learn how POP works and how to get started with a new POP project. In this talk we'll look at ways POP breaks new ground in:
- Memory management
- Dealing with complexity
- Subs and patterns
- App merging
We will also provide a demonstration of POP in action.
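As a rough intuition for the hub-and-plugins idea, here is a conceptual sketch; this is NOT the actual `pop` library API, just plain Python illustrating the shape of the pattern:

```python
# Conceptual sketch of plugin-oriented programming, not the real `pop` API:
# a shared hub object onto which plugins register callables under "subs".
class Hub:
    def __init__(self):
        self._subs = {}

    def add_sub(self, name):
        """Create a namespace (sub) that plugins can contribute to."""
        self._subs[name] = {}

    def register(self, sub, name, func):
        """A plugin contributes a function to a sub."""
        self._subs[sub][name] = func

    def call(self, sub, name, *args, **kwargs):
        """Callers go through the hub, never to a plugin directly."""
        return self._subs[sub][name](*args, **kwargs)

hub = Hub()
hub.add_sub("greet")
# A "plugin" is just code contributing functions to a sub:
hub.register("greet", "hello", lambda who: f"hello, {who}")
print(hub.call("greet", "hello", "pop"))  # hello, pop
```

The real project adds dynamic discovery, contracts and app merging on top of this basic indirection.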
The maxim "Test all the things" has not only become a catchphrase, but is also correct in its basic idea.
Ansible is no exception. Not least because of the planned restructuring towards a collection system, where single roles including their necessary modules should be available as importable entities, it is necessary to be able to test individual Ansible roles.
The Ansible Molecule framework – officially in its "new home under Ansible by Red Hat" since October 2018 – offers these possibilities and tries to support different providers. In this talk we want to provide an insight into Molecule usage and discuss the possibilities and challenges that it brings.
Introducing Tanka, a scalable Jsonnet based tool for deploying and managing Kubernetes Infrastructure
This year, we have only released one major version. After 5.0, we moved on to... the 6.0!
What has happened in RUDDER since last year? Let's discover this new version together, as well as all the new plugins: Ansible, OpenSCAP, Zabbix... And finally, let's discuss the next developments of RUDDER for 2020!
In Rudder 5.0 we introduced a plugin ecosystem to make Rudder more flexible and adaptable to user needs. Plugins bring new functionality to Rudder: connecting it with other tools, or simply packaging reusable policy sets, for example. Even though the currently available plugins cover a large range of functionality, you may need to create new ones or extend current ones to meet your specific needs.
This talk will go through the process of creating and maintaining plugins, focusing on those that involve actual configuration elements. This will let us see the current possibilities for importing, exporting, sharing and maintaining subsets of configuration policies between distinct Rudder environments.
In 2013 the Foreman project started to wrap Puppet modules into an installer. After 6 years it's good to look back at how it went.
Coffee and Snacks
Using an automation system such as SaltStack is a great way to ensure that traditional servers and desktops are kept in a consistent state. Commonly run tasks such as software updates and configurations can be done in a way that the results are always consistent. When using SaltStack this is accomplished using state files.
These state files are usually written using YAML, a human-readable data-serialization language, which presents the dictionaries and lists that SaltStack uses in a friendly format. Occasionally, though, we need to go beyond the capabilities of what YAML can provide.
In this talk we'll explore some of the other ways that Salt states can be written, including using Jinja formatting and writing state files in programming languages such as Python.
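For instance, Salt's `py` renderer lets a state file be a Python module whose `run()` function returns the same data structure the YAML would produce. A minimal sketch (the state IDs, file paths and contents are made up for illustration):

```python
#!py
# Sketch of a Salt state written with the `py` renderer instead of YAML:
# the file's run() function returns the same dict structure YAML would.

def run():
    """Install and run nginx, with one (made-up) vhost file per site."""
    states = {
        "nginx": {
            "pkg.installed": [],
            "service.running": [{"require": [{"pkg": "nginx"}]}],
        }
    }
    # Logic that would be awkward in plain YAML is trivial here:
    for site in ("example.com", "example.org"):
        states[f"/etc/nginx/conf.d/{site}.conf"] = {
            "file.managed": [{"contents": f"# vhost for {site}"}]
        }
    return states
```

Jinja in YAML covers simple loops like this; a real renderer shines once conditionals and data transformations pile up.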
A lot of servers still run RHEL 7 or CentOS 7, but running Ansible with ARA on them moves you into dependency hell.
ARA needs Python 3; CentOS delivers it, but not in /usr/bin/python. ARA needs nodejs; CentOS only has an old version...
I will try to show you how I solved this hell and created a working Ansible-with-ARA environment.
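One common escape from that dependency hell is an isolated virtualenv built from whichever python3 is available, leaving /usr/bin/python alone. A minimal sketch (paths and package names are illustrative, POSIX layout assumed):

```python
import tempfile
import venv
from pathlib import Path

def build_ara_env(target):
    """Create an isolated env for ARA without touching /usr/bin/python."""
    target = Path(target)
    # with_pip=True would also bootstrap pip; left off to keep the sketch
    # minimal (and offline-friendly).
    venv.EnvBuilder(with_pip=False, clear=True).create(target)
    # Inside the venv one would then run (not executed here):
    #   <target>/bin/pip install ansible "ara[server]"
    return target / "bin" / "python"  # POSIX layout assumed

env_dir = tempfile.mkdtemp(prefix="ara-venv-")
py = build_ara_env(env_dir)
print(py)
```

Everything the venv's pip installs stays under the env directory, so the system Python packages never conflict with ARA's requirements.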
The Internet of Things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Embedded with electronics, internet connectivity, and other forms of hardware (such as sensors), these devices can communicate and interact with others over the internet, and they can be remotely monitored and controlled.
This means that internet connected devices are moving out of the data centers and into a new environment which brings a range of new challenges. A dominating technology in this area is GNU/Linux and although best practices have been established for GNU/Linux in the desktop and server world, the IoT space is still in its infancy and is most accurately comparable to the wild west due to the vast options in hardware configurations and software stacks.
In this presentation Mirza will cover specific areas within this space, highlight the challenges and how they are solved today using open-source technologies.
Key areas that Mirza will cover are:
- Characteristics of the connected device environment
- Software updates and life cycle management
- Device management (Configuration management)
We will also take a closer look at some of the trends that are emerging as “server technologies” move towards edge devices.
Rudder is a graphical configuration management tool, which is quite an unusual approach in this domain. This talk is about why and how we are now introducing a new DSL for RUDDER. If you have never considered RUDDER because it didn't have a DSL, or if you want to discuss language design with us, now is the right time to attend this talk!
When building infrastructure with technologies such as Kubernetes and Terraform, the complexity of configuration quickly becomes hard to manage, especially with multiple engineers contributing code and config. Kapitan was created at DeepMind to manage complex environments to generate config, documentation and even scripts.
Mark Burgess released the initial version of CFEngine in 1993. It's been used, and developed by people all around the world since then. It's a large C codebase, with a lot of history. In the last few years, we've been taking important steps to prepare the codebase for the future. We are making it more modular, reusable, safe, and maintainable.
We added Puppet support to mgmt quite early on. You can run Puppet manifests through mgmt's engine, and mgmt can in turn rely on Puppet to synchronize resources that mgmt does not natively support. This incurs a significant performance overhead to each resource check, though.
This presentation showcases a new prototype feature of mgmt that allows for the use of the Puppet bridge with no performance penalties. It features live demos, and the concept and implementation are briefly explained.
Ansible, the radically simple IT automation engine, is no stranger to Pulp, the juicy software repository management tool.
In addition to its Ansible-based installer and its plugin for Ansible content, we present a third way for Pulp to interact with Ansible:
'Ansible Modules for Pulp', alias Squeezer, is a collection of Ansible modules that leverage Pulp's feature-rich REST API in a convenient way.
To this end, we show how repeated repository workflows can be mapped to Ansible tasks.
We also discuss the possibility of enriching bug reports with reproducers in the form of playbook snippets.
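Under the hood, such modules boil down to REST calls against Pulp 3. A hedged sketch of building (not sending) one such request; the server URL and repository name are made up, and the endpoint follows the Pulp 3 file plugin layout:

```python
import json
import urllib.request

# Sketch of what a module like Squeezer does under the hood: talk to the
# Pulp 3 REST API. The base URL and repository name below are made up.
BASE = "https://pulp.example.com/pulp/api/v3"

def create_repo_request(name, description=""):
    """Build (but do not send) the POST request creating a file repository."""
    body = json.dumps({"name": name, "description": description}).encode()
    return urllib.request.Request(
        f"{BASE}/repositories/file/file/",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_repo_request("my-iso-mirror")
print(req.get_method(), req.full_url)
```

An Ansible module wraps exactly this kind of call in idempotence checks: first GET the resource, then POST/PATCH only when the desired state differs.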
Data-driven configuration management is a design pattern that can reduce complexity, improve outcomes, and empower engineers to make configuration changes without having to modify code. The new classfiltercsv function in CFEngine 3.14 makes it straightforward to implement a data-driven approach, allowing for:
- Using configuration rather than code to do configuration management
- Deploying simpler code that is easier to use and maintain
- Focusing on higher-order issues
While this talk demonstrates how to use CFEngine's classfiltercsv function to produce these results, the discussion of the approach to data-driven configuration may be of interest to anyone in the configuration management community.
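The same data-driven pattern, sketched in language-neutral Python rather than CFEngine policy: the CSV holds the "what", and generic code filters rows by the classes (conditions) defined on the host. The data and field names are invented for the example:

```python
import csv
import io

# Language-neutral sketch of the data-driven pattern behind classfiltercsv:
# the CSV is configuration, the code below is generic and rarely changes.
CSV_DATA = """\
class,path,mode
any,/etc/motd,0644
debian,/etc/apt/apt.conf.d/50unattended,0644
redhat,/etc/yum.conf,0644
"""

def rows_for(defined_classes):
    """Keep only the rows whose class is defined on this host."""
    reader = csv.DictReader(io.StringIO(CSV_DATA))
    return [r for r in reader if r["class"] in defined_classes]

# On a Debian host, where the classes "any" and "debian" are defined:
for row in rows_for({"any", "debian"}):
    print(row["path"], row["mode"])
```

Engineers change the CSV to change behavior; nobody has to touch the policy code, which is the point of the approach.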
Rudder is built around an API/web application that allows users to configure and verify their configurations. Relying on agents on every system, it checks and remediates configurations every 5 minutes and centralizes the results. Each result is made up of hundreds of events that are historized, and each configuration change involves calculating and displaying configurations and compliance for users within a reasonable time.
A Rudder instance can handle 20,000 nodes. Can you imagine what this implies from a network, CPU and storage point of view? How do we reach and maintain this level of performance? What were the different steps that made it possible? And what tools have been put in place?
This presentation will explain the technical stack used (Scala, PostgreSQL, C and Rust), as well as the path, failures and successes that today allow us to reproduce environments, and to test and validate the hypotheses needed to achieve and keep these results.
Why integrate Ansible and Foreman with each other, and how do you get the most value when using Ansible from Foreman? I will describe two primary approaches to using Ansible from Foreman: first, using Ansible for configuration management, where hosts are kept in a predefined state; second, using it in a more remote-execution fashion. The talk goes over several scenarios, demonstrates how Foreman can leverage Ansible to effortlessly solve the issues present in each scenario, and discusses which approach is better for each use case.
CUE is a new abstract-oriented constraint-based configuration language and set of APIs. This talk dives into how it came about and the problems it solves.
We have been able to test our Puppet modules using rspec-puppet and serverspec for a while now, and the quality of our code is improving because of it. This talk will introduce the new kid on the block: Litmus. It will show you how to use Litmus to test Puppet modules and how to convert your existing modules to make use of it.
Bare-metal discovery and Secure Boot provisioning using Foreman
This feature enables bare-metal discovery of unknown systems on the network. The systems send their facts to Foreman, which can then be used to provision hosts with different parameters. The plugin also provides the ability to auto-provision systems according to rules that can be defined beforehand. The talk covers:
1. Introduction to the discovery plugin. 2. Requirements to use discovery. 3. PXE-less and PXE discovery scenarios. 4. Demo.
Security is the most important aspect of any department, and so secure booting should be too, because it is possible to have intrusion and malicious code running while the operating system boots. UEFI Secure Boot was created to protect the boot process from malicious component-replacement attacks. The latest operating systems offer support for this by including a kernel and associated drivers signed by the UEFI CA certificate. Foreman also supports provisioning with Secure Boot. This talk covers:
1. What is Secure Boot? 2. Introduction to basic Foreman provisioning. 3. Secure Boot provisioning using Foreman. 4. Challenges. 5. Demo.
mgmt has been around for 3+ years but unless you have taken the time to dig in, you probably don't know a lot about it.
As the project was presented early and has evolved a lot in recent months, I would like to give a special-effect-free presentation about what it can do and how it can help you manage your infrastructure.
Yomi (Yet one more installer) is a new proposal for an OS installer that is built on top of the features that a software configuration tool provides.
Rudder is currently used to manage more than 10k machines from the same central server, but our agent-server communication (using HTTP for inventory collection, syslog for reporting and a custom protocol for policy updates) was limiting us in terms of security, performance and extensibility.
With Rudder 6, we have introduced a new communication infrastructure to match present and future challenges, with consistent security, better performance, and improved continuity through immediate action triggers, while staying compatible with our fully asynchronous, pull-based workflow.
The talk will focus on the design choices we made, from the use of Rust for our new server component to the network and message protocols we use. It will also highlight the reasons and constraints behind them, including ensuring minimal operational overhead and an easy, smooth transition with no breaking changes.
Mgmt is a real-time automation tool that is fast and safe. One goal of the tool is to allow users to model and manage infrastructure that was previously very difficult or impossible to manage.
The tool has two main parts: the engine, and the language. This presentation will have a large number of demos of the language.
To showcase this future, we'll show some exciting real-time demos that include scheduling, distributed state machines, and reversible resources.
As we get closer to a 0.1 release that we'll recommend as "production ready", we'll look at the last remaining features that we're aiming to land by then.
Finally we'll talk about some of the future designs we're planning and discuss our free mentoring program that helps interested hackers get involved and improve their coding, sysadmin, and devops abilities.
While best practices do evolve over time (and sometimes 'advice' changes completely), there's also the other end of the spectrum, where style guides are ignored, house 'styles' take over, and anti-patterns and worse prevail.
Fed up with being told 'it works, why shouldn't I write Puppet this way?', I present a selection of Puppet code witnessed in the wild.
(All from non-Puppet-Enterprise setups. PE users all write beautiful code and none of it ever looks like what follows, right?)
From merely old code (with better and simple-to-achieve improvements) to the downright ugly, stupidly fragile or just plain broken:
- Old constructs (from a land before puppet 4) (create_resource, anchor etc).
- Abuse of hiera
- Too much data
- The super hash. $data = hiera_hash('data') $jdk_version = $data['oracle']['java']['jdk']['version']
- Calling hiera from templates with local scope vars used in the hierarchy
- Abuse of
- Why use a file resource when you can have an exec call
- Replacing an old script with 30 chained execs
- Ruby based facts that shell out to
- Mono repos with forge modules seemingly randomly committed in then modified in place.
- Writing everything from scratch (even when really simple and popular forge modules exist)
Finally, is there anything we can do about this (other than venting our frustration at conferences)?
Pulp (pulpproject.org) enables users to organize and distribute software. Now that Pulp 3.0 is generally available, it’s time to integrate it into your software delivery workflows. While the REST API is the primary integration point, it is the OpenAPI schema definition of that API that enables users to build client libraries in various languages. These clients simplify the integration with Pulp 3.
This talk will provide a brief introduction to OpenAPI. This will be followed by a demonstration of how to use the Pulp’s OpenAPI schema to generate a Python client for Pulp’s REST API. The Python client will then be used to perform various workflows in Pulp 3.
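To get a feel for what client generation starts from, here is a sketch that walks a made-up, minimal OpenAPI document and lists the operations a generated client would expose:

```python
import json

# Sketch: walk an OpenAPI schema and list the operations a generated
# client would expose. The schema below is a made-up minimal example,
# loosely shaped like Pulp 3's API paths.
SCHEMA = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/pulp/api/v3/repositories/": {
      "get":  {"operationId": "repositories_list"},
      "post": {"operationId": "repositories_create"}
    }
  }
}
""")

def operations(schema):
    """Return (operationId, HTTP method, path) for every operation."""
    ops = []
    for path, methods in schema["paths"].items():
        for method, spec in methods.items():
            ops.append((spec["operationId"], method.upper(), path))
    return sorted(ops)

for op in operations(SCHEMA):
    print(op)
```

Generators like openapi-generator do essentially this walk, then emit one typed client method per operationId.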
Coffee and Croissants
Hack Day Room
This workshop will focus on basic knowledge, provisioning and orchestration for those new to Foreman.
Foreman will be holding its usual Foreman Construction Day on Wednesday 5th February 2020, right after CfgMgmtCamp. Please join us!
The aim is to build upon the previous 2-4 days of talks and discussions, and put it to use! We’re open to all members of our community, such as:
- New users looking for help getting started with Foreman
- Users looking to start contributing
- Code contributors to any of the core projects
- UX design / improvement
- Plugin authors (new or existing plugins)
- Translators / documentors
This is a great opportunity to get (more) involved in the community, and spend some face-to-face hack time with other Foreman devs. Hope to see you there!
Security policies are increasingly complex and demanding for operational teams to implement. How can we be sure that our security policies are properly applied on all our servers, other than through a massive and costly audit? Even if they were compliant when they were created, how do you know whether they remain perfectly compliant after a few days / weeks / months?
More generally, the problem can be generalized to a devsecops approach: how to ensure that teams work together to make system infrastructures more reliable and secure?
Discover how RUDDER, a solution built in the DevOps spirit that allows teams to work together, can bring its know-how to a hands-on workshop where we deploy your first rules together.
A half to full day training session aimed at people that are new to Containers and Kubernetes. Each attendee will have access to a Kubernetes cluster and will finish the training with the confidence to say “I Know Kubernetes”.
We are going to do two things in one workshop (how is that even possible?):
- Inspired by mob programming – we are going to try an experiment in mob operations. We will get a big screen, and do everything together.
- The thing we are going to mob operate is Kubernetes - we will start from scratch and see how far we can get in one day.
We want to make this a learning experience for everyone in the room, so beginners and people experienced in mob programming and/or Kubernetes are welcome. The workshop facilitators are beginners as well, so we are here to learn together with you!
We want to have fun, so bring your best self and get your YAML editors ready!
Uyuni is a software-defined infrastructure and configuration management solution. You can use it to bootstrap physical servers, deploy and update packages and patches – even with content lifecycle management features – create VMs for virtualization and cloud, build container images, track what runs on your Kubernetes clusters, audit your machines and containers for CVEs, etc. All using Salt under the hood!
Mgmt is a real-time automation tool that is fast and safe.
It uses a real-time, reactive programming language to model the desired state over time, and a powerful event-driven engine to apply this state.
In this workshop, we'll present a number of live demos, and get you running mgmt yourself, and writing your first module.
This is the modelling language and tool that will let module authors build autonomous self-hosted mail servers, well-managed personal "home clouds", and other useful bits.
Finally we'll talk about some of the future designs we're planning and make it easy for new users to get involved and help shape the project.
A number of blog posts on the subject are available: https://purpleidea.com/tags/mgmtconfig/
Attendees are encouraged to read some before the workshop if they want a preview!
Attendees must arrive with a modern GNU+Linux machine, running golang 1.11 or newer or an equivalent virtual machine.
You will also need to complete the mgmt "quick start guide" to get mgmt running before you arrive.
Doing this will leave us a maximum amount of time for hands on experience with mgmt.
If you have any difficulties, please join the #mgmtconfig IRC channel on Freenode and ask purpleidea for help.
Learn all about Infrastructure as Code: from concepts, to serverless and container technologies, including several hands-on labs to teach you best practices for managing infrastructure in public cloud and Kubernetes.
In this workshop, we will be using a new Infrastructure as Code tool, Pulumi.
You will leave this workshop with a better understanding of modern cloud architectures, the role infrastructure as code has to play in them, and with actionable best practices you can bring back to your teams today.
What You'll Learn:
We will begin with an introduction talk, briefly covering a number of topics, and then transition to hands-on labs to teach you the practicalities of using infrastructure as code to manage public cloud infrastructure on AWS and Kubernetes. You will leave knowing everything you need to be successful with infrastructure as code in your team.
Modern Cloud Architectures: Networking, clustering, containers, Kubernetes, serverless
Modern Infrastructure as Code: immutable infrastructure, automated delivery, policy as code
Infrastructure Patterns: provisioning infrastructure, versioning infrastructure, scaling applications, building and publishing container images, packaging and reusing infrastructure best practices
Collaboration session for the Puppet ecosystem.
On your own
Ansible provides a pluggable architecture that makes it easy to extend Ansible's functionality. This workshop will be a hands-on session where I will walk through the development process of an Ansible module.