
Planet CentOS
Updated: 3 days, 2 hours ago

CentOS Blog: Student supercomputing is #PoweredByCentOS at SC18

November 14, 2018 - 16:45

I'm at SC18 - the premier international supercomputing event - in Dallas, Texas. Every year at this event, hundreds of companies and universities gather to show what they've been doing in the past year in supercomputing and HPC.

As usual, the highlight of this event for me is the student cluster competition. Teams from around the world gather to compete on which team can make the fastest, most efficient supercomputer within certain constraints. In particular, the machine must be built from commercially available components and not consume more than a certain amount of electrical power while doing so.

This year's teams come from Europe, North America, Asia, and Australia, drawn from a pool of applicants from hundreds of universities that has been narrowed down to this list.

Of the 15 teams participating, 11 are running their clusters on CentOS. There are two running Ubuntu, one running Debian, and one running Fedora. This is, of course, typical at these competitions, with CentOS leading as the preferred supercomputing operating system.

The teams are given a variety of projects to work on before they get here, and then there is one surprise project that is presented to them when they arrive. They have 48 hours to work on these projects, and the winner is selected based on benchmarks and power consumption.

You can read more about the competition, and about the teams participating, on the SCC website.



Categories: IT

CentOS Blog: OKD v3.11 packages now available

November 9, 2018 - 20:18

We would like to announce that the OKD v3.11 rpms have been officially released and are available at [1]

OKD is the Origin community distribution of Kubernetes.

In order to use the released repo [1], we have created and published an rpm (which contains the yum configuration file) [2] in the main CentOS Extras repository. The rpm itself has a dependency on centos-release-ansible26 [3], which is the Ansible 2.6 rpm built by the CentOS Infra team.

Should you decide not to use the centos-release-openshift-origin3* rpm, it will be your responsibility to get the Ansible 2.6 required by the openshift-ansible installer.

Please note that due to ongoing work on releasing CentOS 7.6, the repo is in freeze mode - see [4] - and as such we have not published the rpms to [5]. Once the freeze ends, we'll publish the rpms.

Kudos goes to the CentOS Infra team for kindly giving us a waiver to make the current release possible.

Thank you,
PaaS SIG team

Reference URLs:




CentOS Blog: Schedule, Registration now available for CentOS Dojo at FOSDEM

November 7, 2018 - 16:35

We are pleased to announce the (tentative) schedule of talks for the
upcoming CentOS Dojo in Brussels, which will be held on the day before
FOSDEM - February 1, 2019 - at the Grand Place Marriott.

Details, and the schedule, are now available. (Schedule subject to change.)

Registration is free, but we need to know how many people are coming,
for catering and space purposes. You can register today at:

See you in Brussels!


Fabian Arrotin: Implementing Zabbix custom LLD rules with Ansible

November 7, 2018 - 00:00

While I have to admit that I've been using Zabbix since the 1.8.x era, I also have to admit that I'm not an expert, and that one can learn new things every day. I recently had to implement a new template for a custom service that is multi-instance aware: it can be started multiple times with various configurations, each with its own set of settings (like the tcp port on which to listen), and the number of running instances can differ from one node to the next.

I was thinking about the best way to implement this in Zabbix, and my initial idea was to have one template per possible instance type, which would use macros defined at the host level to know which port to check, etc. - in effect backporting into Zabbix what configuration management (Ansible in our case) already has to know to deploy such an app instance.

But in parallel, I have always liked the fact that Zabbix itself has internal tools to auto-discover items, and so triggers for those: that's called Low-Level Discovery (LLD for short).

By default, if you use (or have modified) the stock Zabbix templates, you can see this in action for the mounted filesystems or the present network interfaces in your linux OS. That's the "magic": you added a new mount point or a new interface? Zabbix discovers it automatically, starts monitoring it, and also graphs values for it.

So back to our monitoring problem and the need for multiple templates: what if we could use LLD too, and have Zabbix check our deployed instances (multiple ones) automatically? The good news is that we can: one can create custom LLD rules, so it works OOTB with only one template added for those nodes.

If you read the link above about custom LLD rules, you'll see some examples of a script being called at the agent level, from the zabbix server, at a periodic interval, to "discover" those custom checks. The interesting part to notice is that what is returned to the zabbix server is json, tied to a key that is declared at the template level.

So it (usually) goes like this :

  • create a template
  • create a new discovery rule, give it a name and a key (and optionally add Filters)
  • deploy a new UserParameter at the agent level, reporting to that key the json string it needs to declare to the zabbix server
  • Zabbix server receives/parses that json and, based on the checks/variables declared in it, creates Item prototypes, Trigger prototypes and so on from those returned macros
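
As an illustration of that flow, here is a minimal sketch (Python, with made-up instance data; not part of the original setup) of building a discovery json in the shape Zabbix LLD expects:

```python
import json

# Hypothetical instances, mirroring what a custom discovery script could report.
instances = [
    {"name": "namespace_rpms", "rpc_port": 8443, "http_port": 8444},
    {"name": "namespace_centos", "rpc_port": 8445, "http_port": 8446},
]

# Zabbix LLD expects a top-level "data" list of objects whose keys are
# {#MACRO}-style names; the server expands each entry into item/trigger
# prototypes with those macros substituted.
discovery = {
    "data": [
        {
            "{#INSTANCE}": i["name"],
            "{#RPCPORT}": str(i["rpc_port"]),
            "{#HTTPPORT}": str(i["http_port"]),
        }
        for i in instances
    ]
}

print(json.dumps(discovery))
```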

Magic! ... except that in my specific case, for various reasons, I never allowed the zabbix user to actually launch commands, due to limited rights and also the SELinux context in which it's running (for interested people, it's running in the zabbix_agent_t context).

I certainly didn't want to change that base rule for our deployments, but the good news is that you don't have to use UserParameter for LLD! It's true that if you look at the existing discovery rule for "Network interface discovery", you'll see the key net.if.discovery that is used for everything after, but its Type is "Zabbix agent". We can use something else from that list, just as we already do for a "normal" check.
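
For comparison, a payload in the same shape as what the stock net.if.discovery key returns can be consumed like this (a Python sketch; the interface names are invented):

```python
import json

# Example payload in the shape of a net.if.discovery response.
payload = '{"data": [{"{#IFNAME}": "lo"}, {"{#IFNAME}": "eth0"}]}'

# The server iterates over "data" and substitutes the {#IFNAME} macro
# into the item and trigger prototypes of the discovery rule.
names = [entry["{#IFNAME}"] for entry in json.loads(payload)["data"]]
print(names)
```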

I'm already (ab)using the Trapper item type for a lot of hardware checks. The reason is simple: as the zabbix user is limited (and I don't want to grant more rights to it), I have some scripts checking for hardware raid controllers (if any), etc., and reporting back to zabbix through zabbix_sender.

Let's use the same logic for the json string to be returned to the Zabbix server for LLD (as yes, Trapper is in the list for the discovery rule Type).

It's even easier for us, as we'll control that through Ansible: it's what is already used to deploy/configure our RepoSpanner instances, so we have all the logic there.

Let's first start by creating the new template for repospanner, and create a discovery rule (detecting each instance and its settings):

You can then apply that template to host(s) and wait ... but first we need to report back from the agent to the server which instances are deployed/running. So let's see how to implement that through Ansible.

To keep it short, in Ansible we have the following variables (default values, not the correct ones) from roles/repospanner/default.yml:

    ...
    repospanner_instances:
      - name: default
        admin_cli: False
        admin_ca_cert:
        admin_cert:
        admin_key:
        rpc_port: 8443
        rpc_allow_from:
          -
        http_port: 8444
        http_allow_from:
          -
        tls_ca_cert: ca.crt
        tls_cert: nodea.regiona.crt
        tls_key: nodea.regiona.key
        my_cn: localhost.localdomain
        master_node:  # to know how to join a cluster for other nodes
        init_node: True  # To be declared only on the first node
    ...

That simple example has only one instance, but you can easily see how to have multiple ones. So here is the logic: have Ansible, when configuring the node, create the file that will be used by zabbix_sender (triggered by Ansible itself) to send the json to the zabbix server. zabbix_sender can use a file whose fields are separated (see the man page) like this:

  • hostname (or '-' to use name configured in zabbix_agentd.conf)
  • key
  • value

Those three fields have to be separated by one space only, and, importantly, you can't have an extra empty line (something you can easily miss when playing with this for the first time).
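
To make the format concrete, here is a sketch (Python; the trapper key matches the one used in this post, the instance data is hypothetical) of producing one such line:

```python
import json

# Hypothetical instance data, mirroring the Ansible variables above.
instances = [{"name": "namespace_rpms"}, {"name": "namespace_centos"}]

lld = json.dumps({"data": [{"{#INSTANCE}": i["name"]} for i in instances]})

# Field 1: '-'  -> use the Hostname from zabbix_agentd.conf
# Field 2: the trapper key declared in the template
# Field 3: the value (here, the LLD json), all separated by single spaces
line = "- repospanner.lld.instances " + lld
print(line)
```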

What does our file (roles/repospanner/templates/zabbix-repospanner-lld.j2) look like?

- repospanner.lld.instances { "data": [ {% for instance in repospanner_instances -%} { "{{ '{#INSTANCE}' }}": "{{ }}", "{{ '{#RPCPORT}' }}": "{{ instance.rpc_port }}", "{{ '{#HTTPPORT}' }}": "{{ instance.http_port }}" } {%- if not loop.last -%},{% endif %} {% endfor %} ] }

If you have already used jinja2 templates with Ansible, it's quite easy to understand. But I have to admit that I had trouble with the {#INSTANCE} one: it isn't an ansible variable, but rather a fixed name for the macro that we'll send to zabbix (and so reuse as a macro everywhere). But ansible, when trying to render the jinja2 template, was complaining about a missing "comment": indeed, {# ... #} is a comment in jinja2. So the best way (thanks to people in #ansible for that trick) is to include it in {{ }} brackets, but escaped as a string literal, so that it is rendered as {#INSTANCE} (nice to know if you have to do that too).
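
The comment clash and the escaping trick can be reproduced with the jinja2 library directly (a sketch, assuming jinja2 is installed; outside of Ansible):

```python
from jinja2 import Template, TemplateSyntaxError

# The naive form fails: '{#' opens a jinja2 comment and '{#INSTANCE}'
# never closes it, hence the "missing end of comment" error.
try:
    Template("{#INSTANCE}").render()
    naive_ok = True
except TemplateSyntaxError as err:
    naive_ok = False
    print("naive form rejected:", err)

# The trick: wrap the literal in a {{ ... }} string expression,
# so jinja2 outputs it verbatim instead of parsing it.
rendered = Template("{{ '{#INSTANCE}' }}").render()
print(rendered)
```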

The rest is trivial. Excerpt from monitoring.yml (included in that repospanner role):

    - name: Distributing zabbix repospanner check file
      template:
        src: "{{ item }}.j2"
        dest: "/usr/lib/zabbix/{{ item }}"
        mode: 0755
      with_items:
        - zabbix-repospanner-check
        - zabbix-repospanner-lld
      register: zabbix_templates
      tags:
        - templates

    - name: Launching LLD to announce to zabbix
      shell: /bin/zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -i /usr/lib/zabbix/zabbix-repospanner-lld
      when: zabbix_templates is changed

And this is how it is rendered on one of my test nodes:

- repospanner.lld.instances { "data": [ { "{#INSTANCE}": "namespace_rpms", "{#RPCPORT}": "8443", "{#HTTPPORT}": "8444" }, { "{#INSTANCE}": "namespace_centos", "{#RPCPORT}": "8445", "{#HTTPPORT}": "8446" } ] }

As ansible auto-announces/pushes that back to zabbix, the zabbix server can automatically start creating some checks and triggers/graphs (through LLD, based on the item prototypes) and so start monitoring each new instance. You want to add a third one? (We have two in our case.) Ansible re-renders the .j2 template, pushes the config, and notifies the zabbix server, etc.

The rest is just "normal" operation for zabbix: you can create item/trigger prototypes and just use those special macros coming from LLD:

It was worth spending some time in the LLD doc and discussing LLD in #zabbix: once you see the added value, and that you can configure it automatically through Ansible, you can see how powerful it is.


CentOS Blog: CentOS Pulse Newsletter, November 2018 (#1806)

November 6, 2018 - 15:46

Dear CentOS enthusiast,

Here's what's been happening in the past month at CentOS.

Releases and updates

The following releases and updates happened in October. For each update, the given URL provides the upstream notes about the change.

Errata and Enhancements Advisories

We issued the following CEEA (CentOS Errata and Enhancements Advisories) during October:

Errata and Security Advisories

We issued the following CESA (CentOS Errata and Security Advisories) during October:

Errata and Bugfix Advisories

We issued the following CEBA (CentOS Errata and Bugfix Advisories) during October:

SIG Updates

SIGs - Special Interest Groups - are where people work on the stuff that runs on top of CentOS.


We have been focused on VPP and the prerequisite packages required to build VPP.

OVS and DPDK are available in Cloud SIG but can also be made available in NFV SIG on request.

Current projects include enabling the build of VPP 1810, which requires toolset7 and some additional build dependencies.

Storage SIG

Luminous is the latest major version of Ceph maintained by the SIG.

We have very recently promoted to this repo the first version of ceph-ansible that supports Ansible 2.6 (previously it would only work with 2.4 and 2.5).

There isn't, and probably won't be, a repo for the Mimic version.

There will instead be a repo for the Nautilus version, which will be the first Ceph version supporting CentOS 8.

Get involved with the SIGs!

At the recent SIG gathering at CERN, we discussed at some length how to get more people, and more projects, involved in the SIG process.

A SIG is a place for related projects to gather, to work together to get their products packaged, tested, and distributed in CentOS. For example, the Cloud SIG has representatives from OpenStack and Cloudstack, producing packages of their code.

Unfortunately, many of our SIGs have only one project represented. For example, the Storage SIG is primarily Gluster, while the Virtualization SIG is primarily oVirt. We'd like to expand these to include more projects, both to increase the diversity of project availability on CentOS, and because these projects are often solving similar problems, and can cooperate on them.

Which brings us to you. There are so many ways that you can get involved in the SIG process, no matter what your skills and interests.


The primary output of a SIG is a package repository, and so creating those packages tends to be where the main focus of a SIG rests. If you like to create packages, or want to learn how, this is your place to get involved.


While there's extensive process around automated testing of the packages, there's no substitute for actual human testing, to find the edge cases, ensure that things are working correctly, and catch things for which there's no automated testing yet. And creating those tests is a great way to ensure that problems don't reappear in the future.

Promotion and outreach

We want the CentOS SIGs to represent the enormous diversity of the open source landscape itself. We want the Storage SIG to represent not only the hugely popular software defined storage solutions everyone has heard of, but also the smaller communities with more niche use cases. We want the PaaS SIG to represent all of the various PaaS projects.

This takes outreach to the projects themselves, and to the users of those projects, to persuade them of the value of being involved in the SIG process, and then to help onboard them into that process.

It also takes improvement of our documentation to make it more accessible to people who aren't already familiar with how this all works.

And it takes enthusiastic people to produce materials for use at events, and then staff those events to explain to beginners how to get involved.

We even have a separate SIG for this - the Promotion SIG - which focuses on getting the word out, and helping to onboard people when they arrive. And the Artwork SIG is responsible for creating artwork for use both in the distribution, and on our various websites, to make the entire experience more visually appealing.

Get involved!

If you want to get involved in a SIG, or to start a new one, come join us for the SIG meetings on the #centos-devel channel on Freenode IRC. Have a look at the list of active SIGs, and see if there's one that interests you. Or look at the proposed SIGs and see if there's something you can do to get them bootstrapped.

Events Recent events

October was a very, very busy month for CentOS events all over the world.

CentOS was a sponsor of Ohio LinuxFest, in Columbus, Ohio. OLF is an annual event, drawing most of its attendees from Ohio and surrounding states. The first day of the event has in-depth technical tutorials, while the second day draws more of a hobbyist audience, including a number of high school students. As such, it’s a great opportunity to talk about CentOS and Fedora. Our friends from Fedora shared our space with us, and we had a number of great conversations with our fans, as well as talking with a number of local businesses who run their operations on CentOS, Fedora, and RHEL.

Later in the month, we held our second annual CentOS Dojo at CERN. There were around 100 people in attendance, and presentations ranged from science to technical to community. We started the day with a presentation from CERN about how they use CentOS, OpenStack, and Ceph in their investigation of the secrets of the universe. We then heard from a number of our SIGs (Special Interest Groups) about what they’re working on, and how people can get more involved. You can watch the video from each presentation by clicking on the paperclip icon next to the individual items in the event schedule listing.

On the day before the Dojo, we had a smaller gathering of our SIGs. There was discussion about the upcoming changes to the Git infrastructure - a conversation that was started at this event last year. Various SIGs reported on what they’ve been working on over the last few months. And there was discussion about how we can get more contributors involved in the SIG process. (See the SIG Updates section of this newsletter for more about this.) Watch the centos-devel list for more discussions around these topics.

During the week of October 22nd, a few of us were at Open Source Summit in Edinburgh (the event formerly known as LinuxCon). Here, too, we had great interactions with people from all levels of involvement, from people running massive server farms to kids running CentOS at home.

And finally, in the last week of the month, we had a sponsor booth at LISA in Nashville, once again shared with our friends from Fedora. LISA - Large Installation System Administration Conference - is one of the oldest software conferences in the world, going back to 1987.

If you are aware of any events in November where CentOS has (or should have!) a presence, please don’t hesitate to announce it on the centos-promo mailing list so that we can help you promote it. Or, you can add it directly to the upcoming events page.

Upcoming events

The next big event for the CentOS community is FOSDEM, and the CentOS Dojo immediately before FOSDEM. We will be announcing the schedule for this event today or tomorrow - as soon as the speakers respond with confirmation of their attendance. See you in Brussels!

Contributing to CentOS Pulse

We are always on the look-out for people who are interested in helping to:

  • report on CentOS community activity
  • provide a report from the SIG on which you participate
  • maintain a (sub-)section of the newsletter
  • write an article on an interesting person or topic
  • provide the hint, tip or trick of the month

Please see the page with further information about contributing. You can also contact the Promotion SIG, or just email Rich directly ( with ideas or articles that you'd like to see in the next newsletter.



CentOS Blog: Video from the CentOS Dojo at CERN now available

November 1, 2018 - 17:36

The videos from the recent #CentOSDojo at #CERN are now available on the CentOS YouTube channel. If you have time for only one, be sure to watch the first video, which talks about the challenges that CERN has with the enormous amount of data they produce every day in the LHC.

Also recommended, Fabian's discussion of the coming (and already in place!) changes to the CentOS Git infrastructure.

[UPDATE: The videos which were previously uploaded were truncated, and we're looking into fixing that. Meanwhile, you can view the videos on the event schedule by clicking the paperclip icon next to each talk title.]


CentOS Blog: Upcoming changes to downloading AltArch .iso images

October 9, 2018 - 00:00

Greetings from the mirror-management department! This notice is for those who employ some sort of an automation to download AltArch (ie. aarch64, armhfp, i386, power9, ppc64, ppc64le) CentOS 7 .iso/.raw.xz images from Those using a regular browser to download these images are not particularly affected, and you can continue to the next post on this blog.

Previously, only main architecture .iso image downloads from were redirected to, which then displayed to the user a list of nearby external mirrors. We will shortly extend this configuration to cover AltArch image downloads as well, ie. direct AltArch image downloads from will no longer be possible. will still serve .rpm downloads for all architectures as before.

There are three reasons for the change. First, to save bandwidth on nodes directly managed by the CentOS Project. Most of these hosts are also used for seeding the 600+ external mirrors we have. By directing some of that .iso download traffic to external mirrors we can offer faster sync speeds for those external mirrors, and for people downloading individual rpms from Second, most of those external mirrors offer faster download speeds to end users than what could be achieved by downloading from, so users will benefit from this change as well. Finally, because there are many more external mirrors than  nodes, it is likely that your bits will need to travel a shorter path, conserving bandwidth globally.

The above change will be implemented some time between the releases of RHEL 7.6 and CentOS 7.6.18xx, so that external mirrors syncing CentOS 7.6.18xx content would not need to fight for bandwidth between AltArch .iso downloaders.

The other change, which has already been implemented, is related to how behaves when accessed with curl or wget. If you now do a wget, isoredirect will notice that you are trying to download the file and will redirect the request to the nearest external mirror. If you access the same URL with a regular browser, you will see a list of nearby mirrors from which you can pick your favourite mirror. wget will follow redirects by default, but curl needs a --location switch to follow redirects. If a filename is not specified, you will get a list of mirrors regardless of the browser used.

So, combining the effects of the above two changes: If you currently use some sort of a script that downloads AltArch .iso images from, those requests will soon be served by external mirrors instead of In the case of wget you will only see one additional request and you probably don't need to change anything. If you use curl, you must add the --location switch to curl to follow the redirect issued by If you want to eliminate one redirect, you can change to in your script. The rest of the URL is the same, ie. /altarch/<release>/isos/<arch>/<filename.iso or .raw.xz>
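
The curl/wget difference above can be simulated without touching the real mirrors; this sketch (Python standard library, with a toy local server standing in for the redirect service) shows a client that does not follow the 302 versus one that does:

```python
import http.client
import http.server
import threading
import urllib.request

class Redirector(http.server.BaseHTTPRequestHandler):
    """Toy stand-in for the iso redirector: bounce .iso requests to a 'mirror'."""
    def do_GET(self):
        if self.path == "/file.iso":
            self.send_response(302)
            self.send_header("Location", "/mirror/file.iso")
            self.end_headers()
        else:  # the 'mirror' path serves the actual bytes
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"iso-bytes")
    def log_message(self, *args):  # silence request logging
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Redirector)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# Like curl without --location: http.client does not follow redirects.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/file.iso")
status = conn.getresponse().status
print(status)  # 302

# Like wget or curl --location: urllib follows the redirect for us.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/file.iso").read()
print(body)  # b'iso-bytes'

srv.shutdown()
```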

As an aside, even though nodes are managed by the CentOS Project, those servers and their hosting are donations from various organizations. If you think your organization could donate an additional server to share the load and to give us better geographical coverage, please see

If you have questions or concerns regarding this change, please let me know. Thanks!


CentOS Blog: Revamp CentOS Community Container Pipeline to run on OpenShift

October 8, 2018 - 22:19

It's been over a year since we published anything about the CentOS Community Container Pipeline. Many interesting things have happened during the past year, many things have changed, and there's a complete shift in the architecture of the service that was rolled out over the last weekend.

Wait, I've never heard of this project

If this is the first time you're hearing about the CentOS Community Container Pipeline project, it would be best to refer to this blog post, the GitHub repo of the project, or the wiki page. In short, the service does the following:

  • Pre-build the artifacts/binaries to be added to the container image
  • Lint the Dockerfile for adherence to best practices
  • Build the container image
  • Scan the image for:
    • available RPM updates
    • updates for packages installed via other package managers:
      • npm
      • pip
      • gem
    • Verify RPM installed files and binaries for integrity
    • point out capabilities of container created from the resulting image by examining RUN label in its Dockerfile
  • Weekly scanning of the container images using above scanners
  • Automatic rebuild of container image when the git repo is modified
  • Parent-child relationship between images to automatically trigger rebuild of child image when parent image gets updated
  • Repo tracking to automatically rebuild the container image in event of an RPM getting updated in any of its configured repos (not available yet in new architecture)
  • A UI that lists all the container images built with the service at

How did the old system work?

When we talked about the project at '18, we received a positive response from the audience. However, at that time, we knew that our service couldn't handle more build requests and on-boarding more community projects would be counter-productive when our backend didn't have the ability to serve those requests.

The old implementation of the service had a lot of plumbing. There were workers written for most of the features mentioned above.

  • Pre-build happened on CentOS CI (ci.c.o) infrastructure.
  • Lint worker ran as a systemd service.
  • Build worker ran as a standalone container and triggered a build in an OpenShift cluster.
  • Scan worker ran as a systemd service and used atomic scan to scan the containers. This in turn spun up a few containers which we needed to delete along with their volumes to make sure that host system disk doesn’t get filled up.
  • Weekly scanning was a Jenkins job that checked against the container index and the underlying database of the service before triggering a weekly scan.
  • Repo tracking was a Django project that relied heavily on a database, which we almost always failed to migrate successfully whenever the schema changed. That's our shortcoming, not Django's.

All these heterogeneous pieces talked through beanstalkd.

Everything was spread across different hosts, and we were using really huge Ansible playbooks to bring up the service. A fresh deployment took 30 minutes on average. Testing any change in the dev environment required a redeployment of the service, which took another 15 minutes on average. Deploying and maintaining this service was quite a pain!

What did we do about these problems?

For a long time we had been discussing developing our service on top of OpenShift. Then, at some point, we read about OpenShift Pipeline and found it interesting. We took the plunge and came up with a proof-of-concept implementation of the CentOS Community Container Pipeline on top of OpenShift OKD using Minishift. The results were exciting! We were able to do parallel builds of container images, Jenkins Pipelines orchestrated the flow really well, build times were faster, we didn't need beanstalkd at all and, most importantly, far less code was needed to get things done!

With the POC in place, we went ahead with developing a more tangible service on top of a real OpenShift cluster instead of Minishift. What used to be individual workers doing their thing in the old system is now pretty much all inside an OpenShift Pipeline.

We now have an OpenShift Pipeline for every project on the CentOS Container Index that does the pre-build, Dockerfile lint, container image build, image scan, and push to an external registry; all from a single container! We have another OpenShift Pipeline for every project to do its weekly scans. So instead of having five workers doing these tasks and communicating with each other via beanstalkd, we have orchestrated things through OpenShift Pipelines.

What are we working on now?

We don't have Repo tracking implemented in the new architecture yet. We don't have a UI for users to look at their build logs or weekly scan logs either. We're initially focusing on getting the UI for logs up, and then we will start working on Repo tracking. We are also working on setting up a CI job that tests the core parts of the service on Minishift, so that anyone willing to take the service for a spin should literally be able to do it in a Minishift VM!

Let us know your thoughts!

This project is solely focused on making things easier for open-source projects and their developers. If you are working on an open-source project that's building on top of CentOS, we would like to know your thoughts. If you need help getting started, you can contact us on IRC (#centos-devel on Freenode) or take a look at the project documentation.

Dharmit Shah (dharmit on #centos-devel IRC)


CentOS Blog: Updated CentOS Vagrant Images Available (v1809.01)

October 4, 2018 - 11:26

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.5.1804 for x86_64 (based on the sources of RHEL 7.5). All included packages have been updated to September 30th, 2018.

Notable Changes
  1. The images now use the ext4 filesystem, instead of XFS. We have been getting unbootable images due to XFS corruption over the last few months (the journal appears to be zeroed out, for reasons we do not yet understand). This is why we haven't had any monthly releases since May - I'm still looking into what's happening.
  2. The images now use a single partition, swapping into a preallocated 2GB file. This makes resizing the partition and/or swap easier than it was before, with separate partitions inside LVM.
  3. The CentOS Linux 7 image comes with open-vm-tools preinstalled, enabling it to work with VMware ESXi.
Known Issues
  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile: config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in
  4. Some people reported "could not resolve host" errors when running the centos/7 image for VirtualBox on Windows hosts. We don't have access to any Windows computer, but some people reported that adding the following line to the Vagrantfile fixed the problem: vb.customize ["modifyvm", :id, "--natdnshostresolver1", "off"]
Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.


The official images can be downloaded from Vagrant Cloud. We provide images for Hyper-V, libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

    vagrant box add centos/6   # for CentOS Linux 6, or...
    vagrant box add centos/7   # for CentOS Linux 7

Existing users can upgrade their images:

    vagrant box update --box centos/6
    vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

    $ curl -o sha256sum.txt.asc
    $ gpg --verify sha256sum.txt.asc

Once you are sure that the checksums are properly signed by the CentOS Project, you have to include them in your Vagrantfile (Vagrant unfortunately ignores the checksum provided from the command line). Here's the relevant snippet from my own Vagrantfile, using v1803.01 and VirtualBox:

Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  config.vm.provider :virtualbox do |virtualbox, override|
    virtualbox.memory = 1024
    override.vm.box_download_checksum_type = "sha256"
    override.vm.box_download_checksum = "b24c912b136d2aa9b7b94fc2689b2001c8d04280cf25983123e45b6a52693fb3"
    override.vm.box_url = ""
  end
end

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or in #centos on Freenode IRC.


I would like to warmly thank Brian Stinson, Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images. I would also like to thank the CentOS Project Lead, Karanbir Singh, without whose years of continuous support we wouldn't have had the Vagrant images in their present form.

I would also like to thank the following people (in alphabetical order):

  • Graham Mainwaring, for helping with tests and validations;
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro;
  • Kirill Kalachev, for reporting and debugging the host name errors with VirtualBox on Windows hosts.
Categories: IT

CentOS Blog: CentOS Pulse Newsletter, October 2018 (#1805)

October 2, 2018 - 09:37

Dear CentOS enthusiast,

Here's what's been happening in the past month at CentOS

Releases and Updates

The following releases and updates happened in September. For each update, the given URL provides the notes about the change.

Errata and Enhancements Advisories

We issued the following CEEAs (CentOS Errata and Enhancements Advisories) during September:

Errata and Security Advisories

We issued the following CESAs (CentOS Errata and Security Advisories) during September:

Errata and Bugfix Advisories

We issued the following CEBAs (CentOS Errata and Bugfix Advisories) during September:

Blog posts and news

If you're not watching the CentOS blog, you may be missing our periodic updates there. I'd like to particularly draw attention to two recent posts:

EPEL for armhfp - Pablo Greco posted about the work on armhfp in the EPEL repository.

New CentOS Pastebin Instance - John R. Dennison posted about the new CentOS pastebin, and the more modern functionality that comes with it.

If you'd like to post on the CentOS blog about work you're doing around the CentOS community, please don't hesitate to contact me directly, at

SIG Updates

SIGs - Special Interest Groups - are where people work on the stuff that runs on top of CentOS. Here are some of the highlights from a few of our SIGs from the past month.

Cloud SIG

The RDO project and the Cloud SIG participated in the OpenStack PTG (Project Teams Gathering) last month in Denver, and we anticipate seeing the interviews from that event start coming to the RDO YouTube channel in the coming weeks. They'll also be participating in the upcoming SIG day ahead of the CERN Dojo in October.


In September, we had a table at ApacheCon in Montreal, Canada. CentOS is a platform which many open source projects use for development and testing, and the Apache community of projects is no exception. We had visits from representatives from several Apache projects, and talked about the CentOS CI infrastructure, and our SIGs.

October 12-13: In 2 weeks, CentOS will be sponsoring Ohio LinuxFest in Columbus, Ohio. OLF is an annual gathering of Linux and Open Source enthusiasts from Ohio and the greater Ohio Valley area. We are looking forward to conversations with attendees. If you'd like to volunteer some time to work the CentOS table, please contact me - -. Ohio LinuxFest will be held October 12-13 at the Hyatt Regency Columbus.

October 19th: In the third week of October, we'll be gathering at CERN for the annual CERN CentOS Dojo. Details and the event schedule are available on the event website. The event is free to attend, but you must register in order to get through security at the front desk. That's October 19th at CERN!

October 22-24: CentOS will also have a presence at the Open Source Summit, in Edinburgh, Scotland. Drop by the Red Hat booth for all your CentOS sticker needs.

October 29-31: Finally, we'll also be at LISA/Usenix in Nashville, in the last week of October.

We look forward to meeting you at any or all of these venues!

Contributing to CentOS Pulse

We are always on the look-out for people who are interested in helping to:

  • report on CentOS community activity
  • provide a report from the SIG on which you participate
  • maintain a (sub-)section of the newsletter
  • write an article on an interesting person or topic
  • provide the hint, tip or trick of the month

Please see the page with further information about contributing. You can also contact the Promotion SIG, or just email Rich directly ( with ideas or articles that you'd like to see in the next newsletter.


Fabian Arrotin: Updated mirrorlist code in the CentOS Infra

September 24, 2018 - 00:00

Recently I had to update the existing code running behind (the service that returns you a list of validated mirrors for yum; see the /etc/yum.repos.d/CentOS*.repo file), as it was still using the Maxmind GeoIP Legacy country database. As you may know, Maxmind announced that they're discontinuing the Legacy DB, so that was one reason to update the code. Switching to GeoLite2, with the python2-geoip2 package, was really easy to do, and that change was already made and pushed last month.

But that's when, discussing with Anssi (if you don't know him, he's the one keeping the CentOS external mirrors DB up to date, including through the centos-mirror list), we decided not only to make that change there, but across the whole chain (so on our "mirror crawler" node, and also for the service). Random chats like these are good: suddenly you don't just want to "fix" one thing, but also take the time to enhance it and add new features.

The previous code already supported both IPv4 and IPv6, but it consumed different data sources (as external mirrors were validated differently for IPv4 vs IPv6 connectivity). So the first thing was to rewrite/combine the new code in the "mirror crawler" process for dual-stack tests, and also reflect that change on the frontend (aka the nodes).

While we were working on this, Anssi proposed to not just adapt the code, but to convert it to the same Python format, which he did.

The last big change we added is the following: only some repositories/architectures were checked/validated in the past, but not all the others (so nothing from the SIGs and nothing from AltArch, meaning no mirrorlist support for i386/armhfp/aarch64/ppc64/ppc64le).

While this wasn't a real problem in the past, when we launched the SIGs concept and later added the other architectures (AltArch), we eventually started suffering from some side effects:

  • More and more users "using" RPM content from (mainly through SIGs - a good indicator that those are successful, and a good "problem to solve")
  • We are currently losing some nodes in that network (it's still entirely based on free dedicated servers donated to the project)

To address the first point, offloading more content to the 600+ external mirrors we have right now would be really good: those nodes have better connectivity than we do, with more presence around the globe too, so slowly pointing SIGs and AltArch to those external mirrors will help.

The other good point is that, as we switched to the GeoLite2 City DB, we get more granularity: for example, instead of "just" returning you a list of 10 validated mirrors for the USA (if your request was identified as coming from that country, of course), you now get a list of validated mirrors in your state/region instead. For big countries with a lot of mirrors, that also better distributes the load among all of them, which is a big win for everybody - users and mirror admins alike.

For people interested in the code, you'll see that we just run several instances of the python code behind Apache with mod_proxy_balancer. That means that if we need to increase the number of "instances", it's easy to do, but so far it's running great with 5 instances per node (and we have 4 nodes behind ). Worth noting that, on average, each of those nodes gets 36+ million requests per week for the mirrorlist service (so 144+ million in total per week).
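The balancer setup described here might look something like the following httpd fragment (a sketch with made-up instance ports and balancer name, not the project's actual config):

```apache
# Hypothetical mod_proxy_balancer config: fan requests out over
# 5 local instances of the python mirrorlist code.
<Proxy "balancer://mirrorlist">
    BalancerMember "http://127.0.0.1:8001"
    BalancerMember "http://127.0.0.1:8002"
    BalancerMember "http://127.0.0.1:8003"
    BalancerMember "http://127.0.0.1:8004"
    BalancerMember "http://127.0.0.1:8005"
</Proxy>
ProxyPass        "/" "balancer://mirrorlist/"
ProxyPassReverse "/" "balancer://mirrorlist/"
```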

So in (very) short summary :

  • code now supports SIGs/AltArch repositories (we'll sync with SIGs to update their .repo file to use mirrorlist= instead of baseurl= soon)
  • we have better accuracy for large countries, so we redirect you to a 'closer' validated mirror
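To illustrate the first point, the change on the SIG side is a one-line swap in the .repo file (the repo name and URLs below are placeholders, not an actual SIG config):

```ini
[centos-sig-example]
name=CentOS SIG example repo
# before: every client hit one fixed location
#baseurl=http://buildlogs.example.org/centos/7/sig-example/$basearch/
# after: clients get a list of close, validated external mirrors
mirrorlist=http://mirrorlist.example.org/?release=7&arch=$basearch&repo=sig-example
enabled=1
gpgcheck=1
```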

One reminder, btw: you can verify which nodes are returned to you with some simple requests:

# to force ipv4
curl '' -4
# to force ipv6
curl '' -6

The last thing I wanted to mention was a potential way to fix point #2 from the list above: when I checked our "donated nodes" inventory, we're still running CentOS on nodes from ~2003 (yes, you read that correctly). So if you want to help/sponsor the CentOS Project, feel free to reach out!


CentOS Blog: New CentOS Pastebin Instance

September 21, 2018 - 03:42

After many years of excellent service by the Oregon State University Open Source Lab, the CentOS Project has decided to migrate our web-based pastebin instance to a self-hosted platform running on our own infrastructure. This gave us the opportunity to move to a different solution, based on the Stikked pastebin server: a more modern platform with a number of features we felt would best benefit our user communities:

  • Encrypted pastes
  • Direct paste replies, along with a 'diff' feature which we believe is useful for developer collaboration
  • Burn on reading / immediate expiry on view
  • Anti-spam features
  • And a number of behind-the-scenes improvements

The web interface is available at and from there you can paste content directly into the provided web form, optionally add your name or a paste title, and even select the language of the paste if you wish the contents to be syntactically colored when displayed. You can select from a number of time periods for the paste's lifetime in the dropdown, and may opt to have the paste delete itself on view, the so-called "burn on view". The option also exists to encrypt your paste if you wish. After you submit the form you can share the resulting URL with others.

Additionally, we've made a command line client, cpaste, available to enable pasting directly from your servers/desktops to our pastebin instance. This client is based on the Stikkit client by Petr Bena. The package is in our "extras" repository and can be installed with:

yum --enablerepo=extras install cpaste

Usage information can be retrieved with:

cpaste --help

Examples illustrating how to use the command line client:

Paste a file directly to our server:

cpaste ~/problem.txt

Paste a python code snippet with a title of "code snippet" and an author name of "John Q. Public":

cpaste -l python -t "code snippet" -a "John Q. Public" -i ~/src/project/

Paste the standard output of a process and return only the paste's url:

~/bin/process | cpaste -s

One notable difference between the new and old instances is that the new instance supports paste lifetimes of up to one day only.

We hope you find the new service useful.

We would also like to thank OSUOSL for providing the old pastebin instance for the past many years.


CentOS Blog: EPEL for armhfp

September 15, 2018 - 18:53

A few weeks ago, Fabian passed me the torch in our quest for a fully working EPEL rebuild for armhfp, that included access to the builders, the build system manager and a blind, unfunded trust that I wasn't going to break anything.

The plan up to that point was, "if it builds, great, if it doesn't, someone will have to fix it". Enter someone (me) completely clueless of what needed to be done and what I needed to know to actually do it.

Having absolutely no idea where to start, I decided to use repodiff against x86_64 to see if something really jumped out at me and said "START HERE!!!!", but all it did was inform me of the hard truth: approximately 600 packages were failing. I needed a quick win and an ego boost, and seeing that cinnamon was only missing a few rpms, I decided to start there.

A few days go by, the list keeps shrinking, I get into a brutal fight to the death trying to bootstrap ghc, and finally I see the light at the end of the tunnel. With about 100 packages remaining, I start thinking that our plan wasn't that crazy after all.

Now the list is 10 rpms long, and it is time to start testing everything. Since I have absolutely no idea what most of the packages that were built actually do, I have no way of testing them myself, so please: install, test, break, fix and, most of all, report back.

If you already installed CentOS (and activated EPEL) using the instructions here, you should have everything you need to start hacking!!

Thanks for testing!


CentOS Blog: CentOS Dojo at FOSDEM (Feb 1, 2019) Call for Presentations

September 14, 2018 - 20:36

On February 1, 2019, we'll be holding our annual CentOS Dojo in Brussels, on the day before FOSDEM starts.

FOSDEM, as you probably know, is the annual Free and Open Source Developers European Meeting in Brussels - two days of presentations, projects, and hallway meetings with new and old friends.

For the last several years, CentOS has held a small meetup on the Friday before FOSDEM, and this year we'll once again be at the Marriott Grand Place, just a 3 minute walk from Grand Place in central Brussels. We'll have two tracks of CentOS-related content, and lots of space and time to meet other people in the CentOS community.

If you'd like to be on stage at this event, consider submitting a presentation here:

The call for presentations closes October 15th, 9am Eastern US time.


CentOS Blog: Hurricane Florence and the CentOS Community Build System

September 14, 2018 - 14:49

(A note from Brian Stinson, from the CI team.)

Some of you may know that the CentOS Community Build System, and CentOS CI Infrastructures are hosted in Raleigh, North Carolina.

I wanted to take this opportunity to let all of you know that outages are possible (but not expected) in the coming days as Hurricane Florence makes its way toward the US East coast. We are confident in the precautions taken by our datacenter vendor, and in the preparedness plans by our DC operations team.

If there happen to be outages, we will work to get things back as soon as we can.



CentOS Blog: CentOS Pulse Newsletter, September 2018 (#1804)

September 7, 2018 - 21:05

Dear CentOS enthusiast,

Here's what's been happening in the past month at CentOS

Releases and Updates

The following releases and updates happened in August. For each update, the given URL provides the upstream notes about the change.


We're pleased to announce the following releases in August:

Errata and Enhancements Advisories

We issued the following CEEAs (CentOS Errata and Enhancement Advisories) during August:

Errata and Security Advisories

We issued the following CESAs (CentOS Errata and Security Advisories) during August:

Errata and Bugfix Advisories

We issued the following CEBAs (CentOS Errata and Bugfix Advisories) during August:

SIG Updates

SIGs - Special Interest Groups - are where people work on the stuff that runs on top of CentOS. Here are some of the highlights from a few of our SIGs from the past month.

Platform as a Service (PaaS) SIG
  • Origin 3.10 released, work on 3.11 is in progress
  • Introducing fkluknav as a new SIG member
  • Discussing consuming Ansible RPMs from the Config Management SIG
  • Ricardo Martinelli presented at the CentOS Dojo at (video, slides)
  • dpdk 17.11 is in buildlogs
  • vpp 17.10 is in buildlogs
  • OpenVswitch 2.9.2 is in buildlogs
Virtualization SIG
  • Switching to Xen 4.8
  • Xen 4.10 is available in testing
SIG Reporting

If your SIG wants a report to appear in the newsletter, send your report to the centos-devel mailing list with a subject line containing "XYZ SIG Report" (where "XYZ" is the name of your SIG), and we'll include it in upcoming newsletters.

SIG meeting minutes may be read in full in the MeetBot IRC archive.


CentOS participates in many events, in various capacities, in order to build our local communities all over the world.


In August, we were at three large events:

On August 4th through 5th, was held in Bengaluru, India, and CentOS was there, sharing space with Fedora. DevConf is an annual developers conference which is held in three different locations around the world.

Speaking of which, later in the month we also were at DevConf.US in Boston. This was the first DevConf in North America, and we were delighted to be there.

In addition to the main event, we ran a Dojo on the day before, with presentations covering a wide range of topics. The videos from all of the presentations at the event are now on our YouTube channel.

And, in the last week of August, we were at Open Source Summit North America in Vancouver. OSSummit is a great event in that we get a lot of people that may be either new to Linux, or at least to CentOS, and so we have the chance to teach them. But there's also representation from a huge range of industries, and so we get to learn about how CentOS is being used in many different applications.

(If you have photos from any of these events, please consider adding them to the CentOS group on Flickr.)


September looks pretty quiet on the events front (please tell me if you know of any relevant events!), but in October we have two great events.

First, we have the CentOS Dojo at CERN, on October 19th. This is a full day of CentOS technical talks at the legendary CERN facility in Meyrin, Switzerland. Like last year, there's an emphasis on cloud computing, but other topics are also covered. The schedule is published, and registration is open!

The following week, we'll be in Edinburgh for the Open Source Summit Europe. That's a week-long event covering a wide range of technical content around Linux and open source.

We hope to see you there!

Contributing to CentOS Pulse

We are always on the look-out for people who are interested in helping to:

  • report on CentOS community activity
  • provide a report from the SIG on which you participate
  • maintain a (sub-)section of the newsletter
  • write an article on an interesting person or topic
  • provide the hint, tip or trick of the month

Please see the page with further information about contributing. You can also contact the Promotion SIG, or just email Rich directly ( with ideas or articles that you'd like to see in the next newsletter.



CentOS Blog: SecureBoot : rolling out new shim pkgs for CentOS 7.5.1804 in CR repository – asking for testers/feedback

August 30, 2018 - 08:14

When we consolidated all CentOS Distro builders in a new centralized setup covering all arches (so basically x86_64, i386, ppc64le, ppc64, aarch64 and armhfp in those days), we also wanted to add redundancy where possible.

The interesting "SecureBoot" corner case came up, and we had to find a different way to build the following packages:

  • shim (both signed and unsigned)
  • grub2
  • fwupdate
  • kernel

The other reason we considered rebuilding is that the cert we were using had expired:

curl --location --silent | openssl x509 -inform der -text -noout | grep -A2 Validity
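The same openssl pipeline works on any certificate. Here's a self-contained variant that generates a throwaway self-signed cert and inspects its validity window the same way (the file names and subject are just for the demo):

```shell
#!/bin/sh
# Generate a throwaway self-signed cert (valid 1 day), then inspect its
# Validity section exactly like the check above does for the shim cert.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
    -days 1 -nodes -subj "/CN=demo" 2>/dev/null
openssl x509 -in /tmp/demo.crt -text -noout | grep -A2 Validity
```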

While technically it doesn't really matter for SecureBoot itself, it was better to roll in a new key/cert and use the new one for new builds.

That's where it gets interesting: because shim embeds the certs in the Machine Owner Key (MOK), and each other component used in the boot chain is validated against that (grub2 first, then the kernel and kernel modules), once deployed, the new shim would not be able to boot the previous grub2/kernel.

But there is a solution for that: instead of "embedding" only the new cert, we can embed both the old one and the new one, permitting us to still boot older kernels but also the new ones we'll build/push soon (built on the new build system). That's what we used for the new shim package.

That's why we'd like you (SecureBoot users) to give us feedback about the new shim pkg. It was already validated on some hardware nodes and passed some QA tests, but we'd prefer to have more feedback.

Worth noting that this rebuild also carries a patch that should fix an issue we had with shim not allowing a key to be imported into the MOK through mokutil (see

How can you test?

If you're using UEFI with SecureBoot enabled, we have signed/pushed those pkgs to the CR repository (see

That repo is disabled by default, but the following command will let you update shim:

yum update shim --enablerepo=cr

Then reboot and it should work like before, validating the boot chain (while still using grub2/kernel packages signed with the previous key).

We'd appreciate feedback on this list, or in #centos-devel on

I'd like to thank Patrick Uiterwijk and Peter Jones for their help with the patch and validation of this shim package.


CentOS Blog: Dojo at

August 21, 2018 - 11:18

This Thursday we held our first Dojo at in Boston. We had about 40 people in attendance, and 9 presenters on a variety of topics.

I want to particularly draw attention to our keynote, by Brendan Conoboy, who discussed the relationship - past and future - between Fedora, CentOS, and RHEL, which is more complicated than many people understand. But we're working on simplifying those relationships, and Brendan does a great job of explaining where we're headed, and why.

The details of this event are in the CentOS Wiki and are being updated with slides and videos as they become available. All of the videos are in the event playlist on YouTube - check back over the coming week as we upload the remainder of the talks.

Our next event will be held at CERN in Meyrin, Switzerland, in October. Details are available at and we expect to post the schedule in the coming week.


CentOS Blog: CentOS Atomic Host 7.1807 Available for Download

August 21, 2018 - 03:07

The CentOS Atomic SIG has released an updated version of CentOS Atomic Host (7.1807), an operating system designed to run Linux containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host includes these core component versions:

  • atomic-1.22.1-22.git5a342e3.el7.x86_64
  • cloud-init-0.7.9-24.el7.centos.1.x86_64
  • docker-1.13.1-68.gitdded712.el7.centos.x86_64
  • etcd-3.2.22-1.el7.x86_64
  • flannel-0.7.1-4.el7.x86_64
  • kernel-3.10.0-862.11.6.el7.x86_64
  • ostree-2018.5-1.el7.x86_64
  • rpm-ostree-client-2018.5-1.atomic.el7.x86_64
Download CentOS Atomic Host

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine Image. For links to media, see the CentOS wiki.


If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

# atomic host upgrade

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, or documentation - join us!

You’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.


