Red Hat Enterprise Linux 8
A guide to configuring basic system settings in Red Hat Enterprise Linux 8

Abstract
This document describes the basics of system administration on Red Hat Enterprise Linux 8. The title focuses on basic tasks that a system administrator needs to do just after the operating system has been successfully installed: installing software with yum, using systemd for service management, managing users, groups, and file permissions, using chrony to configure NTP, working with Python 3, and others.

Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Please let us know how we could make it better.
Chapter 1. Getting started with RHEL System Roles
This section explains what RHEL System Roles are. Additionally, it describes how to apply a particular role through an Ansible playbook to perform various system administration tasks.

1.1. Introduction to RHEL System Roles
RHEL System Roles is a collection of Ansible roles and modules. RHEL System Roles provide a configuration interface to remotely manage multiple RHEL systems. The interface enables managing system configurations across multiple versions of RHEL, as well as adopting new major releases. On Red Hat Enterprise Linux 8, the interface currently consists of the following roles:
All these roles are provided by the rhel-system-roles package available in the AppStream repository.

1.2. RHEL System Roles terminology
You can find the following terms across this documentation:

Ansible playbook
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.

Control node
Any machine with Ansible installed. You can run commands and playbooks, invoking /usr/bin/ansible or /usr/bin/ansible-playbook, from any control node. You can use any computer that has Python installed on it as a control node: laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.

Inventory
A list of managed nodes. An inventory file is also sometimes called a “hostfile”. Your inventory can specify information like the IP address of each managed node. An inventory can also organize managed nodes, creating and nesting groups for easier scaling. To learn more about inventory, see the Working with Inventory section.

Managed nodes
The network devices, servers, or both that you manage with Ansible. Managed nodes are also sometimes called “hosts”. Ansible is not installed on managed nodes.

1.3. Applying a role
The following procedure describes how to apply a particular role.

Prerequisites
Procedure
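As a sketch of this procedure, a role is applied by listing the managed nodes in an inventory file and running a playbook against it with ansible-playbook. The host names, group name, and playbook file name below are hypothetical:

```shell
# Contents of a minimal inventory file (inventory.ini):
# [managed]
# host1.example.com
# host2.example.com

# Run a playbook that applies a role to the hosts in the inventory:
# ansible-playbook -i inventory.ini example-playbook.yml
```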
1.4. Additional resources
Chapter 2. Changing basic environment settings
Configuration of basic environment settings is a part of the installation process. The following sections guide you when you change them later. The basic configuration of the environment includes:
2.1. Configuring the date and time
Accurate timekeeping is important for a number of reasons. In Red Hat Enterprise Linux, timekeeping is ensured by the NTP protocol, which is implemented by a daemon running in user space. The user-space daemon updates the system clock running in the kernel. The system clock can keep time by using various clock sources. Red Hat Enterprise Linux 8 uses the chronyd daemon to implement NTP. chronyd is available from the chrony package. For more information, see Using the chrony suite to configure NTP.

2.1.1. Displaying the current date and time
To display the current date and time, use either of these steps.

Procedure
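For example, the date command (part of coreutils) prints the current date and time, and on systemd-based systems timedatectl shows additional details such as the time zone and NTP status:

```shell
# Print the current date and time in a fixed format:
date '+%Y-%m-%d %H:%M:%S'

# On systemd-based systems, timedatectl additionally shows the time zone,
# the RTC time, and whether NTP synchronization is enabled (it requires a
# running systemd, so it is shown here as a transcript only):
# timedatectl
```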
2.2. Configuring the system locale
System-wide locale settings are stored in the /etc/locale.conf file, which is read at early boot by the systemd daemon. Every service or user inherits the locale settings configured in /etc/locale.conf, unless individual programs or individual users override them. This section describes how to manage the system locale.

Procedure
Additional resources
2.3. Configuring the keyboard layout
The keyboard layout settings control the layout used on the text console and graphical user interfaces.

Procedure
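On systems with systemd, both the system locale and the keyboard layout can be inspected and changed with localectl; the locale and keymap values below are examples:

```shell
# localectl status                        # show the current locale and keymap
# localectl list-locales                  # list available locales
# localectl set-locale LANG=en_US.UTF-8   # persist a new system locale
# localectl list-keymaps                  # list available console keymaps
# localectl set-keymap us                 # persist a new console keymap
```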
Additional resources
2.4. Changing the language using the desktop GUI
This section describes how to change the system language using the desktop GUI.

Prerequisites
Procedure
Some applications do not support certain languages. The text of an application that cannot be translated into the selected language remains in US English.

2.5. Additional resources
Chapter 3. Configuring and managing network access
This section describes different options for adding Ethernet connections in Red Hat Enterprise Linux.

3.1. Configuring the network and host name in the graphical installation mode
Follow the steps in this procedure to configure your network and host name.

Procedure
3.2. Configuring a static Ethernet connection using nmcli
This procedure describes adding an Ethernet connection with the following settings using the nmcli utility:
Procedure
Verification steps
Troubleshooting steps
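A static Ethernet connection as described above can be sketched with commands like the following; the connection name, interface name, addresses, gateway, and DNS settings here are placeholders, not values from this document:

```shell
# Add an Ethernet profile with static IPv4 addressing (values are examples):
# nmcli connection add con-name example-con ifname enp7s0 type ethernet \
#     ipv4.method manual ipv4.addresses 192.0.2.1/24 \
#     ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200

# Activate the connection:
# nmcli connection up example-con

# Verify the device and connection state:
# nmcli device status
```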
3.3. Adding a connection profile using nmtui
The nmtui application provides a text user interface to NetworkManager. This procedure describes how to add a new connection profile.

Prerequisites
Procedure
Verification steps
3.4. Managing networking in the RHEL web console
In the web console, the Networking menu enables you to:
Figure 3.1. Managing Networking in the RHEL web console

3.5. Managing networking using RHEL System Roles
You can configure the networking connections on multiple target machines using the network role. The network role allows you to configure the following types of interfaces:
The required networking connections for each host are provided as a list within the network_connections variable. The network role updates or creates all connection profiles on the target system exactly as specified in the network_connections variable. Therefore, the network role removes options from the specified profiles if the options are only present on the system but not in the network_connections variable. The following example shows how to apply the network role to ensure that an Ethernet connection with the required parameters exists:

An example playbook applying the network role to set up an Ethernet connection with the required parameters

# SPDX-License-Identifier: BSD-3-Clause
---
- hosts: network-test
  vars:
    network_connections:
      # Create one ethernet profile and activate it.
      # The profile uses automatic IP addressing
      # and is tied to the interface by MAC address.
      - name: prod1
        state: up
        type: ethernet
        autoconnect: yes
        mac: "00:00:5e:00:53:00"
        mtu: 1450
  roles:
    - rhel-system-roles.network

3.6. Additional resources
Chapter 4. Registering the system and managing subscriptions
Subscriptions cover products installed on Red Hat Enterprise Linux, including the operating system itself. You can use a subscription to Red Hat Content Delivery Network to track:
4.1. Registering the system after the installation
Use the following procedure to register your system if you did not register it during the installation process.

Prerequisites
Procedure
4.2. Registering subscriptions with credentials in the web console
Use the following steps to register a newly installed Red Hat Enterprise Linux system with account credentials using the RHEL web console.

Prerequisites
Procedure
At this point, your Red Hat Enterprise Linux system has been successfully registered.

4.3. Registering a system using a Red Hat account on GNOME
Follow the steps in this procedure to enroll your system with your Red Hat account.

Prerequisites
Procedure
4.4. Registering a system using an activation key on GNOME
Follow the steps in this procedure to register your system with an activation key. You can get the activation key from your organization administrator.

Prerequisites
Procedure
4.5. Registering RHEL 8.4 using the installer GUI
Use the following steps to register a newly installed Red Hat Enterprise Linux 8.4 system using the RHEL installer GUI.

Prerequisites
Procedure
Chapter 5. Making systemd services start at boot time
systemd is a system and service manager for Linux operating systems that introduces the concept of systemd units. This section provides information on how to ensure that a service is enabled or disabled at boot time. It also explains how to manage the services through the web console.

5.1. Enabling or disabling the services
You can determine which services are enabled or disabled at boot time already during the installation process. You can also enable or disable a service on an installed operating system. This section describes the steps for enabling or disabling those services on an already installed operating system:
Prerequisites
Procedure
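As a sketch of the steps above, enabling or disabling a service comes down to a few systemctl invocations; nginx.service is used here only as an example unit name:

```shell
# Enable a service to start at boot:
# systemctl enable nginx.service

# Enable the service and start it immediately:
# systemctl enable --now nginx.service

# Disable the service from starting at boot:
# systemctl disable nginx.service

# Check whether the service is currently enabled:
# systemctl is-enabled nginx.service
```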
You cannot enable a service that has been previously masked. You have to unmask it first:

# systemctl unmask service_name

5.2. Managing services in the RHEL web console
This section describes how you can also enable or disable a service using the web console. You can manage systemd targets, services, sockets, timers, and paths. You can also check the service status, start or stop services, and enable or disable them.

Prerequisites
Procedure
Chapter 6. Configuring system security
Computer security is the protection of computer systems and their hardware, software, information, and services from theft, damage, disruption, and misdirection. Ensuring computer security is an essential task, in particular in enterprises that process sensitive data and handle business transactions. This section covers only the basic security features that you can configure after installation of the operating system.

6.1. Enabling the firewalld service
A firewall is a network security system that monitors and controls incoming and outgoing network traffic according to configured security rules. A firewall typically establishes a barrier between a trusted secure internal network and another outside network. The firewalld service, which provides a firewall in Red Hat Enterprise Linux, is automatically enabled during installation. To enable the firewalld service, follow this procedure.

Procedure
Verification steps
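As a sketch, the firewalld service can be enabled and its state verified from the command line:

```shell
# Enable firewalld at boot and start it immediately:
# systemctl enable --now firewalld

# Verify that the firewall is running:
# firewall-cmd --state
```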
6.2. Managing firewall in the RHEL 8 web console
To configure the firewalld service in the web console, navigate to → . By default, the firewalld service is enabled.

Procedure
Additionally, you can define more fine-grained access through the firewall to a service using the Add services… button.

6.3. Managing basic SELinux settings
Security-Enhanced Linux (SELinux) is an additional layer of system security that determines which processes can access which files, directories, and ports. These permissions are defined in SELinux policies. A policy is a set of rules that guide the SELinux security engine. SELinux has two possible states:
When SELinux is enabled, it runs in one of the following modes:
In enforcing mode, SELinux enforces the loaded policies. SELinux denies access based on SELinux policy rules and enables only the interactions that are explicitly allowed. Enforcing mode is the safest SELinux mode and is the default mode after installation.

In permissive mode, SELinux does not enforce the loaded policies. SELinux does not deny access, but reports actions that break the rules to the /var/log/audit/audit.log log. Permissive mode is the default mode during installation. Permissive mode is also useful in some specific cases, for example when troubleshooting problems.

6.4. Ensuring the required state of SELinux
By default, SELinux operates in enforcing mode. However, in specific scenarios, you can set SELinux to permissive mode or even disable it. Red Hat recommends keeping your system in enforcing mode. For debugging purposes, you can set SELinux to permissive mode. Follow this procedure to change the state and mode of SELinux on your system.

Procedure
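As a sketch of the procedure, the current SELinux mode can be reported and changed temporarily with the following commands; the persistent setting lives in a configuration file:

```shell
# Report the current SELinux mode:
# getenforce

# Switch to permissive mode until the next reboot:
# setenforce 0

# Switch back to enforcing mode:
# setenforce 1

# To change the state or mode persistently, set the SELINUX= option
# (enforcing, permissive, or disabled) in /etc/selinux/config and reboot.
```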
6.5. Switching SELinux modes in the RHEL 8 web console
You can set the SELinux mode through the RHEL 8 web console in the SELinux menu item. By default, the SELinux enforcing policy in the web console is on, and SELinux operates in enforcing mode. By turning it off, you switch SELinux to permissive mode. Note that this selection is automatically reverted on the next boot to the configuration defined in the /etc/sysconfig/selinux file.

Procedure
6.6. Additional resources
Chapter 7. Getting started with managing user accounts
Red Hat Enterprise Linux is a multi-user operating system, which enables multiple users on different computers to access a single system installed on one machine. Every user operates under their own account, and managing user accounts thus represents a core element of Red Hat Enterprise Linux system administration. The following are the different types of user accounts:
7.1. Managing accounts and groups using command line tools
This section describes basic command-line tools to manage user accounts and groups.
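For example, the standard shadow-utils commands cover the common account and group tasks; the user and group names below are hypothetical:

```shell
# Create a new user account:
# useradd jdoe

# Set the user's password interactively:
# passwd jdoe

# Create a new group and add the user to it as a supplementary group:
# groupadd developers
# usermod -aG developers jdoe

# List the groups the user belongs to:
# groups jdoe
```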
Additional resources
7.2. System user accounts managed in the web console
With user accounts displayed in the RHEL web console, you can:
The RHEL web console displays all user accounts located in the system. Therefore, you can see at least one user account just after the first login to the web console. After logging into the RHEL web console, you can perform the following operations:
7.3. Adding new accounts using the web console
Use the following steps to add user accounts to the system and to set administration rights for the accounts through the RHEL web console.

Procedure
Chapter 8. Dumping a crashed kernel for later analysis
To analyze why a system crashed, you can use the kdump service to save the contents of the system’s memory for later analysis. This section provides a brief introduction to kdump, and information about configuring kdump using the RHEL web console or using the corresponding RHEL system role.

8.1. What is kdump
kdump is a service which provides a crash dumping mechanism. The service enables you to save the contents of the system memory for analysis. kdump uses the kexec system call to boot into the second kernel (a capture kernel) without rebooting, and then captures the contents of the crashed kernel’s memory (a crash dump or a vmcore) and saves it into a file. The second kernel resides in a reserved part of the system memory.

A kernel crash dump can be the only information available in the event of a system failure (a critical bug). Therefore, operational kdump is important in mission-critical environments. Red Hat advises that system administrators regularly update and test kexec-tools in their normal kernel update cycle. This is especially important when new kernel features are implemented.

You can enable kdump for all installed kernels on a machine or only for specified kernels. This is useful when there are multiple kernels used on a machine, some of which are stable enough that there is no concern that they could crash. When kdump is installed, a default /etc/kdump.conf file is created. The file includes the default minimum kdump configuration. You can edit this file to customize the kdump configuration, but it is not required.

8.2. Configuring kdump memory usage and target location in web console
The procedure below shows you how to use the Kernel Dump tab in the RHEL web console interface to configure the amount of memory that is reserved for the kdump kernel. The procedure also describes how to specify the target location of the vmcore dump file and how to test your configuration.

Procedure
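Outside the web console, the same settings are stored in /etc/kdump.conf. A minimal fragment might look like the following; the values are illustrative and based on typical defaults, not taken from this document:

```
# Save vmcore files under /var/crash on the local file system:
path /var/crash

# Compress the dump and exclude unneeded pages (core_collector options
# follow the makedumpfile syntax):
core_collector makedumpfile -l --message-level 7 -d 31
```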
8.3. kdump using RHEL System Roles
RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems. The kdump role enables you to set basic kernel dump parameters on multiple systems. The kdump role replaces the kdump configuration of the managed hosts entirely by replacing the /etc/kdump.conf file. Additionally, if the kdump role is applied, all previous kdump settings are also replaced, even if they are not specified by the role variables, by replacing the /etc/sysconfig/kdump file.

The following example playbook shows how to apply the kdump system role to set the location of the crash dump files:

---
- hosts: kdump-test
  vars:
    kdump_path: /var/crash
  roles:
    - rhel-system-roles.kdump

For a detailed reference on kdump role variables, install the rhel-system-roles package, and see the README.md or README.html files in the /usr/share/doc/rhel-system-roles/kdump directory.

8.4. Additional resources
Chapter 9. Recovering and restoring a system
To recover and restore a system using an existing backup, Red Hat Enterprise Linux provides the Relax-and-Recover (ReaR) utility. You can use the utility as a disaster recovery solution and also for system migration. The utility enables you to perform the following tasks:
Additionally, for disaster recovery, you can also integrate certain backup software with ReaR. Setting up ReaR involves the following high-level steps:
9.1. Setting up ReaR
Use the following steps to install the package for using the Relax-and-Recover (ReaR) utility, create a rescue system, and configure and generate a backup.

Prerequisites
Procedure
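As a sketch of this setup, ReaR is installed, configured in /etc/rear/local.conf, and then used to build the rescue system and backup; the NFS location below is a placeholder:

```shell
# Install ReaR and supporting packages:
# yum install rear genisoimage syslinux

# Example /etc/rear/local.conf for an ISO rescue image with a tar backup
# stored over NFS (the URL is hypothetical):
# OUTPUT=ISO
# BACKUP=NETFS
# BACKUP_URL=nfs://backup.example.com/exports/backups

# Create the rescue system and the backup in one step:
# rear mkbackup
```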
9.2. Using a ReaR rescue image on the 64-bit IBM Z architecture
Basic Relax and Recover (ReaR) functionality is now available on the 64-bit IBM Z architecture as a Technology Preview. You can create a ReaR rescue image on IBM Z only in the z/VM environment. Backing up and recovering logical partitions (LPARs) has not been tested.

ReaR on the 64-bit IBM Z architecture is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.

The only output method currently available is Initial Program Load (IPL). IPL produces a kernel and an initial ramdisk (initrd) that can be used with the zIPL bootloader.

Prerequisites
Procedure

Add the following variables to the /etc/rear/local.conf to configure ReaR for producing a rescue image on the 64-bit IBM Z architecture:
Currently, the rescue process reformats all the DASDs (Direct Attached Storage Devices) connected to the system. Do not attempt a system recovery if there is any valuable data present on the system storage devices. This also includes the device prepared with the zipl bootloader, ReaR kernel, and initrd that were used to boot into the rescue environment. Be sure to keep a copy.

Chapter 10. Troubleshooting problems using log files
Log files contain messages about the system, including the kernel, services, and applications running on it. These contain information that helps troubleshoot issues or monitor system functions. The logging system in Red Hat Enterprise Linux is based on the built-in syslog protocol. Particular programs use this system to record events and organize them into log files, which are useful when auditing the operating system and troubleshooting various problems.

10.1. Services handling syslog messages
The following two services handle syslog messages:
The systemd-journald daemon collects messages from various sources and forwards them to Rsyslog for further processing. The systemd-journald daemon collects messages from the following sources:
The Rsyslog service sorts the syslog messages by type and priority and writes them to the files in the /var/log directory. The /var/log directory persistently stores the log messages.

10.2. Subdirectories storing syslog messages
The following subdirectories under the /var/log directory store syslog messages.
10.3. Inspecting log files using the web console
Follow the steps in this procedure to inspect the log files using the RHEL web console.

Figure 10.1. Inspecting the log files in the RHEL 8 web console

10.4. Viewing logs using the command line
The Journal is a component of systemd that helps to view and manage log files. It addresses problems connected with traditional logging, is closely integrated with the rest of the system, and supports various logging technologies and access management for the log files.

You can use the journalctl command to view messages in the system journal using the command line, for example:

$ journalctl -b | grep kvm
May 15 11:31:41 localhost.localdomain kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 11:31:41 localhost.localdomain kernel: kvm-clock: cpu 0, msr 76401001, primary cpu clock
...

Table 10.1. Viewing system information
Table 10.2. Viewing information on specific services
Table 10.3. Viewing logs related to specific boots
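The kinds of queries summarized in the tables above can be sketched with journalctl invocations such as the following; sshd.service is only an example unit name:

```shell
# Show all messages from the current boot:
# journalctl -b

# Show kernel messages only:
# journalctl -k

# Show messages for a specific service unit:
# journalctl -u sshd.service

# Follow new messages as they arrive:
# journalctl -f
```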
10.5. Additional resources
Chapter 11. Accessing Red Hat support
This section describes how to effectively troubleshoot your problems using Red Hat support and sosreport. To obtain support from Red Hat, use the Red Hat Customer Portal, which provides access to everything available with your subscription.

11.1. Obtaining Red Hat support through Red Hat Customer Portal
The following section describes how to use the Red Hat Customer Portal to get help.

Prerequisites
Procedure
11.2. Troubleshooting problems using sosreport
The sosreport command collects configuration details, system information, and diagnostic information from a Red Hat Enterprise Linux system. The following section describes how to use the sosreport command to produce reports for your support cases.

Prerequisites
Procedure
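As a sketch of the procedure, generating a report typically involves two commands; the resulting archive can then be attached to a support case:

```shell
# Install the sos package if it is not already present:
# yum install sos

# Generate a report; sosreport prompts for an optional case ID and writes
# a compressed archive (on RHEL 8, typically under /var/tmp/):
# sosreport
```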
Chapter 12. Managing software packages

12.1. Software management tools in RHEL 8
In RHEL 8, software installation is enabled by the new version of the YUM tool (YUM v4), which is based on the DNF technology. Upstream documentation identifies the technology as DNF and the tool is referred to as DNF in the upstream. As a result, some output returned by the new YUM tool in RHEL 8 mentions DNF.

Although YUM v4 used in RHEL 8 is based on DNF, it is compatible with YUM v3 used in RHEL 7. For software installation, the yum command and most of its options work the same way in RHEL 8 as they did in RHEL 7. Selected yum plug-ins and utilities have been ported to the new DNF back end, and can be installed under the same names as in RHEL 7. Packages also provide compatibility symlinks, so the binaries, configuration files, and directories can be found in usual locations.

Note that the legacy Python API provided by YUM v3 is no longer available. You can migrate your plug-ins and scripts to the new API provided by YUM v4 (DNF Python API), which is stable and fully supported. See DNF API Reference for more information.

12.2. Application streams
RHEL 8 introduces the concept of Application Streams. Multiple versions of user-space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages, and are delivered through the AppStream repository in RHEL 8. Each Application Stream has a given life cycle, either the same as RHEL 8 or shorter, more suitable to the particular application. Application Streams with a shorter life cycle are listed in the Red Hat Enterprise Linux 8 Application Streams Life Cycle page.
Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Module streams represent versions of the Application Stream components. For example, two streams (versions) of the PostgreSQL database server are available in the postgresql module: PostgreSQL 10 (the default stream) and PostgreSQL 9.6. Only one stream of a module can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest.

12.3. Searching for software packages
yum allows you to perform a complete set of operations with software packages. The following section describes how to use yum to:
12.3.1. Searching packages with YUM
Use the following procedure to find a package providing a particular application or other content.

Procedure
12.3.2. Listing packages with YUM
Use the following procedure to list installed and available packages.

Procedure
Note that you can filter the results by appending glob expressions as arguments. See Specifying glob expressions in yum input for more details.

12.3.3. Listing repositories with YUM
Use the following procedure to list enabled and disabled repositories.

Procedure
Note that you can filter the results by passing the ID or name of repositories as arguments or by appending glob expressions. See Specifying glob expressions in yum input for more details.

12.3.4. Displaying package information with YUM
You can display various types of information about a package using YUM, for example version, release, size, loaded plug-ins, and more.

Procedure
Note that you can filter the results by appending glob expressions as arguments. See Specifying glob expressions in yum input for more details.

12.3.5. Listing package groups with YUM
Use yum to view installed package groups and filter the listing results.

Procedure
Note that you can filter the results by appending glob expressions as arguments. See Specifying glob expressions in yum input for more details.

12.3.6. Specifying glob expressions in YUM input
yum commands allow you to filter the results by appending one or more glob expressions as arguments. You have to escape glob expressions when passing them as arguments to the yum command.

Procedure

To ensure glob expressions are passed to yum as intended, use one of the following methods:
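For example, to list every installed package whose name starts with kernel, either quoting the whole expression or escaping the wildcard keeps the shell from expanding the glob before yum sees it:

```shell
# Quote the entire glob expression:
# yum list installed "kernel*"

# Or escape the wildcard character with a backslash:
# yum list installed kernel\*
```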
12.4. Installing software packages
The following section describes how to use yum to:
12.4.1. Installing packages with YUM
Note that you can optimize the package search by explicitly defining how to parse the argument. See Section 12.4.3, “Specifying a package name in YUM input” for more details.

12.4.2. Installing a package group with YUM
The following procedure describes how to install a package group by a group name or by a groupID using yum.

Procedure
12.4.3. Specifying a package name in YUM input
To optimize the installation and removal process, you can append -n, -na, or -nevra suffixes to yum install and yum remove commands to explicitly define how to parse an argument:
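For illustration, with a hypothetical package foo, the three argument forms differ as follows; the version and architecture strings are placeholders:

```shell
# Parse the argument strictly as a package name:
# yum install-n foo

# Parse the argument as name.architecture:
# yum install-na foo.x86_64

# Parse the argument as the full name-epoch:version-release.architecture:
# yum install-nevra foo-0:1.0-1.el8.x86_64
```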
12.5. Updating software packages
yum allows you to check if your system has any pending updates. You can list packages that need updating and choose to update a single package, multiple packages, or all packages at once. If any of the packages you choose to update have dependencies, they are updated as well. The following section describes how to use yum to:
12.5.1. Checking for updates with YUM
The following procedure describes how to check the available updates for packages installed on your system using yum.

Procedure
12.5.2. Updating a single package with YUM
Use the following procedure to update a single package and its dependencies using yum.
When applying updates to the kernel, yum always installs a new kernel regardless of whether you are using the yum update or yum install command.

12.5.3. Updating a package group with YUM
Use the following procedure to update a group of packages and their dependencies using yum.

Procedure
12.5.4. Updating all packages and their dependencies with YUM
Use the following procedure to update all packages and their dependencies using yum.

Procedure
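The update procedures above can be sketched with the following commands; the package name sudo and group name "System Tools" are examples only:

```shell
# Check which installed packages have updates available:
# yum check-update

# Update a single package and its dependencies:
# yum update sudo

# Update one package group and its dependencies:
# yum group update "System Tools"

# Update all packages and their dependencies:
# yum update
```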
12.5.6. Automating software updates
To check and download package updates automatically and regularly, you can use the DNF Automatic tool that is provided by the dnf-automatic package. DNF Automatic is an alternative command-line interface to yum that is suited for automatic and regular execution using systemd timers, cron jobs, and other such tools. DNF Automatic synchronizes package metadata as needed and then checks for available updates. Afterwards, the tool can perform one of the following actions depending on how you configure it:
The outcome of the operation is then reported by a selected mechanism, such as the standard output or email.

12.5.6.1. Installing DNF Automatic
The following procedure describes how to install the DNF Automatic tool.

Procedure
Verification steps
12.5.6.2. DNF Automatic configuration file
By default, DNF Automatic uses /etc/dnf/automatic.conf as its configuration file to define its behavior. The configuration file is separated into the following topical sections:
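For illustration, a trimmed /etc/dnf/automatic.conf might look like the following; the values shown are plausible examples, not necessarily the defaults shipped with the package:

```
[commands]
# Which kind of upgrade to perform ('default' considers all updates):
upgrade_type = default
# Whether to download available updates:
download_updates = yes
# Whether to also install the downloaded updates:
apply_updates = no

[emitters]
# How to report the results, for example stdio or email:
emit_via = stdio
```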
With the default settings of the /etc/dnf/automatic.conf file, DNF Automatic checks for available updates, downloads them, and reports the results as standard output. Settings of the operation mode from the [commands] section are overridden by settings used by a systemd timer unit for all timer units except dnf-automatic.timer.

Additional resources
12.5.6.3. Enabling DNF Automatic
To run DNF Automatic, you always need to enable and start a specific systemd timer unit. You can use one of the timer units provided in the dnf-automatic package, or you can write your own timer unit depending on your needs. The following section describes how to enable DNF Automatic.

Prerequisites
For more information on the DNF Automatic configuration file, see Section 12.5.6.2, “DNF Automatic configuration file”.

Procedure
For downloading available updates, use:

# systemctl enable dnf-automatic-download.timer
# systemctl start dnf-automatic-download.timer

For downloading and installing available updates, use:

# systemctl enable dnf-automatic-install.timer
# systemctl start dnf-automatic-install.timer

For reporting about available updates, use:

# systemctl enable dnf-automatic-notifyonly.timer
# systemctl start dnf-automatic-notifyonly.timer

Optionally, you can use:

# systemctl enable dnf-automatic.timer
# systemctl start dnf-automatic.timer

In terms of downloading and applying updates, this timer unit behaves according to settings in the /etc/dnf/automatic.conf configuration file. The default behavior is similar to dnf-automatic-download.timer: it downloads the updated packages, but it does not install them. Alternatively, you can also run DNF Automatic by executing the /usr/bin/dnf-automatic file directly from the command line or from a custom script.

Verification steps
Additional resources
12.5.6.4. Overview of the systemd timer units included in the dnf-automatic package
The systemd timer units take precedence and override the settings in the /etc/dnf/automatic.conf configuration file concerning downloading and applying updates. For example, if you set the following option in the /etc/dnf/automatic.conf configuration file, but you have activated the dnf-automatic-notifyonly.timer unit, the packages will not be downloaded:

download_updates = yes

The dnf-automatic package includes the following systemd timer units:
Additional resources
12.6. Uninstalling software packages
The following section describes how to use yum to:
12.6.1. Removing packages with YUM
Use the following procedure to remove one or more packages using yum.

Procedure
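As a sketch, removal takes the package name (or several names) as an argument; httpd and mariadb-server are example package names:

```shell
# Remove a single package and its unused dependencies:
# yum remove httpd

# Remove several packages at once:
# yum remove httpd mariadb-server
```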
yum is not able to remove a package without also removing packages that depend on it. Note that you can optimize the package search by explicitly defining how to parse the argument. See Specifying a package name in yum input for more details.

12.6.2. Removing a package group with YUM
Use the following procedure to remove a package group either by the group name or by the groupID.

Procedure
12.6.3. Specifying a package name in YUM input

To optimize the installation and removal process, you can append the -n, -na, or -nevra suffixes to yum install and yum remove commands to explicitly define how to parse an argument:
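For illustration, with a hypothetical httpd package, the suffixed command forms parse their argument as follows. The version, release, and architecture strings are placeholders, not a recommendation:

```
# yum install-n httpd
# yum install-na httpd.x86_64
# yum install-nevra httpd-0:2.4.37-30.el8.x86_64
```

The first form matches an exact package name, the second a name.architecture pair, and the third the full name-epoch:version-release.architecture string.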
12.7. Managing software package groupsA package group is a collection of packages that serve a common purpose (System Tools, Sound and Video). Installing a package group pulls a set of dependent packages, which saves time considerably. The following section describes how to use yum to:
12.7.1. Listing package groups with YUMUse yum to view installed package groups and filter the listing results. Procedure
Note that you can filter the results by appending glob expressions as arguments. See Specifying glob expressions in YUM input for more details.

12.7.2. Installing a package group with YUM

The following procedure describes how to install a package group by a group name or by a groupID using yum.

Procedure
12.7.3. Removing a package group with YUM

Use the following procedure to remove a package group either by the group name or by the groupID.

Procedure
12.7.4. Specifying glob expressions in YUM input

yum commands allow you to filter the results by appending one or more glob expressions as arguments. You have to escape glob expressions when passing them as arguments to the yum command.

Procedure

To ensure glob expressions are passed to yum as intended, use one of the following methods:
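The need for escaping can be demonstrated without yum at all: the shell expands an unquoted glob against file names in the current directory before the command ever sees the argument. A minimal, self-contained illustration:

```shell
# Demonstrate shell glob expansion in a throwaway directory.
tmp=$(mktemp -d)
cd "$tmp"
touch kernel-core kernel-tools unrelated

# Unquoted: the shell replaces kernel* with the matching file names,
# so the command receives two arguments instead of the pattern.
set -- kernel*
echo "unquoted: $# arguments: $*"   # prints: unquoted: 2 arguments: kernel-core kernel-tools

# Quoted (or backslash-escaped): the literal pattern reaches the
# command intact, which is what yum needs to match package names itself.
set -- "kernel*"
echo "quoted:   $# argument:  $*"   # prints: quoted:   1 argument:  kernel*

cd / && rm -rf "$tmp"
```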
12.8. Handling package management history

The yum history command allows you to review information about the timeline of yum transactions, the dates and times they occurred, the number of packages affected, whether these transactions succeeded or were aborted, and whether the RPM database was changed between transactions. The yum history command can also be used to undo or redo transactions. The following section describes how to use yum to:
12.8.1. Listing transactions with YUMUse the following procedure to list the latest transactions, the latest operations for a selected package, and details of a particular transaction. Procedure
12.8.2. Reverting transactions with YUMThe following procedure describes how to revert a selected transaction or the last transaction using yum. Procedure
Note that the yum history undo command only reverts the steps that were performed during the transaction. If the transaction installed a new package, the yum history undo command uninstalls it. If the transaction uninstalled a package, the yum history undo command reinstalls it. yum history undo also attempts to downgrade all updated packages to their previous versions, if the older packages are still available. 12.8.3. Repeating transactions with YUMUse the following procedure to repeat a selected transaction or the last transaction using yum. Procedure
Note that the yum history redo command only repeats the steps that were performed during the transaction.

12.8.4. Specifying glob expressions in YUM input

yum commands allow you to filter the results by appending one or more glob expressions as arguments. You have to escape glob expressions when passing them as arguments to the yum command.

Procedure

To ensure glob expressions are passed to yum as intended, use one of the following methods:
12.9. Managing software repositories

The configuration information for yum and related utilities is stored in the /etc/yum.conf file. This file contains one or more [repository] sections, which allow you to set repository-specific options. It is recommended to define individual repositories in new or existing .repo files in the /etc/yum.repos.d/ directory. Note that the values you define in individual [repository] sections of the /etc/yum.conf file override values set in the [main] section. The following section describes how to:
12.9.1. Setting YUM repository options

The /etc/yum.conf configuration file contains the [repository] sections, where repository is a unique repository ID. The [repository] sections allow you to define individual yum repositories. To avoid conflicts, do not give custom repositories names that are used by Red Hat repositories. For a complete list of available [repository] options, see the [repository] OPTIONS section of the yum.conf(5) manual page.

12.9.2. Adding a YUM repository

Procedure

To define a new repository, you can:
yum repositories commonly provide their own .repo file. It is recommended to define your repositories in a .repo file instead of /etc/yum.conf as all files with the .repo file extension in this directory are read by yum.
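As a sketch, a hand-written .repo file in /etc/yum.repos.d/ has the following shape. The repository ID, name, and URLs below are placeholders, not a real repository:

```
# /etc/yum.repos.d/example.repo
[example-repo]
name=Example Repository
baseurl=https://www.example.com/repo/
enabled=1
gpgcheck=1
gpgkey=https://www.example.com/repo/RPM-GPG-KEY-example
```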
Obtaining and installing software packages from unverified or untrusted sources other than the Red Hat certificate-based Content Delivery Network (CDN) constitutes a potential security risk, and could lead to security, stability, compatibility, and maintainability issues.

12.9.3. Enabling a YUM repository

After you have added a yum repository to your system, enable it to ensure installation and updates.

Procedure
12.9.4. Disabling a YUM repository
Disable a specific YUM repository to prevent particular packages from being installed or updated.

Procedure
12.10. Configuring YUM

The configuration information for yum and related utilities is stored in the /etc/yum.conf file. This file contains one mandatory [main] section, which enables you to set yum options that have global effect. The following section describes how to:
12.10.1. Viewing the current YUM configurationsUse the following procedure to view the current yum configurations. Procedure
12.10.2. Setting YUM main options

The /etc/yum.conf configuration file contains one [main] section. The key-value pairs in this section affect how yum operates and treats repositories. You can add additional options under the [main] section heading in /etc/yum.conf. For a complete list of available [main] options, see the [main] OPTIONS section of the yum.conf(5) manual page.

12.10.3. Using YUM plug-ins

yum provides plug-ins that extend and enhance its operations. Certain plug-ins are installed by default. The following section describes how to enable, configure, and disable yum plug-ins.

12.10.3.1. Managing YUM plug-ins

Procedure

The plug-in configuration files always contain a [main] section where the enabled= option controls whether the plug-in is enabled when you run yum commands. If this option is missing, you can add it manually to the file. Every installed plug-in has its own configuration file in the /etc/dnf/plugins/ directory. You can enable or disable plug-in specific options in these files.

12.10.3.2. Enabling YUM plug-ins

The following procedure describes how to enable or disable all YUM plug-ins, disable all plug-ins for a particular command, or enable certain YUM plug-ins for a single command.

Procedure
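For example, a plug-in configuration file in /etc/dnf/plugins/ (the file name matches the plug-in name; plugin_name.conf here stands in for a real plug-in) can be as small as:

```
# /etc/dnf/plugins/plugin_name.conf
[main]
enabled=1
```

Setting enabled=0 in this file disables the plug-in for all yum commands.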
12.10.3.3. Disabling YUM plug-ins
Chapter 13. Introduction to systemd

systemd is a system and service manager for Linux operating systems. It is designed to be backwards compatible with SysV init scripts, and provides a number of features such as parallel startup of system services at boot time, on-demand activation of daemons, or dependency-based service control logic. Starting with Red Hat Enterprise Linux 7, systemd replaced Upstart as the default init system. systemd introduces the concept of systemd units. These units are represented by unit configuration files located in one of the directories listed in the following table:

Table 13.1. systemd unit files locations
The units encapsulate information about:
The default configuration of systemd is defined during compilation and it can be found in the systemd configuration file at /etc/systemd/system.conf. Use this file if you want to deviate from those defaults and override selected default values for systemd units globally. For example, to override the default value of the timeout limit, which is set to 90 seconds, use the DefaultTimeoutStartSec parameter to input the required value in seconds:

DefaultTimeoutStartSec=required_value

13.1. systemd unit types

For a complete list of available systemd unit types, see the following table:

Table 13.2. Available systemd unit types
13.2. systemd main featuresThe systemd system and service manager provides the following main features:
13.3. Compatibility changesThe systemd system and service manager is designed to be mostly compatible with SysV init and Upstart. The following are the most notable compatibility changes with regards to Red Hat Enterprise Linux 6 system that used SysV init:
13.4. Additional resources
Chapter 14. Managing system services with systemctl

The systemctl utility helps you manage system services. You can use the systemctl utility to perform tasks such as starting, stopping, restarting, enabling, and disabling services, listing services, and displaying service statuses. This section describes how to manage system services with the systemctl utility.

14.1. Service unit management with systemctl

Service units help to control the state of services and daemons in your system. Service units end with the .service file extension, for example nfs-server.service. However, when using service file names in commands, you can omit the file extension; the systemctl utility assumes the argument is a service unit. For example, to stop the nfs-server.service, enter the following command:

# systemctl stop nfs-server

Additionally, some service units have alias names. Aliases can be shorter than unit names, and you can use them instead of the actual unit names. To find all aliases that can be used for a particular unit, use:

# systemctl show nfs-server.service -p Names

Additional resources
14.2. Comparison of a service utility with systemctl

This section compares the service utility with the systemctl command.

Table 14.1. Comparison of the service utility with systemctl
14.3. Listing system servicesYou can list all currently loaded service units and the status of all available service units. Procedure
14.4. Displaying system service status

You can inspect any service unit to get its detailed information and verify whether the service is enabled or running. You can also view services that are ordered to start after or before a particular service unit.

Procedure
14.5. Positive and negative service dependencies

In systemd, positive and negative dependencies between services exist. Starting a particular service may require starting one or more other services (positive dependency) or stopping one or more services (negative dependency). When you attempt to start a new service, systemd resolves all dependencies automatically, without explicit notification to the user. This means that if you are already running a service, and you attempt to start another service with a negative dependency, the first service is automatically stopped. For example, if you are running the postfix service, and you attempt to start the sendmail service, systemd first automatically stops postfix, because these two services are conflicting and cannot run on the same port.

14.6. Starting a system service

You can start a system service in the current session using the start command. You must have root access, as starting a service may affect the state of the operating system.

Procedure
14.7. Stopping a system service

You can stop a system service in the current session using the stop command. You must have root access, as stopping a service may affect the state of the operating system.

Procedure
14.8. Restarting a system service

You can restart a system service in the current session using the restart command. You must have root access, as restarting a service may affect the state of the operating system. This procedure describes how to:
Procedure
14.9. Enabling a system service

You can configure a service to start automatically at boot time. The enable command reads the [Install] section of the selected service unit and creates appropriate symbolic links to the /usr/lib/systemd/system/name.service file in the /etc/systemd/system/ directory and its sub-directories. However, it does not rewrite links that already exist.

Procedure
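The symlinking behavior described above can be sketched in a throwaway directory tree. This is only a simulation of what systemctl enable does, not a replacement for it; example.service is a hypothetical unit and the real command operates on /etc/systemd/system/ as root:

```shell
# Simulate the symlink that 'systemctl enable' would create, using
# a temporary tree instead of the real system directories.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/systemd/system" "$root/etc/systemd/system"

# A minimal unit whose [Install] section wants multi-user.target.
cat > "$root/usr/lib/systemd/system/example.service" <<'EOF'
[Unit]
Description=Example service

[Service]
ExecStart=/usr/bin/true

[Install]
WantedBy=multi-user.target
EOF

# 'systemctl enable example.service' reads WantedBy= and links the unit
# into the corresponding .wants/ directory.
target=$(sed -n 's/^WantedBy=//p' "$root/usr/lib/systemd/system/example.service")
mkdir -p "$root/etc/systemd/system/$target.wants"
ln -s "$root/usr/lib/systemd/system/example.service" \
      "$root/etc/systemd/system/$target.wants/example.service"

ls -l "$root/etc/systemd/system/multi-user.target.wants/"
rm -rf "$root"
```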
14.10. Disabling a system serviceYou can prevent a service unit from starting automatically at boot time. The disable command reads the [Install] section of the selected service unit and removes appropriate symbolic links to the /usr/lib/systemd/system/name.service file from the /etc/systemd/system/ directory and its sub-directories. Procedure
Chapter 15. Working with systemd targets

systemd targets are represented by target units. Target unit files end with the .target file extension and their only purpose is to group together other systemd units through a chain of dependencies. For example, the graphical.target unit, which is used to start a graphical session, starts system services such as the GNOME Display Manager (gdm.service) or Accounts Service (accounts-daemon.service) and also activates the multi-user.target unit. Similarly, the multi-user.target unit starts other essential system services such as NetworkManager (NetworkManager.service) or D-Bus (dbus.service) and activates another target unit named basic.target. This section includes procedures to implement while working with systemd targets.

15.1. Difference between SysV runlevels and systemd targets

Previous versions of Red Hat Enterprise Linux were distributed with SysV init or Upstart, and implemented a predefined set of runlevels that represented specific modes of operation. These runlevels were numbered from 0 to 6 and were defined by a selection of system services to be run when a particular runlevel was enabled by the system administrator. Starting with Red Hat Enterprise Linux 7, the concept of runlevels has been replaced with systemd targets. Red Hat Enterprise Linux 7 was distributed with a number of predefined targets that are more or less similar to the standard set of runlevels from the previous releases. For compatibility reasons, it also provides aliases for these targets that directly map to the SysV runlevels. The following table provides a complete list of SysV runlevels and their corresponding systemd targets:

Table 15.1. Comparison of SysV runlevels with systemd targets
The following table compares the SysV init commands with systemctl. Use the systemctl utility to view, change, or configure systemd targets: The runlevel and telinit commands are still available in the system and work as expected, but are only included for compatibility reasons and should be avoided. Table 15.2. Comparison of SysV init commands with systemctl
Additional resources
15.2. Viewing the default targetThe default target unit is represented by the /etc/systemd/system/default.target file. Procedure
By default, the systemctl list-units command displays only active units. Procedure
15.2.1. Changing the default targetThe default target unit is represented by the /etc/systemd/system/default.target file. The following procedure describes how to change the default target by using the systemctl command: Procedure
Additional resources
15.2.2. Changing the default target using symbolic linkThe following procedure describes how to change the default target by creating a symbolic link to the target. Procedure
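The procedure boils down to pointing default.target at the desired target unit. For example, to make multi-user.target the default (run as root; the target name is an example to adapt):

```
# ln -sf /usr/lib/systemd/system/multi-user.target /etc/systemd/system/default.target
```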
15.2.3. Changing the current targetThis procedure explains how to change the target unit in the current session using the systemctl command. Procedure
Replace multi-user with the name of the target unit you want to use.

Verification steps
Rescue mode provides a convenient single-user environment and allows you to repair your system in situations when it is unable to complete a regular booting process. In rescue mode, the system attempts to mount all local file systems and start some important system services, but it does not activate network interfaces or allow more users to be logged into the system at the same time. Procedure
15.2.3.1. Booting to emergency modeEmergency mode provides the most minimal environment possible and allows you to repair your system even in situations when the system is unable to enter rescue mode. In emergency mode, the system mounts the root file system only for reading, does not attempt to mount any other local file systems, does not activate network interfaces, and only starts a few essential services. Procedure
Chapter 16. Shutting down, suspending, and hibernating the system

This section contains instructions about shutting down, suspending, or hibernating your operating system.

16.1. System shutdown

To shut down the system, you can either use the systemctl utility directly, or call this utility through the shutdown command. The advantages of using the shutdown command are:
16.2. Shutting down the system using the shutdown commandBy following this procedure, you can use the shutdown command to perform various operations. You can either shut down the system and power off the machine at a certain time, or shut down and halt the system without powering off the machine, or cancel a pending shutdown. Prerequisites
Procedure
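As a quick reference, the operations described above take the following shapes; the time and minute count are example values:

```
# shutdown --poweroff 18:30
# shutdown --halt +10
# shutdown -c
```

The first form shuts down and powers off the machine at the specified hh:mm time, the second shuts down and halts the system after ten minutes without powering it off, and the third cancels a pending scheduled shutdown.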
16.3. Shutting down the system using the systemctl commandBy following this procedure, you can use the systemctl command to perform various operations. You can either shut down the system and power off the machine, or shut down and halt the system without powering off the machine. Prerequisites
Procedure
By default, running either of these commands causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run the selected command with the --no-wall command line option. 16.4. Restarting the systemYou can restart the system by following this procedure. Prerequisites
Procedure
By default, this command causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run this command with the --no-wall command line option. 16.5. Suspending the systemYou can suspend the system by following this procedure. Prerequisites
Procedure
16.6. Hibernating the systemBy following this procedure, you can either hibernate the system, or hibernate and suspend the system. Prerequisites
Procedure
16.7. Overview of the power management commands with systemctlYou can use the following list of the systemctl commands to control the power management of your system. Table 16.1. Overview of the systemctl power management commands
Chapter 17. Working with systemd unit filesThis chapter includes the description of systemd unit files. The following sections show you how to:
17.1. Introduction to unit files

A unit file contains configuration directives that describe the unit and define its behavior. Several systemctl commands work with unit files in the background. To make finer adjustments, a system administrator must edit or create unit files manually. systemd unit files locations lists the three main directories where unit files are stored on the system; the /etc/systemd/system/ directory is reserved for unit files created or customized by the system administrator. Unit file names take the following form:

unit_name.type_extension

Here, unit_name stands for the name of the unit and type_extension identifies the unit type. For a complete list of unit types, see systemd unit types. For example, there usually are sshd.service and sshd.socket units present on your system. Unit files can be supplemented with a directory for additional configuration files. For example, to add custom configuration options to sshd.service, create the sshd.service.d/custom.conf file and insert additional directives there. For more information on configuration directories, see Modifying existing unit files. Also, the sshd.service.wants/ and sshd.service.requires/ directories can be created. These directories contain symbolic links to unit files that are dependencies of the sshd service. The symbolic links are automatically created either during installation according to [Install] unit file options or at runtime based on [Unit] options. It is also possible to create these directories and symbolic links manually. For more details on [Install] and [Unit] options, see the tables below. Many unit file options can be set using so-called unit specifiers – wildcard strings that are dynamically replaced with unit parameters when the unit file is loaded. This enables creation of generic unit files that serve as templates for generating instantiated units. See Working with instantiated units.

17.2. Unit file structure

Unit files typically consist of three sections:
17.3. Important [Unit] section options

The following table lists important options of the [Unit] section.

Table 17.1. Important [Unit] section options
17.4. Important [Service] section options

The following table lists important options of the [Service] section.

Table 17.2. Important [Service] section options
17.5. Important [Install] section options

The following table lists important options of the [Install] section.

Table 17.3. Important [Install] section options
17.6. Creating custom unit files

There are several use cases for creating unit files from scratch: you could run a custom daemon, or create a second instance of some existing service, as in Creating a custom unit file by using the second instance of the sshd service. On the other hand, if you intend just to modify or extend the behavior of an existing unit, use the instructions from Modifying existing unit files.

Procedure

The following procedure describes the general process of creating a custom service:
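The general process can be sketched with a minimal unit file for a hypothetical /usr/local/bin/mydaemon executable. The file is written to a temporary path here for illustration; on a real system it belongs in /etc/systemd/system/ and is followed by systemctl daemon-reload and systemctl start:

```shell
# Write a minimal three-section unit file to a temporary location.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Unit]
Description=My custom daemon (hypothetical example)
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/mydaemon

[Install]
WantedBy=multi-user.target
EOF

# On a real system, after placing the file in /etc/systemd/system/:
#   # systemctl daemon-reload
#   # systemctl start mydaemon.service
grep -c '^\[' "$unit"   # prints 3: the [Unit], [Service], and [Install] sections
rm -f "$unit"
```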
17.7. Creating a custom unit file by using the second instance of the sshd service

System administrators often need to configure and run multiple instances of a service. This is done by creating copies of the original service configuration files and modifying certain parameters to avoid conflicts with the primary instance of the service. The following procedure shows how to create a second instance of the sshd service.

Procedure
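One possible shape of this procedure, modelled on the sshd-second naming used in Red Hat documentation; treat the unit name and the dedicated configuration file as assumptions to adapt:

```
# cp /usr/lib/systemd/system/sshd.service /etc/systemd/system/sshd-second.service
# vi /etc/systemd/system/sshd-second.service
# systemctl daemon-reload
# systemctl start sshd-second.service
```

In the copied unit file, adjust Description= and point ExecStart to a second sshd configuration file (for example, adding -f /etc/ssh/sshd-second_config) that listens on a different port, so that the two instances do not conflict.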
17.8. Converting SysV init scripts to unit files

Before taking time to convert a SysV init script to a unit file, make sure that the conversion was not already done elsewhere. All core services installed on Red Hat Enterprise Linux come with default unit files, and the same applies for many third-party software packages. Converting an init script to a unit file requires analyzing the script and extracting the necessary information from it. Based on this data you can create a unit file. As init scripts can vary greatly depending on the type of the service, you might need to employ more configuration options for translation than outlined in this chapter. Note that some levels of customization that were available with init scripts are no longer supported by systemd units. The majority of the information needed for conversion is provided in the script's header. The following example shows the opening section of the init script used to start the postfix service on Red Hat Enterprise Linux 6:

#!/bin/bash
#
# postfix      Postfix Mail Transfer Agent
#
# chkconfig: 2345 80 30
# description: Postfix is a Mail Transport Agent, which is the program that moves mail from one machine to another.
# processname: master
# pidfile: /var/spool/postfix/pid/master.pid
# config: /etc/postfix/main.cf
# config: /etc/postfix/master.cf

### BEGIN INIT INFO
# Provides: postfix MTA
# Required-Start: $local_fs $network $remote_fs
# Required-Stop: $local_fs $network $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop postfix
# Description: Postfix is a Mail Transport Agent, which is the program that moves mail from one machine to another.
### END INIT INFO

In the above example, only the lines starting with # chkconfig and # description are mandatory, so you might not find the rest in different init files. The text enclosed between the BEGIN INIT INFO and END INIT INFO lines is called the Linux Standard Base (LSB) header.
If specified, LSB headers contain directives defining the service description, dependencies, and default runlevels. What follows is an overview of analytic tasks aiming to collect the data needed for a new unit file. The postfix init script is used as an example.

17.9. Finding the systemd service description

You can find descriptive information about the script on the line starting with #description. Use this description together with the service name in the Description option in the [Unit] section of the unit file. The LSB header might contain similar data on the #Short-Description and #Description lines.

17.10. Finding the systemd service dependencies

The LSB header might contain several directives that form dependencies between services. Most of them are translatable to systemd unit options, see the following table:

Table 17.4. Dependency options from the LSB header
17.11. Finding default targets of the service

The line starting with #chkconfig contains three numerical values. The most important is the first number, which represents the default runlevels in which the service is started. Map these runlevels to equivalent systemd targets. Then list these targets in the WantedBy option in the [Install] section of the unit file. For example, postfix was previously started in runlevels 2, 3, 4, and 5, which translates to multi-user.target and graphical.target. Note that graphical.target depends on multi-user.target, therefore it is not necessary to specify both. You might find information on default and forbidden runlevels also on the #Default-Start and #Default-Stop lines in the LSB header. The other two values specified on the #chkconfig line represent the startup and shutdown priorities of the init script. These values are interpreted by systemd if it loads the init script, but there is no unit file equivalent.

17.12. Finding files used by the service

Init scripts require loading a function library from a dedicated directory and allow importing configuration, environment, and PID files. Environment variables are specified on the line starting with #config in the init script header, which translates to the EnvironmentFile unit file option. The PID file specified on the #pidfile init script line is imported to the unit file with the PIDFile option. The key information that is not included in the init script header is the path to the service executable, and potentially some other files required by the service. In previous versions of Red Hat Enterprise Linux, init scripts used a Bash case statement to define the behavior of the service on default actions, such as start, stop, or restart, as well as custom-defined actions. The following excerpt from the postfix init script shows the block of code to be executed at service start.
conf_check() {
    [ -x /usr/sbin/postfix ] || exit 5
    [ -d /etc/postfix ] || exit 6
    [ -d /var/spool/postfix ] || exit 5
}

make_aliasesdb() {
    if [ "$(/usr/sbin/postconf -h alias_database)" == "hash:/etc/aliases" ]
    then
        # /etc/aliases.db might be used by other MTA, make sure nothing
        # has touched it since our last newaliases call
        [ /etc/aliases -nt /etc/aliases.db ] ||
            [ "$ALIASESDB_STAMP" -nt /etc/aliases.db ] ||
            [ "$ALIASESDB_STAMP" -ot /etc/aliases.db ] || return
        /usr/bin/newaliases
        touch -r /etc/aliases.db "$ALIASESDB_STAMP"
    else
        /usr/bin/newaliases
    fi
}

start() {
    [ "$EUID" != "0" ] && exit 4
    # Check that networking is up.
    [ ${NETWORKING} = "no" ] && exit 1
    conf_check
    # Start daemons.
    echo -n $"Starting postfix: "
    make_aliasesdb >/dev/null 2>&1
    [ -x $CHROOT_UPDATE ] && $CHROOT_UPDATE
    /usr/sbin/postfix start 2>/dev/null 1>&2 && success || failure $"$prog start"
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $lockfile
    echo
    return $RETVAL
}

The extensibility of the init script allowed specifying two custom functions, conf_check() and make_aliasesdb(), that are called from the start() function block. On closer look, several external files and directories are mentioned in the above code: the main service executable /usr/sbin/postfix, the /etc/postfix/ and /var/spool/postfix/ configuration directories, as well as the /usr/sbin/postconf utility. systemd supports only the predefined actions, but enables executing custom executables with the ExecStart, ExecStartPre, ExecStartPost, ExecStop, and ExecReload options. The /usr/sbin/postfix executable together with supporting scripts are executed on service start. Converting complex init scripts requires understanding the purpose of every statement in the script. Some of the statements are specific to the operating system version, therefore you do not need to translate them. On the other hand, some adjustments might be needed in the new environment, both in the unit file as well as in the service executable and supporting files.

17.13. Modifying existing unit files

Services installed on the system come with default unit files that are stored in the /usr/lib/systemd/system/ directory. System administrators should not modify these files directly; therefore, any customization must be confined to configuration files in the /etc/systemd/system/ directory.

Procedure
To modify properties, such as dependencies or timeouts, of a service that is handled by a SysV initscript, do not modify the initscript itself. Instead, create a systemd drop-in configuration file for the service as described in Extending the default unit configuration and Overriding the default unit configuration. Then manage this service in the same way as a normal systemd service. For example, to extend the configuration of the network service, do not modify the /etc/rc.d/init.d/network initscript file. Instead, create a new directory /etc/systemd/system/network.service.d/ and a systemd drop-in file /etc/systemd/system/network.service.d/my_config.conf. Then, put the modified values into the drop-in file. Note: systemd knows the network service as network.service, which is why the created directory must be called network.service.d.

17.14. Extending the default unit configuration

This section describes how to extend the default unit file with additional configuration options.

Procedure
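The network.service drop-in described above can be sketched in a throwaway directory tree. This simulates only the file layout; on a real system the directory lives under /etc/systemd/system/, requires root, and must be followed by systemctl daemon-reload. The TimeoutStartSec value is a hypothetical override:

```shell
# Simulate creating a drop-in file for network.service in a temp tree.
etc=$(mktemp -d)
mkdir -p "$etc/network.service.d"

# Only the keys being changed need to appear in a drop-in file.
cat > "$etc/network.service.d/my_config.conf" <<'EOF'
[Service]
TimeoutStartSec=300
EOF

# On a real system, reload systemd afterwards:
#   # systemctl daemon-reload
cat "$etc/network.service.d/my_config.conf"
rm -rf "$etc"
```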
Example 17.1. Extending the httpd.service configuration To modify the httpd.service unit so that a custom shell script is automatically executed when starting the Apache service, perform the following steps.
The configuration files from configuration directories in /etc/systemd/system/ take precedence over unit files in /usr/lib/systemd/system/. Therefore, if the configuration files contain an option that can be specified only once, such as Description or ExecStart, the default value of this option is overridden. Note that in the output of the systemd-delta command, described in Monitoring overridden units, such units are always marked as [EXTENDED], even though, in sum, certain options are actually overridden.

17.15. Overriding the default unit configuration

This section describes how to override the default unit configuration.

Procedure
17.16. Changing the timeout limit

You can specify a timeout value per service to prevent a malfunctioning service from freezing the system. Otherwise, the timeout is set by default to 90 seconds for normal services and to 300 seconds for SysV-compatible services. For example, to extend the timeout limit for the httpd service:

Procedure
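For the httpd example, the drop-in file could look as follows; the file name timeout.conf and the 10-minute value are choices made for this illustration:

```
# /etc/systemd/system/httpd.service.d/timeout.conf
[Service]
TimeoutStartSec=10min
```

After creating the file, run systemctl daemon-reload, and verify the new value with systemctl show httpd.service -p TimeoutStartUSec.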
17.17. Monitoring overridden unitsThis section describes how to display an overview of overridden or modified unit files. Procedure
17.18. Working with instantiated units

It is possible to instantiate multiple units from a single template configuration file at runtime. The "@" character is used to mark the template and to associate units with it. Instantiated units can be started from another unit file (using Requires or Wants options), or with the systemctl start command. Instantiated service units are named the following way:

template_name@instance_name.service

Here, template_name stands for the name of the template configuration file. Replace instance_name with the name for the unit instance. Several instances can point to the same template file with configuration options common for all instances of the unit. Template unit names have the form of:

unit_name@.service

For example, the following Wants setting in a unit file:

Wants=getty@ttyA.service getty@ttyB.service

first makes systemd search for the given service units. If no such units are found, the part between "@" and the type suffix is ignored and systemd searches for the getty@.service file, reads the configuration from it, and starts the services. For example, the getty@.service template contains the following directives:

[Unit]
Description=Getty on %I
…
[Service]
ExecStart=-/sbin/agetty --noclear %I $TERM
…

When getty@ttyA.service and getty@ttyB.service are instantiated from the above template, Description= is resolved as Getty on ttyA and Getty on ttyB.

17.19. Important unit specifiers

Wildcard characters, called unit specifiers, can be used in any unit configuration file. Unit specifiers substitute certain unit parameters and are interpreted at runtime. The following table lists unit specifiers that are particularly useful for template units.

Table 17.5. Important unit specifiers
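As a rough sketch of how systemd derives the %I specifier from an instantiated unit name such as getty@ttyA.service, the instance name is simply the part between "@" and the type suffix (this simplified function ignores the escaping that distinguishes %i from %I):

```shell
# Extract the instance name from an instantiated unit name.
resolve_instance() {
    unit=$1                       # e.g. getty@ttyA.service
    instance=${unit#*@}           # strip everything up to and including "@"
    instance=${instance%.service} # strip the type suffix
    echo "$instance"
}

for u in getty@ttyA.service getty@ttyB.service; do
    echo "Getty on $(resolve_instance "$u")"
done
# Prints:
#   Getty on ttyA
#   Getty on ttyB
```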
For a complete list of unit specifiers, see the systemd.unit(5) manual page. 17.20. Additional resources
Chapter 18. Optimizing systemd to shorten the boot timeA number of systemd unit files are enabled by default. System services that are defined by these unit files are automatically run at boot, which influences the boot time. This section describes:
18.1. Examining system boot performanceTo examine system boot performance, you can use the systemd-analyze command. This command has many options available. However, this section covers only the selected ones that may be important for systemd tuning in order to shorten the boot time. For a complete list and detailed description of all options, see the systemd-analyze man page. Prerequisites
Procedure $ systemctl list-unit-files --state=enabledAnalyzing overall boot timeProcedure
Analyzing unit initialization timeProcedure
The output lists the units in descending order according to the time they took to initialize during the last successful boot. Identifying critical unitsProcedure
The output highlights in red the units that critically slow down the boot. Figure 18.1. The output of the systemd-analyze critical-chain command 18.2. A guide to selecting services that can be safely disabledIf you find the boot time of your system long, you can shorten it by disabling some of the services enabled on boot by default. To list such services, run: $ systemctl list-unit-files --state=enabledTo disable a service, run: # systemctl disable service_nameHowever, certain services must stay enabled so that your operating system remains secure and functions the way you need. You can use the table below as a guide to selecting the services that you can safely disable. The table lists all services enabled by default on a minimal installation of Red Hat Enterprise Linux, and for each service it states whether this service can be safely disabled. The table also provides more information about the circumstances under which the service can be disabled, or the reason why you should not disable the service. Table 18.1. Services enabled by default on a minimal installation of RHEL
To find more information about a service, you can run one of the following commands: $ systemctl cat The systemctl cat command provides the content of the service file located under /usr/lib/systemd/system/ For more information on drop-in files, see the systemd.unit man page. The systemctl help command shows the man page of the particular service. 18.3. Additional resources
Chapter 19. Introduction to managing user and group accountsThe control of users and groups is a core element of Red Hat Enterprise Linux (RHEL) system administration. Each RHEL user has distinct login credentials and can be assigned to various groups to customize their system privileges. 19.1. Introduction to users and groupsA user who creates a file is the owner of that file and the group owner of that file. The file is assigned separate read, write, and execute permissions for the owner, the group, and those outside that group. The file owner can be changed only by the root user. Access permissions to the file can be changed by both the root user and the file owner. A regular user can change group ownership of a file they own to a group of which they are a member. Each user is associated with a unique numerical identification number called a user ID (UID). Each group is associated with a group ID (GID). Users within a group share the same permissions to read, write, and execute files owned by that group. 19.2. Configuring reserved user and group IDsRHEL reserves user and group IDs below 1000 for system users and groups. You can find the reserved user and group IDs in the setup package. To view reserved user and group IDs, use: cat /usr/share/doc/setup*/uidgidIt is recommended to assign IDs to new users and groups starting at 5000, as the reserved range can increase in the future. To make the IDs assigned to new users start at 5000 by default, modify the UID_MIN and GID_MIN parameters in the /etc/login.defs file. Procedure To make the IDs assigned to new users start at 5000 by default:
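The edit to /etc/login.defs might look like the following sketch, using 5000 as the starting value recommended above:

```
# /etc/login.defs — raise the lowest IDs assigned to new users and groups
UID_MIN                 5000
GID_MIN                 5000
```

Users created before this change keep their existing IDs; only users and groups created afterwards receive IDs starting at 5000.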
19.3. User private groupsRHEL uses the user private group (UPG) system configuration, which makes UNIX groups easier to manage. A user private group is created whenever a new user is added to the system. The user private group has the same name as the user for which it was created and that user is the only member of the user private group. UPGs simplify the collaboration on a project between multiple users. In addition, UPG system configuration makes it safe to set default permissions for a newly created file or directory, as it allows both the user, and the group this user is a part of, to make modifications to the file or directory. A list of all groups is stored in the /etc/group configuration file. Chapter 20. Managing user accounts in the web consoleThe RHEL web console offers a graphical interface that enables you to execute a wide range of administrative tasks without accessing your terminal directly. For example, you can add, edit or remove system user accounts. After reading this section, you will know:
Prerequisites
20.1. System user accounts managed in the web consoleWith user accounts displayed in the RHEL web console you can:
The RHEL web console displays all user accounts located in the system. Therefore, you can see at least one user account just after the first login to the web console. After logging into the RHEL web console, you can perform the following operations:
20.2. Adding new accounts using the web consoleUse the following steps for adding user accounts to the system and setting administration rights to the accounts through the RHEL web console. Procedure
20.3. Enforcing password expiration in the web consoleBy default, user account passwords are set to never expire. You can set system passwords to expire after a defined number of days. When the password expires, the next login attempt will prompt for a password change. Procedure
Verification steps
20.4. Terminating user sessions in the web consoleA user creates user sessions when logging into the system. Terminating user sessions means logging the user out of the system. It can be helpful if you need to perform administrative tasks sensitive to configuration changes, for example, system upgrades. In each user account in the RHEL 8 web console, you can terminate all sessions for the account except for the web console session you are currently using. This prevents you from losing access to your system. Procedure
Chapter 21. Managing users from the command lineYou can manage users and groups using the command-line interface (CLI). This enables you to add, remove, and modify users and user groups in a Red Hat Enterprise Linux environment. 21.1. Adding a new user from the command lineThis section describes how to use the useradd utility to add a new user. Prerequisites
Procedure
Verification steps
Additional resources
21.2. Adding a new group from the command lineThis section describes how to use the groupadd utility to add a new group. Prerequisites
Procedure
Verification steps
Additional resources
21.3. Adding a user to a supplementary group from the command lineYou can add a user to a supplementary group to manage permissions or enable access to certain files or devices. Prerequisites
Procedure
Verification steps
21.4. Creating a group directoryUnder the UPG system configuration, you can apply the set-group identification permission (setgid bit) to a directory. The setgid bit makes managing group projects that share a directory simpler. When you apply the setgid bit to a directory, files created within that directory are automatically assigned to the group that owns the directory. Any user with write and execute permissions on the directory can then create, modify, and delete files in it. The following section describes how to create group directories. Prerequisites
Procedure
Verification steps
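The setgid step can be sketched as follows. The directory is a temporary stand-in for a real project path, and the webteam group is hypothetical; the chgrp step needs root privileges or group membership, so it is shown commented out:

```shell
# Create a shared directory and set the setgid bit on it.
dir=$(mktemp -d)           # stand-in for a real path such as /opt/project
# chgrp webteam "$dir"     # hypothetical group; requires appropriate privileges
chmod 2770 "$dir"          # leading 2 = setgid bit; 770 = rwx for owner and group
stat -c '%a' "$dir"        # prints 2770
```

With the setgid bit in place, files created inside the directory inherit the directory's group owner instead of the creating user's primary group.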
Chapter 22. Editing user groups using the command lineA user belongs to a set of groups, which enables a logical collection of users with similar access to files and folders. You can edit the primary and supplementary user groups from the command line to change the user’s permissions.
22.2. Listing the primary and supplementary groups of a userYou can list the groups of users to see which primary and supplementary groups they belong to. Procedure
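The listing can be done with the id utility. The root user is used here only because it exists on every system; substitute any username:

```shell
id root        # full record: UID, primary GID, and all groups of the user
id -gn root    # primary group name only; prints: root
id -Gn root    # names of all groups, primary and supplementary
```

The primary group appears in the gid= field of the full output; any additional names listed under groups= are supplementary groups.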
22.3. Changing the primary group of a userYou can change the primary group of an existing user to a new group. Prerequisites:
Procedure
Verification steps
22.4. Adding a user to a supplementary group from the command lineYou can add a user to a supplementary group to manage permissions or enable access to certain files or devices. Prerequisites
Procedure
Verification steps
22.5. Removing a user from a supplementary groupYou can remove an existing user from a supplementary group to limit their permissions or access to files and devices. Prerequisites
Procedure
Verification steps
22.6. Changing all of the supplementary groups of a userYou can overwrite the list of supplementary groups that you want the user to remain a member of. Prerequisites
Procedure
Verification steps
Chapter 23. Managing sudo accessSystem administrators can grant sudo access to allow non-root users to execute administrative commands that are normally reserved for the root user. As a result, non-root users can execute such commands without logging in to the root user account. 23.1. User authorizations in sudoersThe /etc/sudoers file specifies which users can run which commands using the sudo command. The rules can apply to individual users and user groups. You can also use aliases to simplify defining rules for groups of hosts, commands, and even users. Default aliases are defined in the first part of the /etc/sudoers file. When a user tries to use sudo privileges to run a command that is not allowed in the /etc/sudoers file, the system records a message containing username : user NOT in sudoers to the journal log. The default /etc/sudoers file provides information and examples of authorizations. You can activate a specific example rule by removing the # comment character from the beginning of the line. The authorizations section relevant for users is marked with the following introduction: ## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems).You can use the following format to create new sudoers authorizations and to modify existing authorizations: username hostname=path/to/commandWhere:
You can replace any of these variables with ALL to apply the rule to all users, hosts, or commands. With overly permissive rules, such as ALL ALL=(ALL) ALL, all users are able to run all commands as all users on all hosts. This can lead to security risks. You can specify the arguments negatively using the ! operator. For example, use !root to specify all users except the root user. Note that using allowlists to allow specific users, groups, and commands is more secure than using blocklists to disallow specific users, groups, and commands. By using allowlists, you also block new unauthorized users or groups. Avoid using negative rules for commands because users can overcome such rules by renaming commands using the alias command. The system reads the /etc/sudoers file from beginning to end. Therefore, if the file contains multiple entries for a user, the entries are applied in order. In case of conflicting values, the system uses the last match, even if it is not the most specific match. The preferred way of adding new rules to sudoers is by creating new files in the /etc/sudoers.d/ directory instead of entering rules directly into the /etc/sudoers file. This is because the contents of this directory are preserved during system updates. In addition, it is easier to fix any errors in the separate files than in the /etc/sudoers file. The system reads the files in the /etc/sudoers.d directory when it reaches the following line in the /etc/sudoers file: #includedir /etc/sudoers.dNote that the number sign # at the beginning of this line is part of the syntax and does not mean the line is a comment. The names of files in that directory must not contain a period . and must not end with a tilde ~. 23.2. Granting sudo access to a userSystem administrators can grant sudo access to allow non-root users to execute administrative commands. The sudo command provides users with administrative access without using the password of the root user.
When users need to perform an administrative command, they can precede that command with sudo. The command is then executed as if they were the root user. Be aware of the following limitations:
Prerequisites
Procedure
23.3. Enabling unprivileged users to run certain commandsYou can configure a policy that allows an unprivileged user to run a certain command on a specific workstation. To configure this policy, you need to create and edit a file in the /etc/sudoers.d directory. Prerequisites
Procedure
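Using the username hostname=path/to/command format described above, a rule in a new /etc/sudoers.d/ file might look like the following sketch. The user, host, and command are hypothetical examples:

```
# /etc/sudoers.d/alice — edit safely with: visudo -f /etc/sudoers.d/alice
alice host1.example.com = /usr/bin/systemctl restart httpd.service
```

Editing through visudo validates the syntax before saving, which prevents a malformed rule from locking out sudo access entirely.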
23.4. Additional resources
Chapter 24. Changing and resetting the root passwordIf the existing root password is no longer satisfactory or is forgotten, you can change or reset it both as the root user and a non-root user. 24.1. Changing the root password as the root userThis section describes how to use the passwd command to change the root password as the root user. Prerequisites
Procedure
24.2. Changing or resetting the forgotten root password as a non-root userThis section describes how to use the passwd command to change or reset the forgotten root password as a non-root user. Prerequisites
Procedure
24.3. Resetting the root password on bootIf you are unable to log in as a non-root user or do not belong to the administrative wheel group, you can reset the root password on boot by switching into a specialized chroot jail environment. Procedure
Verification steps
Chapter 25. Managing file permissionsFile permissions control the ability of user and group accounts to view, modify, access, and execute the contents of the files and directories. Every file or directory has three levels of ownership:
Each level of ownership can be assigned the following permissions:
Note that the execute permission for a file allows you to execute that file. The execute permission for a directory allows you to access the contents of the directory, but not execute it. When a new file or directory is created, the default set of permissions is automatically assigned to it. The default permissions for a file or directory are based on two factors:
25.1. Base file permissionsWhenever a new file or directory is created, a base permission is automatically assigned to it. Base permissions for a file or directory can be expressed in symbolic or octal values.
The base permission for a directory is 777 (drwxrwxrwx), which grants everyone the permissions to read, write, and execute. This means that the directory owner, the group, and others can list the contents of the directory, create, delete, and edit items within the directory, and descend into it. Note that individual files within a directory can have their own permissions that might prevent you from editing them, despite having unrestricted access to the directory. The base permission for a file is 666 (-rw-rw-rw-), which grants everyone the permissions to read and write. This means that the file owner, the group, and others can read and edit the file. Example 25.1. Permissions for a file If a file has the following permissions: $ ls -l -rwxrw----. 1 sysadmins sysadmins 2 Mar 2 08:43 file
Example 25.2. Permissions for a directory If a directory has the following permissions: $ ls -dl directory drwxr-----. 1 sysadmins sysadmins 2 Mar 2 08:43 directory
The base permission that is automatically assigned to a file or directory is not the default permission the file or directory ends up with. When you create a file or directory, the base permission is altered by the umask. The combination of the base permission and the umask creates the default permission for files and directories. 25.2. User file-creation mode maskThe user file-creation mode mask (umask) is a variable that controls how file permissions are set for newly created files and directories. The umask automatically removes permissions from the base permission value to increase the overall security of a Linux system. The umask can be expressed in symbolic or octal values.
The default umask for a standard user is 0002. The default umask for a root user is 0022. The first digit of the umask represents special permissions (the setuid, setgid, and sticky bits). The last three digits of the umask represent the permissions that are removed from the user owner (u), group owner (g), and others (o) respectively. Example 25.3. Applying the umask when creating a file The following example illustrates how the umask with an octal value of 0137 is applied to a file with the base permission of 666, to create a file with the default permission of 640. 25.3. Default file permissionsThe default permissions are set automatically for all newly created files and directories. The value of the default permissions is determined by applying the umask to the base permission. Example 25.4. Default permissions for a directory created by a standard user When a standard user creates a new directory, the umask is set to 002 (rwxrwxr-x), and the base permissions for a directory are set to 777 (rwxrwxrwx). This brings the default permissions to 775 (drwxrwxr-x).
This means that the directory owner and the group can list the contents of the directory, create, delete, and edit items within the directory, and descend into it. Other users can only list the contents of the directory and descend into it. Example 25.5. Default permissions for a file created by a standard user When a standard user creates a new file, the umask is set to 002 (rwxrwxr-x), and the base permissions for a file are set to 666 (rw-rw-rw-). This brings the default permissions to 664 (-rw-rw-r--).
This means that the file owner and the group can read and edit the file, while other users can only read the file. Example 25.6. Default permissions for a directory created by the root user When a root user creates a new directory, the umask is set to 022 (rwxr-xr-x), and the base permissions for a directory are set to 777 (rwxrwxrwx). This brings the default permissions to 755 (rwxr-xr-x).
This means that the directory owner can list the contents of the directory, create, delete, and edit items within the directory, and descend into it. The group and others can only list the contents of the directory and descend into it. Example 25.7. Default permissions for a file created by the root user When a root user creates a new file, the umask is set to 022 (rwxr-xr-x), and the base permissions for a file are set to 666 (rw-rw-rw-). This brings the default permissions to 644 (-rw-r--r--).
This means that the file owner can read and edit the file, while the group and others can only read the file. For security reasons, regular files cannot have execute permissions by default, even if the umask is set to 000 (rwxrwxrwx). However, directories can be created with execute permissions. 25.4. Changing file permissions using symbolic valuesYou can use the chmod utility with symbolic values (a combination of letters and signs) to change file permissions for a file or directory. You can assign the following permissions:
Permissions can be assigned to the following levels of ownership:
To add or remove permissions you can use the following signs:
Procedure
Verification steps
Example 25.8. Changing permissions for files and directories
25.5. Changing file permissions using octal valuesYou can use the chmod utility with octal values (numbers) to change file permissions for a file or directory. Procedure
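The two chmod forms can be compared on a scratch file. The file is temporary and the modes are examples only:

```shell
f=$(mktemp)          # scratch file for the demonstration
chmod 644 "$f"       # octal form: rw-r--r--
stat -c '%a' "$f"    # prints 644
chmod g+w "$f"       # symbolic form: add write for the group owner
stat -c '%a' "$f"    # prints 664
chmod o-r "$f"       # symbolic form: remove read from others
stat -c '%a' "$f"    # prints 660
```

Octal values set all nine permission bits at once, while symbolic values adjust only the bits you name and leave the rest unchanged.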
Chapter 26. Managing the umaskYou can use the umask utility to display, set, or change the current or default value of the umask. 26.1. Displaying the current value of the umaskYou can use the umask utility to display the current value of the umask in symbolic or octal mode. Procedure
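In bash, both display modes look like this. The umask is set to a known value inside a subshell so the current session is not affected:

```shell
# Run in a subshell so the change does not persist in the current shell.
(
  umask 0022   # set a known value for the demonstration
  umask        # octal form: prints 0022
  umask -S     # symbolic form: prints u=rwx,g=rx,o=rx
)
```

Note that the symbolic form shows the permissions that are kept, whereas the octal form shows the permissions that are removed.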
26.2. Displaying the default bash umaskThere are a number of shells you can use, such as bash, ksh, zsh and tcsh. Those shells can behave as login or non-login shells. You can invoke the login shell by opening a native or a GUI terminal. To determine whether you are executing a command in a login or a non-login shell, use the echo $0 command. Example 26.1. Determining if you are working in a login or a non-login bash shell
Procedure
26.3. Setting the umask using symbolic valuesYou can use the umask utility with symbolic values (a combination of letters and signs) to set the umask for the current shell session. You can assign the following permissions:
Permissions can be assigned to the following levels of ownership:
To add or remove permissions you can use the following signs:
Procedure
26.4. Setting the umask using octal valuesYou can use the umask utility with octal values (numbers) to set the umask for the current shell session. Procedure
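The effect of an octal umask can be observed on a newly created file; 0137 is the value from Example 25.3:

```shell
tmp=$(mktemp -d)           # scratch directory for the demonstration
(
  umask 0137               # removes x from owner, wx from group, rwx from others
  touch "$tmp/demo"        # file base permission 666 minus the umask
)
stat -c '%a' "$tmp/demo"   # prints 640
```

The subshell keeps the umask change local; the resulting mode 640 matches the default permission derived in Example 25.3.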
26.5. Changing the default umask for the non-login shellYou can change the default bash umask for standard users by modifying the /etc/bashrc file. Prerequisites
Procedure
26.6. Changing the default umask for the login shellYou can change the default bash umask for the root user by modifying the /etc/profile file. Prerequisites
Procedure
26.7. Changing the default umask for a specific userYou can change the default umask for a specific user by modifying the .bashrc for that user. Procedure
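The per-user change amounts to appending one line to that user's ~/.bashrc; 0077 is only an example value:

```
# ~/.bashrc of the given user
umask 0077    # new files: 600, new directories: 700
```

The new value takes effect in shells started after the change, or after sourcing the file with . ~/.bashrc.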
26.8. Setting default permissions for newly created home directoriesYou can change the permission modes for home directories of newly created users by modifying the /etc/login.defs file. Procedure
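Assuming a shadow-utils version that supports it, the relevant /etc/login.defs parameter is HOME_MODE; 0700 is an example value:

```
# /etc/login.defs
HOME_MODE 0700    # home directories of newly created users get mode 700
```

If HOME_MODE is not set, useradd typically derives the home directory mode from the UMASK value in the same file. Existing home directories are not changed.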
Chapter 27. Using dnstap in RHELThe dnstap utility provides an advanced way to monitor and log details of incoming name queries. It records messages sent by the named service. This section explains how to record DNS queries using dnstap. 27.1. Recording DNS queries using dnstap in RHELNetwork administrators can record DNS queries to collect website or IP address information along with the domain health. Prerequisites
If you already have a BIND version installed and running, adding a new version of BIND will overwrite the existing version. Procedure Following are the steps to record DNS queries:
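The named.conf part of the setup might look like the following sketch. The output path is an example, and the exact options available depend on the BIND build having dnstap support compiled in:

```
# /etc/named.conf (inside the options block)
options {
    ...
    dnstap { all; };                                  # record all message types
    dnstap-output file "/var/named/data/dnstap.bin";  # binary dnstap log file
};
```

After reloading named, the recorded file can be inspected with the dnstap-read utility shipped with BIND.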
Chapter 28. Managing the Access Control ListEach file and directory can only have one user owner and one group owner at a time. If you want to grant a user permissions to access specific files or directories that belong to a different user or group while keeping other files and directories private, you can utilize Linux Access Control Lists (ACLs). 28.1. Displaying the current Access Control ListYou can use the getfacl utility to display the current ACL. Procedure
28.2. Setting the Access Control ListYou can use the setfacl utility to set the ACL for a file or directory. Prerequisites
Procedure
Replace username with the name of the user, symbolic_value with a symbolic value, and file-name with the name of the file or directory. For more information see the setfacl man page. Example 28.1. Modifying permissions for a group project The following example describes how to modify permissions for the group-project file owned by the root user that belongs to the root group so that this file is:
Procedure # setfacl -m u:andrew:rw- group-project # setfacl -m u:susan:--- group-projectVerification steps
Chapter 29. Using the Chrony suite to configure NTPAccurate timekeeping is important for a number of reasons in IT. In networking for example, accurate time stamps in packets and logs are required. In Linux systems, the NTP protocol is implemented by a daemon running in user space. The user space daemon updates the system clock running in the kernel. The system clock can keep time by using various clock sources. Usually, the Time Stamp Counter (TSC) is used. The TSC is a CPU register which counts the number of cycles since it was last reset. It is very fast, has a high resolution, and there are no interruptions. Starting with Red Hat Enterprise Linux 8, the NTP protocol is implemented by the chronyd daemon, available from the repositories in the chrony package. The following sections describe how to use the chrony suite to configure NTP. 29.1. Introduction to chrony suitechrony is an implementation of the Network Time Protocol (NTP). You can use chrony to:
chrony performs well in a wide range of conditions, including intermittent network connections, heavily congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature), and systems that do not run continuously, or run on a virtual machine. Typical accuracy between two machines synchronized over the Internet is within a few milliseconds, and for machines on a LAN within tens of microseconds. Hardware timestamping or a hardware reference clock may improve accuracy between two machines synchronized to a sub-microsecond level. chrony consists of chronyd, a daemon that runs in user space, and chronyc, a command line program which can be used to monitor the performance of chronyd and to change various operating parameters when it is running. The chrony daemon, chronyd, can be monitored and controlled by the command line utility chronyc. This utility provides a command prompt which allows entering a number of commands to query the current state of chronyd and make changes to its configuration. By default, chronyd accepts only commands from a local instance of chronyc, but it can be configured to accept monitoring commands also from remote hosts. The remote access should be restricted. 29.2. Using chronyc to control chronydThis section describes how to control chronyd using the chronyc command line utility. Procedure
Changes made using chronyc are not permanent; they are lost after a chronyd restart. For permanent changes, modify /etc/chrony.conf. 29.3. Migrating to chronyIn Red Hat Enterprise Linux 7, users could choose between ntp and chrony to ensure accurate timekeeping. For differences between ntp and chrony, ntpd and chronyd, see Differences between ntpd and chronyd. Starting with Red Hat Enterprise Linux 8, ntp is no longer supported. chrony is enabled by default. For this reason, you might need to migrate from ntp to chrony. Migrating from ntp to chrony is straightforward in most cases. The corresponding names of the programs, configuration files and services are: Table 29.1. Corresponding names of the programs, configuration files and services when migrating from ntp to chrony
The ntpdate and sntp utilities, which are included in the ntp distribution, can be replaced with chronyd using the -q option or the -t option. The configuration can be specified on the command line to avoid reading /etc/chrony.conf. For example, instead of running ntpdate ntp.example.com, chronyd could be started as: # chronyd -q 'server ntp.example.com iburst' 2018-05-18T12:37:43Z chronyd version 3.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG) 2018-05-18T12:37:43Z Initial frequency -2.630 ppm 2018-05-18T12:37:48Z System clock wrong by 0.003159 seconds (step) 2018-05-18T12:37:48Z chronyd exitingThe ntpstat utility, which was previously included in the ntp package and supported only ntpd, now supports both ntpd and chronyd. It is available in the ntpstat package. 29.3.1. Migration scriptA Python script called ntp2chrony.py is included in the documentation of the chrony package (/usr/share/doc/chrony). The script automatically converts an existing ntp configuration to chrony. It supports the most common directives and options in the ntp.conf file. Any lines that are ignored in the conversion are included as comments in the generated chrony.conf file for review. Keys that are specified in the ntp key file, but are not marked as trusted keys in ntp.conf are included in the generated chrony.keys file as comments. By default, the script does not overwrite any files. If /etc/chrony.conf or /etc/chrony.keys already exist, the -b option can be used to rename the file as a backup. The script supports other options. The --help option prints all supported options. 
An example of an invocation of the script with the default ntp.conf provided in the ntp package is: # python3 /usr/share/doc/chrony/ntp2chrony.py -b -v Reading /etc/ntp.conf Reading /etc/ntp/crypto/pw Reading /etc/ntp/keys Writing /etc/chrony.conf Writing /etc/chrony.keysThe only directive ignored in this case is disable monitor, which has a chrony equivalent in the noclientlog directive, but it was included in the default ntp.conf only to mitigate an amplification attack. The generated chrony.conf file typically includes a number of allow directives corresponding to the restrict lines in ntp.conf. If you do not want to run chronyd as an NTP server, remove all allow directives from chrony.conf. Chapter 30. Using ChronyThe following sections describe how to install, start, and stop chronyd, and how to check if chrony is synchronized. Sections also describe how to manually adjust System Clock. 30.1. Managing chronyThe following procedure describes how to install, start, stop, and check the status of chronyd. Procedure
30.2. Checking if chrony is synchronizedThe following procedure describes how to check if chrony is synchronized with the use of the tracking, sources, and sourcestats commands. Procedure
Additional resources
30.3. Manually adjusting the System ClockThe following procedure describes how to manually adjust the System Clock. Procedure
If the rtcfile directive is used, the real-time clock should not be manually adjusted. Random adjustments would interfere with chrony's need to measure the rate at which the real-time clock drifts. 30.4. Setting up chrony for a system in an isolated networkFor a network that is never connected to the Internet, one computer is selected to be the master timeserver. The other computers are either direct clients of the master, or clients of clients. On the master, the drift file must be manually set with the average rate of drift of the system clock. If the master is rebooted, it will obtain the time from surrounding systems and calculate an average to set its system clock. Thereafter it resumes applying adjustments based on the drift file. The drift file will be updated automatically when the settime command is used. The following procedure describes how to set up chrony for a system in an isolated network. Procedure
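On the master, /etc/chrony.conf might contain directives like these. The subnet is an example value to replace with your own:

```
# /etc/chrony.conf on the master timeserver (sketch)
driftfile /var/lib/chrony/drift
local stratum 8          # serve time even without an upstream source
manual                   # allow manual time input with chronyc settime
allow 192.0.2.0/24       # clients permitted to synchronize (example subnet)
```

The stratum value only needs to be low enough for clients to accept the master as a source; 8 leaves room for real NTP servers to take precedence if the network is ever connected.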
On the client systems which are not to be direct clients of the master, the /etc/chrony.conf file should be the same except that the local and allow directives should be omitted. In an isolated network, you can also use the local directive that enables a local reference mode, which allows chronyd operating as an NTP server to appear synchronized to real time, even when it was never synchronized or the last update of the clock happened a long time ago. To allow multiple servers in the network to use the same local configuration and to be synchronized to one another, without confusing clients that poll more than one server, use the orphan option of the local directive which enables the orphan mode. Each server needs to be configured to poll all other servers with local. This ensures that only the server with the smallest reference ID has the local reference active and other servers are synchronized to it. When the server fails, another one will take over. 30.5. Configuring remote monitoring accesschronyc can access chronyd in two ways:
By default, chronyc connects to the Unix domain socket. The default path is /var/run/chrony/chronyd.sock. If this connection fails, which can happen for example when chronyc is running under a non-root user, chronyc tries to connect to 127.0.0.1 and then ::1. Only the following monitoring commands, which do not affect the behavior of chronyd, are allowed from the network:
The set of hosts from which chronyd accepts these commands can be configured with the cmdallow directive in the configuration file of chronyd, or the cmdallow command in chronyc. By default, the commands are accepted only from localhost (127.0.0.1 or ::1). All other commands are allowed only through the Unix domain socket. When sent over the network, chronyd responds with a Not authorised error, even if it is from localhost. The following procedure describes how to access chronyd remotely with chronyc. Procedure
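A minimal sketch of the server-side configuration for remote monitoring, assuming an illustrative subnet:

```
# /etc/chrony.conf on the monitored host
bindcmdaddress 0.0.0.0    # accept command packets on all IPv4 addresses
cmdallow 192.0.2.0/24     # permit monitoring commands from this subnet
```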
Additional resources
30.6. Managing time synchronization using RHEL System RolesYou can manage time synchronization on multiple target machines using the timesync role. The timesync role installs and configures an NTP or PTP implementation to operate as an NTP client or PTP slave in order to synchronize the system clock with NTP servers or grandmasters in PTP domains. Note that using the timesync role also facilitates migration to chrony, because you can use the same playbook on all versions of Red Hat Enterprise Linux starting with RHEL 6 regardless of whether the system uses ntp or chrony to implement the NTP protocol. The timesync role replaces the configuration of the given or detected provider service on the managed host. Previous settings are lost, even if they are not specified in the role variables. The only preserved setting is the choice of provider if the timesync_ntp_provider variable is not defined. The following example shows how to apply the timesync role in a situation with just one pool of servers. Example 30.1. An example playbook applying the timesync role for a single pool of servers

---
- hosts: timesync-test
  vars:
    timesync_ntp_servers:
      - hostname: 2.rhel.pool.ntp.org
        pool: yes
        iburst: yes
  roles:
    - rhel-system-roles.timesync

For a detailed reference on timesync role variables, install the rhel-system-roles package, and see the README.md or README.html files in the /usr/share/doc/rhel-system-roles/timesync directory. 30.7. Additional resources
Chapter 31. Chrony with HW timestampingHardware timestamping is a feature supported in some Network Interface Controllers (NICs) which provides accurate timestamping of incoming and outgoing packets. NTP timestamps are usually created by the kernel and chronyd with the use of the system clock. However, when HW timestamping is enabled, the NIC uses its own clock to generate the timestamps when packets are entering or leaving the link layer or the physical layer. When used with NTP, hardware timestamping can significantly improve the accuracy of synchronization. For best accuracy, both NTP servers and NTP clients need to use hardware timestamping. Under ideal conditions, a sub-microsecond accuracy may be possible. Another protocol for time synchronization that uses hardware timestamping is PTP. Unlike NTP, PTP relies on assistance in network switches and routers. If you want to reach the best accuracy of synchronization, use PTP on networks that have switches and routers with PTP support, and prefer NTP on networks that do not have such switches and routers. The following sections describe how to:
31.1. Verifying support for hardware timestampingTo verify that hardware timestamping with NTP is supported by an interface, use the ethtool -T command. An interface can be used for hardware timestamping with NTP if ethtool lists the SOF_TIMESTAMPING_TX_HARDWARE and SOF_TIMESTAMPING_TX_SOFTWARE capabilities and also the HWTSTAMP_FILTER_ALL filter mode. Example 31.1. Verifying support for hardware timestamping on a specific interface

# ethtool -T eth0

Output:

Timestamping parameters for eth0:
Capabilities:
        hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
        software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
        hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
        software-receive      (SOF_TIMESTAMPING_RX_SOFTWARE)
        software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
        hardware-raw-clock    (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
        off                   (HWTSTAMP_TX_OFF)
        on                    (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
        none                  (HWTSTAMP_FILTER_NONE)
        all                   (HWTSTAMP_FILTER_ALL)
        ptpv1-l4-sync         (HWTSTAMP_FILTER_PTP_V1_L4_SYNC)
        ptpv1-l4-delay-req    (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ)
        ptpv2-l4-sync         (HWTSTAMP_FILTER_PTP_V2_L4_SYNC)
        ptpv2-l4-delay-req    (HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ)
        ptpv2-l2-sync         (HWTSTAMP_FILTER_PTP_V2_L2_SYNC)
        ptpv2-l2-delay-req    (HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ)
        ptpv2-event           (HWTSTAMP_FILTER_PTP_V2_EVENT)
        ptpv2-sync            (HWTSTAMP_FILTER_PTP_V2_SYNC)
        ptpv2-delay-req       (HWTSTAMP_FILTER_PTP_V2_DELAY_REQ)

31.2. Enabling hardware timestampingTo enable hardware timestamping, use the hwtimestamp directive in the /etc/chrony.conf file. The directive can either specify a single interface, or a wildcard character can be used to enable hardware timestamping on all interfaces that support it. Use the wildcard specification in case no other application, like ptp4l from the linuxptp package, is using hardware timestamping on an interface. Multiple hwtimestamp directives are allowed in the chrony configuration file. Example 31.2.
Enabling hardware timestamping by using the hwtimestamp directive

hwtimestamp eth0
hwtimestamp eth2
hwtimestamp *

31.3. Configuring client polling intervalThe default range of a polling interval (64-1024 seconds) is recommended for servers on the Internet. For local servers and hardware timestamping, a shorter polling interval needs to be configured in order to minimize offset of the system clock. The following directive in /etc/chrony.conf specifies a local NTP server using one second polling interval:

server ntp.local minpoll 0 maxpoll 0
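The minpoll and maxpoll values are base-2 exponents of the polling interval in seconds, which is why minpoll 0 maxpoll 0 gives a one-second interval. A quick illustration:

```shell
# minpoll/maxpoll are exponents: interval = 2^value seconds
poll_seconds() { echo $((1 << $1)); }   # 1 << n equals 2^n for non-negative n

poll_seconds 0    # 1    second  (local server with HW timestamping)
poll_seconds 6    # 64   seconds (default minimum)
poll_seconds 10   # 1024 seconds (default maximum)
```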
31.4. Enabling interleaved modeNTP servers that are not hardware NTP appliances, but rather general purpose computers running a software NTP implementation, like chrony, will get a hardware transmit timestamp only after sending a packet. This behavior prevents the server from saving the timestamp in the packet to which it corresponds. In order to enable NTP clients receiving transmit timestamps that were generated after the transmission, configure the clients to use the NTP interleaved mode by adding the xleave option to the server directive in /etc/chrony.conf:

server ntp.local minpoll 0 maxpoll 0 xleave

31.5. Configuring server for large number of clientsThe default server configuration allows at most a few thousand clients to use the interleaved mode concurrently. To configure the server for a larger number of clients, increase the clientloglimit directive in /etc/chrony.conf. This directive specifies the maximum size of memory allocated for logging of clients' access on the server:

clientloglimit 100000000

31.6. Verifying hardware timestampingTo verify that the interface has successfully enabled hardware timestamping, check the system log. The log should contain a message from chronyd for each interface with successfully enabled hardware timestamping. Example 31.3. Log messages for interfaces with enabled hardware timestamping

chronyd[4081]: Enabled HW timestamping on eth0
chronyd[4081]: Enabled HW timestamping on eth2

When chronyd is configured as an NTP client or peer, you can have the transmit and receive timestamping modes and the interleaved mode reported for each NTP source by the chronyc ntpdata command: Example 31.4.
Reporting the transmit, receive timestamping and interleaved mode for each NTP source

# chronyc ntpdata

Output:

Remote address  : 203.0.113.15 (CB00710F)
Remote port     : 123
Local address   : 203.0.113.74 (CB00714A)
Leap status     : Normal
Version         : 4
Mode            : Server
Stratum         : 1
Poll interval   : 0 (1 seconds)
Precision       : -24 (0.000000060 seconds)
Root delay      : 0.000015 seconds
Root dispersion : 0.000015 seconds
Reference ID    : 47505300 (GPS)
Reference time  : Wed May 03 13:47:45 2017
Offset          : -0.000000134 seconds
Peer delay      : 0.000005396 seconds
Peer dispersion : 0.000002329 seconds
Response time   : 0.000152073 seconds
Jitter asymmetry: +0.00
NTP tests       : 111 111 1111
Interleaved     : Yes
Authenticated   : No
TX timestamping : Hardware
RX timestamping : Hardware
Total TX        : 27
Total RX        : 27
Total valid RX  : 27

Example 31.5. Reporting the stability of NTP measurements

# chronyc sourcestats

With hardware timestamping enabled, stability of NTP measurements should be in tens or hundreds of nanoseconds, under normal load. This stability is reported in the Std Dev column of the output of the chronyc sourcestats command:

Output:

210 Number of sources = 1
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
ntp.local                  12   7    11     +0.000      0.019     +0ns     49ns

31.7. Configuring PTP-NTP bridgeIf a highly accurate Precision Time Protocol (PTP) grandmaster is available in a network that does not have switches or routers with PTP support, a computer may be dedicated to operate as a PTP slave and a stratum-1 NTP server. Such a computer needs to have two or more network interfaces, and be close to the grandmaster or have a direct connection to it. This will ensure highly accurate synchronization in the network. Configure the ptp4l and phc2sys programs from the linuxptp packages to use one interface to synchronize the system clock using PTP. Configure chronyd to provide the system time using the other interface: Example 31.6.
Configuring chronyd to provide the system time using the other interface

bindaddress 203.0.113.74
hwtimestamp eth2
local stratum 1

Chapter 32. Achieving some settings previously supported by NTP in chronySome settings that were supported by ntp in previous major versions of Red Hat Enterprise Linux are not supported by chrony. The following sections list such settings, and describe ways to achieve them on a system with chrony. 32.1. Monitoring by ntpq and ntpdcchronyd cannot be monitored by the ntpq and ntpdc utilities from the ntp distribution, because chrony does not support the NTP modes 6 and 7. It supports a different protocol and chronyc is the client implementation. For more information, see the chronyc(1) man page. To monitor the status of the system clock synchronized by chronyd, you can:
Example 32.1. Using the tracking command

$ chronyc -n tracking

Reference ID    : 0A051B0A (10.5.27.10)
Stratum         : 2
Ref time (UTC)  : Thu Mar 08 15:46:20 2018
System time     : 0.000000338 seconds slow of NTP time
Last offset     : +0.000339408 seconds
RMS offset      : 0.000339408 seconds
Frequency       : 2.968 ppm slow
Residual freq   : +0.001 ppm
Skew            : 3.336 ppm
Root delay      : 0.157559142 seconds
Root dispersion : 0.001339232 seconds
Update interval : 64.5 seconds
Leap status     : Normal

Example 32.2. Using the ntpstat utility

$ ntpstat
synchronised to NTP server (10.5.27.10) at stratum 2
   time correct to within 80 ms
   polling server every 64 s

32.2. Using authentication mechanism based on public key cryptographyIn Red Hat Enterprise Linux 7, ntp supported Autokey, which is an authentication mechanism based on public key cryptography. In Red Hat Enterprise Linux 8, chronyd supports Network Time Security (NTS), a modern secure authentication mechanism, instead of Autokey. For more information, see Overview of Network Time Security (NTS) in chrony.
32.3. Using ephemeral symmetric associationsIn Red Hat Enterprise Linux 7, ntpd supported ephemeral symmetric associations, which can be mobilized by packets from peers which are not specified in the ntp.conf configuration file. In Red Hat Enterprise Linux 8, chronyd needs all peers to be specified in chrony.conf. Ephemeral symmetric associations are not supported. Note that using the client/server mode enabled by the server or pool directive is more secure compared to the symmetric mode enabled by the peer directive. 32.4. multicast/broadcast clientRed Hat Enterprise Linux 7 supported the broadcast/multicast NTP mode, which simplifies configuration of clients. With this mode, clients can be configured to just listen for packets sent to a multicast/broadcast address instead of listening for specific names or addresses of individual servers, which may change over time. In Red Hat Enterprise Linux 8, chronyd does not support the broadcast/multicast mode. The main reason is that it is less accurate and less secure than the ordinary client/server and symmetric modes. There are several options for migrating from an NTP broadcast/multicast setup:
Chapter 33. Overview of Network Time Security (NTS) in chrony
Network Time Security (NTS) is an authentication mechanism for Network Time Protocol (NTP), designed to scale to a substantial number of clients. It verifies that the packets received from the server machines are unaltered in transit to the client machine. Network Time Security (NTS) includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. 33.1. Enabling Network Time Security (NTS) in the client configuration fileBy default, Network Time Security (NTS) is not enabled. You can enable NTS in the /etc/chrony.conf file. For that, perform the following steps: Prerequisites
Procedure In the client configuration file:
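A minimal sketch of the client-side change, assuming an illustrative server name; the nts option of the server directive enables NTS for that source, and ntsdumpdir lets chronyd save NTS keys and cookies across restarts:

```
# /etc/chrony.conf on the client
server time.example.com iburst nts
ntsdumpdir /var/lib/chrony
```

After editing the file, restart the chronyd service for the change to take effect.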
Verification
Additional resources
33.2. Enabling Network Time Security (NTS) on the serverIf you run your own Network Time Protocol (NTP) server, you can enable the server Network Time Security (NTS) support to facilitate its clients to synchronize securely. If the NTP server is a client of other servers, that is, it is not a Stratum 1 server, it should use NTS or symmetric key for its synchronization. Prerequisites
Procedure
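A minimal sketch of the server-side directives, assuming illustrative certificate and key paths:

```
# /etc/chrony.conf on the NTS-enabled server
ntsservercert /etc/pki/tls/certs/ntp-server.crt
ntsserverkey /etc/pki/tls/private/ntp-server.key
allow 192.0.2.0/24    # clients permitted to synchronize
```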
Verification
Chapter 34. Using secure communications between two systems with OpenSSHSSH (Secure Shell) is a protocol which provides secure communications between two systems using a client-server architecture and allows users to log in to server host systems remotely. Unlike other remote communication protocols, such as FTP or Telnet, SSH encrypts the login session, which prevents intruders from collecting unencrypted passwords from the connection. Red Hat Enterprise Linux includes the basic OpenSSH packages: the general openssh package, the openssh-server package and the openssh-clients package. Note that the OpenSSH packages require the OpenSSL package openssl-libs, which installs several important cryptographic libraries that enable OpenSSH to provide encrypted communications. 34.1. SSH and OpenSSHSSH (Secure Shell) is a program for logging into a remote machine and executing commands on that machine. The SSH protocol provides secure encrypted communications between two untrusted hosts over an insecure network. You can also forward X11 connections and arbitrary TCP/IP ports over the secure channel. The SSH protocol mitigates security threats, such as interception of communication between two systems and impersonation of a particular host, when you use it for remote shell login or file copying. This is because the SSH client and server use digital signatures to verify their identities. Additionally, all communication between the client and server systems is encrypted. A host key authenticates hosts in the SSH protocol. Host keys are cryptographic keys that are generated automatically when OpenSSH is first installed, or when the host boots for the first time. OpenSSH is an implementation of the SSH protocol supported by Linux, UNIX, and similar operating systems. It includes the core files necessary for both the OpenSSH client and server. The OpenSSH suite consists of the following user-space tools:
Two versions of SSH currently exist: version 1, and the newer version 2. The OpenSSH suite in RHEL supports only SSH version 2. It has an enhanced key-exchange algorithm that is not vulnerable to exploits known in version 1. OpenSSH, as one of the core cryptographic subsystems of RHEL, uses system-wide crypto policies. This ensures that weak cipher suites and cryptographic algorithms are disabled in the default configuration. To modify the policy, the administrator must either use the update-crypto-policies command to adjust the settings or manually opt out of the system-wide crypto policies. The OpenSSH suite uses two sets of configuration files: one for client programs (that is, ssh, scp, and sftp), and another for the server (the sshd daemon). System-wide SSH configuration information is stored in the /etc/ssh/ directory. User-specific SSH configuration information is stored in ~/.ssh/ in the user’s home directory. For a detailed list of OpenSSH configuration files, see the FILES section in the sshd(8) man page. 34.2. Configuring and starting an OpenSSH serverUse the following procedure for a basic configuration that might be required for your environment and for starting an OpenSSH server. Note that after the default RHEL installation, the sshd daemon is already started and server host keys are automatically created. Prerequisites
Procedure
Verification
Additional resources
34.3. Setting an OpenSSH server for key-based authenticationTo improve system security, enforce key-based authentication by disabling password authentication on your OpenSSH server. Prerequisites
Procedure
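The core of the procedure is a small change in /etc/ssh/sshd_config, followed by a reload of the sshd daemon; a sketch:

```
# /etc/ssh/sshd_config — disable password logins, keep key-based logins
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```

After editing the file, apply the change with systemctl reload sshd. Make sure a working key pair is already in place and tested, or you can lock yourself out of the server.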
Additional resources
34.4. Generating SSH key pairsUse this procedure to generate an SSH key pair on a local system and to copy the generated public key to an OpenSSH server. If the server is configured accordingly, you can log in to the OpenSSH server without providing any password. If you complete the following steps as root, only root is able to use the keys. Procedure
If you reinstall your system and want to keep previously generated key pairs, back up the ~/.ssh/ directory. After reinstalling, copy it back to your home directory. You can do this for all users on your system, including root. Verification
Additional resources
34.5. Using SSH keys stored on a smart cardRed Hat Enterprise Linux enables you to use RSA and ECDSA keys stored on a smart card on OpenSSH clients. Use this procedure to enable authentication using a smart card instead of using a password. Prerequisites
Procedure
If you skip the id= part of a PKCS #11 URI, OpenSSH loads all keys that are available in the proxy module. This can reduce the amount of typing required:

$ ssh -i pkcs11: example.com
Enter PIN for 'SSH key':
[example.com] $

34.6. Making OpenSSH more secureThe following tips help you to increase security when using OpenSSH. Note that changes in the /etc/ssh/sshd_config OpenSSH configuration file require reloading the sshd daemon to take effect:

# systemctl reload sshd

The majority of security hardening configuration changes reduce compatibility with clients that do not support up-to-date algorithms or cipher suites. Disabling insecure connection protocols
Enabling key-based authentication and disabling password-based authentication
Key types
Non-default port
No root login
Using the X Security extension
Restricting access to specific users, groups, or domains
Changing system-wide cryptographic policies
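The hardening topics listed above can be summarized in a single sshd_config sketch; the values shown are examples, not recommendations for every environment:

```
# /etc/ssh/sshd_config — hardening sketch (example values)
PasswordAuthentication no    # key-based authentication only
PubkeyAuthentication yes
Port 2222                    # non-default port to reduce automated scans
PermitRootLogin no           # no direct root login
X11Forwarding no             # avoid the X11 attack surface
AllowUsers admin1 admin2     # restrict access to specific users
```

Reload the sshd daemon after each change, and keep an open session while testing so that a mistake does not lock you out.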
Additional resources
34.7. Connecting to a remote server using an SSH jump hostUse this procedure for connecting your local system to a remote server through an intermediary server, also called jump host. Prerequisites
Procedure
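A minimal ~/.ssh/config entry for the jump-host setup described here might look like this (host names are illustrative):

```
# ~/.ssh/config on the local system
Host remote1.example.com
    ProxyJump jump1.example.com
```

With this entry in place, ssh remote1.example.com transparently connects through the jump host.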
You can specify more jump servers and you can also skip adding host definitions to the configuration file when you provide their complete host names, for example:

$ ssh -J jump1.example.com,jump2.example.com,jump3.example.com remote1.example.com

Change the host name-only notation in the previous command if the user names or SSH ports on the jump servers differ from the names and ports on the remote server, for example:

$ ssh -J johndoe@jump1.example.com:75,johndoe@jump2.example.com:75,:75 :220

Additional resources
34.8. Connecting to remote machines with SSH keys using ssh-agentTo avoid entering a passphrase each time you initiate an SSH connection, you can use the ssh-agent utility to cache the private SSH key. The private key and the passphrase remain secure. Prerequisites
For more information, see Generating SSH key pairs. Procedure
Verification
34.9. Additional resources
Chapter 35. Configuring a remote logging solutionTo ensure that logs from various machines in your environment are recorded centrally on a logging server, you can configure the Rsyslog application to record logs that fit specific criteria from the client system to the server. 35.1. The Rsyslog logging serviceThe Rsyslog application, in combination with the systemd-journald service, provides local and remote logging support in Red Hat Enterprise Linux. The rsyslogd daemon continuously reads syslog messages received by the systemd-journald service from the Journal. rsyslogd then filters and processes these syslog events and records them to rsyslog log files or forwards them to other services according to its configuration. The rsyslogd daemon also provides extended filtering, encryption protected relaying of messages, input and output modules, and support for transportation using the TCP and UDP protocols. In /etc/rsyslog.conf, which is the main configuration file for rsyslog, you can specify the rules according to which rsyslogd handles the messages. Generally, you can classify messages by their source and topic (facility) and urgency (priority), and then assign an action that should be performed when a message fits these criteria. In /etc/rsyslog.conf, you can also see a list of log files maintained by rsyslogd. Most log files are located in the /var/log/ directory. Some applications, such as httpd and samba, store their log files in a subdirectory within /var/log/. Additional resources
35.2. Installing Rsyslog documentationThe Rsyslog application has extensive online documentation that is available at https://www.rsyslog.com/doc/, but you can also install the rsyslog-doc documentation package locally. Prerequisites
Procedure
Verification
35.3. Configuring a server for remote logging over TCPThe Rsyslog application enables you to both run a logging server and configure individual systems to send their log files to the logging server. To use remote logging through TCP, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems. With the Rsyslog application, you can maintain a centralized logging system where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, you can configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues cannot be configured for connections using the UDP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, it does not have to be loaded. By default, rsyslog uses TCP on port 514. Prerequisites
Procedure
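A sketch of the server-side configuration, assuming illustrative file names and paths; imtcp is the input module that receives syslog messages over TCP:

```
# /etc/rsyslog.d/remotelog.conf on the logging server
module(load="imtcp")                  # load the TCP input module
input(type="imtcp" port="514")        # listen on the default TCP port

# store messages from each remote host in its own file
template(name="RemoteLogs" type="string"
         string="/var/log/remote/%HOSTNAME%.log")
*.* action(type="omfile" dynaFile="RemoteLogs")
```

Remember to open TCP port 514 in the firewall and restart the rsyslog service after saving the file.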
Your log server is now configured to receive and store log files from the other systems in your environment. Additional resources
35.4. Configuring remote logging to a server over TCPFollow this procedure to configure a system for forwarding log messages to a server over the TCP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it. Prerequisites
Procedure
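A sketch of the client-side forwarding rule with an action queue, assuming an illustrative server address:

```
# /etc/rsyslog.d/remotelog.conf on the client
*.* action(type="omfwd"
      target="192.0.2.1" port="514" protocol="tcp"
      action.resumeRetryCount="100"                  # retry if the server is down
      queue.type="linkedList" queue.size="10000")    # buffer unsent messages
```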
Verification To verify that the client system sends messages to the server, follow these steps:
Additional resources
35.5. Configuring TLS-encrypted remote loggingBy default, Rsyslog sends remote-logging communication in the plain text format. If your scenario requires to secure this communication channel, you can encrypt it using TLS. To use encrypted transport through TLS, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems. You can use either the ossl network stream driver (OpenSSL) or the gtls stream driver (GnuTLS). If you have a separate system with higher security, for example, a system that is not connected to any network or has stricter authorizations, use the separate system as the certifying authority (CA). Prerequisites
Procedure
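As a sketch, the client side combines the TLS stream driver settings with the forwarding action; the certificate paths and server name are illustrative:

```
# /etc/rsyslog.d/remotelog.conf on the client (TLS over TCP)
global(
  DefaultNetstreamDriver="ossl"
  DefaultNetstreamDriverCAFile="/etc/pki/rsyslog/ca-cert.pem"
  DefaultNetstreamDriverCertFile="/etc/pki/rsyslog/client-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/pki/rsyslog/client-key.pem"
)
*.* action(type="omfwd" target="server.example.com" port="6514" protocol="tcp"
      StreamDriverMode="1"                  # TLS-only mode
      StreamDriverAuthMode="x509/name")     # authenticate the peer certificate
```

The server side uses the same global driver settings together with an imtcp input configured for TLS.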
Verification To verify that the client system sends messages to the server, follow these steps:
Additional resources
35.6. Configuring a server for receiving remote logging information over UDPThe Rsyslog application enables you to configure a system to receive logging information from remote systems. To use remote logging through UDP, configure both the server and the client. The receiving server collects and analyzes the logs sent by one or more client systems. By default, rsyslog uses UDP on port 514 to receive log information from remote systems. Follow this procedure to configure a server for collecting and analyzing logs sent by one or more client systems over the UDP protocol. Prerequisites
Procedure
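A minimal server-side sketch; imudp is the UDP input module:

```
# /etc/rsyslog.d/remotelogudp.conf on the logging server
module(load="imudp")              # load the UDP input module
input(type="imudp" port="514")    # listen on the default UDP port
```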
Additional resources
35.7. Configuring remote logging to a server over UDPFollow this procedure to configure a system for forwarding log messages to a server over the UDP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it. Prerequisites
Procedure
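The client-side rule differs from the TCP case only in the protocol; a sketch with an illustrative server address (omfwd defaults to UDP, so the protocol parameter is shown for clarity):

```
# /etc/rsyslog.d/remotelogudp.conf on the client
*.* action(type="omfwd" target="192.0.2.1" port="514" protocol="udp")
```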
Verification To verify that the client system sends messages to the server, follow these steps:
Additional resources
35.8. Load balancing helper in RsyslogThe RebindInterval setting specifies an interval at which the current connection is broken and is re-established. This setting applies to TCP, UDP, and RELP traffic. The load balancers perceive it as a new connection and forward the messages to another physical target system. The RebindInterval setting proves to be helpful in scenarios when a target system has changed its IP address. The Rsyslog application caches the IP address when the connection establishes, therefore, the messages are sent to the same server. If the IP address changes, the UDP packets will be lost until the Rsyslog service restarts. Re-establishing the connection ensures that the IP address is resolved by DNS again.

action(type="omfwd" protocol="tcp" RebindInterval="250" target="example.com" port="514" …)
action(type="omfwd" protocol="udp" RebindInterval="250" target="example.com" port="514" …)
action(type="omrelp" RebindInterval="250" target="example.com" port="6514" …)

35.9. Configuring reliable remote loggingWith the Reliable Event Logging Protocol (RELP), you can send and receive syslog messages over TCP with a much reduced risk of message loss. RELP provides reliable delivery of event messages, which makes it useful in environments where message loss is not acceptable. To use RELP, configure the imrelp input module, which runs on the server and receives the logs, and the omrelp output module, which runs on the client and sends logs to the logging server. Prerequisites
Procedure
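A sketch of the two sides of a RELP setup; the server address and port are illustrative:

```
# Client: /etc/rsyslog.d/relpclient.conf
module(load="omrelp")                              # RELP output module
*.* action(type="omrelp" target="192.0.2.1" port="2514")

# Server: /etc/rsyslog.d/relpserver.conf
module(load="imrelp")                              # RELP input module
input(type="imrelp" port="2514")
```

Restart the rsyslog service on both machines after saving the files.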
Verification To verify that the client system sends messages to the server, follow these steps:
Additional resources
35.10. Supported Rsyslog modulesTo expand the functionality of the Rsyslog application, you can use specific modules. Modules provide additional inputs (Input Modules), outputs (Output Modules), and other functionalities. A module can also provide additional configuration directives that become available after you load the module. You can list the input and output modules installed on your system by entering the following command:

# ls /usr/lib64/rsyslog/{i,o}m*

You can view the list of all available rsyslog modules in the /usr/share/doc/rsyslog/html/configuration/modules/idx_output.html file after you install the rsyslog-doc package. 35.11. Additional resources
Chapter 36. Using the Logging System RoleAs a system administrator, you can use the Logging System Role to configure a RHEL host as a logging server to collect logs from many client systems. 36.1. The Logging System RoleWith the Logging System Role, you can deploy logging configurations on local and remote hosts. To apply a Logging System Role on one or more systems, you define the logging configuration in a playbook. A playbook is a list of one or more plays. Playbooks are human-readable, and they are written in the YAML format. For more information about playbooks, see Working with playbooks in Ansible documentation. The set of systems that you want to configure according to the playbook is defined in an inventory file. For more information on creating and using inventories, see How to build your inventory in Ansible documentation. Logging solutions provide multiple ways of reading logs and multiple logging outputs. For example, a logging system can receive the following inputs:
In addition, a logging system can have the following outputs:
With the Logging System Role, you can combine the inputs and outputs to fit your scenario. For example, you can configure a logging solution that stores inputs from journal in a local file, whereas inputs read from files are both forwarded to another logging system and stored in the local log files. 36.2. Logging System Role parametersIn a Logging System Role playbook, you define the inputs in the logging_inputs parameter, outputs in the logging_outputs parameter, and the relationships between the inputs and outputs in the logging_flows parameter. The Logging System Role processes these variables with additional options to configure the logging system. You can also enable encryption. Currently, the only available logging system in the Logging System Role is Rsyslog.
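As a sketch of how the three parameters fit together, the following role variables read from the journal and write to local files; the names system_input, files_output, and flow0 are arbitrary labels:

```yaml
logging_inputs:
  - name: system_input
    type: basics          # read messages from the systemd journal
logging_outputs:
  - name: files_output
    type: files           # write to local log files
logging_flows:
  - name: flow0
    inputs: [system_input]
    outputs: [files_output]
```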
Additional resources
36.3. Applying a local Logging System RoleFollow these steps to prepare and apply an Ansible playbook to configure a logging solution on a set of separate machines. Each machine will record logs locally. Prerequisites
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information on how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article. RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
You do not have to have the rsyslog package installed, because the system role installs rsyslog when deployed. Procedure
Verification
36.4. Filtering logs in a local Logging System RoleYou can deploy a logging solution which filters the logs based on the rsyslog property-based filter. Prerequisites
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed. Procedure
Verification
Additional resources
36.5. Applying a remote logging solution using the Logging System RoleFollow these steps to prepare and apply a Red Hat Ansible Core playbook to configure a remote logging solution. In this playbook, one or more clients take logs from systemd-journal and forward them to a remote server. The server receives remote input from remote_rsyslog and remote_files and outputs the logs to local files in directories named by remote host names. Prerequisites
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed. Procedure
Verification
Additional resources
36.6. Using the Logging System Role with TLSTransport Layer Security (TLS) is a cryptographic protocol designed to securely communicate over the computer network. As an administrator, you can use the Logging RHEL System Role to configure secure transfer of logs using Red Hat Ansible Automation Platform. 36.6.1. Configuring client logging with TLSYou can use the Logging System Role to configure logging in RHEL systems that are logged on a local machine and can transfer logs to the remote logging system with TLS by running an Ansible playbook. This procedure configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network. Prerequisites
Procedure
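A client-side TLS playbook can be sketched as follows. The logging_pki_files keys and the tls flag follow the rhel-system-roles.logging interface as best understood here; all certificate paths, the server name, and the port are placeholders:

```shell
# Hedged sketch: forward logs over TLS with the Logging System Role.
# Every path and host name below is an assumption for illustration.
cat > /tmp/logging-tls-client.yml <<'EOF'
- name: Forward logs to a server over TLS
  hosts: clients
  roles:
    - rhel-system-roles.logging
  vars:
    logging_pki_files:
      - ca_cert_src: /local/path/to/ca_cert.pem
        cert_src: /local/path/to/client_cert.pem
        private_key_src: /local/path/to/client_key.pem
    logging_inputs:
      - name: input_basics
        type: basics
    logging_outputs:
      - name: output_forwards
        type: forwards
        target: server.example.com
        tcp_port: 514
        tls: true
    logging_flows:
      - name: flow0
        inputs: [input_basics]
        outputs: [output_forwards]
EOF
```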
36.6.2. Configuring server logging with TLSYou can use the Logging System Role to configure a RHEL system as a logging server that receives logs from remote logging systems over TLS by running an Ansible playbook. This procedure configures TLS on all hosts in the server group in the Ansible inventory. Prerequisites
Procedure
36.7. Using the Logging System Role with RELPReliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over a TCP network. It ensures reliable delivery of event messages, so you can use it in environments that do not tolerate any message loss. The RELP sender transfers log entries in the form of commands, and the receiver acknowledges them once they are processed. To ensure consistency, RELP assigns a transaction number to each transferred command, which enables recovery of lost messages. Consider a remote logging system between the RELP client and the RELP server: the RELP client transfers the logs to the remote logging system, and the RELP server receives all the logs sent by the remote logging system. Administrators can use the Logging System Role to configure the logging system to reliably send and receive log entries. 36.7.1. Configuring client logging with RELPYou can use the Logging System Role to configure logging on RHEL systems so that logs are stored locally and transferred to the remote logging system with RELP by running an Ansible playbook. This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network. Prerequisites
Procedure
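A RELP client playbook can be sketched like this. Certificate paths, the server name, and port 20514 are placeholders; the relp output keys follow the rhel-system-roles.logging interface:

```shell
# Hedged sketch: RELP client output with TLS via the Logging System Role.
# All paths and host names below are illustrative assumptions.
cat > /tmp/logging-relp-client.yml <<'EOF'
- name: Deploy a RELP client
  hosts: clients
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: relp_client
        type: relp
        target: logging.server.com
        port: 20514
        tls: true
        ca_cert: /etc/pki/tls/certs/ca.pem
        cert: /etc/pki/tls/certs/client-cert.pem
        private_key: /etc/pki/tls/private/client-key.pem
        pki_authmode: name
        permitted_servers:
          - '*.server.example.com'
    logging_flows:
      - name: relp_flow
        inputs: [basic_input]
        outputs: [relp_client]
EOF
```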
36.7.2. Configuring server logging with RELPYou can use the Logging System Role to configure a RHEL system as a server that receives logs from the remote logging system with RELP by running an Ansible playbook. This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network. Prerequisites
Procedure
36.8. Additional resources
Chapter 37. Introduction to PythonPython is a high-level programming language that supports multiple programming paradigms, such as object-oriented, imperative, functional, and procedural programming. Python has dynamic semantics and can be used for general-purpose programming. With Red Hat Enterprise Linux, many packages that are installed on the system, such as packages providing system tools, tools for data analysis, or web applications, are written in Python. To use these packages, you must have the python* packages installed. 37.1. Python versionsTwo incompatible versions of Python are widely used, Python 2.x and Python 3.x. RHEL 8 provides the following versions of Python. Table 37.1. Python versions in RHEL 8
For details about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux 8 Application Streams Life Cycle. Each of the Python versions is distributed in a separate module and by design you can install multiple modules in parallel on the same system. The python38 and python39 modules do not include the same bindings to system tools (RPM, DNF, SELinux, and others) that are provided for the python36 module. Therefore, use python36 in instances where the greatest compatibility with the base operating system or binary compatibility is necessary. In unique instances where system bindings are necessary together with later versions of various Python modules, use the python36 module in combination with third-party upstream Python modules installed through pip into Python’s venv or virtualenv environments. Always specify the version of Python when installing it, invoking it, or otherwise interacting with it. For example, use python3 instead of python in package and command names. All Python-related commands should also include the version, for example, pip3, pip2, pip3.8, or pip3.9. The unversioned python command (/usr/bin/python) is not available by default in RHEL 8. You can configure it using the alternatives command; for instructions, see Configuring the unversioned Python. Any manual changes to /usr/bin/python, except changes made using the alternatives command, might be overwritten upon an update. As a system administrator, use Python 3 for the following reasons:
For developers, Python 3 has the following advantages over Python 2:
However, legacy software might require /usr/bin/python to be configured to Python 2. For this reason, no default python package is distributed with Red Hat Enterprise Linux 8, and you can choose between using Python 2 and 3 as /usr/bin/python, as described in Configuring the unversioned Python. System tools in Red Hat Enterprise Linux 8 use Python version 3.6 provided by the internal platform-python package. Red Hat advises customers to use the python36 package instead. Chapter 38. Installing and using PythonIn Red Hat Enterprise Linux 8, Python 3 is distributed in versions 3.6, 3.8, and 3.9, provided by the python36, python38, and python39 modules in the AppStream repository. Using the unversioned python command to install or run Python does not work by default due to ambiguity. Always specify the version of Python, or configure the system default version by using the alternatives command. 38.1. Installing Python 3By design, you can install RHEL 8 modules in parallel, including the python27, python36, python38, and python39 modules. Note that parallel installation is not supported for multiple streams within a single module. You can install Python 3.8 and Python 3.9, including packages built for either version, in parallel with Python 3.6 on the same system, with the exception of the mod_wsgi module. Due to a limitation of the Apache HTTP Server, only one of the python3-mod_wsgi, python38-mod_wsgi, or python39-mod_wsgi packages can be installed on a system. Procedure
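Following the rule above, invocations always carry an explicit version. The sketch below only assumes a python3 binary is on the PATH (on RHEL 8 it would come from the python36, python38, or python39 module):

```shell
# Always call Python with an explicit version; never rely on an unversioned
# "python" command, which is absent by default on RHEL 8.
python3 --version
python3 -c 'import sys; print(sys.version_info[0])'
```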
Verification steps
38.2. Installing additional Python 3 packagesPackages with add-on modules for Python 3.6 generally use the python3- prefix, packages for Python 3.8 include the python38- prefix, and packages for Python 3.9 include the python39- prefix. Always include the prefix when installing additional Python packages, as shown in the examples below. Procedure
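Besides the prefixed RPM packages, add-on modules can be isolated per project in a virtual environment, as mentioned in the Python versions section. A minimal sketch follows; the --without-pip flag keeps the example self-contained and offline, and the /tmp path is illustrative:

```shell
# Create a lightweight virtual environment and run its interpreter.
python3 -m venv --without-pip /tmp/demo-venv
/tmp/demo-venv/bin/python -c 'import sys; print(sys.prefix)'
```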
38.3. Installing additional Python 3 tools for developersAdditional Python tools for developers are distributed through the CodeReady Linux Builder repository in the respective python3x-devel module. The python38-devel module contains the python38-pytest package and its dependencies: the pyparsing, atomicwrites, attrs, packaging, py, more-itertools, pluggy, and wcwidth packages. The python39-devel module contains the python39-pytest package and its dependencies: the pyparsing, attrs, packaging, py, more-itertools, pluggy, wcwidth, iniconfig, and pybind11 packages. The python39-devel module also contains the python39-debug and python39-Cython packages. The CodeReady Linux Builder repository and its content are unsupported by Red Hat. To install packages from the python39-devel module, use the following procedure. Procedure
To install packages from the python38-devel module, replace python39- with python38- in the commands above. 38.4. Installing Python 2Some applications and scripts have not yet been fully ported to Python 3 and require Python 2 to run. Red Hat Enterprise Linux 8 allows parallel installation of Python 3 and Python 2. If you need the Python 2 functionality, install the python27 module, which is available in the AppStream repository. Note that Python 3 is the main development direction of the Python project. Support for Python 2 is being phased out. The python27 module has a shorter support period than Red Hat Enterprise Linux 8. Procedure
Packages with add-on modules for Python 2 generally use the python2- prefix. Always include the prefix when installing additional Python packages, as shown in the examples below.
Verification steps
By design, you can install RHEL 8 modules in parallel, including the python27, python36, python38, and python39 modules. 38.5. Migrating from Python 2 to Python 3As a developer, you might want to migrate code written in Python 2 to Python 3. For more information on how to migrate large code bases to Python 3, see The Conservative Python 3 Porting Guide. Note that after such a conservative migration, the code is interpretable by the Python 3 interpreter and stays interpretable by the Python 2 interpreter as well. 38.6. Using PythonWhen running the Python interpreter or Python-related commands, always specify the version. Prerequisites
Procedure
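The version-explicit invocation described above can be sketched with a small script. The code is written in the conservative 2-and-3-compatible style mentioned in the migration section; the file path is illustrative:

```shell
# Write a script that runs unchanged under Python 2 and Python 3, then run
# it with an explicitly versioned interpreter.
cat > /tmp/compat_demo.py <<'EOF'
from __future__ import print_function, division
print(7 / 2)  # true division under both interpreters
EOF
python3 /tmp/compat_demo.py  # prints 3.5
```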
Chapter 39. Configuring the unversioned PythonSystem administrators can configure the unversioned python command, located at /usr/bin/python, using the alternatives command. Note that the required package, python3, python38, python39, or python2, must be installed before configuring the unversioned command to the respective version. The /usr/bin/python executable is controlled by the alternatives system. Any manual changes may be overwritten upon an update. Additional Python-related commands, such as pip3, do not have configurable unversioned variants. 39.1. Configuring the unversioned python command directlyYou can configure the unversioned python command directly to a selected version of Python. Prerequisites
Procedure
39.2. Configuring the unversioned python command to the required Python version interactivelyYou can configure the unversioned python command to the required Python version interactively. Prerequisites
Procedure
39.3. Additional resources
Chapter 40. Packaging Python 3 RPMsMost Python projects use Setuptools for packaging, and define package information in the setup.py file. For more information about Setuptools packaging, see the Setuptools documentation. You can also package your Python project into an RPM package, which provides the following advantages compared to Setuptools packaging:
40.1. SPEC file description for a Python packageA SPEC file contains instructions that the rpmbuild utility uses to build an RPM. The instructions are included in a series of sections. A SPEC file has two main parts in which the sections are defined:
An RPM SPEC file for Python projects has some specifics compared to non-Python RPM SPEC files. Most notably, the name of any RPM package of a Python library must always include the prefix determining the version, for example, python3 for Python 3.6, python38 for Python 3.8, or python39 for Python 3.9. Other specifics are shown in the following SPEC file example for the python3-detox package. For a description of these specifics, see the notes below the example.

%global modname detox                           1

Name:           python3-detox                   2
Version:        0.12
Release:        4%{?dist}
Summary:        Distributing activities of the tox tool
License:        MIT
URL:            https://pypi.io/project/detox
Source0:        https://pypi.io/packages/source/d/%{modname}/%{modname}-%{version}.tar.gz

BuildArch:      noarch

BuildRequires:  python36-devel                  3
BuildRequires:  python3-setuptools
BuildRequires:  python36-rpm-macros
BuildRequires:  python3-six
BuildRequires:  python3-tox
BuildRequires:  python3-py
BuildRequires:  python3-eventlet

%?python_enable_dependency_generator            4

%description
Detox is the distributed version of the tox python testing tool. It makes
efficient use of multiple CPUs by running all possible activities in parallel.
Detox has the same options and configuration that tox has, so after installation
you can run it in the same way and with the same options that you use for tox.

    $ detox

%prep
%autosetup -n %{modname}-%{version}

%build
%py3_build                                      5

%install
%py3_install

%check
%{__python3} setup.py test                      6

%files -n python3-%{modname}
%doc CHANGELOG
%license LICENSE
%{_bindir}/detox
%{python3_sitelib}/%{modname}/
%{python3_sitelib}/%{modname}-%{version}*

%changelog
...

1 The modname macro contains the name of the Python project. In this example it is detox.
2 When packaging a Python project into RPM, the python3 prefix always needs to be added to the original name of the project. The original name here is detox and the name of the RPM is python3-detox.
3 BuildRequires specifies what packages are required to build and test this package. In BuildRequires, always include items providing tools necessary for building Python packages: python36-devel and python3-setuptools. The python36-rpm-macros package is required so that files with /usr/bin/python3 interpreter directives are automatically changed to /usr/bin/python3.6.
4 Every Python package requires some other packages to work correctly. Such packages need to be specified in the SPEC file as well. To specify the dependencies, you can use the %python_enable_dependency_generator macro to automatically use dependencies defined in the setup.py file. If a package has dependencies that are not specified using Setuptools, specify them within additional Requires directives.
5 The %py3_build and %py3_install macros run the setup.py build and setup.py install commands, respectively, with additional arguments to specify installation locations, the interpreter to use, and other details.
6 The check section provides a macro that runs the correct version of Python. The %{__python3} macro contains the path to the Python 3 interpreter, for example /usr/bin/python3. We recommend always using the macro rather than a literal path.

40.2. Common macros for Python 3 RPMsIn a SPEC file, always use the macros that are described in the following Macros for Python 3 RPMs table rather than hardcoding their values. In macro names, always use python3 or python2 instead of unversioned python. Configure the particular Python 3 version in the BuildRequires section of the SPEC file to python36-rpm-macros, python38-rpm-macros, or python39-rpm-macros. Table 40.1. Macros for Python 3 RPMs
40.3. Automatic provides for Python RPMsWhen packaging a Python project, make sure that the following directories are included in the resulting RPM if these directories are present:
From these directories, the RPM build process automatically generates virtual pythonX.Ydist provides, for example, python3.6dist(detox). These virtual provides are used by packages that are specified by the %python_enable_dependency_generator macro. Chapter 41. Handling interpreter directives in Python scriptsIn Red Hat Enterprise Linux 8, executable Python scripts are expected to use interpreter directives (also known as hashbangs or shebangs) that explicitly specify at a minimum the major Python version. For example:

#!/usr/bin/python3
#!/usr/bin/python3.6
#!/usr/bin/python2

The /usr/lib/rpm/redhat/brp-mangle-shebangs buildroot policy (BRP) script is run automatically when building any RPM package, and attempts to correct interpreter directives in all executable files. The BRP script generates errors when encountering a Python script with an ambiguous interpreter directive, such as:

#!/usr/bin/python

or

#!/usr/bin/env python

41.1. Modifying interpreter directives in Python scriptsModify interpreter directives in the Python scripts that cause build errors at RPM build time. Prerequisites
Procedure To modify interpreter directives, complete one of the following tasks:
If the packaged Python scripts require a version other than Python 3.6, adjust the preceding commands to include the required version. 41.2. Changing /usr/bin/python3 interpreter directives in your custom packagesBy default, interpreter directives in the form of /usr/bin/python3 are replaced with interpreter directives pointing to Python from the platform-python package, which is used for system tools with Red Hat Enterprise Linux. You can change the /usr/bin/python3 interpreter directives in your custom packages to point to a specific version of Python that you have installed from the AppStream repository. Procedure
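The kind of rewriting described above can be imitated manually; a sketch using sed follows. The target version 3.6 is just an example, and the file path is illustrative (the pathfix.py tool shipped with Python performs a similar rewrite):

```shell
# A script with an ambiguous interpreter directive:
cat > /tmp/ambiguous.py <<'EOF'
#!/usr/bin/env python
print("hello")
EOF
# Rewrite the first line to an explicit, versioned interpreter:
sed -i '1s|^#!.*python$|#!/usr/bin/python3.6|' /tmp/ambiguous.py
head -n 1 /tmp/ambiguous.py
```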
To prevent the BRP script from checking and modifying interpreter directives, use the following RPM directive:

%undefine __brp_mangle_shebangs

Chapter 42. Using the PHP scripting languageHypertext Preprocessor (PHP) is a general-purpose scripting language mainly used for server-side scripting, which enables you to run PHP code using a web server. In RHEL 8, the PHP scripting language is provided by the php module, which is available in multiple streams (versions). Depending on your use case, you can install a specific profile of the selected module stream:
42.1. Installing the PHP scripting languageThis section describes how to install a selected version of the php module. Procedure
Additional resources
42.2. Using the PHP scripting language with a web server42.2.1. Using PHP with the Apache HTTP ServerIn Red Hat Enterprise Linux 8, the Apache HTTP Server enables you to run PHP as a FastCGI process server. FastCGI Process Manager (FPM) is an alternative PHP FastCGI daemon that allows a website to manage high loads. PHP uses FastCGI Process Manager by default in RHEL 8. This section describes how to run the PHP code using the FastCGI process server. Procedure
Example 42.1. Running a "Hello, World!" PHP script using the Apache HTTP Server
42.2.2. Using PHP with the nginx web serverThis section describes how to run PHP code through the nginx web server. Procedure
Example 42.2. Running a "Hello, World!" PHP script using the nginx server
42.3. Running a PHP script using the command-line interfaceA PHP script is usually run using a web server, but can also be run using the command-line interface. If you want to run PHP scripts using only the command line, install the minimal profile of a php module stream. See Installing the PHP scripting language. Procedure
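As a sketch, running a script from the command line looks like this. The php binary comes from the installed php module, and the script path is illustrative; the block checks for the binary first so it degrades gracefully where PHP is absent:

```shell
cat > /tmp/hello.php <<'EOF'
<?php
echo 'Hello, World!' . PHP_EOL;
?>
EOF
# Requires a php module stream to be installed (minimal profile suffices):
if command -v php >/dev/null 2>&1; then
  php /tmp/hello.php
else
  echo "php not installed; install the php module first"
fi
```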
Example 42.3. Running a "Hello, World!" PHP script using the command-line interface
42.4. Additional resources
Chapter 43. Using langpacksLangpacks are meta-packages which install extra add-on packages containing translations, dictionaries, and locales for every package installed on the system. On a Red Hat Enterprise Linux 8 system, langpack installation is based on the langpacks- meta-packages. There are two prerequisites for using langpacks for a selected language. If these prerequisites are fulfilled, the language meta-packages pull in their langpacks for the selected language automatically in the transaction set. Prerequisites
43.1. Checking languages that provide langpacksFollow this procedure to check which languages provide langpacks. Procedure
43.2. Working with RPM weak dependency-based langpacksThis section describes multiple actions that you may want to perform when querying RPM weak dependency-based langpacks, installing or removing language support. 43.2.1. Listing already installed language supportTo list the already installed language support, use this procedure. Procedure
43.2.2. Checking the availability of language supportTo check if language support is available for any language, use the following procedure. Procedure
43.2.3. Listing packages installed for a languageTo list what packages get installed for any language, use the following procedure: Procedure
43.2.4. Installing language supportTo add new language support, use the following procedure. Procedure
43.2.5. Removing language supportTo remove any installed language support, use the following procedure. Procedure
43.3. Saving disk space by using glibc-langpack-Currently, all locales are stored in the /usr/lib/locale/locale-archive file, which requires a lot of disk space. On systems where disk space is a critical issue, such as containers and cloud images, or where only a few locales are needed, you can use the glibc locale langpack packages (glibc-langpack- To install locales individually, and thus gain a smaller package installation footprint, use the following procedure. Procedure
When installing the operating system with Anaconda, glibc-langpack- Note that installing only selected glibc-langpack- If disk space is not an issue, keep all locales installed by using the glibc-all-langpacks package. Chapter 44. Getting started with Tcl/Tk44.1. Introduction to Tcl/TkTool command language (Tcl) is a dynamic programming language. The interpreter for this language, together with the C library, is provided by the tcl package. Using Tcl paired with Tk (Tcl/Tk) enables creating cross-platform GUI applications. Tk is provided by the tk package. Note that Tk can refer to any of the following:
For more information about Tcl/Tk, see the Tcl/Tk manual or Tcl/Tk documentation web page. 44.2. Notable changes in Tcl/Tk 8.6Red Hat Enterprise Linux 7 used Tcl/Tk 8.5. With Red Hat Enterprise Linux 8, Tcl/Tk version 8.6 is provided in the Base OS repository. Major changes in Tcl/Tk 8.6 compared to Tcl/Tk 8.5 are:
Major changes in Tk include:
For the detailed list of changes between Tcl 8.5 and Tcl 8.6, see Changes in Tcl/Tk 8.6. 44.3. Migrating to Tcl/Tk 8.6Red Hat Enterprise Linux 7 used Tcl/Tk 8.5. With Red Hat Enterprise Linux 8, Tcl/Tk version 8.6 is provided in the Base OS repository. This section describes the migration path to Tcl/Tk 8.6 for:
44.3.1. Migration path for developers of Tcl extensionsTo make your code compatible with Tcl 8.6, use the following procedure. Procedure
44.3.2. Migration path for users scripting their tasks with Tcl/TkIn Tcl 8.6, most scripts work the same way as with the previous version of Tcl. To migrate your code to Tcl 8.6, use this procedure. Procedure
Legal NoticeCopyright © 2022 Red Hat, Inc. The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project. The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community. All other trademarks are the property of their respective owners.