Linux Books: Links to Many Linux Books
Linux Online is working together with Amazon.com to deliver the best books available about Linux and open source.
We ask you to bookmark this page and use our site when searching for and purchasing books because Linux Online receives a commission for every book sold through this website. That way, in the true spirit of the Linux community, you'll be helping us to keep bringing you the best and most comprehensive Linux information on the Web.
Remember, any book bought on Amazon.com after clicking through from our book section pages, even a book that isn't related to Linux, is credited to us.
Installation and Getting Started:
Introduction to Linux:
Linux is quite possibly the most important free software achievement since the original Space War, or, more recently, Emacs. It has developed into an operating system for business, education, and personal productivity. Linux is no longer only for UNIX wizards who sit for hours in front of a glowing console (although we assure you that many users fall into this category). This book will help you get the most from Linux.
Linux (pronounced with a short i, as in LIH-nucks) is a UNIX operating system clone which runs on a variety of platforms, especially personal computers with Intel 80386 or better processors. It supports a wide range of software, from TeX, to the X Window System, to the GNU C/C++ compiler, to TCP/IP. It's a versatile, bona fide implementation of UNIX, freely distributed under the terms of the GNU General Public License.
Unlike most other operating systems, Linux can be obtained free of charge. Due to the GNU General Public License under which Linux is distributed, no one can sell you a license for the software. You can use Linux at no charge and are encouraged to make it available to others.
But that doesn't mean companies aren't entitled to reimbursement for copying costs plus a profit. They may also add software that is not free that runs on the system.
This gives you the freedom to choose. If purchasing a CD-ROM is not within your budget, you may simply borrow a friend's copy or download the source from the Internet. Whether purchased from a major Linux distributor or downloaded from their FTP site, you get the same operating system and the software packages that they offer. In fact, you can get more free software from one of the FTP sites than the companies can distribute on CD, due to restrictions some authors place on the distribution of their software.
If you are a developer for the GNU/Linux system, this book will help you to:
Develop GNU/Linux software that works the way users expect it to.
Write more sophisticated programs with features such as multiprocessing, multi-threading, interprocess communication, and interaction with hardware devices.
Improve your programs by making them run faster, more reliably, and more securely.
Understand the peculiarities of a GNU/Linux system, including its limitations, special capabilities, and conventions.
Advanced Linux Programming is published under the Open Publication License, Version 1, no options exercised. (Due to an oversight in final production, the copyright notice on the book is incorrect.) The full text may be downloaded from this site. Code samples in the book are covered by the GNU General Public License and are also available.
"Cluster" is an ambiguous term in the computer industry. Depending on the vendor and the specific context, a cluster may refer to a wide variety of environments. In general, a cluster refers to a set of computer systems connected together. For the purposes of this book, a cluster is a set of computers which are connected to each other, and are physically located close to each other, in order to solve problems more efficiently. These types of clusters are also referred to as High Performance Computing (HPC) clusters, or simply compute clusters.
Another popular usage of the term cluster is to describe High Availability environments. In such an environment a computer system acts as the backup system to one or more primary systems. When there is a failure in a primary system, the critical applications running on that system are failed over to its designated backup system. The detailed usage of, and technology behind, these types of clusters is outside the scope of this book. Nevertheless, we will touch upon specific uses of high availability technology within the context of compute clusters.
The IA-32 is sometimes generically called x86 or even x86-32. The term means Intel Architecture, 32-bit, which distinguishes it from the 16-bit versions that preceded it and the 64-bit architecture, referred to as IA-64, that followed it. Within various programming language directives it is also referred to as i386; this directive informs the compiler to generate code only for the IA-32 instruction set. This instruction set was introduced in the Intel 80386 microprocessor in 1985. Even though the basic instruction set has remained intact, the successive generations of microprocessors that run it have become much faster at doing so. The biggest supplier, and the inventor, of this class of processors is Intel. However, it is not the only supplier of this family of processors: the second biggest supplier is AMD, and there are also numerous smaller, specialized suppliers of these processors. The following sections will briefly describe the various features of the IA-32 family of processors.
The network fabric connecting the compute nodes in a cluster to carry inter-node message traffic is referred to as the cluster interconnect, or simply the interconnect. The term System Area Network (SAN) is also sometimes used to denote a cluster interconnect, although the scope of SANs is somewhat broader than that of cluster interconnects. Note that an interconnect does not hook up to the user LAN; it is completely under the administrative domain of the cluster administrator. The LAN administrator need not even be aware of the type of interconnect being used in a cluster, unless of course the same person is playing both roles. User connectivity to the cluster is simply through a LAN connection on the head node of the cluster and does not require much design consideration.
The choice of optimal interconnect for a cluster is very important and can be very difficult to make. If the right balance is not struck between the processing power of the nodes in a cluster and the communication ability of its interconnect, the nodes will potentially waste their computing resources while waiting for data.
Unix, the original ancestor of Linux, is an operating system. Or at least it was an operating system; the original system known as Unix proper is not the "Unix" we know and use today; there are now many "flavors" of Unix, of which Linux has become the most popular.
A product of the 1960s, Unix and its related software were invented by Dennis Ritchie, Ken Thompson, Brian Kernighan, and other hackers at Bell Labs in 1969; its name was a play on "Multics," another operating system of the time. (3)
In the early days of Unix, any interested party who had the hardware to run it on could get a tape of the software from Bell Labs, with printed manuals, for a very nominal charge. (This was before the era of personal computing, and in practice, mostly only universities and research laboratories did this). Local sites played with the software's source code, extending and customizing the system to their needs and liking.
As the popularity of the Linux system continues to grow, the interest in writing Linux device drivers steadily increases. Most of Linux is independent of the hardware it runs on, and most users can be (happily) unaware of hardware issues. But, for each piece of hardware supported by Linux, somebody somewhere has written a driver to make it work with the system. Without device drivers, there is no functioning system.
Device drivers take on a special role in the Linux kernel. They are distinct "black boxes" that make a particular piece of hardware respond to a well-defined internal programming interface; they hide completely the details of how the device works. User activities are performed by means of a set of standardized calls that are independent of the specific driver; mapping those calls to device-specific operations that act on real hardware is then the role of the device driver. This programming interface is such that drivers can be built separately from the rest of the kernel, and "plugged in" at runtime when needed. This modularity makes Linux drivers easy to write, to the point that there are now hundreds of them available.
The goal of this chapter is to write a complete char device driver. We'll develop a character driver because this class is suitable for most simple hardware devices. Char drivers are also easier to understand than, for example, block drivers or network drivers. Our ultimate aim is to write a modularized char driver, but we won't talk about modularization issues in this chapter.
Throughout the chapter, we'll present code fragments extracted from a real device driver: scull, short for Simple Character Utility for Loading Localities. scull is a char driver that acts on a memory area as though it were a device. A side effect of this behavior is that, as far as scull is concerned, the word device can be used interchangeably with "the memory area used by scull."
The advantage of scull is that it isn't hardware dependent, since every computer has memory. scull just acts on some memory, allocated using kmalloc. Anyone can compile and run scull, and scull is portable across the computer architectures on which Linux runs. On the other hand, the device doesn't do anything "useful" other than demonstrating the interface between the kernel and char drivers and allowing the user to run some tests.
In the chapter "Char Drivers", we built a complete device driver that the user can write to and read from. But a real device usually offers more functionality than synchronous read and write. Now that we're equipped with debugging tools should something go awry, we can safely go ahead and implement new operations.
What is normally needed, in addition to reading and writing the device, is the ability to perform various types of hardware control via the device driver. Control operations are usually supported via the ioctl method. The alternative is to look at the data flow being written to the device and use special sequences as control commands. This latter technique should be avoided because it requires reserving some characters for controlling purposes; thus, the data flow can't contain those characters. Moreover, this technique turns out to be more complex to handle than ioctl. Nonetheless, sometimes it's a useful approach to device control and is used by tty's and other devices. We'll describe it later in this chapter in "Device Control Without ioctl".
Flow of Time:
At this point, we know the basics of how to write a full-featured char module. Real-world drivers, however, need to do more than implement the necessary operations; they have to deal with issues such as timing, memory management, hardware access, and more. Fortunately, the kernel makes a number of facilities available to ease the task of the driver writer. In the next few chapters we'll fill in information on some of the kernel resources that are available, starting with how timing issues are addressed. Dealing with time involves the following, in order of increasing complexity:
Understanding kernel timing
Knowing the current time
Delaying operation for a specified amount of time
Scheduling asynchronous functions to happen after a specified time lapse
Although playing with scull and similar toys is a good introduction to the software interface of a Linux device driver, implementing a real device requires hardware. The driver is the abstraction layer between software concepts and hardware circuitry; as such, it needs to talk with both of them. Up to now, we have examined the internals of software concepts; this chapter completes the picture by showing you how a driver can access I/O ports and I/O memory while being portable across Linux platforms.
This chapter continues in the tradition of staying as independent of specific hardware as possible. However, where specific examples are needed, we use simple digital I/O ports (like the standard PC parallel port) to show how the I/O instructions work, and normal frame-buffer video memory to show memory-mapped I/O.
What is Computer Security:
"Security is risk management." - unknown
"A computer is secure if you can depend on it and its software to behave as you expect." - Practical UNIX and Internet Security
Security is: availability, consistency, access control, data confidentiality, authentication.
"The principal objective of computer security is to protect and assure the confidentiality, integrity, and availability of automated information systems and the data they contain." - http://csrc.nist.gov/publications/secpubs/cslaw.txt
There are numerous definitions for "computer security", and most of them are correct. Essentially, computer security means the enforcement of usage policies. This must be done because people and software have flaws that can result in accidents, but also because someone may want to steal your information, use your resources inappropriately, or simply deny you the use of your resources.
Remember that an attacker may not want to break into your desktop machine or network; they may be looking for a quick way to make $200, and stealing a desktop computer is one way to do that. All systems should be securely fastened to something with a cable system, or locked in an equipment cage if possible. Case locks should be used when possible to slow attackers down (thefts of RAM and CPUs are becoming increasingly popular). Some systems, like Apple G4s, cannot be opened while cable locked; if you need machines for public areas, features like this are ideal. For secure rooms, make sure that the walls go beyond the false ceiling and below the raised floor, and large vents should also be covered with bars if possible. Monitoring the room with CCTV, and possibly requiring a remote person to "buzz" people in, adds a further layer of protection.
There are a variety of tools to make administration of systems easier, from local tools like sudo, which grants limited super-user privileges, to web-based systems that allow for remote management from a cybercafe while on vacation. While it is possible to administer a Linux system from the command line using no "additional" tools, it can be bothersome. If you wish to split up administrative tasks, the "sub-administrators" will often require root access to restart daemons, modify configuration files and so forth. Simply giving them all root access, or sharing the root password, is often the first step to a serious problem (this is one of the major reasons many large sites get broken into).
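A sketch of how sudo handles this, as an /etc/sudoers fragment (always edited with visudo); the group name, user name, and command paths below are invented for illustration, not taken from any particular system:

```
# Members of a hypothetical "webadmin" group may restart the web
# server as root, with no password, and run nothing else.
%webadmin  ALL = (root) NOPASSWD: /etc/init.d/httpd restart
# User "alice" may edit one specific config file via sudoedit.
alice      ALL = (root) sudoedit /etc/httpd/conf/httpd.conf
```

Every command run through sudo is also logged, which neatly addresses the accountability problem that shared root passwords create.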
YaST (Yet Another Setup Tool) is a rather nice command-line graphical interface (very similar to scoadmin) that provides an easy interface to most administrative tasks. It does not, however, have any provisions for giving users limited access, so it is really only useful for cutting down on errors and allowing new users to administer their systems. Another problem is that, unlike Linuxconf, it is not network aware, meaning you must log into each system you want to manipulate. YaST version 2 is now available and includes many new features as well as bug fixes; upgrading is recommended.
I don't know how many times I can tell people, but it never ceases to amaze me how often people are surprised that if they do not back up their data it will be gone, whether the drive suffers a head crash or they hit 'delete' without thinking. Always back up your system, even if it's just the config files; you'll save yourself time and money in the long run. This is even on the SANS top 20 list.
To back up your data under Linux there are many solutions, all with various pros and cons. There are also several industrial-strength backup programs; the better ones support network backups, which is a definite plus in a large non-homogeneous environment.
One of the other critical things to remember with backups is that whoever has access to them (backup admin, cleaning staff) will have access to all your files unless you encrypt the backups. Physically securing backups is critical, and physically damaging backups so they cannot be recovered is extremely easy: with magnetic media, simply place a strong magnet near them; with CDs, scratching the surface or cracking the disc will prevent usage. You should also keep a relatively recent set of backups offsite in case the building burns down or is inaccessible for some other reason (such as a chemical spill).
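As a minimal sketch of the simple end of that spectrum, a tar-based backup and restore check might look like the following (all paths here are made-up examples standing in for real config directories):

```shell
# Back up a directory into a compressed tar archive; -C makes the
# stored paths relative so the archive restores cleanly anywhere.
mkdir -p /tmp/demo-etc
echo "PermitRootLogin no" > /tmp/demo-etc/sshd_config
tar czf /tmp/etc-backup.tar.gz -C /tmp demo-etc

# Always verify a backup by restoring it somewhere harmless.
mkdir -p /tmp/restore-test
tar xzf /tmp/etc-backup.tar.gz -C /tmp/restore-test
cat /tmp/restore-test/demo-etc/sshd_config   # prints: PermitRootLogin no
```

A backup that has never been test-restored is not really a backup.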
A solid house needs a solid foundation; otherwise it will collapse. In Linux's case this is the ext2 (EXTended, version 2) filesystem: pretty much your everyday standard UNIX-like filesystem. It supports file permissions (read, write, execute, sticky bit, suid, sgid and so on), file ownership (user, group, other), and other standard things. Some of its drawbacks are no journaling and, especially, no Access Control Lists, which are rumored to be in the upcoming ext3. On the plus side, Linux has excellent software RAID, supporting levels 0, 1 and 5 very well (RAID isn't security related, but it certainly is safety/stability related). There is an excellent HOWTO on filesystems in regards to Linux at http://www.penguin.cz/~mhi/fs/Filesystems-HOWTO/Filesystems-HOWTO.html.
So the attacker somehow manages to circumvent the physical security and console security you put in place and now has a root prompt.
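The standard permission bits mentioned above are manipulated with chmod; a quick illustration using throwaway files (the names are invented, and GNU stat is assumed for displaying the mode):

```shell
# Restrict a file to its owner only: rw------- (mode 600).
touch /tmp/private-notes
chmod 600 /tmp/private-notes
stat -c '%a' /tmp/private-notes   # prints: 600

# Set the sgid bit on a directory so files created inside it
# inherit the directory's group: mode 2775.
mkdir -p /tmp/shared-dir
chmod 2775 /tmp/shared-dir
stat -c '%a' /tmp/shared-dir      # prints: 2775
```

Getting these bits wrong on config files and home directories is one of the most common self-inflicted security holes.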
Authentication is typically one of the two main lines of defense that systems and networks rely upon, so ensuring that your authentication subsystems are implemented correctly is important. The majority of Linux systems rely on usernames and passwords; while support for tokens, smartcards and other authentication systems is available, they are still relatively rare. On top of this sits PAM; as far as I know all major vendors use PAM by default, so understanding how PAM works and using it correctly is very important.
"Pluggable Authentication Modules for Linux is a suite of shared libraries that enable the local system administrator to choose how applications authenticate users." Straight from the PAM documentation; I don't think I could have said it any better. But what does this actually mean? For example, take the program login: when a user connects to a tty (via a serial port or over the network) a program answers the call (getty for serial lines, telnetd or sshd for network connections) and starts up login; login then typically requests a username, followed by a password, which it checks against the /etc/passwd file.
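A stripped-down /etc/pam.d/login stack illustrates how that choice is expressed; module names and options vary between distributions, so treat this as a sketch rather than any vendor's actual file:

```
# type      control    module (and arguments)
auth        required   pam_securetty.so
auth        required   pam_unix.so
account     required   pam_unix.so
password    required   pam_unix.so
session     required   pam_unix.so
```

Swapping pam_unix.so for a different module changes how login authenticates users, with no need to recompile login itself.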
System and User Logging:
One integral part of any UNIX system is the logging facilities. The majority of logging in Linux is provided by two main programs, sysklogd and klogd: the first provides logging services to programs and applications, the second provides logging capability to the Linux kernel. Klogd actually sends most messages to the syslogd facility, but will on occasion pop up messages at the console (i.e. kernel panics). Sysklogd handles the task of processing most messages and sending them to the appropriate file or device; this is configured from within /etc/syslog.conf. By default most logging to files takes place in /var/log/, and generally speaking programs that handle their own logging (most httpd servers handle their logging internally) log to /var/log/program-name/, which allows you to centralize the log files and makes it easier to place them on a separate partition (some attacks can fill your logs quite quickly, and a full / partition is no fun). Additionally there are programs that handle their own internal logging, one of the more interesting being the bash command shell. By default bash keeps a history file of commands executed in ~username/.bash_history; this file can make for extremely interesting reading, as oftentimes admins will accidentally type their passwords in at the command line. Apache handles all of its logging internally, configurable from httpd.conf.
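A few example /etc/syslog.conf lines show the facility.priority selector syntax syslogd uses; the destinations are illustrative, and note that classic syslogd wants tabs, not spaces, between the selector and the action:

```
# All kernel messages to their own file.
kern.*		/var/log/kern.log
# Authentication messages (failed logins, su attempts) kept separate.
authpriv.*	/var/log/secure
# Anything of priority err or higher also goes to the console.
*.err		/dev/console
# Duplicate everything to a central loghost (the @ means "send over
# the network"; loghost.example.com is a placeholder).
*.*		@loghost.example.com
```

Shipping logs to a separate loghost means an attacker who roots the machine cannot quietly erase the evidence.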
No matter how good your security is, it can be compromised. That is why it is so important to have various forms of attack detection present, so that when an incident happens you are alerted to it as soon as possible (and not when you start getting complaints from other sites). One major oversight made by a lot of people when securing their machines is that they forget to create a baseline of the system, that is, a profile of the system, its usage of resources, and so on in normal operation. For example, something as simple as "netstat -a -n > netstat-output" can give you a reference to later check against and see if any ports are open that should not be. Memory usage and disk usage are also good things to keep an eye on. A sudden surge in memory usage could result in the system being starved of resources, and likewise for disk usage. It might be a user accident, a malicious user, or a worm program that has compromised your system and is now scanning other systems. Various tools exist to measure memory and disk usage: vmstat, free, df and du, all of which are covered by their respective man pages.
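The baseline idea can be as little as a couple of snapshot commands whose output you keep around to diff against later; the filenames here are arbitrary, and on a real system you would snapshot netstat, free and so on the same way:

```shell
# Take a baseline snapshot of disk usage in normal operation.
df -P > /tmp/baseline-df

# Later, capture the current state and compare it to the baseline.
df -P > /tmp/current-df
if diff -q /tmp/baseline-df /tmp/current-df > /dev/null; then
    echo "no change since baseline"
else
    echo "disk usage has drifted -- worth a look"
fi
```

The value is not in any single snapshot but in having something known-good to compare against after an incident.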
Over the last few years the number of security tools for Windows and UNIX has risen dramatically; even more surprising is the fact that most of them are freely available on the Internet. I will only cover the free tools, since they tend to be the best (i.e. nmap, Nessus, etc.), and any serious cracker will have these tools at their disposal, so why shouldn't you? There are several main categories of tools: ones that scan hosts from within that host; ones that scan other hosts and report back what OS they are running, which services are available, and so on (i.e. TCP-level scanners); at the top of the food chain, the intrusion tools (i.e. application-level scanners) that can actually execute exploits and report back whether they worked or not; and lastly the exploits category, which, while not strictly intrusion tools per se, do exist and you should be aware of them. These tools and techniques can also be used to conduct a self-audit and ensure that the systems react as you think they should (i.e. you should be able to run the denial of service attacks that Nessus is capable of with no ill effects on your servers).
Firewalling is the practice of filtering network traffic, typically at the point where your network connects to another network (e.g. the Internet, a customer's LAN, etc.) that may be untrusted (in the case of the Internet) or perhaps even trusted (another floor of your building). Like firewalls in a large building, a network firewall can prevent and even block the spread of an attack if one segment is compromised; like their namesake, firewalls can stop your network from being further compromised. There is a good FAQ on Internet firewalls at: http://www.interhack.net/pubs/fwfaq/. A list of port numbers and what they are is available at http://seifried.org/security/ports/.
Linux has gone through a series of firewalls. The most recent is IPTables (sometimes referred to as Netfilter); preceding that was ipchains, and preceding that was ipfwadm. There are also a number of other firewalls such as IPF and the SINUS firewall (some of which only work on 2.0 and 2.2 series kernels and so are largely obsolete). I would recommend using a 2.4 kernel with IPTables if possible, since it offers stateful packet inspection, which makes firewalling protocols such as DNS and FTP properly a lot easier.
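To give a flavor of what stateful IPTables rules look like, here is a minimal default-deny sketch; it requires root to run, so treat it as configuration to adapt, and note the SSH-only port choice is just an example:

```
# Default policies: drop inbound and forwarded traffic, allow outbound.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
# Always accept loopback traffic.
iptables -A INPUT -i lo -j ACCEPT
# The stateful part: accept replies to connections we initiated.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow new inbound SSH connections only.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
```

The ESTABLISHED,RELATED rule is what ipchains could not do: the kernel tracks connections, so you no longer have to open wide port ranges for return traffic.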
Network security is a pretty broad topic, so I've broken it down into a couple of sections. In this area I cover the bottom 4 or so layers (transport, network, datalink, physical) of the 7 layer OSI protocol stack, the top 3 (application, presentation, session) are in the network server section and so forth (roughly speaking). I also cover some of the basic network configuration files, since an explanation is always useful.
Physical protocols such as PPP and Ethernet provide the foundation for TCP-IP.
Before you even begin to think about TCP-IP and network security, it is good to have a grasp of the protocols that sit beneath TCP-IP. The two most common protocols most people will encounter are PPP, commonly used for dial-up access, and Ethernet, a mainstay of LANs and the interface that you will see most often on networking equipment like DSL modems and cable modems.
Software package formats and the like. RPM is a software management tool originally created by Red Hat, and later GNU'ed and given to the public at http://www.rpm.org/. It forms the core of administration on most systems, since one of the major tasks for any administrator is installing and keeping software up to date. Various estimates place most of the blame for security break-ins on bad passwords and old software with known vulnerabilities. This isn't exactly surprising, and when you consider that the average server contains 200-400 software packages, you begin to see why keeping software up to date can be a major task.
The man page for RPM is pretty bad; there is no nice way of putting it. The book "Maximum RPM" (ISBN: 0-672-31105-4), on the other hand, is really wonderful (freely available at http://www.rpm.org/ in PostScript format).
Several encryption programs are also available to encrypt your data, some at the file level (PGP, GnuPG, etc.) and some at the drive level (the Cryptographic File System, for example). These systems are very appropriate for the storage of secure data, and to some degree for the transmission of secure data. However, both ends will require the correct software and compatible versions, and an exchange of public keys will somehow have to take place, which is, unfortunately, an onerous task for most people. In addition, you have no easy way of trusting someone's public key unless you receive it directly from them (such as at a key signing party), or unless it is signed by someone else you trust (but how do you get the trusted signer's key securely?). Systems for drive encryption such as CFS (Cryptographic FileSystem) are typically easy to implement and only require the user to provide a password or key of some form to access their files. There is a really good article on choosing key sizes at http://www.cryptosavvy.com/ which raises some issues you probably hadn't considered. I would recommend reading it.
Many security problems arise from the simple fact that you must allow user access to your systems. In some cases users can be malicious (i.e. internal attackers, university students, etc.), or they may accidentally expose their password to an attacker or simply use a weak password (like their username). In any event it is all too possible for users to attack a system, so it is advisable to monitor users and limit the amount of damage they can do. Users require resources like CPU time, memory and drive space to do their work. On many systems it is possible for users to hog resources, reducing the usefulness of the system to other users or in some cases actually bringing the server to a slow halt or crashing it. Users can also inadvertently use up more resources than they mean to, so limiting their resources can prevent several classes of problems. Programs can crash, generating huge core dumps, or go crazy and use up all the available memory. Something to remember: global limits apply to root, so be careful! If you do not allow root to run enough processes, for example, cron jobs may fail or you may not even be able to log in to fix any problems.
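Per-user resource caps of this kind are commonly set in /etc/security/limits.conf, enforced through PAM's pam_limits module; the group name and numbers below are invented for illustration:

```
# /etc/security/limits.conf -- format: domain  type  item  value
# Cap the number of processes for a hypothetical "students" group.
@students    soft    nproc    100
@students    hard    nproc    150
# Disable core dumps for everyone by default.
*            hard    core     0
```

The soft limit is what users hit in normal operation; the hard limit is the ceiling they can raise their own soft limit to. Check your pam_limits documentation for exactly how wildcard entries interact with the root account.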
Linux is not susceptible to viruses in the same way that a DOS/Windows or Mac platform is. In UNIX, security controls are a fundamental part of the operating system. For example, users are not allowed to write promiscuously to any location in memory that they choose, something that DOS/Windows and the Mac allow.
To be fair, there are viruses for UNIX. However, the only Linux one I have seen was called "bliss"; it had an uninstall option ("--uninstall-please") and had to be run as root to be effective. Or, to quote an old Unix favorite: "if you don't know what an executable does, don't run it as root". Worms are much more prevalent in the UNIX world, the first major occurrence being the Morris Internet worm, which exploited a vulnerability in sendmail. Current Linux worms exploit broken versions of imapd, sendmail, WU-FTPD and other daemons. The simplest fix is to keep up to date and not make daemons accessible unless necessary.
There are a variety of VPN solutions for Linux. I would strongly advise using IPSec if possible, since it is the emerging standard for VPNs on the Internet and will be incorporated into IPv6. On the other hand, if you are behind a firewall and want to tunnel out, an SSH-based solution or the like will do the trick, whereas IPSec will typically fail (since the packet headers are being rewritten).
OpenVPN is a high quality VPN based on OpenSSL. It has nearly as many features as IPSec and is much easier to set up and deploy. Since VPNs are, by their very nature, used in non-public deployments, I don't think the standardization of IPSec is a deciding factor. And of course, since IPSec is more complex and usually implemented at the kernel level, it is potentially less secure than OpenVPN, which works as a normal daemon in user space (with modern virtual machine technologies like Xen this may no longer hold true, but I haven't had the time to check how IPSec or OpenVPN behave in such environments).
Linux (or GNU/Linux, according to Stallman, if you're referring to a complete distribution) is actually just the kernel of the operating system. The kernel is the core of the system; it handles access to the hard drive, security mechanisms, networking and pretty much everything else. It had better be secure or you are screwed.
In addition to this we have hardware problems like the Pentium F00F bug, and problems inherent to the TCP-IP protocol, so the Linux kernel has its work cut out for it. Kernel versions are labeled as X.Y.Z: Z is the minor revision number, Y defines whether the kernel is a test (odd number) or production (even number) series, and X is the major revision (0, 1, 2, etc.). I would highly recommend running kernel 2.6.x; as of July 2005 the latest is 2.6.12. The 2.2.x series of kernels has major improvements over the 2.0.x series. Using the 2.2.x kernels also allows you access to newer features such as ipchains (instead of ipfwadm) and other advanced security features. The 2.0.x series was officially discontinued in June 1999. To find out what the latest kernels are, simply finger @finger.kernel.org.
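The X.Y.Z scheme is easy to pick apart in the shell; the odd/even test below encodes the 2.x-era convention just described (the version string is hard-coded for illustration):

```shell
ver="2.6.12"        # on a live system: ver=$(uname -r)
major=${ver%%.*}    # text before the first dot  -> 2
rest=${ver#*.}      # text after the first dot   -> 6.12
minor=${rest%%.*}   # text before the next dot   -> 6
# Even Y meant a production series, odd Y a development series
# (this convention applied to 1.x/2.x kernels).
if [ $((minor % 2)) -eq 0 ]; then
    echo "$major.$minor is a production (stable) series"
else
    echo "$major.$minor is a development (test) series"
fi
# prints: 2.6 is a production (stable) series
```

Note that Z releases within a stable series are where security fixes land, which is why tracking the latest Z matters.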
What is Beyond Linux From Scratch:
Beyond Linux From Scratch (BLFS) is a project that continues where the LFS book finishes. It assists users in developing their systems according to their needs by providing a broad range of instructions for installing and configuring various packages on top of a base LFS system. Linux From Scratch (LFS) is a project that provides you with step-by-step instructions for building your own customized Linux system entirely from source. When you install a regular distribution, you often end up installing a lot of programs that you would probably never use. They're just sitting there taking up (precious) disk space. It's not hard to get an LFS system installed under 100 MB.
We are relative Linux newbies (using Linux since Summer 1998). We run mostly Red Hat and Mandrake, so the solutions might not be directly applicable to other Linux distributions (although most of them probably will be). Hope this helps; we try to be as practical as possible. Of course, we provide no warranty whatsoever. If you spot a bad error or would like to contribute a part on a topic of your choice, we would like to hear from you. A complete reference for new Linux users who wish to set up and administer their own Linux home computer, workstation and/or their home or small office network. The answers are meant to be simple, with just sufficient detail, and always supported with a readily usable example. The work is still in progress, but we hope the Guide can be helpful already. We welcome your corrections, advice, criticism, links, translations, and CONTRIBUTIONS. Please note that there are no ad banners on our pages.
This chapter introduces you to the upstart operating system Linux. It helps you determine whether Linux is right for you by surveying the features and performance that Linux offers. It demonstrates that choosing Linux is a practical, even wise, decision for many computer users. The chapter also helps you feel at home with Linux and other Linux users by introducing you to the history and culture of Linux. Finally, it points you to some popular gathering places on the Internet where you can correspond with other Linux users, get up-to-the-minute Linux news and information, and obtain free technical support. Perhaps you learned about Linux from a trusted friend, whose enthusiasm and ready answers convinced you to learn more about Linux, or perhaps an article or anecdote that mentioned Linux simply sparked your curiosity.
Linux supports a wide range of PC hardware, but not even Linux supports every known device and system. Your PC must meet certain minimum requirements in order to run Linux. The following sections present these minimum requirements; however, for the latest and most complete information, you should check the Debian Project web site at http://www.debian.org/. The Debian web site will also help you determine if Linux supports all the devices installed in your system.
Linux does not support the Intel 286 and earlier processors. However, it fully supports the Intel 80386, 80486, Pentium, Pentium Pro, Pentium II, and Pentium III processors. Still, some users find that their 80386 Linux systems respond sluggishly, particularly when running X, so if you want optimum performance, you should install Linux on a PC with an 80486 processor or better.
Introduction to the Linux 2.4 kernel. The latest copy of this document can always be downloaded from: http://www.moses.uklinux.net/patches/lki.sgml This guide is now part of the Linux Documentation Project and can also be downloaded in various formats from: http://www.linuxdoc.org/guides.html or can be read online (latest version) at:
http://www.moses.uklinux.net/patches/lki.html This documentation is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. The author is working as a senior Linux kernel engineer at VERITAS Software Ltd and wrote this book to support the short training course/lectures he gave on this subject internally at VERITAS. Thanks to Juan J. Quintela, Francis Galiegue, Hakjun Mun, Matt Kraai, Nicholas Dronen, Samuel S Chessman, Nadeem Hasan, and Michael Svetlik for various corrections and suggestions. The Linux Page Cache chapter was written by Christoph Hellwig.
If you are running a one- or two-system LAN using Linux, you probably only need simple connectivity between your systems. However, if you are setting up a Linux server for your network and its connection to the Internet, you've got a lot of work to do in installation, configuration, and maintenance, and you probably require some assistance. Olaf Kirch wrote Linux Network Administrator's Guide as part of the Linux Documentation Project to cover just such information. Although you can download the book for free, the O'Reilly version looks and feels better than the online version (better layout and graphics) and has a superlative index.
Types of Linux:
This chapter covers the many types of Linux systems and distributions available. I strongly recommend that you read this chapter at least once before starting the installation process. Included in this chapter are the following topics:
The various distribution types in Linux
The types of disk sets for each distribution
Which distribution sets are important for you
How to look for files on the CD-ROM
How to get Linux from FTP sites
How to get Linux from BBS sites
How to find FTP sites where you can get Linux updates
We are now through discussing char and block drivers and are ready to move on to the fascinating world of networking. Network interfaces are the third standard class of Linux devices, and this chapter describes how they interact with the rest of the kernel.
The role of a network interface within the system is similar to that of a mounted block device. A block device registers its features in the blk_dev array and other kernel structures, and it then "transmits" and "receives" blocks on request, by means of its request function. Similarly, a network interface must register itself in specific data structures in order to be invoked when packets are exchanged with the outside world.
There are a few important differences between mounted disks and packet-delivery interfaces. To begin with, a disk exists as a special file in the /dev directory, whereas a network interface has no such entry point. The normal file operations (read, write, and so on) do not make sense when applied to network interfaces, so it is not possible to apply the Unix "everything is a file" approach to them. Thus, network interfaces exist in their own namespace and export a different set of operations.
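That separate namespace is easy to observe. As a minimal sketch, assuming a standard Linux /proc mount: the loopback interface appears in the kernel's interface table but has no node under /dev, unlike a block device.

```shell
# Block devices appear as special files under /dev, but network
# interfaces do not: they live in the kernel's own interface table,
# visible through /proc/net/dev.
grep "lo:" /proc/net/dev                      # loopback is registered
test -e /dev/lo || echo "no /dev entry for lo"
```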
Samba is a suite of Unix applications that speak the SMB (Server Message Block) protocol. Many operating systems, including Windows and OS/2, use SMB to perform client-server networking. By supporting this protocol, Samba allows Unix servers to get in on the action, communicating with the same networking protocol as Microsoft Windows products. Thus, a Samba-enabled Unix machine can masquerade as a server on your Microsoft network and offer the following services:
Share one or more filesystems
Share printers installed on both the server and its clients
Assist clients with Network Neighborhood browsing
Authenticate clients logging onto a Windows domain
Provide or assist with WINS name server resolution
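As a sketch of how those services map onto configuration, a minimal smb.conf might look like the following. The share names and paths here are hypothetical, not taken from the book:

```ini
; Minimal smb.conf sketch: one file share plus printer sharing.
[global]
   workgroup = WORKGROUP
   server string = Samba server

[projects]
   path = /srv/projects
   read only = no

[printers]
   comment = All printers
   path = /var/spool/samba
   printable = yes
```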
The Linux Documentation Project (LDP) is working on developing good, reliable documentation for the Linux operating system. The overall goal of the LDP is to collaborate in taking care of all of the issues of Linux documentation, ranging from online documentation (man pages, HTML, and so on) to printed manuals covering topics such as installing, using, and running Linux. Here is the Linux Documentation Project Manifesto and Copyright License for LDP works.
For translations of LDP works and documentation in languages other than English, see the "Non-English Linux Info" links page. The Linux HOWTOs are detailed "how to" documents on specific subjects. The HOWTO index lists all HOWTOs along with short descriptions. The HOWTOs are written in SGML or XML and translated to different output formats using SGML-Tools (Linuxdoc DTD) or the DocBook/DSSSL tools. Join the LDP announcements mailing list for news on HOWTO updates.
Linux Documentation Project Guide:
This guide was created as an overview of the Linux operating system, geared toward new users as an exploration tour and getting started guide, with exercises at the end of each chapter. For more advanced trainees it can be a desktop reference and a collection of the base knowledge needed to proceed with system and network administration. This book contains many real-life examples derived from the author's experience as a Linux system and network administrator, trainer, and consultant. We hope these examples will help you get a better understanding of the Linux system and that you feel encouraged to try out things on your own.
Red Hat Linux 9:
Red Hat Linux 9 contains several new features beyond the 8.0 release:
Updated Bluecurve look and feel
Improved desktop menus
Samba configuration tool
New mouse cursor code allowing animated cursors and more
This book covers Red Hat Linux installation methods. Every installation step is discussed in detail, including system requirements, starting the installation, choosing an installation type, configuring X, and entering network information.
Red Hat Linux 7.3:
Red Hat Linux 7.3 contains several new features beyond the 7.2 release:
Evolution personal management application, including email, calendar, and contact manager
XFree86 4.2.0 for improved hardware support
GNOME Meeting video conferencing software
gphoto2 for improved digital camera support
This book covers Red Hat Linux installation methods. Every installation step is discussed in detail, including system requirements, starting the installation, choosing an installation type, configuring X, and entering network information.
Red Hat Linux x86 Installation Guide:
Welcome to the Official Red Hat Linux x86 Installation Guide. This guide contains useful information to assist you during the installation of Red Hat Linux. From fundamental concepts such as installation preparation to the step-by-step installation procedure, this book will be a valuable resource as you install Red Hat Linux.
This manual will walk you through a typical Red Hat Linux CD-ROM installation. Once you have completed the installation, you will have a fully functioning Red Hat Linux desktop system.

Linux commands (and other operating system commands, when used) are represented this way. This style should indicate to you that you can type the word or phrase on the command line and press [Enter] to invoke a command. Sometimes a command contains words that would be displayed in a different style on their own (such as filenames). In these cases, they are considered to be part of the command, so the entire phrase is displayed as a command. For example:
Use the cat testfile command to view the contents of a file, named testfile, in the current working directory.
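Concretely, the example above could be carried out like this (the file contents are invented for illustration):

```shell
# Create a file named testfile in the current working directory,
# then view its contents with cat, exactly as the text describes.
echo "Hello from Red Hat Linux" > testfile
cat testfile
```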
Layout of The Kernel Source:
So far, we've talked about the Linux kernel from the perspective of writing device drivers. Once you begin playing with the kernel, however, you may find that you want to "understand it all." In fact, you may find yourself passing whole days navigating through the source code and grepping your way through the source tree to uncover the relationships among the different parts of the kernel.
This kind of "heavy grepping" is one of the tasks your authors perform quite often, and it is an efficient way to retrieve information from the source code. Nowadays you can even exploit Internet resources to understand the kernel source tree; some of them are listed in the Preface. But despite Internet resources, wise use of grep, less, and possibly ctags or etags can still be the best way to extract information from the kernel sources.
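Here is a miniature version of such a grepping session, run on a tiny mock source tree rather than a real kernel. The symbol and file names are made up for illustration; a real session would run the same grep from the top of your kernel source tree:

```shell
# Build a two-file mock source tree standing in for a kernel checkout.
mkdir -p tree/drivers/net tree/include
echo "int register_netdev(struct net_device *dev);" > tree/include/netdevice.h
echo "register_netdev(dev);" > tree/drivers/net/card.c

# Find every occurrence of a symbol across the tree, with file names
# and line numbers -- the quickest way to uncover who declares what
# and who calls it:
grep -rn "register_netdev" tree
```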
Java Programming on Linux:
Java Programming on Linux is the first and, to date, only book devoted to the topic of using Java on Linux. You'll get some introduction to both technologies, but we're not out to duplicate the many fine Java and Linux books already out there: this book is unique in focusing on how you can use the two together. You may be surprised at how many options you have for developing and deploying Java on Linux. For more detail about what you'll find between the covers, take a look at the organization of the book.

Nathan Meyers spent 20 years in the corporate software trenches as a developer and architect for Hewlett-Packard Company, working on handheld calculators, UNIX workstations, and inkjet printers. His experience includes development of embedded systems, device driver implementation, creation of development tools, definition and implementation work on the X Window System and the Common Desktop Environment, development of 2D and 3D graphics applications, UNIX application performance tuning, design of evolutionary algorithms, and implementation of financial algorithms.
Embedded Linux Development:
Auerbach Publications is shipping a book about embedded Linux. Embedded Linux System Design and Development targets systems architects and software developers, and covers the MTD (Memory Technology Device) Flash storage device model, POSIX.1b real-time extensions, uClinux, and porting from traditional real-time operating systems to Linux.
According to its publisher, the new embedded Linux book was co-authored by three software engineers working in the embedded field in Bangalore, India. The three include P. Raghavan, of Philips Innovation Campus, and Amol Lad and Sriram Neelakandan, of Verismo Networks.
Guide to ARM Linux:
ARM Linux is good, but it is new enough that documentation is poor in many areas. We are producing a series of "Guide to ARM Linux" books to tell you what you need to know. The books are published in different editions for different platforms. A suitable edition is supplied with all Aleph One distributions or installations, in both paper and electronic form. You can also purchase the book alone.

This book is for developers and includes extensive documentation on embedded Linux fundamentals, the toolchain, cross compilers, and moving from other CPUs to ARM. It explicitly covers setting up the LART (SA1100) and Intel Assabet (SA1110) development systems and the Psion 5 PDA (ARM7 core), as well as useful basic GNU/Linux information for those new to GNU/Linux.
With so many interesting desktop-oriented Linux books published on an ongoing basis, we thought it would be valuable to create a convenient "Desktop Linux Book Roundup." We plan to update the Roundup on a continuing basis, so be sure to check back now and then to see the latest listings.

Linux is often thought to be inferior to Windows and Mac OS when it comes to multimedia applications. However, by using open source software and a few simple tips, one can easily turn a simple Linux box into a multimedia powerhouse, according to Kyle Rankin, author of Linux Multimedia Hacks. Linux has often been overlooked for these types of applications, he writes, but multimedia programs under Linux are getting more and more mature. Rankin's book, one of O'Reilly's "Hacks" series, contains 100 hacks to help readers get the best multimedia experience from their computers, the publisher said.
Linux Device Driver Concluded:
This is the last of five articles about character device drivers. In this final section, Georg deals with memory mapping devices, beginning with an overall description of Linux memory management concepts.
Though only a few drivers implement the memory mapping technique, it gives an interesting insight into the Linux system. I introduce memory management and its features, enabling us to play with the console, include memory mapping in drivers, and crash systems.

Since the days of the 80386, the Intel world has supported a technique called virtual addressing. Coming from the Z80 and 68000 world, my first thought about this was: "You can allocate more memory than you have as physical RAM, as some addresses will be associated with portions of your hard disk."
Red Hat Linux 6 Unleashed:
Red Hat Linux 6 Unleashed shows you how to install, configure, and manage version 6.0, the latest version of the Red Hat operating system. Version 6.0 boasts the latest Linux kernel, a new and improved installation program, updated libraries, and a host of other improvements. Learn how the operating system works from the inside out. The book covers everything from managing the installation process to configuring the system as an Internet server. Shells and shell programming are covered extensively, along with the programming tools and utilities that help the administrator and programmer perform their tasks more efficiently.

Red Hat Linux 6 Unleashed ships with its eponymous operating system on CD-ROM. Based on that fact alone, you're ahead of the game in terms of cost when you buy this book. Way ahead, in fact, considering that this book contains some of the best printed documentation of the Red Hat distribution around.
The UNIX Programming Environment: Designed for first-time and experienced users, this book describes the UNIX programming environment and philosophy in detail. Readers will gain an understanding not only of how to use the system, its components, and the programs, but also of how these fit into the total environment.

The bible of programming theory and practice is being updated for the first time in more than 20 years. The book is concerned with information structures: the representation of information within a computer, the structural interrelations between data elements and how to work with them efficiently, and applications to simulation, numerical methods, and software design.
Linux Companion for System Administrators is a book for anyone who wants to get more out of their Linux box. It provides guidance for adapting the system to meet personal requirements and offers advice on getting real practical value from the various tools available. Focusing on the use of Linux in a private or corporate environment, this book will help the experienced UNIX user attain a more efficient level of system administration. Linux Companion for System Administrators goes beyond general system administration, providing detailed coverage of networking with Linux. With the emphasis on the integration of Linux into heterogeneous networks and its employment as a file, DNS or BOOTP server, this book illustrates just why Linux is being employed more and more widely as an Intranet server.
Real World Linux Security:
"You have in your hands a book I've been waiting to read for years: a practical, hands-on guide to hardening your Linux system." (From the foreword by Eric S. Raymond.) Safeguard your Linux systems against today's most vicious attacks! Realistic, step-by-step techniques from one of the world's leading Linux security experts. Covers IP Tables, ARP attacks, adaptive firewalls, VPNs, wireless networks, Samba, monitoring, 2.4 kernel security, and much more. Quick and effective recovery from intrusions. The web site contains important new tools for monitoring networks and locking out hackers. "A comprehensive guide to system security: covers everything from hardening a system to system recovery after an attack." (Steve Bourne, creator of the Bourne Shell.) Stop today's most vicious Internet attackers right in their tracks! Your Linux system will be attacked: maybe in minutes, certainly in days. Be ready! Real World Linux Security, Second Edition brings together state-of-the-art solutions and exclusive software for safeguarding any Linux-based system or network, and fighting off any intrusion.
Linux for the Rest of Us:
A great many Linux books are available, covering the gamut from installation and advanced configuration through networks, programming and many other special uses. The majority of these books are lengthy, containing volumes of information. Those new to Linux often find them intimidating. So what should the Linux neophyte do? Turn to Linux for the Rest of Us, of course. Mark Rais' new book promises to be readable and to offer the information and first-hand knowledge the beginner needs.
The meat of the book is divided into five sections, for a total of 22 chapters and an appendix. Although the book is short and to the point at 108 pages, it does not skimp on information. Much of it reads like a technical or instruction manual, but with some real-world experience mixed in.
To Linux Guide:
O'Reilly's "Pocket Guide" series of books are among the most-used books on my book shelf. These slim volumes are a refreshing departure from the "bigger is better" norm among computer book publishers, most of whom use large fonts, out-of-date excerpts from the Linux Documentation Project How-Tos, and of course a CD or two in the back to amass reference tomes more impressive in bulk than in content or organization. As an editor myself, I understand the value of organizing and condensing information. Mark Twain certainly said it well when he ended a long letter with an apology about not having had the time to write at shorter length.
The first Pocket Guide I read was Rasmus Lerdorf's guide to PHP. That book succeeded because 1) it was written by no less an authority on PHP than its originator, 2) it contains the 20 percent of PHP commands that you use 80 percent of the time, and 3) it is readable not only as a reference but as a how-to, or guide.
Although many computer users think of PCI (Peripheral Component Interconnect) as a way of laying out electrical wires, it is actually a complete set of specifications defining how different parts of a computer should interact.
The PCI specification covers most issues related to computer interfaces. We are not going to cover it all here; in this section we are mainly concerned with how a PCI driver can find its hardware and gain access to it. The probing techniques discussed in "Automatic and Manual Configuration" in Chapter 2, "Building and Running Modules", and "Autodetecting the IRQ Number" in Chapter 9, "Interrupt Handling" can be used with PCI devices, but the specification offers a preferable alternative to probing.
The PCI architecture was designed as a replacement for the ISA standard, with three main goals: to get better performance when transferring data between the computer and its peripherals, to be as platform independent as possible, and to simplify adding and removing peripherals to the system.
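One consequence of that design is that PCI devices identify themselves with a vendor:device ID pair read from configuration space, rather than being probed blindly. A small sketch of splitting such an ID; the sample value is hard-coded here (8086:7000, which I believe is an Intel PIIX3 bridge), while on a live system `lspci -n` would list the real ones:

```shell
# Split a PCI vendor:device ID into its two halves using plain
# POSIX parameter expansion.
id="8086:7000"            # vendor 8086 (Intel), device 7000
vendor=${id%%:*}          # strip everything from the first ':' onward
device=${id##*:}          # strip everything up to the last ':'
echo "vendor=$vendor device=$device"
```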
Most Linux books are either too technical, too superficial, or they cover the wrong distribution. Finally, there's a book that offers the perfect balance: The Linux Book. It'll never insult your intelligence, nor will it make you wade through 1,000 pages of technical gibberish! You'll find the information you really need to install, configure, and maintain any current version of Linux (and integrate it seamlessly with your existing computers). If you don't need it, it's not here. If you do need it, it is. It's that simple! The Linux Book even presents a chapter-length cost-benefit analysis for any organization considering Linux. Whether you plan to run your desktop or your entire network with Linux, you won't find a more useful, practical guide.
The eCos embedded operating system offers an alternative to UNIX®-style operating systems for development work. This article examines how its architecture influences the development process, building a sample application and exploring the differences in architecture between eCos and UNIX.

The eCos embedded operating system is closer in design to embedded systems such as VxWorks than to embedded Linux®. In the UNIX environment, the operating system (OS) provides services to separate programs which run under it. By contrast, an eCos program contains both the entire OS and the target application. While the application can be multithreaded, only one application can run on the system at a given time.
With this overview, we can start talking about what it takes to build a real application on eCos. You can't just load up a Web server, perl, and some CGI scripts; you have to build everything as a single executable.
The key distinction is address space. In UNIX systems, the kernel has its own address space, which applications don't have access to, and each application can (and most do) have separate address spaces. eCos only has one address space. Similarly, each program on the UNIX system is a separate file, while with eCos, there is only one monolithic file.
This is a comprehensive book of commands from every major operating system, including cross-references to each other based on functionality. With this book you can reuse or recycle knowledge you already have and apply that knowledge in new or unfamiliar operating systems.
I believe that in many cases you already have the knowledge necessary to perform a task when working with a new operating system; you simply do not know which command to use. Using the unique Cross-Reference section of the book, you can now work with an operating system that you are not so familiar with: you simply look up a command from an OS that you know, and you can quickly see the same functionality of that command in every other OS. I have chosen 20+ different operating systems for this book.