diff --git a/documentation/content/de/books/handbook/mac/_index.adoc b/documentation/content/de/books/handbook/mac/_index.adoc index 204e17e0a6..bc6c0186f6 100644 --- a/documentation/content/de/books/handbook/mac/_index.adoc +++ b/documentation/content/de/books/handbook/mac/_index.adoc @@ -1,936 +1,934 @@ --- title: Kapitel 15. Verbindliche Zugriffskontrolle part: Teil III. Systemadministration prev: books/handbook/jails next: books/handbook/audit showBookMenu: true weight: 19 params: path: "/books/handbook/mac/" --- [[mac]] = Verbindliche Zugriffskontrolle :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 15 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/mac/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[mac-synopsis]] == Übersicht In FreeBSD 5.X wurden neue Sicherheits-Erweiterungen verfügbar, die aus dem TrustedBSD-Projekt übernommen wurden und auf dem Entwurf POSIX(R).1e basieren. Die beiden bedeutendsten neuen Sicherheits-Mechanismen sind Berechtigungslisten (Access Control Lists, ACL) und die verbindliche Zugriffskontrolle (Mandatory Access Control, MAC). Durch die MAC können Module geladen werden, die neue Sicherheitsrichtlinien bereitstellen. Mit Hilfe einiger Module kann beispielsweise ein eng umgrenzter Bereich des Betriebssystems gesichert werden, indem die Sicherheitsfunktionen spezieller Dienste unterstützt bzw. 
verstärkt werden. Andere Module wiederum betreffen in ihrer Funktion das gesamte System - alle vorhandenen Subjekte und Objekte. Das "Verbindliche" in der Namensgebung erwächst aus dem Fakt, dass die Kontrolle allein Administratoren und dem System obliegt und nicht dem Ermessen der Nutzer, wie es mit Hilfe der benutzerbestimmbaren Zugriffskontrolle (Discretionary Access Control / DAC), dem Zugriffsstandard für Dateien sowie System V IPC in FreeBSD, normalerweise umgesetzt wird. Dieses Kapitel wird sich auf die Grundstruktur der Verbindlichen Zugriffskontrolle und eine Auswahl der Module, die verschiedenste Sicherheitsfunktionen zur Verfügung stellen, konzentrieren. Beim Durcharbeiten dieses Kapitels erfahren Sie: * Welche MAC Module für Sicherheitsrichtlinien derzeit in FreeBSD eingebettet sind und wie die entsprechenden Mechanismen funktionieren. * Was die einzelnen MAC Module an Funktionen realisieren und auch, was der Unterschied zwischen einer Richtlinie, die _mit_ Labels arbeitet, und einer, die _ohne_ Labels arbeitet, ist. * Wie Sie die MAC in ein System einbetten und effizient einrichten. * Wie die verschiedenen Richtlinienmodule einer MAC konfiguriert werden. * Wie mit einer MAC und den gezeigten Beispielen eine sicherere Umgebung erstellt werden kann. * Wie die Konfiguration einer MAC auf korrekte Einrichtung getestet wird. Vor dem Lesen dieses Kapitels sollten Sie bereits: * Grundzüge von UNIX(R) und FreeBSD verstanden haben (crossref:basics[basics,Grundlagen des FreeBSD Betriebssystems]). * Mit den Grundzügen der Kernelkonfiguration und -kompilierung vertraut sein (crossref:kernelconfig[kernelconfig,Konfiguration des FreeBSD-Kernels]). * Einige Vorkenntnisse über Sicherheitskonzepte im Allgemeinen und deren Umsetzung in FreeBSD im Besonderen mitbringen (crossref:security[security,Sicherheit]).
[WARNING] ==== Der unsachgemäße Gebrauch der in diesem Kapitel enthaltenen Informationen kann den Verlust des Systemzugriffs, Ärger mit Nutzern oder die Unfähigkeit, grundlegende Funktionen des X-Windows-Systems zu nutzen, verursachen. Wichtiger noch ist, dass man sich nicht allein auf die MAC verlassen sollte, um ein System zu sichern. Die MAC verbessert und ergänzt lediglich die schon existierenden Sicherheits-Richtlinien - ohne eine gründliche und fundierte Sicherheitspraxis und regelmäßige Sicherheitsprüfungen wird Ihr System nie vollständig sicher sein. Außerdem sollte angemerkt werden, dass die Beispiele in diesem Kapitel auch genau dasselbe sein sollen, nämlich Beispiele. Es wird nicht empfohlen, diese bestimmten Beispiele auf einem Arbeitssystem umzusetzen. Das Einarbeiten der verschiedenen Sicherheitsmodule erfordert eine Menge Denkarbeit und viele Tests. Jemand, der nicht versteht, wie diese Module funktionieren, kann sich schnell darin wiederfinden, dass er (oder sie) das ganze System durchforsten und viele Dateien und Verzeichnisse neu konfigurieren muß. ==== === Was in diesem Kapitel nicht behandelt wird Dieses Kapitel behandelt einen großen Teil sicherheitsrelevanter Themen, bezogen auf die Verbindliche Zugriffskontrolle (MAC). Die gegenwärtige Entwicklung neuer MAC Module ist nicht abgedeckt. Einige weitere Module, die im MAC Framework enthalten sind, haben besondere Charakteristika, die zum Testen und Entwickeln neuer Module gedacht sind. Dies sind unter anderem man:mac_test[4], man:mac_stub[4] und man:mac_none[4]. Für weitere Informationen zu diesen Modulen und den entsprechend angebotenen Funktionen lesen Sie bitte die Manpages. [[mac-inline-glossary]] == Schlüsselbegriffe Bevor Sie weiterlesen, müssen noch einige Schlüsselbegriffe geklärt werden. Dadurch soll jegliche auftretende Verwirrung von vornherein beseitigt und die plötzliche Einführung neuer Begriffe und Informationen vermieden werden. 
* _Verbund_: Ein Verbund ist ein Satz von Programmen und Daten, die speziell und zusammen abgeschottet wurden, um Nutzern Zugriff auf diese ausgewiesenen Systembereiche zu gewähren. Man kann sagen, ein solcher Verbund ist eine Gruppierung, ähnlich einer Arbeitsgruppe, einer Abteilung, einem Projekt oder einem Thema. Durch die Nutzung von Verbünden (_compartments_) kann man Sicherheitsrichtlinien erstellen, die alles notwendige Wissen und alle Werkzeuge zusammenfassen. * _Hochwassermarkierung_: Eine solche Richtlinie erlaubt die Erhöhung der Sicherheitsstufe in Abhängigkeit der Klassifikation der gesuchten bzw. bereitzustellenden Information. Normalerweise wird nach Abschluss des Prozesses die ursprüngliche Sicherheitsstufe wieder hergestellt. Derzeit enthält die MAC Grundstruktur keine Möglichkeit, eine solche Richtlinie umzusetzen, der Vollständigkeit halber ist die Definition hier jedoch aufgeführt. * _Integrität_: Das Schlüsselkonzept zur Klassifizierung der Vertraulichkeit von Daten nennt man Integrität. Je weiter die Integrität erhöht wird, umso mehr kann man den entsprechenden Daten vertrauen. * _Label_: Ein Label ist ein Sicherheitsmerkmal, welches mit Dateien, Verzeichnissen oder anderen Elementen im System verbunden wird. Man sollte es wie einen Vertraulichkeitsstempel auffassen, der einer Datei ebenso anhaftet wie beispielsweise die Zugriffszeit, das Erstellungsdatum oder auch der Name; sobald Dateien derart gekennzeichnet werden, bezeichnen diese Label die sicherheitsrelevanten Eigenschaften. Zugriff ist nur noch dann möglich, wenn das zugreifende Subjekt eine korrespondierende Kennzeichnung trägt. Die Bedeutung und Verarbeitung der Label-Werte ist von der Einrichtung der Richtlinie abhängig: Während einige Richtlinien das Label zum Kennzeichnen der Vertraulichkeit oder Geheimhaltungsstufe eines Objekts nutzen, können andere Richtlinien an derselben Stelle Zugriffsregeln festschreiben.
* _Level_: Eine erhöhte oder verminderte Einstellung eines Sicherheitsmerkmals. Wenn das Level erhöht wird, wird auch die entsprechende Sicherheitsstufe angehoben. * _Niedrigwassermarkierung_: Eine solche Richtlinie erlaubt das Herabstufen des Sicherheitslevels, um weniger sensible Daten verfügbar zu machen. In den meisten Fällen wird das ursprüngliche Sicherheitslevel des Nutzers wiederhergestellt, sobald der Vorgang abgeschlossen ist. Das einzige Modul in FreeBSD, welches von dieser Richtlinie Gebrauch macht, ist man:mac_lomac[4]. * _Multilabel_: Die Eigenschaft `multilabel` ist eine Dateisystemoption, die entweder im Einzelbenutzermodus mit Hilfe des Werkzeugs man:tunefs[8], während des Bootvorgangs in der Datei man:fstab[5] oder aber beim Erstellen eines neuen Dateisystems aktiviert werden kann. Diese Option erlaubt einem Administrator, verschiedenen Objekten unterschiedliche Labels zuzuordnen - kann jedoch nur zusammen mit Modulen angewendet werden, die auch tatsächlich mit Labels arbeiten. * _Objekt_: Ein Objekt oder auch Systemobjekt ist theoretisch eine Einheit, durch welche Information fließt, und zwar unter der Lenkung eines _Subjektes_. Praktisch schließt diese Definition Verzeichnisse, Dateien, Felder, Bildschirme, Tastaturen, Speicher, Bandlaufwerke, Drucker und jegliche anderen Datenspeicher- oder -verarbeitungsgeräte ein. Im Prinzip ist ein Objekt ein Datencontainer oder eine Systemressource - Zugriff auf ein _Objekt_ bedeutet, auf Daten zuzugreifen. * _Richtlinie_: Eine Sammlung von Regeln, die definiert, wie Zielvorgaben umgesetzt werden, nennt man Richtlinie. Eine _Richtlinie_ dokumentiert normalerweise, wie mit bestimmten Elementen umgegangen wird. Dieses Kapitel faßt den Begriff in diesem Kontext als _Sicherheitsrichtlinie_ auf; als eine Sammlung von Regeln, die den Fluß von Daten und Informationen kontrolliert und die gleichzeitig definiert, wer auf diese Daten und Informationen zugreifen darf.
* _Anfälligkeit_: Dieser Begriff wird normalerweise verwendet, wenn man über MLS (Multi Level Security) spricht. Das Anfälligkeits-Level beschreibt, wie wichtig oder geheim die Daten sein sollen. Je höher das Anfälligkeits-Level, desto wichtiger die Geheimhaltung bzw. Vertraulichkeit der Daten. * _Einzel-Label_: Von einem Einzel-Label spricht man, wenn für ein ganzes Dateisystem lediglich ein einziges Label verwendet wird, um Zugriffskontrolle über den gesamten Datenfluss zu erzwingen. Sobald diese Option verwendet wird - und das ist zu jeder Zeit, wenn die Option `multilabel` nicht explizit gesetzt wurde - sind alle Dateien und Verzeichnisse mit dem gleichen Label gekennzeichnet. * _Subjekt_: Ein Subjekt ist jedwede Einheit, die Informationen zwischen Objekten in Fluss bringt: Zum Beispiel ein Nutzer, ein Nutzerprozeß, ein Systemprozeß usw. In FreeBSD handelt es sich meistens um einen Thread, der als Prozeß im Namen eines Nutzers arbeitet. [[mac-initial]] == Erläuterung Mit all diesen neuen Begriffen im Kopf können wir nun überlegen, wie die Möglichkeiten der verbindlichen Zugriffskontrolle (MAC) die Sicherheit eines Betriebssystems als Ganzes erweitern. Die verschiedenen Module, die durch die MAC bereitgestellt werden, können verwendet werden, um das Netzwerk oder Dateisysteme zu schützen, Nutzern den Zugang zu bestimmten Ports oder Sockets zu verbieten und vieles mehr. Die vielleicht beste Weise, die Module zu verwenden, ist, sie miteinander zu kombinieren, so dass mehrere Sicherheitsrichtlinienmodule gleichzeitig eine mehrschichtige Sicherheitsumgebung schaffen. Das ist etwas anderes als eine singuläre Richtlinie wie zum Beispiel die Firewall, die typischerweise Elemente eines Systems absichert, das nur für einen speziellen Zweck verwendet wird. Der Verwaltungsmehraufwand ist jedoch von Nachteil, zum Beispiel durch die Verwendung von mehreren Labels oder das eigenhändige Erlauben von Netzwerkzugriffen für jeden einzelnen Nutzer.
Solche Nachteile sind allerdings gering im Vergleich zum bleibenden Effekt der erstellten Struktur. Die Möglichkeit zum Beispiel, für konkrete Anwendungen genau die passenden Richtlinien auszuwählen und einzurichten, senkt gleichzeitig die Arbeitskosten. Wenn man unnötige Richtlinien aussortiert, kann man die Gesamtleistung des Systems genauso steigern wie auch eine höhere Anpassungsfähigkeit gewährleisten. Eine gute Umsetzung der MAC beinhaltet eine Prüfung der gesamten Sicherheitsanforderungen und einen wirksamen Einsatz der verschiedenen Module. Ein System, auf dem eine MAC verwendet wird, muß zumindest garantieren, dass einem Nutzer nicht gestattet wird, Sicherheitsmerkmale nach eigenem Ermessen zu verändern; dass Arbeitswerkzeuge, Programme und Skripte innerhalb der Beschränkungen arbeiten können, welche die Zugriffsregeln der ausgewählten Module dem System auferlegen; und dass die volle Kontrolle über die Regeln der MAC beim Administrator liegt und bleibt. Es ist die alleinige Pflicht des zuständigen Administrators, die richtigen Module sorgfältig auszuwählen. Einige Umgebungen könnten eine Beschränkung der Zugriffe über die Netzwerkschnittstellen benötigen - hier wären die Module man:mac_portacl[4], man:mac_ifoff[4] und sogar man:mac_biba[4] ein guter Anfang. In anderen Fällen muß man sehr strenge Vertraulichkeit von Dateisystemobjekten gewährleisten - dafür könnte man man:mac_bsdextended[4] oder man:mac_mls[4] einsetzen. Die Entscheidung, welche Richtlinien angewandt werden, kann auch anhand der Netzwerk-Konfiguration getroffen werden. Soll nur bestimmten Benutzern erlaubt werden, via man:ssh[1] auf das Netzwerk oder Internet zuzugreifen, wäre man:mac_portacl[4] eine gute Wahl. Aber für was entscheidet man sich im Falle eines Dateisystems? Soll der Zugriff auf bestimmte Verzeichnisse von spezifischen Nutzern oder Nutzergruppen separiert werden?
Oder wollen wir den Zugriff durch Nutzer oder Programme auf spezielle Dateien einschränken, indem wir gewisse Objekte als geheim einstufen? Der Zugriff auf Objekte kann einigen vertraulichen Nutzern gestattet werden, anderen wiederum verwehrt. Als Beispiel sei hierzu ein großes Entwicklerteam angeführt, das in kleine Gruppen von Mitarbeitern aufgeteilt wurde. Die Entwickler von Projekt A dürfen nicht auf Objekte zugreifen, die von den Entwicklern von Projekt B geschrieben wurden. Sie müssen aber trotzdem auf Objekte zugreifen können, die von einem dritten Entwicklerteam geschaffen wurden - alles in allem eine verzwickte Situation. Wenn man die verschiedenen Module der MAC richtig verwendet, können Anwender in solche Gruppen getrennt und ihnen der Zugriff zu den gewünschten Systemobjekten gestattet werden - ohne Angst haben zu müssen, dass Informationen in die falschen Hände geraten. So hat jedes Modul, das eine Sicherheitsrichtlinie verfügbar macht, einen eigenen Weg, die Sicherheit des Systems zu verstärken. Die Auswahl der Module sollte auf einem gut durchdachten Sicherheitskonzept gründen. In vielen Fällen muß das gesamte Konzept eines Systems überarbeitet und neu eingepflegt werden. Ein guter Überblick über die Möglichkeiten der verschiedenen von der MAC angebotenen Module hilft einem Administrator, die besten Richtlinien für seine spezielle Situation auszuwählen. Im FreeBSD-Standardkernel ist die Option zur Verwendung der MAC nicht enthalten. Daher muß die Zeile [.programlisting] .... options MAC .... der Kernelkonfiguration hinzugefügt und der Kernel neu übersetzt und installiert werden. [CAUTION] ==== Verschiedene Anleitungen für die MAC empfehlen, die einzelnen Module direkt in den Kernel einzuarbeiten. Dabei ist es jedoch möglich, das System aus dem Netzwerk auszusperren oder gar Schlimmeres. Die Arbeit mit der MAC ist ähnlich der Arbeit mit einer Firewall - man muß, wenn man sich nicht selbst aus dem System aussperren will, genau aufpassen.
Man sollte sich eine Möglichkeit zurechtlegen, wie man eine Implementation einer MAC rückgängig machen kann - genauso wie eine Ferninstallation über das Netzwerk nur mit äußerster Vorsicht vorgenommen werden sollte. Es wird daher empfohlen, die Module nicht in den Kernel einzubinden, sondern sie beim Systemstart via [.filename]#/boot/loader.conf# zu laden. ==== [[mac-understandlabel]] == MAC Labels verstehen MAC Label sind Sicherheitsmerkmale, die, wenn sie zum Einsatz kommen, allen Subjekten und Objekten im System zugeordnet werden. Wenn ein Administrator ein solches Merkmal bzw. Attribut setzen will, muß er/sie verstehen können, was da genau passiert. Die Attribute, die im speziellen Fall zu vergeben sind, hängen vom geladenen Modul und den darin jeweils implementierten Richtlinien ab. Jedes dieser Richtlinienmodule setzt die Arbeit mit seinen entsprechenden Attributen in individueller Weise um. Falls der Nutzer nicht versteht, was er da konfiguriert, oder auch, was seine Konfiguration für Begleiterscheinungen mit sich bringt, ergibt sich meist als Resultat ein unerwartetes, ja sogar unerwünschtes Verhalten des gesamten Systems. Ein Label, das einem Objekt verliehen wurde, wird verwendet, um anhand einer Richtlinie eine sicherheitsrelevante Entscheidung über Zugriffsrechte zu fällen. In einigen Richtlinien enthält bereits das Label selbst alle dafür nötigen Informationen. Andere Richtlinien verwenden diese Informationen, um zunächst ein komplexes Regelwerk abzuarbeiten. Wenn man zum Beispiel einer Datei das Attribut `biba/low` zuordnet, wird dieses durch das Biba Sicherheitsrichtlinienmodul, und zwar mit dem Wert "low", verarbeitet. Einige der Richtlinienmodule, die die Möglichkeit zum Vergeben von Labels unter FreeBSD unterstützen, bieten drei vordefinierte Labels an. Diese nennen sich "high", "low" und "equal".
Obwohl die verschiedenen Module die Zugriffskontrolle auf verschiedene Weisen regeln, kann man sich sicher sein, dass das "low"-Label der untersten, unsichersten Einstellung entspricht, dass das "equal"-Label die Verwendung des Moduls für das jeweilige Objekt oder Subjekt deaktiviert - und dass das "high"-Label die höchstmögliche Einstellung erzwingt. Im Speziellen gilt diese Aussage für die Richtlinien(-module) MLS und Biba. In den meisten Umgebungen, sogenannten Single Label Environments, wird Objekten nur ein einzelnes Label zugewiesen. Dadurch wird nur ein Regelsatz für die Zugriffskontrolle auf das gesamte System verwendet - und das ist meistens auch tatsächlich ausreichend. Es gibt wenige Fälle, in denen mehrere Labels auf Dateisystemobjekte oder -subjekte verwendet werden. In einem solchen Fall muß das Dateisystem mit der man:tunefs[8]-Option `multilabel` angepaßt werden, da `single label` die Standardeinstellung ist. Bei der Verwendung von Biba oder MLS kann man numerische Labels vergeben, die genau das Level angeben, an welcher Stelle in der Hierarchie das Subjekt oder Objekt einzuordnen ist. Dieses numerische Level wird verwendet, um Informationen in verschiedene Gruppen aufzuteilen oder zu sortieren - damit zum Beispiel nur Subjekte, die zu einer gewissen Vertraulichkeitsstufe gehören, Zugang zu einer Gruppe von Objekten erhalten. In den meisten Fällen wird ein Administrator nur ein einzelnes Label für das gesamte Dateisystem verwenden. _Moment mal, das ist doch dasselbe wie DAC! Ich dachte, MAC würde die Kontrolle strengstens an den Administrator binden!_ Diese Aussage hält immer noch stand - `root` ist derjenige, der die Kontrolle ausübt und die Richtlinie konfiguriert, so dass Nutzer in die entsprechenden, angemessenen Kategorien / Zugriffsklassen eingeordnet werden. Allerdings schränken einige Module `root` selbst ein.
Die Kontrolle über Objekte wird dann einer Gruppe zugewiesen, jedoch hat `root` die Möglichkeit, die Einstellungen jederzeit zu widerrufen oder zu ändern. Dies ist das Hierarchie/Freigabe-Modell, das durch Richtlinien wie MLS oder Biba bereitgestellt wird. === Konfigurieren der Labels Gewissermaßen alle Aspekte der Labelkonfiguration werden durch Werkzeuge des Basissystems umgesetzt. Die entsprechenden Kommandos bieten eine einfache Schnittstelle zum Konfigurieren, Manipulieren und auch Verifizieren der gekennzeichneten Objekte. Mit den beiden Kommandos man:setfmac[8] und man:setpmac[8] kann man eigentlich schon alles machen. Das Kommando `setfmac` wird verwendet, um ein MAC-Label auf einem Systemobjekt zu setzen, `setpmac` hingegen zum Setzen von Labels auf Systemsubjekte. Als Beispiel soll hier dienen: [source,shell] .... # setfmac biba/high test .... Wenn bei der Ausführung dieses Kommandos keine Fehler aufgetreten sind, gelangt man zur Eingabeaufforderung zurück. Nur wenn ein Fehler auftritt, verhalten sich diese Kommandos nicht still, ganz wie auch die Kommandos man:chmod[1] und man:chown[8]. In einigen Fällen wird dieser Fehler `Permission denied` lauten und gewöhnlich dann auftreten, wenn ein Label an einem Objekt angebracht oder verändert werden soll, das bereits (Zugriffs-)Beschränkungen unterliegt. Der Systemadministrator kann so eine Situation mit Hilfe der folgenden Kommandos überwinden: [source,shell] .... # setfmac biba/high test Permission denied # setpmac biba/low setfmac biba/high test # getfmac test test: biba/high .... Wie wir hier sehen, kann `setpmac` verwendet werden, um die vorhandenen Einstellungen zu umgehen, indem dem gestarteten Prozeß ein anderes, valides Label zugeordnet wird. Das Werkzeug `getpmac` wird normalerweise auf gerade laufende Prozesse angewendet, wie beispielsweise sendmail: Als Argument wird statt eines Kommandos eine Prozeß-ID übergeben, doch es verbirgt sich dieselbe Logik dahinter.
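Ein rein illustrativer Aufruf von `getpmac` auf einen laufenden Prozeß könnte so aussehen (die Prozeß-ID 1138 ist hier frei gewählt):

[source,shell]
....
# getpmac -p 1138
....

Die Ausgabe zeigt dann das Label, unter dem der Prozeß gerade läuft, etwa in der Form `biba/high`.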
Wenn ein Nutzer versucht, eine Datei zu verändern, auf die er entsprechend der Regeln eines geladenen Richtlinienmoduls keinen Zugriff hat, wird der Fehler `Operation not permitted` durch die Funktion `mac_set_link` angezeigt. ==== Übliche Typen von Labeln Wenn man die Module man:mac_biba[4], man:mac_mls[4] und man:mac_lomac[4] verwendet, hat man die Möglichkeit, einfache Label zu vergeben. Diese nennen sich `high`, `low` und `equal`. Es folgt eine kurze Beschreibung, was diese Labels bedeuten: * Das Label `low` ist definitionsgemäß das niedrigste Label, das einem Objekt oder Subjekt verliehen werden kann. Wird es gesetzt, kann die entsprechende Entität nicht mehr auf Entitäten zugreifen, die das Label `high` tragen. * Das Label `equal` wird Entitäten verliehen, die von der Richtlinie ausgenommen sein sollen. * Das Label `high` verleiht einer Entität die höchstmögliche Einstellung. Unter Beachtung jedes einzelnen Richtlinienmoduls moduliert und beschränkt jede dieser Einstellungen den Informationsfluß unterschiedlich. Genaue Erklärungen zu den Charakteristika der einfachen Labels in den verschiedenen Modulen finden sich im entsprechenden Unterabschnitt dieses Kapitels oder in den Manpages. ===== Fortgeschrittene Label-Konfiguration Numerisch klassifizierte Labels werden in der Form `Klasse:Verbund+Verbund` verwendet. Demnach ist das Label [.programlisting] .... biba/10:2+3+6(5:2+3-15:2+3+4+5+6) .... folgendermaßen zu lesen: "Biba Policy Label"/"effektive Klasse 10" :"Verbund 2,3 und 6": ("Low-Klasse 5:..."- "High-Klasse 15:...") In diesem Beispiel ist die erstgenannte Klasse als "effektive Klasse" zu bezeichnen. Ihr werden die "effektiven Verbünde" zugeordnet. Die zweite Klasse ist die "Low"-Klasse und die letzte die "High"-Klasse. Die allermeisten Konfigurationen kommen ohne die Verwendung solcher Klassen aus, nichtsdestotrotz kann man sie für erweiterte Konfigurationen verwenden.
Sobald sie auf _Systemsubjekte_ angewendet werden, haben diese eine gegenwärtige Klasse/Verbund-Konfiguration und diese muß im definierten Rahmen gegebenenfalls angepaßt (erhöht oder gesenkt) werden. Im Gegensatz dazu haben _Systemobjekte_ alle Einstellungen (effektive, High- und Low-Klasse) gleichzeitig. Dies ist notwendig, damit auf sie von den _Systemsubjekten_ in den verschiedenen Klassen gleichzeitig zugegriffen werden kann. Die Klasse und die Verbünde in einem Subjekt-Objekt-Paar werden zum Erstellen einer sogenannten Dominanz-Relation verwendet, in welcher entweder das Subjekt das Objekt, das Objekt das Subjekt, keines das andere dominiert oder sich beide gegenseitig dominieren. Der Fall, dass sich beide dominieren, tritt dann ein, wenn die beiden Labels gleich sind. Wegen der Natur des Informationsflusses in Biba kann man einem Nutzer Rechte für eine Reihe von Abteilungen zuordnen, die zum Beispiel mit entsprechenden Projekten korrespondieren. Genauso können aber auch Objekten mehrere Abteilungen zugeordnet sein. Die Nutzer müssen eventuell ihre gegenwärtigen Rechte mithilfe von `su` oder `setpmac` anpassen, um auf Objekte in einer Abteilung zuzugreifen, zu der sie laut ihrer effektiven Klasse nicht berechtigt sind. ==== Nutzer- und Label-Einstellungen Nutzer selbst brauchen Labels, damit ihre Dateien und Prozesse korrekt mit der Sicherheitsrichtlinie zusammenarbeiten, die für das System definiert wurde. Diese werden in der Datei [.filename]#login.conf# durch die Verwendung von Login-Klassen zugeordnet. Jedes Richtlinienmodul, das Label verwendet, arbeitet mit diesen Login-Klassen. Beispielhaft wird der folgende Eintrag, der für jede Richtlinie eine Einstellung enthält, gezeigt: [.programlisting] ....
default:\ -:copyright=/etc/COPYRIGHT:\ :welcome=/etc/motd:\ :setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\ :path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\ :manpath=/usr/shared/man /usr/local/man:\ :nologin=/usr/sbin/nologin:\ :cputime=1h30m:\ :datasize=8M:\ :vmemoryuse=100M:\ :stacksize=2M:\ :memorylocked=4M:\ :memoryuse=8M:\ :filesize=8M:\ :coredumpsize=8M:\ :openfiles=24:\ :maxproc=32:\ :priority=0:\ :requirehome:\ :passwordtime=91d:\ :umask=022:\ :ignoretime@:\ :label=partition/13,mls/5,biba/10(5-15),lomac/10[2]: .... Die Label-Option in der letzten Zeile legt fest, welches Standard-Label für einen Nutzer erzwungen wird. Nutzern darf niemals gestattet werden, diese Werte selbst zu verändern, demnach haben Nutzer in dieser Beziehung auch keine Wahlfreiheit. In einer richtigen Konfiguration jedoch wird kein Administrator alle Richtlinienmodule aktivieren wollen. Es wird an dieser Stelle ausdrücklich empfohlen, dieses Kapitel zu Ende zu lesen, bevor irgendein Teil dieser Konfiguration ausprobiert wird. [NOTE] ==== Nutzer können ihr eigenes Label nach dem Loginvorgang durchaus ändern. Jedoch kann diese Änderung nur unter den Auflagen der gerade gültigen Richtlinie geschehen. Im Beispiel oben wird für die Biba-Richtlinie eine minimale Prozeßintegrität von 5, eine maximale von 15 angegeben, aber die Voreinstellung des tatsächlichen Labels ist 10. Der Nutzerprozeß läuft also mit einer Integrität von 10 bis das Label verändert wird, zum Beispiel durch eine Anwendung des Kommandos `setpmac`, welches jedoch auf den Bereich eingeschränkt wird, der zum Zeitpunkt des Logins angegeben wurde, in diesem Fall von 5 bis 15. ==== Nach einer Änderung der Datei [.filename]#login.conf# muß in jedem Fall die Befähigungsdatenbank mit dem Kommando `cap_mkdb` neu erstellt werden - und das gilt für alle im weiteren Verlauf gezeigten Beispiele und Diskussionspunkte. 
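Der im vorigen Absatz angesprochene Aufruf sieht konkret so aus:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

Danach stehen die geänderten Login-Klassen, und damit auch die Label-Einstellungen, beim nächsten Login zur Verfügung.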
Es ist nützlich anzumerken, dass viele Einsatzorte eine große Anzahl von Nutzern haben, die wiederum vielen verschiedenen Nutzerklassen angehören sollen. Hier ist eine Menge Planungsarbeit notwendig, da die Verwaltung sonst sehr unübersichtlich und schwierig wird. ==== Netzwerkschnittstellen und die zugehörigen Label Labels können auch, wenn man sie an Netzwerkschnittstellen vergibt, helfen, den Datenfluß durch das Netzwerk zu kontrollieren. Das funktioniert in allen Fällen genauso wie mit Objekten. Nutzer, die in der Biba-Richtlinie das Label `high` tragen, dürfen nicht auf Schnittstellen zugreifen, die `low` markiert sind usw. Die Option `maclabel` wird via `ifconfig` übergeben. Zum Beispiel [source,shell] .... # ifconfig bge0 maclabel biba/equal .... belegt die Schnittstelle mit dem MAC Label `biba/equal`. Wenn eine komplexe Einstellung wie `biba/high(low-high)` verwendet wird, muß das gesamte Label in Anführungszeichen geschrieben werden, da sonst eine Fehlermeldung zurückgegeben wird. Jedes Richtlinienmodul, das die Vergabe von Labels unterstützt, stellt einen Parameter bereit, mit dem das MAC Label für Netzwerkschnittstellen deaktiviert werden kann. Das Label der Netzwerkschnittstelle auf `equal` zu setzen, führt zum selben Ergebnis. Beachten Sie die Ausgabe von `sysctl`, die Manpages der verschiedenen Richtlinien oder eben die Informationen, die im weiteren Verlauf dieses Kapitels angeboten werden, um mehr zu diesen Parametern zu erfahren. === Single- oder Multilabel? Als Standardeinstellung verwendet das System die Option `single label`. Was bedeutet das für den Administrator? Es gibt einige Unterschiede zwischen `single label` und `multilabel`. In ihrer ureigenen Weise bieten beide Vor- und Nachteile bezogen auf die Flexibilität bei der Modellierung der Systemsicherheit. Die Option `single label` gibt jedem Subjekt oder Objekt genau ein einziges Label, zum Beispiel `biba/high`.
Mit dieser Option hat man einen geringeren Verwaltungsaufwand, aber die Flexibilität beim Einsatz von Richtlinien ist ebenso gering. Viele Administratoren wählen daher auch die Option `multilabel` im Sicherheitsmodell, wenn die Umstände es erfordern. Die Option `multilabel` gestattet, jedem einzelnen Subjekt oder Objekt seine eigenen unabhängigen Label zuzuordnen. Die Optionen `multilabel` und `singlelabel` betreffen jedoch nur die Richtlinien, die Labels als Leistungsmerkmal verwenden, einschließlich der Richtlinien Biba, Lomac, MLS und SEBSD. Wenn Richtlinien benutzt werden sollen, die ohne Labels auskommen, wird die Option `multilabel` nicht benötigt. Dies betrifft die Richtlinien `seeotheruids`, `portacl` und `partition`. Man sollte sich dessen bewußt sein, dass die Verwendung der Option `multilabel` auf einer Partition und die Erstellung eines Sicherheitsmodells auf der Basis der FreeBSD `multilevel` Funktionalität einen hohen Verwaltungsaufwand bedeutet, da alles im Dateisystem ein Label bekommt. Jedes Verzeichnis, jede Datei und genauso jede Schnittstelle. Das folgende Kommando aktiviert `multilabel` für ein Dateisystem. Dies funktioniert nur im Einzelbenutzermodus: [source,shell] .... # tunefs -l enable / .... In einer Swap-Partition wird dies nicht benötigt. [NOTE] ==== Falls Sie Probleme beim Setzen der Option `multilabel` auf der Root-Partition bemerken, lesen Sie bitte <> dieses Kapitels. ==== [[mac-planning]] == Planung eines Sicherheitsmodells Wann immer eine neue Technologie eingepflegt werden soll, ist es wichtig, vorher einen Plan zu erstellen. In den verschiedenen Etappen der Planung sollte der Administrator nie das "Große Ganze" aus den Augen verlieren und mindestens die folgenden Punkte beachten: * Die Anforderungen * Die Ziele Wenn Sie MAC verwenden möchten, sind das im Besonderen folgende Punkte: * Wie werden Informationen und Ressourcen auf den Zielsystemen klassifiziert? * Welche Arten von Informationen bzw.
Ressourcen sollen im Zugang beschränkt sein und welche Art Einschränkung soll verwendet werden? * Welche(s) MAC Modul(e) wählt man, um sein Ziel zu erreichen? Es ist immer möglich, die Einstellungen des Systems und der Systemressourcen im Nachhinein zu "optimieren". Es ist aber wirklich lästig, das gesamte Dateisystem zu durchsuchen, um Dateien oder Benutzerkonten zu reparieren. Eine gute Planung hilft dem Administrator, sich einer sorgenfreien und effizienten Umsetzung eines Sicherheitsmodells zu versichern. Ein Testlauf des Sicherheitsmodells _vor_ dem Einsatz in seiner richtigen Arbeitsumgebung ist auf jeden Fall empfehlenswert. Die Idee, ein System mit einer MAC einfach loslaufen zu lassen, ist wie direkt auf einen Fehlschlag hinzuarbeiten. Jede Umgebung hat ihre eigenen Anforderungen. Ein tiefgreifendes und vollständiges Sicherheitsprofil zu erstellen erspart weitere Änderungen, nachdem das System in Betrieb genommen wurde. Die folgenden Abschnitte werden also die verschiedenen Module vorstellen, die den Administratoren zur Verfügung gestellt werden, die Nutzung und Konfiguration der einzelnen Module beschreiben und in einigen Fällen Einblicke gewähren, für welche Situationen welche Module besonders geeignet sind. Ein Webserver zum Beispiel kann von der Verwendung der man:mac_biba[4] oder der man:mac_bsdextended[4] Richtlinie profitieren. In anderen Fällen, an einem Rechner mit nur wenigen lokalen Benutzern, ist man:mac_partition[4] die Richtlinie der Wahl. [[mac-modules]] == Modulkonfiguration Jedes Modul, das in der MAC enthalten ist, kann entweder direkt in den Kernel eingefügt oder zur Laufzeit des Systems als Kernelmodul geladen werden. Empfohlen wird, den Modulnamen in der Datei [.filename]#/boot/loader.conf# anzufügen, so dass das Modul am Anfang des Bootvorgangs eingebunden wird. Die folgenden Abschnitte werden verschiedene MAC Module und ihre jeweiligen Vor- und Nachteile vorstellen.
Außerdem wird erklärt, wie sie in bestimmte Umgebungen eingearbeitet werden können. Einige Module unterstützen die Verwendung von `Labels`, das heißt Zugriffskontrolle durch Hinzufügen einer Kennzeichnung in der Art von "dieses ist erlaubt, jenes aber nicht". Eine Label-Konfigurationsdatei kontrolliert unter anderem, wie auf Dateien zugegriffen oder wie über das Netzwerk kommuniziert werden darf. Im vorangehenden Abschnitt wurde bereits erläutert, wie die Option `multilabel` auf Dateisysteme angewendet wird, um eine Zugriffskontrolle auf einzelne Dateien oder ganze Dateisysteme zu konfigurieren. Eine `single label` Konfiguration erzwingt ein einzelnes Label für das gesamte System. Daher wird die `tunefs`-Option `multilabel` genannt.

[[mac-seeotheruids]]
== Das MAC Modul seeotheruids

Modulname: [.filename]#mac_seeotheruids.ko#

Parameter in der Kernelkonfiguration: `options MAC_SEEOTHERUIDS`

Bootparameter: `mac_seeotheruids_load="YES"`

Das Modul man:mac_seeotheruids[4] erweitert die `sysctl`-Variablen `security.bsd.see_other_uids` und `security.bsd.see_other_gids`. Diese Optionen benötigen keine im Vorhinein zu setzenden Labels und können leicht durchschaubar mit den anderen MAC-Modulen zusammenarbeiten. Nachdem das Modul geladen wurde, können die folgenden `sysctl` Variablen verwendet werden:

* `security.mac.seeotheruids.enabled` dient zur Aktivierung des Moduls, zunächst mit den Standardeinstellungen. Diese verhindern, dass Nutzer Prozesse und Sockets sehen können, die ihnen nicht selbst gehören.
* `security.mac.seeotheruids.specificgid_enabled` kann eine spezifizierte Nutzergruppe von dieser Richtlinie ausnehmen. Die entsprechende Gruppe muß an den Parameter `security.mac.seeotheruids.specificgid=XXX` übergeben werden, wobei _XXX_ die ID der Gruppe ist, die von der Richtlinie ausgenommen werden soll.
* `security.mac.seeotheruids.primarygroup_enabled` kann verwendet werden, um eine spezifische, _primäre_ Nutzergruppe von der Richtlinie auszuschließen.
Dieser Parameter und `security.mac.seeotheruids.specificgid_enabled` schließen einander aus.

[[mac-bsdextended]]
== Das MAC Modul bsdextended

Modulname: [.filename]#mac_bsdextended.ko#

Parameter in der Kernelkonfiguration: `options MAC_BSDEXTENDED`

Bootparameter: `mac_bsdextended_load="YES"`

Das Modul man:mac_bsdextended[4] erstellt eine Firewall für das Dateisystem und ist eine Erweiterung des sonst üblichen Rechtemodells. Es erlaubt einem Administrator, einen Regelsatz zum Schutz von Dateien, Werkzeugen und Verzeichnissen in der Dateisystemhierarchie zu erstellen, der einer Firewall ähnelt. Sobald auf ein Objekt im Dateisystem zugegriffen werden soll, wird eine Liste von Regeln abgearbeitet, bis eine passende Regel gefunden wird oder die Liste zu Ende ist. Das Verhalten kann durch die Änderung des man:sysctl[8] Parameters `security.mac.bsdextended.firstmatch_enabled` eingestellt werden. Ähnlich wie bei den anderen Firewallmodulen in FreeBSD wird eine Datei erstellt, welche die Zugriffsregeln enthält. Diese wird beim Systemstart durch eine Variable in man:rc.conf[5] eingebunden.

Der Regelsatz kann mit dem Programm man:ugidfw[8] eingepflegt werden, welches eine Syntax bereitstellt, die der von man:ipfw[8] gleicht. Weitere Werkzeuge können auch selbst erstellt werden, indem die Funktionen der Bibliothek man:libugidfw[3] verwendet werden.

Bei der Arbeit mit diesem Modul ist äußerste Vorsicht geboten - falscher Gebrauch kann den Zugriff auf Teile des Dateisystems komplett unterbinden.

=== Beispiele

Nachdem das Modul man:mac_bsdextended[4] erfolgreich geladen wurde, zeigt das folgende Kommando die gegenwärtig aktiven Regeln an:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

Wie erwartet, sind keine Regeln definiert. Das bedeutet, dass auf alle Teile des Dateisystems zugegriffen werden kann.
Um eine Regel zu definieren, die jeden Zugriff durch Nutzer blockiert und nur die Rechte von `root` unangetastet läßt, muß lediglich dieses Kommando ausgeführt werden:

[source,shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

Das ist allerdings keine gute Idee, da nun allen Nutzern der Zugriff auf selbst die einfachsten Programme wie `ls` untersagt wird. Angemessener wäre etwas wie:

[source,shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

Diese Befehle bewirken, dass `user1` keinen Zugriff mehr auf Dateien und Programme hat, die `_user2_` gehören. Dies schließt das Auslesen von Verzeichniseinträgen ein.

Anstelle von `uid user1` könnte auch `not uid _user2_` als Parameter übergeben werden. Dies würde dieselben Einschränkungen für alle Nutzer bewirken anstatt nur für einen einzigen.

[NOTE]
====
`root` ist von diesen Einstellungen nicht betroffen.
====

Dies sollte als Überblick ausreichen, um zu verstehen, wie das Modul man:mac_bsdextended[4] helfen kann, das Dateisystem abzuschotten. Weitere Informationen bieten die Manpages man:mac_bsdextended[4] und man:ugidfw[8].

[[mac-ifoff]]
== Das MAC Modul ifoff

Modulname: [.filename]#mac_ifoff.ko#

Parameter für die Kernelkonfiguration: `options MAC_IFOFF`

Bootparameter: `mac_ifoff_load="YES"`

Das Modul man:mac_ifoff[4] ist einzig dazu da, Netzwerkschnittstellen im laufenden Betrieb zu deaktivieren oder zu verhindern, dass Netzwerkschnittstellen während der Bootphase gestartet werden. Dieses Modul benötigt für seinen Betrieb weder Labels, die auf dem System eingerichtet werden müssen, noch hat es Abhängigkeiten zu anderen MAC Modulen. Der größte Teil der Kontrolle geschieht über die im folgenden aufgelisteten `sysctl`-Parameter:

* `security.mac.ifoff.lo_enabled` schaltet den gesamten Netzwerkverkehr auf der Loopback-Schnittstelle man:lo[4] an bzw. aus.
* `security.mac.ifoff.bpfrecv_enabled` macht das Gleiche für den Berkeley Paket Filter man:bpf[4].
* `security.mac.ifoff.other_enabled` schaltet den Verkehr für alle anderen Netzwerkschnittstellen.

Die wahrscheinlich häufigste Nutzung von man:mac_ifoff[4] ist die Überwachung des Netzwerks in einer Umgebung, in der kein Netzwerkverkehr während des Bootvorgangs erlaubt werden soll. Eine andere mögliche Anwendung wäre ein Script, das mit Hilfe von package:security/aide[] automatisch alle Schnittstellen blockiert, sobald Dateien in geschützten Verzeichnissen angelegt oder verändert werden.

[[mac-portacl]]
== Das MAC Modul portacl

Modulname: [.filename]#mac_portacl.ko#

Parameter für die Kernelkonfiguration: `options MAC_PORTACL`

Bootparameter: `mac_portacl_load="YES"`

Mit Hilfe des Moduls man:mac_portacl[4] können die Anbindungen an die lokalen TCP und UDP Ports durch eine Vielzahl von `sysctl` Variablen beschränkt werden. Genauer gesagt ermöglicht man:mac_portacl[4] Nutzern ohne `root`-Rechte den Zugriff auf zu bestimmende privilegierte Ports, also solche innerhalb der ersten 1024. Sobald das Modul geladen wurde, ist die Richtlinie für alle Sockets verfügbar. Die folgenden Variablen können für die Konfiguration verwendet werden:

* `security.mac.portacl.enabled` schaltet die Anwendung der Richtlinie ein oder aus.
* `security.mac.portacl.port_high` gibt den höchsten Port an, der von der Richtlinie man:mac_portacl[4] betroffen sein soll.
* `security.mac.portacl.suser_exempt` nimmt, wenn es einen Wert ungleich Null zugewiesen bekommt, `root` von der Richtlinie aus.
* `security.mac.portacl.rules` enthält als Wert die eigentliche `mac_portacl` Richtlinie.

Die eigentliche Konfiguration der `mac_portacl` Richtlinie wird der `sysctl`-Variablen `security.mac.portacl.rules` als Zeichenkette der Form `rule[,rule,...]` übergeben. Jede einzelne Regel hat die Form `idtype:id:protocol:port`.
Der Parameter [parameter]#idtype# ist entweder `uid` oder `gid` und wird verwendet, um den Parameter [parameter]#id# als Nutzer-ID oder Gruppen-ID zu kennzeichnen. Der Parameter [parameter]#protocol# gibt an, ob die Regel für TCP oder UDP gelten soll (indem man den Wert auf `tcp` oder `udp` setzt). Und der letzte Parameter, [parameter]#port#, enthält die Nummer des Ports, auf den der angegebene Nutzer bzw. die angegebene Gruppe Zugriff erhalten soll.

[NOTE]
====
Da der Regelsatz direkt vom Kernel ausgewertet wird, können nur Zahlenwerte übergeben werden. Das heißt, Namen von Nutzern, Gruppen oder Dienstnamen aus der Datei [.filename]#/etc/services# funktionieren nicht.
====

Auf UNIX(R)-artigen Betriebssystemen sind die Ports kleiner 1024 privilegierten Prozessen vorbehalten, müssen also als `root` gestartet werden und weiterhin laufen. Damit man:mac_portacl[4] die Vergabe von Ports kleiner als 1024 an nicht privilegierte Prozesse übernehmen kann, muß diese UNIX(R) Standardeinstellung deaktiviert werden. Dazu ändert man die man:sysctl[8] Variablen `net.inet.ip.portrange.reservedlow` und `net.inet.ip.portrange.reservedhigh` auf den Wert "0".

Weiterführende Informationen entnehmen Sie bitte den unten aufgeführten Beispielen oder der Man-Page man:mac_portacl[4].

=== Beispiele

Die folgenden Beispiele sollten ein wenig Licht in die obige Diskussion bringen:

[source,shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0 net.inet.ip.portrange.reservedhigh=0
....

Zunächst bestimmen wir, dass man:mac_portacl[4] für alle privilegierten Ports gelten soll, und deaktivieren die normale UNIX(R)-Beschränkung.

[source,shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....

Da `root` von dieser Richtlinie nicht beeinträchtigt werden soll, setzen wir hier `security.mac.portacl.suser_exempt` auf einen Wert ungleich Null.
Das Modul man:mac_portacl[4] verhält sich damit so, wie man es von UNIX(R)-artigen Betriebssystemen gewohnt ist.

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

Nun erlauben wir dem Nutzer mit der UID 80, normalerweise dem Nutzer `www`, den Port 80 zu verwenden. Dadurch kann der Nutzer `www` einen Webserver betreiben, ohne dafür mit `root`-Privilegien ausgestattet zu sein.

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

Hier wird dem Nutzer mit der UID 1001 erlaubt, die TCP Ports 110 ("pop3") und 995 ("pop3s") zu verwenden. Dadurch kann dieser Nutzer einen Server starten, der Verbindungen an diesen beiden Ports annehmen kann.

[[mac-partition]]
== Das MAC Modul partition

Modulname: [.filename]#mac_partition.ko#

Parameter für die Kernelkonfiguration: `options MAC_PARTITION`

Bootparameter: `mac_partition_load="YES"`

Die Richtlinie man:mac_partition[4] setzt Prozesse in spezielle "Partitionen", entsprechend dem zugewiesenen MAC Label. Man kann sich das vorstellen wie eine spezielle Art man:jail[8], auch wenn das noch kein wirklich guter Vergleich ist. Es wird empfohlen, dieses Modul durch einen Eintrag in man:loader.conf[5] zu aktivieren, so dass die Richtlinie während des Bootvorganges eingebunden wird.

Der Großteil der Konfiguration geschieht mit dem Kommando man:setpmac[8], wie gleich erklärt wird. Außerdem gibt es folgenden `sysctl` Parameter für diese Richtlinie:

* `security.mac.partition.enabled` erzwingt die Verwendung von MAC Prozeß-Partitionen.

Sobald diese Richtlinie aktiv ist, sehen Nutzer nur noch ihre eigenen Prozesse und alle anderen Prozesse, die ebenfalls derselben Prozeß-Partition zugeordnet sind. Sie können jedoch nicht auf Prozesse oder Werkzeuge außerhalb des Anwendungsbereichs dieser Partition zugreifen.
Das bedeutet unter anderem, dass ein Nutzer, der einer Klasse `insecure` zugeordnet ist, nicht auf das Kommando `top` zugreifen kann - wie auch auf viele andere Befehle, die einen eigenen Prozeß erzeugen.

Um einen Befehl einer Prozeß-Partition zuzuordnen, muß dieser durch das Kommando `setpmac` mit einem Label versehen werden:

[source,shell]
....
# setpmac partition/13 top
....

Diese Zeile fügt das Kommando `top` dem Labelsatz für Nutzer der Klasse `insecure` hinzu, sofern die Partition 13 mit der Klasse `insecure` übereinstimmt. Beachten Sie, dass alle Prozesse, die von Nutzern dieser Klasse erzeugt werden, das Label `partition/13` erhalten und dieses auch nicht durch den Nutzer geändert werden kann.

=== Beispiele

Der folgende Befehl listet die vergebenen Label für Prozeß-Partitionen und die laufenden Prozesse auf:

[source,shell]
....
# ps Zax
....

Das nächste Kommando liefert das Label der Prozeß-Partition eines anderen Nutzers `trhodes` und dessen gegenwärtig laufende Prozesse zurück:

[source,shell]
....
# ps -ZU trhodes
....

[NOTE]
====
Jeder Nutzer kann die Prozesse in der Prozeß-Partition von `root` betrachten, solange nicht die Richtlinie man:mac_seeotheruids[4] geladen wurde.
====

Eine ausgefeilte Umsetzung dieser Richtlinie deaktiviert alle Dienste in [.filename]#/etc/rc.conf# und startet diese dann später durch ein Skript, das jedem Dienst das passende Label zuordnet.

[NOTE]
====
Die folgenden Richtlinien verwenden Zahlenwerte anstatt der drei Standardlabels. Diese Optionen und ihre Grenzen werden in den zugehörigen Manpages genauer erklärt.
====

[[mac-mls]]
== Das MAC Modul Multi-Level Security

Modulname: [.filename]#mac_mls.ko#

Parameter für die Kernelkonfiguration: `options MAC_MLS`

Bootparameter: `mac_mls_load="YES"`

Die Richtlinie man:mac_mls[4] kontrolliert die Zugriffe zwischen Subjekten und Objekten, indem sie den Informationsfluß strengen Regeln unterwirft.
In MLS Umgebungen wird jedem Subjekt oder Objekt ein "Freigabe"-Level zugeordnet, und diese werden wiederum zu einzelnen Verbünden zusammengefaßt. Da diese Freigabe- oder Anfälligkeits-Level Zahlen größer 6000 erreichen können, ist es für jeden Systemadministrator eine undankbare Aufgabe, jede Entität von Grund auf zu konfigurieren. Zum Glück gibt es drei vorgefertigte Labels, die in der Richtlinie zur Anwendung bereitstehen.

Diese drei Labels heißen `mls/low`, `mls/equal` und `mls/high`. Da sie in der Manpage man:mac_mls[4] ausführlich beschrieben werden, gibt es hier nur einen kurzen Abriß:

* Das Label `mls/low` ist eine niedrige Einstellung, die von allen anderen dominiert werden darf. Alles, was mit `mls/low` versehen wird, hat ein niedriges Freigabe-Level und darf auf keine Informationen zugreifen, denen ein höheres Freigabe-Level zugeordnet wurde. Einem Objekt mit diesem Label kann außerdem keine Information durch ein Objekt höherer Freigabe übergeben werden, es kann also auch nicht durch solche Objekte editiert oder überschrieben werden.
* Das Label `mls/equal` wird an Objekte vergeben, die von dieser Richtlinie ausgenommen werden sollen.
* Das Label `mls/high` verkörpert das höchstmögliche Freigabe-Level. Objekte, denen dieses Label zugeordnet wird, dominieren alle anderen Objekte des Systems. Trotzdem können sie Objekten mit einem niedrigeren Freigabe-Level keine Informationen zuspielen.

MLS bietet:

* Eine hierarchische Sicherheitsschicht und Zuordnung nichthierarchischer Kategorien;
* Feste Regeln: kein "Read-Up", kein "Write-Down" (ein Subjekt kann nur Objekte gleicher oder _niedrigerer_ Stufe lesen, und es kann nur Objekte gleicher oder _höherer_ Stufe schreiben);
* Geheimhaltung (indem unangemessene Offenlegung von Daten verhindert wird);
* Eine Basis zum Entwerfen von Systemen, die Daten verschiedener Vertraulichkeitsebenen gleichzeitig handhaben sollen (ohne dass geheime und vertrauliche Informationen untereinander ausgetauscht werden können).
Nachfolgend werden die `sysctl`-Variablen vorgestellt, die für die Einrichtung spezieller Dienste und Schnittstellen vorhanden sind:

* `security.mac.mls.enabled` schaltet die Richtlinie MLS ein (oder aus).
* `security.mac.mls.ptys_equal` sorgt dafür, dass während der Initialisierung alle man:pty[4]-Geräte als `mls/equal` gekennzeichnet werden.
* `security.mac.mls.revocation_enabled` sorgt dafür, dass die Zugriffsrechte von Objekten wieder zurückgesetzt werden, nachdem deren Label vorübergehend auf ein niedrigeres Freigabe-Level geändert wurde.
* `security.mac.mls.max_compartments` gibt die maximale Anzahl von Verbünden an. Im Prinzip ist es die höchste Nummer eines Verbundes auf dem System.

Um die Labels der MLS Richtlinie zu bearbeiten, verwendet man man:setfmac[8]. Um ein Objekt zu kennzeichnen, benutzen Sie folgendes Kommando:

[source,shell]
....
# setfmac mls/5 test
....

Um das MLS-Label der Datei [.filename]#test# auszulesen, verwenden Sie dieses Kommando:

[source,shell]
....
# getfmac test
....

Dies ist eine Zusammenstellung der Merkmale von [.filename]#test#. Ein anderer Ansatz ist, für diese Richtlinie eine Konfigurationsdatei in [.filename]#/etc# abzulegen, die alle Informationen enthält und mit der dann das Kommando `setfmac` gefüttert wird. Diese Vorgehensweise wird erklärt, nachdem alle Richtlinien vorgestellt wurden.

=== Verbindliche Vertraulichkeit in der Planungsphase

Mit dem Richtlinienmodul `Multi-Level Security` bereitet sich ein Administrator darauf vor, den Fluß vertraulicher Informationen zu kontrollieren. Beim Starten der Richtlinie ist immer `mls/low` voreingestellt - alles kann auf alles zugreifen. Der Administrator ändert dies während der eigentlichen Konfiguration, indem er die Vertraulichkeit bestimmter Objekte erhöht.

Jenseits der drei Grundeinstellungen des Labels kann der Administrator einzelne Nutzer oder Nutzergruppen nach Bedarf zusammenschließen und den Informationsaustausch zwischen diesen gestatten oder unterbinden.
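Ein einfacher Ablauf dafür könnte so aussehen: Ein Projektverzeichnis (der Pfad hier ist rein hypothetisch) wird mit `setfmac` höher eingestuft, und die Einstellung wird anschließend mit `getfmac` überprüft:

[source,shell]
....
# setfmac mls/high /projects/geheim
# getfmac /projects/geheim
/projects/geheim: mls/high
....

Subjekte mit einem niedrigeren Freigabe-Level können auf derart gekennzeichnete Objekte anschließend nicht mehr lesend zugreifen.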
Es ist sicher eine Vereinfachung, die Freigabe-Level mit Begriffen wie `vertraulich`, `geheim` oder `streng geheim` zu bezeichnen. Einige Administratoren erstellen einfach verschiedene Gruppen auf der Ebene von gegenwärtigen Projekten. Ungeachtet der Herangehensweise bei der Klassifizierung muß ein gut durchdachter Plan existieren, bevor eine derart einengende Richtlinie umgesetzt wird.

Exemplarisch für die Anwendung dieses Moduls bzw. dieser Richtlinie seien angeführt:

* Ein E-Commerce Webserver
* Ein Dateiserver, der vertrauliche Informationen einer Firma oder eines Konzerns speichert
* Umgebungen in Finanzeinrichtungen

Der unsinnigste Einsatzort für diese Richtlinie wäre ein Arbeitsplatzrechner mit nur zwei oder drei Benutzern.

[[mac-biba]]
== Das MAC Modul Biba

Modulname: [.filename]#mac_biba.ko#

Parameter für die Kernelkonfiguration: `options MAC_BIBA`

Bootparameter: `mac_biba_load="YES"`

Das Modul man:mac_biba[4] lädt die MAC Biba Richtlinie. Diese ähnelt stark der MLS Richtlinie, nur dass die Regeln für den Informationsfluß gewissermaßen umgekehrt sind. Es wird in diesem Fall der absteigende Fluß sicherheitskritischer Information geregelt, während die MLS Richtlinie den aufsteigenden Fluß regelt. In gewissem Sinne treffen dieses und das vorangegangene Unterkapitel also auf beide Richtlinien zu.

In einer Biba-Umgebung wird jedem Subjekt und jedem Objekt ein "Integritäts"-Label zugeordnet. Diese Labels sind in hierarchischen Klassen und nicht-hierarchischen Komponenten geordnet. Je höher die Klasse, um so höher die Integrität.

Die unterstützten Labels heißen `biba/low`, `biba/equal` und `biba/high`. Sie werden im Folgenden erklärt:

* `biba/low` ist die niedrigste Stufe der Integrität, die einem Objekt verliehen werden kann. Wenn sie einem Objekt oder Subjekt zugeordnet wird, kann dieses auf Objekte oder Subjekte, die mit `biba/high` markiert wurden, zwar lesend zugreifen, nicht jedoch schreibend.
* Das Label `biba/equal` ist, wie der aufmerksame Leser sicherlich schon ahnt, für die Ausnahmen dieser Richtlinie gedacht und sollte nur den diesen Ausnahmen entsprechenden Objekten verliehen werden.
* Mit `biba/high` markierte Subjekte und Objekte können Objekte niedrigerer Stufe schreiben, nicht jedoch lesen. Es wird empfohlen, dieses Label an Objekte zu vergeben, die sich auf die Integrität des gesamten Systems auswirken.

Biba stellt bereit:

* Hierarchische Integritätsstufen mit einem Satz nichthierarchischer Integritätskategorien;
* Festgeschriebene Regeln: kein "Write-Up", kein "Read-Down" (der Gegensatz zu MLS - ein Subjekt erhält schreibenden Zugriff auf Objekte gleicher oder geringerer Stufe, aber nicht bei höherer, und lesenden Zugriff bei gleicher oder höherer Stufe, aber nicht bei niedrigerer);
* Integrität (es wird die Echtheit der Daten gewährleistet, indem unangemessene Veränderungen verhindert werden);
* Eine Abstufung der Gewährleistung (im Gegensatz zu MLS, bei der eine Abstufung der Vertraulichkeit vorgenommen wird).

Folgende `sysctl` Parameter werden zur Nutzung der Biba-Richtlinie angeboten:

* `security.mac.biba.enabled` zum Aktivieren/Deaktivieren der Richtlinie auf dem Zielsystem.
* `security.mac.biba.ptys_equal` wird verwendet, um die Biba-Richtlinie auf der man:pty[4]-Schnittstelle zu deaktivieren.
* `security.mac.biba.revocation_enabled` erzwingt das Zurücksetzen des Labels, falls dieses zeitweise geändert wurde, um ein Subjekt zu dominieren.

Um Einstellungen der Biba Richtlinie für Systemobjekte zu verändern, werden die Befehle `setfmac` und `getfmac` verwendet:

[source,shell]
....
# setfmac biba/low test
# getfmac test
test: biba/low
....

=== Verbindliche Integrität in der Planungsphase

Integrität garantiert, im Unterschied zu Sensitivität, dass Informationen nur durch vertraute Parteien verändert werden können. Dies schließt Informationen ein, die zwischen Subjekten, zwischen Objekten oder zwischen beiden ausgetauscht werden.
Durch Integrität wird gesichert, dass Nutzer nur Informationen verändern, oder gar nur lesen können, die sie explizit benötigen. Das Modul man:mac_biba[4] eröffnet einem Administrator die Möglichkeit zu bestimmen, welche Dateien oder Programme ein Nutzer oder eine Nutzergruppe sehen bzw. aufrufen darf. Gleichzeitig kann er zusichern, dass dieselben Programme und Dateien frei von Bedrohungen sind und das System deren Echtheit für diesen Nutzer oder diese Nutzergruppe gewährleistet.

Während der anfänglichen Phase der Planung muß der Administrator vorbereitet sein, Nutzer in Klassen, Stufen und Bereiche einzuteilen. Der Zugriff auf Dateien und insbesondere auf Programme wird sowohl vor als auch nach deren Start verhindert. Das System selbst erhält als Voreinstellung das Label `biba/high`, sobald das Modul aktiviert wird - und es liegt allein am Administrator, die verschiedenen Klassen und Stufen für die einzelnen Nutzer zu konfigurieren. Anstatt mit Freigaben zu arbeiten, wie weiter oben gezeigt wurde, könnte man auch Überbegriffe für Projekte oder Systemkomponenten entwerfen. Zum Beispiel könnte man ausschließlich Entwicklern den Vollzugriff auf Quellcode, Compiler und Entwicklungswerkzeuge gewähren, während man andere Nutzer in Kategorien wie Tester, Designer oder einfach nur "allgemeiner Nutzer" zusammenfaßt, die für diese Bereiche lediglich lesenden Zugriff erhalten sollen.

In ihrer Grundausrichtung ist ein Subjekt niedrigerer Integrität unfähig, ein Subjekt höherer Integrität zu verändern, und ein Subjekt höherer Integrität kann ein Subjekt niedrigerer Integrität weder beobachten noch lesen. Wenn man ein Label für die niedrigstmögliche Klasse erstellt, kann man diese allen Subjekten verwehren. Einige weitsichtig eingerichtete Umgebungen, die diese Richtlinie verwenden, sind eingeschränkte Webserver, Entwicklungs- oder Test-Rechner oder Quellcode-Sammlungen.
Wenig sinnvoll ist diese Richtlinie auf einer Arbeitsstation oder auf Rechnern, die als Router oder Firewall verwendet werden.

[[mac-lomac]]
== Das MAC Modul LOMAC

Modulname: [.filename]#mac_lomac.ko#

Parameter für die Kernelkonfiguration: `options MAC_LOMAC`

Bootparameter: `mac_lomac_load="YES"`

Anders als die Biba Richtlinie erlaubt die man:mac_lomac[4] Richtlinie den Zugriff auf Objekte niedrigerer Integrität nur, nachdem das Integritätslevel gesenkt wurde. Dadurch wird eine Störung der Integritätsregeln verhindert.

Die MAC Version der "Low-Watermark" Richtlinie, die nicht mit der älteren -Implementierung verwechselt werden darf, arbeitet fast genauso wie Biba. Anders ist, dass hier "schwebende" Label verwendet werden, die ein Herunterstufen von Subjekten durch Hilfsverbünde ermöglichen. Dieser zweite Verbund wird in der Form `[auxgrade]` angegeben und sollte in etwa aussehen wie `lomac/10[2]`, wobei die Ziffer zwei (2) hier den Hilfsverbund abbildet.

Die MAC Richtlinie `LOMAC` beruht auf einer durchgängigen Etikettierung aller Systemobjekte mit Integritätslabeln, die Subjekten das Lesen von Objekten niedriger Integrität gestatten und dann das Label des Subjektes herunterstufen - um zukünftige Schreibvorgänge auf Objekte hoher Integrität zu unterbinden. Dies ist die Funktion der Option `[auxgrade]`, die eben vorgestellt wurde. Durch sie erhält diese Richtlinie eine bessere Kompatibilität und die Initialisierung ist weniger aufwändig als bei der Richtlinie Biba.

=== Beispiele

Wie schon bei den Richtlinien Biba und MLS werden die Befehle `setfmac` und `setpmac` verwendet, um die Labels an den Systemobjekten zu setzen:

[source,shell]
....
# setfmac lomac/high[low] /usr/home/trhodes
# getfmac /usr/home/trhodes
/usr/home/trhodes: lomac/high[low]
....

Beachten Sie, dass hier der Hilfswert auf `low` gesetzt wurde - dieses Leistungsmerkmal ist nur in der MAC `LOMAC` Richtlinie enthalten.
[[mac-implementing]]
== Beispiel 1: Nagios in einer MAC Jail

Die folgende Demonstration setzt eine sichere Umgebung mithilfe verschiedener MAC Module und sorgfältig konfigurierter Richtlinien um. Es handelt sich jedoch nur um einen Test und sollte nicht als Antwort auf jedes Sicherheitsproblem gesehen werden. Eine Richtlinie nur umzusetzen und dann einfach laufen zu lassen, funktioniert nie und kann eine echte Arbeitsumgebung in eine Katastrophe stürzen.

Bevor es losgeht, muß jedes Dateisystem mit der Option `multilabel`, wie weiter oben beschrieben, markiert werden. Dies nicht zu tun, führt zu Fehlern. Außerdem müssen die Ports package:net-mngt/nagios-plugins[], package:net-mngt/nagios[] und package:www/apache22[] installiert und konfiguriert sein, so dass sie ordentlich laufen.

=== Erstellen einer Nutzerklasse `insecure`

Beginnen wir die Prozedur mit dem Hinzufügen einer Nutzerklasse in der Datei [.filename]#/etc/login.conf#:

[.programlisting]
....
insecure:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/shared/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=biba/10(10-10):
....

Zusätzlich fügen wir beim Standardnutzer folgende Zeile hinzu:

[.programlisting]
....
:label=biba/high:
....

Anschließend muß die Datenbank neu erstellt werden:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

=== Boot-Konfiguration

Starten Sie den Rechner noch nicht neu. Fügen Sie zunächst noch die folgenden Zeilen in die Datei [.filename]#/boot/loader.conf# ein, damit die benötigten Module während des Systemstarts geladen werden:

[.programlisting]
....
mac_biba_load="YES"
mac_seeotheruids_load="YES"
....

=== Nutzer einrichten

Ordnen Sie den Superuser `root` der Klasse `default` zu:

[source,shell]
....
# pw usermod root -L default
....

Alle Nutzerkonten, die weder `root` noch Systemkonten sind, brauchen nun eine Loginklasse, da sie sonst keinen Zugriff auf übliche Befehle wie bspw. man:vi[1] erhalten. Das folgende `sh` Skript wird diese Aufgabe erledigen:

[source,shell]
....
# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \
	/etc/passwd`; do pw usermod $x -L default; done;
....

Verschieben Sie die Nutzer `nagios` und `www` in die Klasse `insecure`:

[source,shell]
....
# pw usermod nagios -L insecure
....

[source,shell]
....
# pw usermod www -L insecure
....

=== Die Kontextdatei erstellen

Nun muß eine Kontextdatei erstellt werden. Die folgende Beispieldatei soll dazu in [.filename]#/etc/policy.contexts# gespeichert werden:

[.programlisting]
....
# This is the default BIBA policy for this system.

# System:
/var/run		biba/equal
/var/run/*		biba/equal
/dev			biba/equal
/dev/*			biba/equal
/var			biba/equal
/var/spool		biba/equal
/var/spool/*		biba/equal
/var/log		biba/equal
/var/log/*		biba/equal
/tmp			biba/equal
/tmp/*			biba/equal
/var/tmp		biba/equal
/var/tmp/*		biba/equal
/var/spool/mqueue	biba/equal
/var/spool/clientmqueue	biba/equal

# For Nagios:
/usr/local/etc/nagios	biba/10
/usr/local/etc/nagios/*	biba/10
/var/spool/nagios	biba/10
/var/spool/nagios/*	biba/10

# For apache
/usr/local/etc/apache	biba/10
/usr/local/etc/apache/*	biba/10
....

Die Richtlinie erzwingt Sicherheit, indem der Informationsfluß Einschränkungen unterworfen wird. In der vorliegenden Konfiguration kann kein Nutzer, weder `root` noch andere, auf Nagios zugreifen. Konfigurationsdateien und die Prozesse, die Teil von Nagios sind, werden durch unsere MAC vollständig abgegrenzt.

Die Kontextdatei kann nun vom System eingelesen werden, indem folgender Befehl ausgeführt wird:

[source,shell]
....
# setfmac -ef /etc/policy.contexts /
....

[NOTE]
====
Das obenstehende Dateisystem-Layout kann, je nach Umgebung, sehr unterschiedlich aussehen. Außerdem muß der Befehl auf jedem einzelnen Dateisystem ausgeführt werden.
====

In die Datei [.filename]#/etc/mac.conf# müssen nun noch diese Änderungen eingetragen werden:

[.programlisting]
....
default_labels file ?biba
default_labels ifnet ?biba
default_labels process ?biba
default_labels socket ?biba
....

=== Netzwerke einbinden

Tragen Sie die folgende Zeile in die Datei [.filename]#/boot/loader.conf# ein:

[.programlisting]
....
security.mac.biba.trust_all_interfaces=1
....

Und das Folgende gehört in der Datei [.filename]#rc.conf# zu den Optionen für die Netzwerkkarte. Falls die Netzwerkverbindung(en) via DHCP konfiguriert werden, muß man dies nach jedem Systemstart eigenhändig nachtragen:

[.programlisting]
....
maclabel biba/equal
....

=== Testen der Konfiguration

Versichern Sie sich, dass der Webserver und Nagios nicht automatisch gestartet werden, und starten Sie den Rechner neu. Prüfen Sie nun, ob `root` wirklich keinen Zugriff auf die Dateien im Konfigurationsverzeichnis von Nagios hat. Wenn `root` den Befehl man:ls[1] auf [.filename]#/var/spool/nagios# ausführen kann, ist irgendetwas schiefgelaufen. Es sollte ein `permission denied` Fehler ausgegeben werden.

Wenn alles gut aussieht, können Nagios, Apache und Sendmail gestartet werden - allerdings auf eine Weise, die unserer Richtlinie gerecht wird, zum Beispiel durch die folgenden Kommandos:

[source,shell]
....
# cd /etc/mail && make stop && \
setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \
setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart
....

Versichern Sie sich lieber doppelt, dass alles ordentlich läuft. Wenn nicht, prüfen Sie die Logs und Fehlermeldungen.
Verwenden Sie das man:sysctl[8] Werkzeug, um die Sicherheitsrichtlinie zu deaktivieren, und versuchen Sie dann, alles noch einmal zu starten.

[NOTE]
====
Der Superuser kann den Vollzug der Richtlinie schalten und die Konfiguration ohne Furcht verändern. Folgender Befehl stuft eine neu gestartete Shell herunter:

[source,shell]
....
# setpmac biba/10 csh
....

Um dies zu vermeiden, werden die Nutzer durch man:login.conf[5] eingeschränkt. Wenn man:setpmac[8] einen Befehl außerhalb der definierten Schranken ausführen soll, wird ein Fehler zurückgeliefert. In so einem Fall muß `root` auf `biba/high(high-high)` gesetzt werden.
====

[[mac-userlocked]]
== Beispiel 2: User Lock Down

Grundlage dieses Beispiels ist ein relativ kleines System zur Datenspeicherung mit weniger als 50 Benutzern. Diese haben die Möglichkeit, sich einzuloggen, und dürfen nicht nur Daten speichern, sondern auch auf andere Ressourcen zugreifen.

Die Richtlinien man:mac_bsdextended[4] und man:mac_seeotheruids[4] können gleichzeitig eingesetzt werden. Zusammen kann man mit ihnen nicht nur den Zugriff auf Systemobjekte einschränken, sondern auch Nutzerprozesse verstecken.

Beginnen Sie, indem Sie die folgende Zeile in die Datei [.filename]#/boot/loader.conf# eintragen:

[.programlisting]
....
mac_seeotheruids_load="YES"
....

Die Richtlinie man:mac_bsdextended[4] wird durch den anschließenden Eintrag in [.filename]#/etc/rc.conf# aktiviert:

[.programlisting]
....
ugidfw_enable="YES"
....

Die Standardregeln, welche in [.filename]#/etc/rc.bsdextended# gespeichert sind, werden beim Systemstart geladen. Sie müssen aber noch angepaßt werden. Da dieser Computer nur den Nutzern dienen soll und keine weiteren Dienste gestartet werden, kann alles bis auf die beiden letzten Zeilen auskommentiert werden. Das sorgt dafür, dass jeder Nutzer seine eigenen Systemobjekte erhält.

Nun fügen wir alle benötigten Nutzer auf der Maschine hinzu und starten neu.
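Mit der weiter oben gezeigten man:ugidfw[8] Syntax ließe sich der Regelsatz bei Bedarf auch von Hand ergänzen, etwa um einem bestimmten Nutzer zusätzlich den Zugriff auf die Objekte einer Gruppe zu verwehren (Nutzer- und Gruppenname sind hier rein hypothetisch):

[source,shell]
....
# ugidfw set 10 subject uid kunde1 object gid mitarbeiter mode n
....

Die Regelnummer 10 ist dabei nur ein Beispiel; sie muß lediglich einen freien Platz im Regelsatz bezeichnen.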
Zum Testen der Einstellungen loggen Sie sich parallel zweimal mit unterschiedlichen Nutzernamen ein und starten Sie das Kommando `ps aux`. Sie werden feststellen, dass die Prozesse des jeweils anderen Nutzers nicht angezeigt werden. Versuchen Sie, man:ls[1] auf das Heimatverzeichnis eines anderen Nutzers anzuwenden. Auch dieser Versuch wird fehlschlagen. Solange nicht die speziellen `sysctl`-Variablen geändert wurden, hat der Superuser noch vollen Zugriff. Sobald auch diese Einstellungen angepaßt wurden, führen Sie ruhig auch den obigen Test als `root` aus. [NOTE] ==== Wenn ein neuer Benutzer hinzugefügt wird, ist für diesen zunächst keine man:mac_bsdextended[4] Regel im Regelsatz vorhanden. Schnelle Abhilfe schafft hier, einfach das Kernelmodul mit man:kldunload[8] zu entladen und mit man:kldload[8] erneut einzubinden. ==== [[mac-troubleshoot]] == Fehler im MAC beheben Während der Entwicklung des Frameworks haben einige Nutzer auf Probleme hingewiesen. Einige davon werden hier aufgeführt: === Die Option `multilabel` greift nicht auf der [.filename]#/#-Partition Es scheint, dass etwa jedem fünfzigsten Nutzer dieses Problem widerfährt. Und in der Tat - auch wir kennen es aus der Entwicklung. Genauere Untersuchungen dieses "Bugs" machten uns glauben, dass es sich entweder um einen Fehler in der Dokumentation oder um eine fehlerhafte Interpretation der Dokumentation handelt. Warum auch immer dieser Fehler auftritt - er kann mit folgender Prozedur behoben werden: [.procedure] . Öffnen Sie die Datei [.filename]#/etc/fstab# und setzen Sie die Rootpartition auf `ro` wie "read-only". . Starten Sie in den Einzelnutzermodus. . Rufen Sie `tunefs -l enable` für [.filename]#/# auf. . Starten Sie in den Mehrbenutzermodus. . Führen Sie `mount -urw` [.filename]#/# aus und ändern Sie anschließend in der Datei [.filename]#/etc/fstab# die Option `ro` zurück in `rw`. Starten Sie das System noch einmal neu. .
Achten Sie besonders auf die Ausgabe von `mount`, um sich zu versichern, dass `multilabel` korrekt für das Root-Dateisystem gesetzt wurde. === Mit der aktivierten MAC kann ich keinen X11 Server starten Dies kann durch die Richtlinie `partition` oder durch eine fehlerhafte Verwendung einer Richtlinie, die mit Labels arbeitet, verursacht werden. Zum Debuggen versuchen Sie Folgendes: [.procedure] . Schauen Sie sich die Fehlermeldungen genau an. Wenn der Nutzer einer `insecure` Klasse angehört, ist wahrscheinlich die Richtlinie `partition` die Ursache. Versuchen Sie, die Nutzerklasse auf `default` zu stellen und danach die Datenbank mit `cap_mkdb` zu erneuern. Wenn das Problem dadurch nicht gelöst wird, gehen Sie weiter zu Schritt 2. . Gehen Sie die Label-Richtlinien Schritt für Schritt noch einmal durch. Achten Sie darauf, dass für den Nutzer, bei dem das Problem auftritt, für X11 und das Verzeichnis [.filename]#/dev# alle Einstellungen korrekt sind. . Falls all dies nicht helfen sollte, senden Sie die Fehlermeldung und eine Beschreibung Ihrer Arbeitsumgebung an die (englischsprachige) TrustedBSD Diskussionsliste auf der http://www.TrustedBSD.org[TrustedBSD] Webseite oder an die {freebsd-questions} Mailingliste. === Error: cannot stat [.filename]#.login_conf# Wenn ich versuche, von `root` zu einem anderen Nutzer des Systems zu wechseln, erhalte ich die Fehlermeldung `_secure_path: unable to stat .login_conf`. Diese Meldung wird gewöhnlich ausgegeben, wenn der Nutzer eine höhere Label-Einstellung hat als der, dessen Identität man annehmen möchte. Ausführlich: Wenn ein Nutzer `joe` als `biba/low` gelabelt wurde, kann `root`, der `biba/high` als Voreinstellung trägt, das Heimatverzeichnis von `joe` nicht einsehen. Das passiert unabhängig davon, ob `root` vorher mit `su` die Identität von `joe` angenommen hat oder nicht, da das Label sich nicht ändert.
Hier haben wir also einen Fall, in dem das Gewährleistungsmodell von Biba verhindert, dass der Superuser Objekte niedrigerer Integrität betrachten kann. === Der Nutzer `root` ist kaputt! Im normalen oder sogar im Einzelbenutzermodus wird `root` nicht anerkannt. Das Kommando `whoami` liefert 0 (null) und `su` liefert `who are you?` zurück. Was geht da vor? Das kann passieren, wenn eine Label-Richtlinie ausgeschaltet wird - entweder durch man:sysctl[8] oder wenn das Richtlinienmodul entladen wurde. Wenn eine Richtlinie dauerhaft oder auch nur vorübergehend deaktiviert wird, muß die Befähigungsdatenbank neu konfiguriert werden, d.h. die `label` Option muß entfernt werden. Überprüfen Sie, ob alle `label` Einträge aus der Datei [.filename]#/etc/login.conf# entfernt wurden, und bauen Sie die Datenbank mit `cap_mkdb` neu. Dieser Fehler kann auch auftreten, wenn eine Richtlinie den Zugriff auf die Datei [.filename]#master.passwd# einschränkt. Normalerweise passiert das nur, wenn ein Administrator ein Label an diese Datei vergibt, das mit der allgemeingültigen Richtlinie, die das System verwendet, in Konflikt steht. In solchen Fällen werden die Nutzerinformationen vom System ausgelesen und jeder weitere Zugriff wird blockiert, sobald das neue Label greift. Wenn man die Richtlinie via man:sysctl[8] ausschaltet, sollte der Zugriff vorerst wieder funktionieren. diff --git a/documentation/content/de/books/handbook/network-servers/_index.adoc b/documentation/content/de/books/handbook/network-servers/_index.adoc index e3d04167eb..875a01cbd0 100644 --- a/documentation/content/de/books/handbook/network-servers/_index.adoc +++ b/documentation/content/de/books/handbook/network-servers/_index.adoc @@ -1,2478 +1,2477 @@ --- title: Kapitel 29. Netzwerkserver part: Teil IV.
Netzwerke prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 34 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Netzwerkserver :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 29 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Übersicht Dieses Kapitel beschreibt einige der häufiger verwendeten Netzwerkdienste auf UNIX(R)-Systemen. Dazu zählen Installation und Konfiguration sowie Test und Wartung verschiedener Netzwerkdienste. Zusätzlich sind im ganzen Kapitel Beispielkonfigurationen als Referenz enthalten. Nachdem Sie dieses Kapitel gelesen haben, werden Sie: * Den inetd-Daemon konfigurieren können. * Wissen, wie das Network File System (NFS) eingerichtet wird. * Einen Network Information Server (NIS) einrichten können, um damit Benutzerkonten im Netzwerk zu verteilen. * Wissen, wie Sie FreeBSD einrichten, um als LDAP-Server oder -Client zu agieren. * Rechner durch Nutzung von DHCP automatisch für ein Netzwerk konfigurieren können. * In der Lage sein, einen Domain Name Server (DNS) einzurichten. * Den Apache HTTP-Server konfigurieren können. * Wissen, wie man einen File Transfer Protocol (FTP)-Server einrichtet.
* Mit Samba einen Datei- und Druckserver für Windows(R)-Clients konfigurieren können. * Unter Nutzung des NTP-Protokolls Datum und Uhrzeit synchronisieren sowie einen Zeitserver installieren können. * Wissen, wie iSCSI eingerichtet wird. Dieses Kapitel setzt folgende Grundkenntnisse voraus: * [.filename]#/etc/rc#-Skripte. * Netzwerkterminologie. * Installation zusätzlicher Software von Drittanbietern (crossref:ports[ports,Installieren von Anwendungen: Pakete und Ports]). [[network-inetd]] == Der inetd "Super-Server" Der man:inetd[8]-Daemon wird manchmal auch als "Internet Super-Server" bezeichnet, weil er Verbindungen für viele Dienste verwaltet. Anstatt mehrere Anwendungen zu starten, muss nur der inetd-Dienst gestartet werden. Wenn eine Verbindung für einen Dienst eintrifft, der von inetd verwaltet wird, bestimmt inetd, welches Programm für die eingetroffene Verbindung zuständig ist, aktiviert den entsprechenden Prozess und reicht den Socket an ihn weiter. Der Einsatz von inetd anstelle vieler einzelner Daemonen kann auf nicht komplett ausgelasteten Servern zu einer Verringerung der Systemlast führen. inetd wird vor allem dazu verwendet, andere Daemonen zu aktivieren, einige Protokolle werden aber auch intern verwaltet. Dazu gehören chargen, auth, time, echo, discard sowie daytime. Dieser Abschnitt beschreibt die Konfiguration von inetd. [[network-inetd-conf]] === Konfigurationsdatei Die Konfiguration von inetd erfolgt über [.filename]#/etc/inetd.conf#. Jede Zeile dieser Datei repräsentiert eine Anwendung, die von inetd gestartet werden kann. In der Voreinstellung beginnt jede Zeile mit einem Kommentar (`#`), was bedeutet, dass inetd keine Verbindungen für Anwendungen akzeptiert. Entfernen Sie den Kommentar am Anfang der Zeile, damit inetd Verbindungen für diese Anwendung entgegennimmt. Nachdem Sie die Änderungen gespeichert haben, fügen Sie folgende Zeile in [.filename]#/etc/rc.conf# ein, damit inetd beim Booten automatisch gestartet wird: [.programlisting] ....
inetd_enable="YES" .... Starten Sie jetzt inetd, so dass er Verbindungen für die von Ihnen konfigurierten Dienste entgegennimmt: [source,shell] .... # service inetd start .... Sobald inetd gestartet ist, muss der Dienst benachrichtigt werden, wenn eine Änderung in [.filename]#/etc/inetd.conf# gemacht wird: [[network-inetd-reread]] .Die Konfigurationsdatei von inetd neu einlesen [example] ==== [source,shell] .... # service inetd reload .... ==== Normalerweise müssen Sie lediglich den Kommentar vor der Anwendung entfernen. In einigen Situationen kann es jedoch sinnvoll sein, den Eintrag weiter zu bearbeiten. Als Beispiel dient hier der Standardeintrag für man:ftpd[8] über IPv4: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... Die sieben Spalten in diesem Eintrag haben folgende Bedeutung: [.programlisting] .... service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments .... service-name:: Der Dienstname eines bestimmten Daemons. Er muss einem in [.filename]#/etc/services# aufgelisteten Dienst entsprechen. Hier wird festgelegt, auf welchen Port inetd eingehende Verbindungen für diesen Dienst entgegennimmt. Wenn ein neuer Dienst benutzt wird, muss er zuerst in [.filename]#/etc/services# eingetragen werden. socket-type:: Entweder `stream`, `dgram`, `raw`, oder `seqpacket`. Nutzen Sie `stream` für TCP-Verbindungen und `dgram` für UDP-Dienste. protocol:: Benutzen Sie eines der folgenden Protokolle: + [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Protokoll | Bedeutung |tcp oder tcp4 |TCP (IPv4) |udp oder udp4 |UDP (IPv4) |tcp6 |TCP (IPv6) |udp6 |UDP (IPv6) |tcp46 |TCP sowohl unter IPv4 als auch unter IPv6 |udp46 |UDP sowohl unter IPv4 als auch unter IPv6 |=== {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]:: In diesem Feld muss `wait` oder `nowait` angegeben werden. 
`max-child`, `max-connections-per-ip-per-minute` sowie `max-child-per-ip` sind optional. + `wait|nowait` gibt an, ob der Dienst seinen eigenen Socket verwalten kann oder nicht. `dgram`-Sockets müssen `wait` verwenden, während Daemonen mit `stream`-Sockets, die normalerweise auch aus mehreren Threads bestehen, `nowait` verwenden sollten. `wait` gibt in der Regel mehrere Sockets an einen einzelnen Daemon weiter, während `nowait` für jeden neuen Socket einen Childdaemon erzeugt. + Die maximale Anzahl an Child-Daemonen, die inetd erzeugen kann, wird durch die Option `max-child` festgelegt. Wenn ein bestimmter Daemon 10 Instanzen benötigt, wird der Wert `/10` hinter die Option `nowait` gesetzt. Der Wert `/0` gibt an, dass es keine Beschränkung gibt. + `max-connections-per-ip-per-minute` legt die maximale Anzahl von Verbindungsversuchen pro Minute fest, die von einer bestimmten IP-Adresse aus unternommen werden können. Sobald das Limit erreicht ist, werden weitere Verbindungen von dieser IP-Adresse geblockt, bis die Minute vorüber ist. Ein Wert von `/10` würde die maximale Anzahl der Verbindungsversuche einer bestimmten IP-Adresse auf zehn Versuche in der Minute beschränken. `max-child-per-ip` legt fest, wie viele Child-Daemonen von einer bestimmten IP-Adresse aus gestartet werden können. Durch diese Optionen lassen sich Ressourcenverbrauch sowie die Auswirkungen eines `Denial of Service (DoS)`-Angriffs begrenzen. + Ein Beispiel finden Sie in den Voreinstellungen für man:fingerd[8]: + [.programlisting] .... finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s .... user:: Der Benutzername, unter dem der jeweilige Daemon laufen soll. Meistens laufen Daemonen als `root`, `daemon` oder `nobody`. server-program:: Der vollständige Pfad des Daemons. Wird der Daemon von inetd intern bereitgestellt, verwenden Sie `internal`. server-program-arguments:: Dieser Eintrag legt die Argumente fest, die bei der Aktivierung an den Daemon übergeben werden.
Wenn es sich beim Daemon um einen internen Dienst handelt, verwenden Sie wiederum `internal`. [[network-inetd-cmdline]] === Kommandozeilenoptionen Wie die meisten anderen Server-Daemonen lässt sich auch inetd über verschiedene Optionen steuern. In der Voreinstellung wird inetd mit `-wW -C 60` gestartet. Durch das Setzen dieser Werte wird das TCP-Wrapping für alle inetd-Dienste aktiviert. Zudem wird verhindert, dass eine IP-Adresse einen Dienst öfter als 60 Mal pro Minute anfordern kann. Um die Voreinstellungen für inetd zu ändern, fügen Sie einen Eintrag für `inetd_flags` in [.filename]#/etc/rc.conf# hinzu. Wenn inetd bereits ausgeführt wird, starten Sie ihn mit `service inetd restart` neu. Die verfügbaren Optionen sind: -c maximum:: Legt die maximale Anzahl von parallelen Aufrufen eines Dienstes fest; in der Voreinstellung gibt es keine Einschränkung. Diese Einstellung kann für jeden Dienst durch Setzen des Parameters `max-child` in [.filename]#/etc/inetd.conf# festgelegt werden. -C rate:: Legt fest, wie oft ein Dienst von einer einzelnen IP-Adresse in einer Minute aufgerufen werden kann; in der Voreinstellung gibt es keine Einschränkung. Dieser Wert kann für jeden Dienst durch das Setzen des Parameters `max-connections-per-ip-per-minute` in [.filename]#/etc/inetd.conf# festgelegt werden. -R rate:: Legt fest, wie oft ein Dienst in der Minute aktiviert werden kann; in der Voreinstellung sind dies `256` Aktivierungen pro Minute. Ein Wert von `0` erlaubt unbegrenzt viele Aktivierungen. -s maximum:: Legt fest, wie oft ein Dienst von einer einzelnen IP-Adresse aus gleichzeitig aktiviert werden kann; in der Voreinstellung gibt es hier keine Beschränkung. Diese Einstellung kann für jeden Dienst durch die Angabe von `max-child-per-ip` in [.filename]#/etc/inetd.conf# angepasst werden. Es sind noch weitere Optionen verfügbar. Eine vollständige Liste der Optionen finden Sie in man:inetd[8].
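Ein Eintrag in [.filename]#/etc/rc.conf#, der die Voreinstellungen anpaßt, könnte etwa so aussehen (Skizze; die konkreten Werte sind frei gewählte Annahmen):

[.programlisting]
....
inetd_flags="-wW -C 60 -R 512"
....

Hier bleiben TCP-Wrapping (`-wW`) und das Limit von 60 Anfragen pro IP-Adresse und Minute (`-C 60`) erhalten, während `-R 512` die Aktivierungsrate pro Dienst erhöht.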
[[network-inetd-security]] === Sicherheitsbedenken Viele Daemonen, die von inetd verwaltet werden, sind nicht auf Sicherheit bedacht. Einige Daemonen, wie beispielsweise fingerd, liefern Informationen, die für einen Angreifer nützlich sein könnten. Aktivieren Sie nur erforderliche Dienste und überwachen Sie das System auf übermäßige Verbindungsversuche. `max-connections-per-ip-per-minute`, `max-child` und `max-child-per-ip` können verwendet werden, um solche Angriffe zu begrenzen. TCP-Wrapper ist in der Voreinstellung aktiviert. Lesen Sie man:hosts_access[5], wenn Sie weitere Informationen zum Setzen von TCP-Beschränkungen für verschiedene von inetd aktivierte Daemonen benötigen. [[network-nfs]] == Network File System (NFS) FreeBSD unterstützt das Netzwerkdateisystem NFS, das es einem Server erlaubt, Dateien und Verzeichnisse über ein Netzwerk mit Clients zu teilen. Mit NFS können Benutzer und Programme auf Daten entfernter Systeme zugreifen, und zwar so, als ob es sich um lokal gespeicherte Daten handeln würde. Die wichtigsten Vorteile von NFS sind: * Daten, die sonst auf jeden Client dupliziert würden, können an einem zentralen Ort aufbewahrt und von den Clients über das Netzwerk abgerufen werden. * Verschiedene Clients können auf ein gemeinsames Verzeichnis [.filename]#/usr/ports/distfiles# zugreifen. Die gemeinsame Nutzung dieses Verzeichnisses ermöglicht einen schnellen Zugriff auf die Quelldateien, ohne sie auf jede Maschine kopieren zu müssen. * In größeren Netzwerken ist es praktisch, einen zentralen NFS-Server einzurichten, auf dem die Heimatverzeichnisse der Benutzer gespeichert werden. Dadurch steht den Benutzern immer das gleiche Heimatverzeichnis zur Verfügung, unabhängig davon, an welchem Client im Netzwerk sie sich anmelden. * Die Verwaltung der NFS-Exporte wird vereinfacht. Zum Beispiel gibt es dann nur noch ein Dateisystem, für das Sicherheits- oder Backup-Richtlinien festgelegt werden müssen.
* Wechselmedien können von anderen Maschinen im Netzwerk verwendet werden. Dies reduziert die Anzahl von Geräten im Netzwerk und bietet einen zentralen Ort für die Verwaltung. Oft ist es einfacher, über ein zentrales Installationsmedium Software auf mehreren Computern zu installieren. NFS besteht aus einem Server und einem oder mehreren Clients. Der Client greift über das Netzwerk auf die Daten zu, die auf dem Server gespeichert sind. Damit dies korrekt funktioniert, müssen einige Prozesse konfiguriert und gestartet werden: Folgende Daemonen müssen auf dem Server ausgeführt werden: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Daemon | Beschreibung |nfsd |Der NFS-Daemon. Er bearbeitet Anfragen der NFS-Clients. |mountd |Der NFS-Mount-Daemon. Er bearbeitet die Anfragen von `nfsd`. |rpcbind |Der Portmapper-Daemon. Durch ihn erkennen die NFS-Clients, welchen Port der NFS-Server verwendet. |=== Der Einsatz von man:nfsiod[8] ist nicht zwingend erforderlich, kann aber die Leistung auf dem Client verbessern. [[network-configuring-nfs]] === Konfiguration des Servers Die Dateisysteme, die der NFS-Server exportieren soll, werden in [.filename]#/etc/exports# festgelegt. Jede Zeile in dieser Datei beschreibt ein zu exportierendes Dateisystem, Clients, die darauf Zugriff haben sowie alle Zugriffsoptionen. Die Optionen eines auf einen anderen Rechner exportierten Dateisystems müssen alle in einer Zeile stehen. Wird in einer Zeile kein Rechner festgelegt, dürfen alle Clients im Netzwerk das exportierte Dateisystem einhängen. Wie Dateisysteme exportiert werden, ist in der folgenden [.filename]#/etc/exports# zu sehen. Diese Beispiele müssen natürlich an die Arbeitsumgebung und die Netzwerkkonfiguration angepasst werden. Es existieren viele verschiedene Optionen, allerdings werden hier nur wenige von ihnen erwähnt. Eine vollständige Liste der Optionen finden Sie in man:exports[5]. 
Dieses Beispiel exportiert [.filename]#/cdrom# für drei Clients, _alpha_, _bravo_ und _charlie_: [.programlisting] .... /cdrom -ro alpha bravo charlie .... Die Option `-ro` kennzeichnet das exportierte Dateisystem als schreibgeschützt. Dadurch sind Clients nicht in der Lage, das exportierte Dateisystem zu verändern. Dieses Beispiel geht davon aus, dass die Hostnamen entweder über DNS oder über [.filename]#/etc/hosts# aufgelöst werden können. Lesen Sie man:hosts[5] falls das Netzwerk über keinen DNS-Server verfügt. Das nächste Beispiel exportiert [.filename]#/home# auf drei durch IP-Adressen bestimmte Clients. Diese Einstellung kann für Netzwerke ohne DNS-Server und [.filename]#/etc/hosts# nützlich sein. Die Option `-alldirs` ermöglicht es, auch Unterverzeichnisse als Mountpunkte festzulegen. Dies bedeutet aber nicht, dass alle Unterverzeichnisse eingehängt werden, vielmehr wird es dem Client ermöglicht, nur diejenigen Verzeichnisse einzuhängen, die auch benötigt werden. [.programlisting] .... /usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... Das nächste Beispiel exportiert [.filename]#/a#, damit Clients von verschiedenen Domänen auf das Dateisystem zugreifen können. Die Option `-maproot=root` erlaubt es dem Benutzer `root` des Clients, als `root` auf das exportierte Dateisystem zu schreiben. Wenn diese Option nicht gesetzt ist, wird der `root`-Benutzer des Clients dem `nobody`-Konto des Servers zugeordnet und unterliegt somit den Zugriffsbeschränkungen dieses Kontos. [.programlisting] .... /a -maproot=root host.example.com box.example.org .... Ein Client kann für jedes Dateisystem nur einmal definiert werden. Wenn beispielsweise [.filename]#/usr# ein gesondertes Dateisystem ist, dann wären die folgenden Einträge falsch, da in beiden Einträgen der gleiche Rechner angegeben wird: [.programlisting] .... #Nicht erlaubt, wenn /usr ein einziges Dateisystem ist /usr/src client /usr/ports client .... 
Das richtige Format für eine solche Situation ist: [.programlisting] .... /usr/src /usr/ports client .... Das Folgende ist ein Beispiel für eine gültige Exportliste, in der [.filename]#/usr# und [.filename]#/exports# lokale Dateisysteme sind: [.programlisting] .... # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... Damit die vom NFS-Server benötigten Prozesse beim Booten gestartet werden, fügen Sie folgende Optionen in [.filename]#/etc/rc.conf# hinzu: [.programlisting] .... rpcbind_enable="YES" nfs_server_enable="YES" mountd_enable="YES" .... Der Server kann jetzt mit diesem Kommando gestartet werden: [source,shell] .... # service nfsd start .... Wenn der NFS-Server startet, wird auch mountd automatisch gestartet. Allerdings liest mountd [.filename]#/etc/exports# nur, wenn der Server gestartet wird. Um nachfolgende Änderungen an [.filename]#/etc/exports# wirksam werden zu lassen, kann mountd angewiesen werden, die Datei neu einzulesen: [source,shell] .... # service mountd reload .... === Konfiguration des Clients Um den NFS-Client zu aktivieren, setzen Sie folgende Option in [.filename]#/etc/rc.conf# auf jedem Client: [.programlisting] .... nfs_client_enable="YES" .... Der Client ist nun in der Lage, ein entferntes Dateisystem einzuhängen. In diesen Beispielen ist der Name des Servers `server` und der Name des Clients `client`. Führen Sie folgenden Befehl aus, um das Verzeichnis [.filename]#/home# vom `server` auf dem `client` ins Verzeichnis [.filename]#/mnt# einzuhängen: [source,shell] .... # mount server:/home /mnt .... Die Dateien und Verzeichnisse in [.filename]#/home# stehen dem Rechner `client` nun im Verzeichnis [.filename]#/mnt# zur Verfügung.
Um ein entferntes Dateisystem bei jedem Systemstart automatisch einzuhängen, fügen Sie das Dateisystem in [.filename]#/etc/fstab# ein: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... man:fstab[5] enthält eine Beschreibung aller Optionen. === Dateien sperren (Locking) Einige Anwendungen erfordern die Sperrung von Dateien, damit sie korrekt arbeiten. Um diese Sperre zu aktivieren, müssen diese Zeilen in [.filename]#/etc/rc.conf# sowohl auf dem Client als auch auf dem Server hinzugefügt werden: [.programlisting] .... rpc_lockd_enable="YES" rpc_statd_enable="YES" .... Danach starten Sie die beiden Anwendungen: [source,shell] .... # service lockd start # service statd start .... Wenn keine Dateisperren zwischen den NFS-Clients und dem NFS-Server benötigt werden, können Sie den NFS-Client durch die Übergabe der Option `-L` an mount zu einer lokalen Sperrung von Dateien zwingen. Weitere Details finden Sie in man:mount_nfs[8]. [[network-autofs]] === Automatisches Einhängen mit man:autofs[5] [NOTE] ==== man:autofs[5] wird seit FreeBSD 10.1-RELEASE unterstützt. Um die Funktionalität des automatischen Einhängens in älteren FreeBSD-Versionen zu benutzen, verwenden Sie stattdessen man:amd[8]. In diesem Kapitel wird nur das automatische Einhängen mit Hilfe von man:autofs[5] beschrieben. ==== man:autofs[5] ist eine gebräuchliche Bezeichnung für verschiedene Komponenten, welche es erlauben, lokale und entfernte Dateisysteme automatisch einzuhängen, sobald auf eine Datei oder ein Verzeichnis in diesem Dateisystem zugegriffen wird. Es besteht aus einer Kernel-Komponente man:autofs[5] und mehreren Benutzerprogrammen: man:automount[8], man:automountd[8] und man:autounmountd[8]. man:autofs[5] ist eine Alternative für man:amd[8] aus früheren FreeBSD-Versionen. man:amd[8] steht nach wie vor zur Verfügung, da beide Programme ein unterschiedliches Format verwenden. 
Das Format, welches man:autofs[5] verwendet, ist das gleiche wie bei anderen SVR4-Automountern, beispielsweise denen aus Solaris(TM), Mac OS(R) X und Linux(R). Das virtuelle man:autofs[5]-Dateisystem wird von man:automount[8] in einen bestimmten Mountpunkt eingehängt. Dies geschieht gewöhnlich während des Bootens. Jedes Mal, wenn ein Prozess versucht, auf eine Datei unterhalb des man:autofs[5]-Mountpunkts zuzugreifen, wird der Kernel den man:automountd[8]-Daemon benachrichtigen und den aktuellen Prozess anhalten. Der man:automountd[8]-Daemon wird dann die Anfrage des Kernels bearbeiten und das entsprechende Dateisystem einhängen. Anschließend wird der Daemon den Kernel benachrichtigen, dass der angehaltene Prozess wieder freigegeben werden kann. Der man:autounmountd[8]-Daemon hängt automatisch Dateisysteme nach einiger Zeit ab, sofern sie nicht mehr verwendet werden. Die primäre Konfigurationsdatei von autofs ist [.filename]#/etc/auto_master#. Sie enthält die einzelnen Zuordnungen zu den Mountpunkten. Eine Erklärung zu [.filename]#auto_master# und der Syntax für die Zuordnungen finden Sie in man:auto_master[5]. Eine spezielle Automounter-Zuordnung wird in [.filename]#/net# eingehängt. Wenn auf eine Datei in diesem Verzeichnis zugegriffen wird, hängt man:autofs[5] einen bestimmten, entfernten Mountpunkt ein. Wenn beispielsweise auf eine Datei unterhalb von [.filename]#/net/foobar/usr# zugegriffen werden soll, würde man:automountd[8] das exportierte Dateisystem [.filename]#/usr# von dem Rechner `foobar` einhängen. .Ein exportiertes Dateisystem mit man:autofs[5] in den Verzeichnisbaum einhängen [example] ==== In diesem Beispiel zeigt `showmount -e` die exportierten Dateisysteme des NFS-Servers `foobar`: [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /net/foobar/usr .... ==== Die Ausgabe von `showmount` zeigt das exportierte Dateisystem [.filename]#/usr#.
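Zur Veranschaulichung eine Skizze des [.filename]#auto_master#-Eintrags für die beschriebene [.filename]#/net#-Zuordnung (die angegebenen Optionen sind Annahmen; maßgeblich ist man:auto_master[5]):

[.programlisting]
....
# Die spezielle Zuordnung -hosts haengt Exporte von NFS-Servern
# unterhalb von /net/<rechnername> ein
/net    -hosts    -nobrowse,nosuid
....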
Wenn in das Verzeichnis [.filename]#/net/foobar/usr# gewechselt wird, fängt man:automountd[8] die Anforderung ab und versucht, den Rechnernamen `foobar` aufzulösen. Gelingt dies, wird man:automountd[8] automatisch das exportierte Dateisystem einhängen. Um man:autofs[5] beim Booten zu aktivieren, fügen Sie diese Zeile in [.filename]#/etc/rc.conf# ein: [.programlisting] .... autofs_enable="YES" .... Danach kann man:autofs[5] gestartet werden: [source,shell] .... # service automount start # service automountd start # service autounmountd start .... Obwohl das Format von man:autofs[5] das gleiche ist wie in anderen Betriebssystemen, kann es wünschenswert sein, Informationen von anderen Betriebssystemen zu Rate zu ziehen, wie dieses http://images.apple.com/business/docs/Autofs.pdf[Mac OS X Dokument]. Weitere Informationen finden Sie in den Manualpages man:automount[8], man:automountd[8], man:autounmountd[8] und man:auto_master[5]. [[network-nis]] == Network Information System (NIS) Das Network Information System (NIS) wurde entwickelt, um UNIX(R)-Systeme zentral verwalten zu können. Dazu zählen beispielsweise Solaris(TM), HP-UX, AIX(R), Linux(R), NetBSD, OpenBSD und FreeBSD. NIS war ursprünglich als _Yellow Pages_ bekannt, aus markenrechtlichen Gründen wurde der Name aber geändert. Dies ist der Grund, warum NIS-Kommandos mit `yp` beginnen. Bei NIS handelt es sich um ein RPC-basiertes Client/Server-System. Eine Gruppe von Rechnern greift dabei innerhalb einer NIS-Domäne auf gemeinsame Konfigurationsdateien zu. Dies erlaubt es einem Systemadministrator, NIS-Clients mit minimalem Aufwand einzurichten, sowie Änderungen an der Systemkonfiguration von einem zentralen Ort aus durchzuführen. FreeBSD verwendet die Version 2 des NIS-Protokolls.
=== NIS-Begriffe und -Prozesse Die folgende Tabelle fasst die Begriffe und Anwenderprozesse zusammen, die von NIS verwendet werden: .NIS Begriffe [cols="1,1", frame="none", options="header"] |=== | Begriff | Beschreibung |NIS-Domänenname |NIS-Masterserver und Clients benutzen einen gemeinsamen NIS-Domänennamen. In der Regel hat dieser Name nichts mit DNS zu tun. |man:rpcbind[8] |Dieser Dienst aktiviert RPC und muss gestartet sein, damit ein NIS-Server oder -Client ausgeführt werden kann. |man:ypbind[8] |Dieser Dienst "bindet" einen NIS-Client an seinen NIS-Server. Der Client bezieht den NIS-Domänennamen vom System und stellt über das RPC-Protokoll eine Verbindung zum NIS-Server her. ypbind ist der zentrale Bestandteil der Client-Server-Kommunikation in einer NIS-Umgebung. Wird der Dienst auf einem Client beendet, ist dieser nicht mehr in der Lage, auf den NIS-Server zuzugreifen. |man:ypserv[8] |Dies ist der Prozess für den NIS-Server. Wenn dieser Dienst nicht mehr läuft, kann der Server nicht mehr auf NIS-Anforderungen reagieren. Wenn ein Slaveserver existiert, kann dieser als Ersatz fungieren. Einige NIS-Systeme (allerdings nicht das von FreeBSD) versuchen allerdings erst gar nicht, sich mit einem anderen Server zu verbinden, wenn der Masterserver nicht mehr reagiert. Die einzige Lösung besteht darin, den Serverprozess oder den ypbind-Prozess auf dem Client neu zu starten. |man:rpc.yppasswdd[8] |Dieser Prozess läuft nur auf dem NIS-Masterserver. Es handelt sich um einen Daemonprozess, der es NIS-Clients ermöglicht, ihre NIS-Passwörter zu ändern. Wenn dieser Daemon nicht läuft, müssen sich die Benutzer am NIS-Masterserver anmelden und ihre Passwörter dort ändern. |=== === Arten von NIS-Rechnern * NIS-Masterserver + Dieser Server dient als zentraler Speicherort für Rechnerkonfigurationen. Zudem verwaltet er die maßgebliche Kopie der von den NIS-Clients gemeinsam verwendeten Dateien.
[.filename]#passwd#, [.filename]#group# sowie verschiedene andere von den Clients verwendete Dateien existieren auf dem Masterserver. Obwohl ein Rechner auch für mehrere NIS-Domänen als Masterserver fungieren kann, wird diese Art von Konfiguration nicht behandelt, da sich dieser Abschnitt auf eine relativ kleine NIS-Umgebung konzentriert. * NIS-Slaveserver + NIS-Slaveserver verwalten Kopien der Daten des NIS-Masterservers, um Redundanz zu bieten. Zudem entlasten Slaveserver den Masterserver: NIS-Clients verbinden sich immer mit dem NIS-Server, welcher zuerst reagiert. Dieser Server kann auch ein Slaveserver sein. * NIS-Clients + NIS-Clients identifizieren sich gegenüber dem NIS-Server während der Anmeldung. Mit NIS können Informationen aus verschiedenen Dateien von mehreren Rechnern gemeinsam verwendet werden. [.filename]#master.passwd#, [.filename]#group# und [.filename]#hosts# werden oft gemeinsam über NIS verwendet. Immer wenn ein Prozess auf einem Client auf Informationen zugreifen will, die normalerweise in lokalen Dateien vorhanden wären, wird stattdessen eine Anfrage an den NIS-Server gestellt, an den der Client gebunden ist. === Planung Dieser Abschnitt beschreibt eine einfache NIS-Umgebung, welche aus 15 FreeBSD-Maschinen besteht, für die keine zentrale Verwaltung existiert. Jeder Rechner hat also eine eigene Version von [.filename]#/etc/passwd# und [.filename]#/etc/master.passwd#. Diese Dateien werden manuell synchron gehalten; wird ein neuer Benutzer angelegt, so muss dies auf allen fünfzehn Rechnern manuell erledigt werden.
In Zukunft soll die Konfiguration wie folgt aussehen: [.informaltable] [cols="1,1,1", frame="none", options="header"] |=== | Rechnername | IP-Adresse | Rechneraufgabe |`ellington` |`10.0.0.2` |NIS-Master |`coltrane` |`10.0.0.3` |NIS-Slave |`basie` |`10.0.0.4` |Workstation der Fakultät |`bird` |`10.0.0.5` |Clientrechner |`cli[1-11]` |`10.0.0.[6-17]` |Verschiedene andere Clients |=== Wenn erstmalig ein NIS-Schema eingerichtet wird, sollte es im Voraus sorgfältig geplant werden. Unabhängig von der Größe des Netzwerks müssen einige Entscheidungen im Rahmen des Planungsprozesses getroffen werden. ==== Einen NIS-Domänennamen wählen Wenn ein Client Informationen anfordert, ist in dieser Anforderung der Name der NIS-Domäne enthalten. Dadurch weiß jeder Server im Netzwerk, auf welche Anforderung er antworten muss. Stellen Sie sich den NIS-Domänennamen als einen Namen einer Gruppe von Rechnern vor. Manchmal wird der Name der Internetdomäne auch für die NIS-Domäne verwendet. Dies ist allerdings nicht empfehlenswert, da es bei der Behebung von Problemen verwirrend sein kann. Der Name der NIS-Domäne sollte innerhalb des Netzwerks eindeutig sein. Hilfreich ist es, wenn der Name die Gruppe der in ihr zusammengefassten Rechner beschreibt. Die Kunstabteilung von Acme Inc. hätte daher vielleicht die NIS-Domäne "acme-art". Für dieses Beispiel wird der Name `test-domain` verwendet. Es gibt jedoch auch Betriebssysteme, die als NIS-Domänennamen den Namen der Internetdomäne verwenden. Wenn dies für einen oder mehrere Rechner des Netzwerks zutrifft, _muss_ der Name der Internetdomäne als NIS-Domänennamen verwendet werden. ==== Anforderungen an den Server Bei der Wahl des NIS-Servers müssen einige Dinge beachtet werden. Da die NIS-Clients auf die Verfügbarkeit des Servers angewiesen sind, sollten Sie einen Rechner wählen, der nicht regelmäßig neu gestartet werden muss. 
Der NIS-Server sollte idealerweise ein alleinstehender Rechner sein, dessen einzige Aufgabe es ist, als NIS-Server zu dienen. Wenn das Netzwerk nicht zu stark ausgelastet ist, ist es auch möglich, den NIS-Server als weiteren Dienst auf einem anderen Rechner laufen zu lassen. Wenn jedoch ein NIS-Server ausfällt, wirkt sich dies negativ auf _alle_ NIS-Clients aus. === Einen NIS-Masterserver konfigurieren Die verbindlichen Kopien aller NIS-Dateien befinden sich auf dem Masterserver. Die Datenbanken, in denen die Informationen gespeichert sind, bezeichnet man als NIS-Maps. Unter FreeBSD werden diese Maps unter [.filename]#/var/yp/[domainname]# gespeichert, wobei [.filename]#[domainname]# der Name der NIS-Domäne ist. Da ein NIS-Server mehrere Domänen verwalten kann, können auch mehrere Verzeichnisse vorhanden sein. Jede Domäne verfügt über ein eigenes Verzeichnis sowie einen eigenen, von anderen Domänen unabhängigen Satz von NIS-Maps. NIS-Master- und Slaveserver verwenden man:ypserv[8], um NIS-Anfragen zu bearbeiten. Dieser Daemon ist für eingehende Anfragen der NIS-Clients verantwortlich. Er ermittelt aus der angeforderten Domäne und Map einen Pfad zur entsprechenden Datenbank und sendet die angeforderten Daten von der Datenbank zum Client. Abhängig von den Anforderungen ist die Einrichtung eines NIS-Masterservers relativ einfach, da NIS von FreeBSD bereits in der Standardkonfiguration unterstützt wird. Es kann durch folgende Zeilen in [.filename]#/etc/rc.conf# aktiviert werden: [.programlisting] .... nisdomainname="test-domain" <.> nis_server_enable="YES" <.> nis_yppasswdd_enable="YES" <.> .... <.> Diese Zeile setzt den NIS-Domänennamen auf `test-domain`. <.> Dadurch werden die NIS-Serverprozesse beim Systemstart automatisch ausgeführt. <.> Durch diese Zeile wird der man:rpc.yppasswdd[8]-Daemon aktiviert, der die Änderung von NIS-Passwörtern von einem Client aus ermöglicht. 
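Die gleichen Einstellungen lassen sich, statt [.filename]#/etc/rc.conf# in einem Editor zu bearbeiten, auch mit man:sysrc[8] vornehmen. Die folgende Skizze zeigt die entsprechenden Aufrufe (die Ausgabe von sysrc ist hier weggelassen):

[source,shell]
....
# sysrc nisdomainname="test-domain"
# sysrc nis_server_enable="YES"
# sysrc nis_yppasswdd_enable="YES"
....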
Wird ypserv in einer Multi-Serverdomäne verwendet, in der NIS-Server gleichzeitig als NIS-Clients arbeiten, ist es eine gute Idee, diese Server zu zwingen, sich an sich selbst zu binden. Damit wird verhindert, dass Bindeanforderungen gesendet werden und sich die Server gegenseitig binden. Sonst könnten seltsame Fehler auftreten, wenn ein Server ausfällt, auf den andere Server angewiesen sind. Letztlich werden alle Clients einen Timeout melden und versuchen, sich an andere Server zu binden. Die dadurch entstehende Verzögerung kann beträchtlich sein. Außerdem kann der Fehler erneut auftreten, da sich die Server wiederum aneinander binden könnten.

Server, die auch als Client arbeiten, können durch das Hinzufügen der folgenden Zeilen in [.filename]#/etc/rc.conf# gezwungen werden, sich an einen bestimmten Server zu binden:

[.programlisting]
....
nis_client_enable="YES" <.>
nis_client_flags="-S test-domain,server" <.>
....

<.> Diese Zeile aktiviert die Client-Komponenten.
<.> Diese Zeile legt den NIS-Domänennamen `test-domain` fest und weist man:ypbind[8] an, sich an den angegebenen Server zu binden.

Nachdem die Parameter konfiguriert wurden, muss noch `/etc/netstart` ausgeführt werden, um alles entsprechend den Vorgaben in [.filename]#/etc/rc.conf# einzurichten. Bevor die NIS-Maps eingerichtet werden können, muss der man:ypserv[8]-Daemon manuell gestartet werden:

[source,shell]
....
# service ypserv start
....

==== Die NIS-Maps initialisieren

Die NIS-Maps werden am NIS-Masterserver aus den Konfigurationsdateien unter [.filename]#/etc# erzeugt. Einzige Ausnahme: [.filename]#/etc/master.passwd#. Dies verhindert, dass die Passwörter für `root`- oder andere Administratorkonten an alle Server in der NIS-Domäne verteilt werden. Deshalb werden die primären Passwort-Dateien konfiguriert, bevor die NIS-Maps initialisiert werden:

[source,shell]
....
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
....
Es ist ratsam, alle Einträge für Systemkonten sowie Benutzerkonten, die nicht an die NIS-Clients weitergegeben werden sollen, wie beispielsweise `root` und weitere administrative Konten, zu entfernen.

[NOTE]
====
Stellen Sie sicher, dass [.filename]#/var/yp/master.passwd# weder von der Gruppe noch von der Welt gelesen werden kann, indem Sie den Zugriffsmodus auf `600` einstellen.
====

Nun können die NIS-Maps initialisiert werden. FreeBSD verwendet dafür das Skript man:ypinit[8]. Geben Sie `-m` und den NIS-Domänennamen an, wenn Sie NIS-Maps für den Masterserver erzeugen:

[source,shell]
....
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server   :  ellington
next host to add:  coltrane
next host to add:  ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct? [y/n: y] y

[..output from map generation..]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

Dadurch erzeugt `ypinit` die Datei [.filename]#/var/yp/Makefile# aus [.filename]#/var/yp/Makefile.dist#. Diese Datei geht in der Voreinstellung davon aus, dass in einer NIS-Umgebung mit nur einem Server gearbeitet wird und dass alle Clients unter FreeBSD laufen. Da `test-domain` aber auch über einen Slaveserver verfügt, muss [.filename]#/var/yp/Makefile# so angepasst werden, dass die folgende Zeile mit einem Kommentar (`#`) beginnt:

[.programlisting]
....
NOPUSH = "True"
....
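Nach dieser Anpassung sollte die betreffende Zeile in [.filename]#/var/yp/Makefile# wie folgt aussehen:

[.programlisting]
....
#NOPUSH = "True"
....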
==== Neue Benutzer hinzufügen

Jedes Mal, wenn ein neuer Benutzer angelegt wird, muss er am NIS-Masterserver hinzugefügt werden und die NIS-Maps müssen anschließend neu erzeugt werden. Wird dieser Punkt vergessen, kann sich der neue Benutzer _nur_ am NIS-Masterserver anmelden. Um beispielsweise den neuen Benutzer `jsmith` zur Domäne `test-domain` hinzuzufügen, müssen folgende Kommandos auf dem Masterserver ausgeführt werden:

[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....

Statt `pw useradd jsmith` kann auch `adduser jsmith` verwendet werden.

=== Einen NIS-Slaveserver einrichten

Um einen NIS-Slaveserver einzurichten, melden Sie sich am Slaveserver an und bearbeiten Sie [.filename]#/etc/rc.conf# analog zum Masterserver. Erzeugen Sie aber keine NIS-Maps, da diese bereits auf dem Masterserver vorhanden sind. Wenn `ypinit` auf dem Slaveserver ausgeführt wird, benutzen Sie `-s` (Slave) statt `-m` (Master). Diese Option benötigt den Namen des NIS-Masterservers und den Domänennamen, wie in diesem Beispiel zu sehen:

[source,shell]
....
coltrane# ypinit -s ellington test-domain

Server Type: SLAVE Domain: test-domain Master: ellington

Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n] n

Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.

Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred

coltrane has been setup as an YP slave server without any errors.
Remember to update map ypservers on ellington.
....

Hierbei wird auf dem Slaveserver ein Verzeichnis namens [.filename]#/var/yp/test-domain# erstellt, welches Kopien der NIS-Masterserver-Maps enthält. Durch Hinzufügen der folgenden Zeilen in [.filename]#/etc/crontab# wird der Slaveserver angewiesen, seine Maps mit den Maps des Masterservers zu synchronisieren:

[.programlisting]
....
20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuid
....

Diese Einträge sind nicht zwingend notwendig, da der Masterserver automatisch versucht, alle Änderungen seiner NIS-Maps an seine Slaveserver weiterzugeben. Da Passwortinformationen aber auch für Systeme lebenswichtig sind, die nur vom Slaveserver abhängen, ist es eine gute Idee, diese Aktualisierungen zu erzwingen.
Besonders wichtig ist dies in stark ausgelasteten Netzen, in denen Map-Aktualisierungen unvollständig sein könnten. Um die Konfiguration abzuschließen, führen Sie `/etc/netstart` auf dem Slaveserver aus, um die NIS-Dienste erneut zu starten. === Einen NIS-Client einrichten Ein NIS-Client `bindet` sich unter Verwendung von `ypbind` an einen NIS-Server. Dieser Daemon sendet RPC-Anfragen auf dem lokalen Netzwerk. Diese Anfragen legen den Namen der Domäne fest, die auf dem Client konfiguriert ist. Wenn der Server der entsprechenden Domäne eine solche Anforderung erhält, schickt er eine Antwort an `ypbind`, das wiederum die Adresse des Servers speichert. Wenn mehrere Server verfügbar sind, verwendet der Client die erste erhaltene Adresse und richtet alle Anfragen an genau diesen Server. `ypbind` "pingt" den Server gelegentlich an, um sicherzustellen, dass der Server funktioniert. Antwortet der Server innerhalb eines bestimmten Zeitraums nicht (Timeout), markiert `ypbind` die Domäne als ungebunden und beginnt erneut, RPCs über das Netzwerk zu verteilen, um einen anderen Server zu finden. Einen FreeBSD-Rechner als NIS-Client einrichten: [.procedure] . Fügen Sie folgende Zeilen in [.filename]#/etc/rc.conf# ein, um den NIS-Domänennamen festzulegen, und um man:ypbind[8] bei der Initialisierung des Netzwerks zu starten: + [.programlisting] .... nisdomainname="test-domain" nis_client_enable="YES" .... . Um alle Passworteinträge des NIS-Servers zu importieren, löschen Sie alle Benutzerkonten in [.filename]#/etc/master.passwd# mit `vipw`. Denken Sie daran, zumindest ein lokales Benutzerkonto zu behalten. Dieses Konto sollte außerdem Mitglied der Gruppe `wheel` sein. Wenn es mit NIS Probleme gibt, können Sie diesen Zugang verwenden, um sich als Superuser anzumelden und das Problem zu beheben. Bevor Sie die Änderungen speichern, fügen Sie folgende Zeile am Ende der Datei hinzu: + [.programlisting] .... +::::::::: .... 
+
Diese Zeile legt für alle gültigen Benutzerkonten der NIS-Server-Maps einen Zugang an. Es gibt verschiedene Wege, den NIS-Client durch Änderung dieser Zeile zu konfigurieren. Eine Methode wird in <<network-netgroups>> beschrieben. Weitere detaillierte Informationen finden Sie im Buch `Managing NFS and NIS` vom O'Reilly Verlag.

. Um alle möglichen Gruppeneinträge vom NIS-Server zu importieren, fügen Sie folgende Zeile in [.filename]#/etc/group# ein:
+
[.programlisting]
....
+:*::
....

Um den NIS-Client direkt zu starten, führen Sie als Superuser die folgenden Befehle aus:

[source,shell]
....
# /etc/netstart
# service ypbind start
....

Danach sollte bei der Eingabe von `ypcat passwd` auf dem Client die `passwd`-Map des NIS-Servers angezeigt werden.

=== Sicherheit unter NIS

Da RPC ein Broadcast-basierter Dienst ist, kann jedes System innerhalb der Domäne mittels ypbind den Inhalt der NIS-Maps abrufen. Um nicht autorisierte Transaktionen zu verhindern, unterstützt man:ypserv[8] eine Funktion namens "securenets", durch die der Zugriff auf bestimmte Rechner beschränkt werden kann. In der Voreinstellung sind diese Informationen in [.filename]#/var/yp/securenets# gespeichert, es sei denn, man:ypserv[8] wurde mit der Option `-p` und einem alternativen Pfad gestartet. Diese Datei enthält Einträge, die aus einer Netzwerkadresse und einer Netzmaske bestehen. Kommentarzeilen beginnen mit "#". [.filename]#/var/yp/securenets# könnte beispielsweise so aussehen:

[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1     255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0      255.255.240.0
....

Wenn man:ypserv[8] eine Anforderung von einer zu diesen Regeln passenden Adresse erhält, wird die Anforderung bearbeitet.
Gibt es keine passende Regel, wird die Anforderung ignoriert und eine Warnmeldung aufgezeichnet. Wenn [.filename]#securenets# nicht existiert, erlaubt `ypserv` Verbindungen von jedem Rechner.

crossref:security[tcpwrappers,"TCP Wrapper"] beschreibt eine alternative Methode zur Zugriffskontrolle. Obwohl beide Methoden einige Sicherheit gewähren, sind sie anfällig für "IP-Spoofing"-Angriffe. Der NIS-Verkehr sollte daher von einer Firewall blockiert werden.

Server, die [.filename]#securenets# verwenden, können Schwierigkeiten bei der Anmeldung von NIS-Clients haben, die ein veraltetes TCP/IP-Subsystem besitzen. Einige dieser TCP/IP-Subsysteme setzen alle Rechnerbits auf Null, wenn sie einen Broadcast durchführen, oder können die Subnetzmaske nicht auslesen, wenn sie die Broadcast-Adresse berechnen. Einige Probleme können durch Änderungen der Clientkonfiguration behoben werden. Andere hingegen lassen sich nur durch das Entfernen des betreffenden Rechners aus dem Netzwerk oder den Verzicht auf [.filename]#securenets# umgehen.

Die Verwendung der TCP-Wrapper verlangsamt die Reaktion des NIS-Servers. Diese zusätzliche Reaktionszeit kann in Clientprogrammen zu Timeouts führen, vor allem in Netzwerken, die stark ausgelastet sind oder nur über langsame NIS-Server verfügen. Wenn ein oder mehrere Clients dieses Problem aufweisen, sollten Sie die betreffenden Clients in NIS-Slaveserver umwandeln und diese an sich selbst binden.

==== Bestimmte Benutzer an der Anmeldung hindern

In diesem Beispiel gibt es innerhalb der NIS-Domäne den Rechner `basie`, der nur für Mitarbeiter der Fakultät bestimmt ist. Die [.filename]#passwd#-Datenbank des NIS-Masterservers enthält Benutzerkonten sowohl für Fakultätsmitarbeiter als auch für Studenten. Dieser Abschnitt beschreibt, wie Sie den Mitarbeitern der Fakultät die Anmeldung am System ermöglichen, während den Studenten die Anmeldung verweigert wird.
Es gibt eine Möglichkeit, bestimmte Benutzer an der Anmeldung an einem bestimmten Rechner zu hindern, selbst wenn diese in der NIS-Datenbank vorhanden sind. Dazu kann mit `vipw` der Eintrag `-_Benutzername_` mit der richtigen Anzahl von Doppelpunkten an das Ende von [.filename]#/etc/master.passwd# gesetzt werden, wobei _Benutzername_ der Name des zu blockierenden Benutzers ist. Die Zeile mit dem geblockten Benutzer muss dabei vor der `+`-Zeile für zugelassene Benutzer stehen. In diesem Beispiel wird die Anmeldung für den Benutzer `bill` am Rechner `basie` blockiert:

[source,shell]
....
basie# cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/usr/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin
-bill:::::::::
+:::::::::

basie#
....

[[network-netgroups]]
=== Netzgruppen verwenden

Bestimmten Benutzern die Anmeldung an einzelnen Systemen zu verweigern, kann in großen Netzwerken schnell unübersichtlich werden. Dadurch verlieren Sie den Hauptvorteil von NIS: die _zentrale_ Verwaltung. Netzgruppen wurden entwickelt, um große, komplexe Netzwerke mit Hunderten von Benutzern und Rechnern zu verwalten. Ihre Aufgabe ist vergleichbar mit UNIX(R)-Gruppen.
Die Hauptunterschiede sind das Fehlen einer numerischen ID sowie die Möglichkeit, Netzgruppen zu definieren, die sowohl Benutzer als auch andere Netzgruppen enthalten. Um das Beispiel in diesem Kapitel fortzuführen, wird die NIS-Domäne um zusätzliche Benutzer und Rechner erweitert: .Zusätzliche Benutzer [cols="1,1", frame="none", options="header"] |=== | Benutzername(n) | Beschreibung |`alpha`, `beta` |Mitarbeiter der IT-Abteilung |`charlie`, `delta` |Lehrlinge der IT-Abteilung |`echo`, `foxtrott`, `golf`, ... |Mitarbeiter |`able`, `baker`, ... |Praktikanten |=== .Zusätzliche Rechner [cols="1,1", frame="none", options="header"] |=== | Rechnername(n) | Beschreibung |`war`, `death`, `famine`, `pollution` |Nur Mitarbeiter der IT-Abteilung dürfen sich an diesen Rechnern anmelden. |`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth` |Nur Mitarbeiter und Lehrlinge der IT-Abteilung dürfen sich auf diesen Rechnern anmelden. |`one`, `two`, `three`, `four`, ... |Gewöhnliche Arbeitsrechner für Mitarbeiter. |`trashcan` |Ein sehr alter Rechner ohne kritische Daten. Sogar Praktikanten dürfen diesen Rechner verwenden. |=== Bei der Verwendung von Netzgruppen wird jeder Benutzer einer oder mehreren Netzgruppen zugewiesen und die Anmeldung wird dann für die Netzgruppe erlaubt oder verwehrt. Wenn ein neuer Rechner hinzugefügt wird, müssen die Zugangsbeschränkungen nur für die Netzgruppen festgelegt werden. Wird ein neuer Benutzer angelegt, muss er einer oder mehreren Netzgruppen zugewiesen werden. Wenn die Einrichtung von NIS sorgfältig geplant wurde, muss nur noch eine zentrale Konfigurationsdatei bearbeitet werden, um den Zugriff auf bestimmte Rechner zu erlauben oder zu verbieten. Dieses Beispiel erstellt vier Netzgruppen: IT-Mitarbeiter, IT-Lehrlinge, normale Mitarbeiter sowie Praktikanten: [.programlisting] .... 
IT_EMP  (,alpha,test-domain)   (,beta,test-domain)
IT_APP  (,charlie,test-domain) (,delta,test-domain)
USERS   (,echo,test-domain)    (,foxtrott,test-domain) \
        (,golf,test-domain)
INTERNS (,able,test-domain)    (,baker,test-domain)
....

Jede Zeile konfiguriert eine Netzgruppe. Die erste Spalte der Zeile bezeichnet den Namen der Netzgruppe. Die Einträge in den Klammern stehen entweder für eine Gruppe von einem oder mehreren Benutzern oder für den Namen einer weiteren Netzgruppe. Wenn ein Benutzer angegeben wird, haben die drei Felder in der Klammer folgende Bedeutung:

. Der Name des Rechners bzw. der Rechner, auf denen die weiteren Felder für den Benutzer gültig sind. Wird kein Rechnername festgelegt, ist der Eintrag auf allen Rechnern gültig.
. Der Name des Benutzerkontos, das zu dieser Netzgruppe gehört.
. Die NIS-Domäne für das Benutzerkonto. Benutzerkonten können aus anderen NIS-Domänen in eine Netzgruppe importiert werden.

Wenn eine Gruppe mehrere Benutzer enthält, müssen diese durch Leerzeichen getrennt werden. Darüber hinaus kann jedes Feld Wildcards enthalten. Weitere Einzelheiten finden Sie in man:netgroup[5].

Netzgruppennamen sollten nicht länger als 8 Zeichen sein. Es wird zwischen Groß- und Kleinschreibung unterschieden. Die Verwendung von Großbuchstaben für Netzgruppennamen ermöglicht eine leichte Unterscheidung zwischen Benutzern, Rechnern und Netzgruppen.

Einige NIS-Clients (dies gilt nicht für FreeBSD) können keine Netzgruppen mit mehr als 15 Einträgen verwalten. Diese Grenze kann umgangen werden, indem mehrere Subnetzgruppen mit weniger als fünfzehn Benutzern angelegt werden und diese Subnetzgruppen wiederum in einer Netzgruppe zusammengefasst werden, wie in diesem Beispiel zu sehen:

[.programlisting]
....
BIGGRP1  (,joe1,domain)  (,joe2,domain)  (,joe3,domain) [...]
BIGGRP2  (,joe16,domain) (,joe17,domain) [...]
BIGGRP3  (,joe31,domain) (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3
....
Wiederholen Sie diesen Vorgang, wenn mehr als 225 (15*15) Benutzer in einer einzigen Netzgruppe existieren.

Die neue NIS-Map aktivieren und verteilen:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

Dadurch werden die NIS-Maps [.filename]#netgroup#, [.filename]#netgroup.byhost# und [.filename]#netgroup.byuser# erzeugt. Prüfen Sie die Verfügbarkeit der neuen NIS-Maps mit man:ypcat[1]:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

Die Ausgabe des ersten Befehls gibt den Inhalt von [.filename]#/var/yp/netgroup# wieder. Der zweite Befehl erzeugt nur dann eine Ausgabe, wenn rechnerspezifische Netzgruppen erzeugt wurden. Der dritte Befehl gibt die Netzgruppen nach Benutzern sortiert aus.

Wenn Sie einen Client einrichten, verwenden Sie man:vipw[8], um den Namen der Netzgruppe anzugeben. Ersetzen Sie beispielsweise auf dem Server namens `war` die folgende Zeile:

[.programlisting]
....
+:::::::::
....

durch:

[.programlisting]
....
+@IT_EMP:::::::::
....

Diese Zeile legt fest, dass nur noch Benutzer der Netzgruppe `IT_EMP` in die Passwortdatenbank dieses Systems importiert werden. Nur diese Benutzer dürfen sich an diesem Server anmelden.

Diese Konfiguration gilt auch für die `~`-Funktion der Shell und für alle Routinen, die auf Benutzernamen und numerische Benutzer-IDs zugreifen. Oder anders formuliert: `cd ~_Benutzer_` ist nicht möglich, `ls -l` zeigt die numerische Benutzer-ID statt des Benutzernamens und `find . -user joe -print` erzeugt die Fehlermeldung `No such user`. Um dieses Problem zu beheben, müssen alle Benutzereinträge importiert werden, ohne ihnen jedoch zu erlauben, sich am Server anzumelden. Dies kann durch das Hinzufügen einer zusätzlichen Zeile erreicht werden:

[.programlisting]
....
+:::::::::/usr/sbin/nologin
....
Diese Zeile weist den Client an, alle Einträge zu importieren, aber die Shell in diesen Einträgen durch [.filename]#/usr/sbin/nologin# zu ersetzen. Stellen Sie sicher, dass die zusätzliche Zeile _nach_ der Zeile `+@IT_EMP:::::::::` eingetragen ist. Andernfalls haben alle via NIS importierten Benutzerkonten [.filename]#/usr/sbin/nologin# als Loginshell und niemand kann sich mehr am System anmelden.

Um die weniger wichtigen Server zu konfigurieren, ersetzen Sie den alten Eintrag `+:::::::::` auf den Servern durch diese Zeilen:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/usr/sbin/nologin
....

Die entsprechenden Zeilen für Arbeitsplätze lauten:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/usr/sbin/nologin
....

NIS ist in der Lage, Netzgruppen aus anderen Netzgruppen zu bilden. Dies kann nützlich sein, wenn sich die Firmenpolitik ändert. Eine Möglichkeit ist die Erzeugung rollenbasierter Netzgruppen. Sie könnten eine Netzgruppe `BIGSRV` erzeugen, um den Zugang zu den wichtigsten Servern zu beschränken, eine weitere Gruppe `SMALLSRV` für die weniger wichtigen Server und eine dritte Netzgruppe `USERBOX` für die Arbeitsplatzrechner. Jede dieser Netzgruppen enthält die Netzgruppen, die sich auf diesen Rechnern anmelden dürfen. Die Einträge der Netzgruppen in der NIS-Map sollten ähnlich den folgenden aussehen:

[.programlisting]
....
BIGSRV   IT_EMP IT_APP
SMALLSRV IT_EMP IT_APP ITINTERN
USERBOX  IT_EMP ITINTERN USERS
....

Diese Methode funktioniert besonders gut, wenn Rechner in Gruppen mit identischen Beschränkungen eingeteilt werden können. Unglücklicherweise ist dies die Ausnahme und nicht die Regel. Meistens wird die Möglichkeit zur rechnerspezifischen Zugangsbeschränkung benötigt. Rechnerspezifische Netzgruppen sind eine weitere Möglichkeit, um mit den oben beschriebenen Änderungen umzugehen. In diesem Szenario enthält [.filename]#/etc/master.passwd# auf jedem Rechner zwei mit "+" beginnende Zeilen.
Die erste Zeile legt die Netzgruppe mit den Benutzern fest, die sich auf diesem Rechner anmelden dürfen. Die zweite Zeile weist allen anderen Benutzern [.filename]#/usr/sbin/nologin# als Shell zu. Verwenden Sie auch hier (analog zu den Netzgruppen) Großbuchstaben für die Rechnernamen:

[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/usr/sbin/nologin
....

Sobald dies für alle Rechner erledigt ist, müssen die lokalen Versionen von [.filename]#/etc/master.passwd# nie mehr verändert werden. Alle weiteren Änderungen geschehen über die NIS-Maps. Nachfolgend ein Beispiel für eine mögliche Netzgruppen-Map:

[.programlisting]
....
# Define groups of users first
IT_EMP    (,alpha,test-domain)   (,beta,test-domain)
IT_APP    (,charlie,test-domain) (,delta,test-domain)
DEPT1     (,echo,test-domain)    (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)    (,hotel,test-domain)
DEPT3     (,india,test-domain)   (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)    (,lima,test-domain)
D_INTERNS (,able,test-domain)    (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1 DEPT2 DEPT3
BIGSRV    IT_EMP IT_APP
SMALLSRV  IT_EMP IT_APP ITINTERN
USERBOX   IT_EMP ITINTERN USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus machine
SECURITY  IT_EMP (,echo,test-domain) (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

Es ist nicht immer ratsam, rechnerbasierte Netzgruppen zu verwenden. Wenn Dutzende oder Hunderte identischer Rechner eingerichtet werden müssen, sollten rollenbasierte Netzgruppen verwendet werden, um die Größe der NIS-Maps in Grenzen zu halten.
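Der Aufbau der oben gezeigten Tripel `(Rechner,Benutzer,Domäne)` lässt sich auch mit Standardwerkzeugen nachvollziehen. Das folgende, rein illustrative Shell-Beispiel (die Datei [.filename]#/tmp/netgroup.sample# ist nur eine Annahme für die Demonstration) extrahiert mit man:awk[1] die Benutzernamen aus den Tripeln der Netzgruppe `IT_EMP`:

[source,shell]
....
# Beispieldatei mit einer Netzgruppen-Definition anlegen (nur zur Illustration)
cat > /tmp/netgroup.sample <<'EOF'
IT_EMP (,alpha,test-domain) (,beta,test-domain)
EOF

# Das zweite Feld (Benutzername) jedes Tripels ausgeben
awk '$1 == "IT_EMP" {
    for (i = 2; i <= NF; i++) {
        split($i, t, ",")   # Tripel (host,user,domain) an den Kommas zerlegen
        print t[2]          # t[2] ist der Benutzername
    }
}' /tmp/netgroup.sample
....

Die Ausgabe sind die Benutzernamen `alpha` und `beta`, jeweils auf einer eigenen Zeile.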
=== Passwortformate

Alle Rechner innerhalb der NIS-Domäne müssen für die Verschlüsselung von Passwörtern das gleiche Format benutzen. Wenn Benutzer Schwierigkeiten bei der Authentifizierung auf einem NIS-Client haben, liegt dies möglicherweise an einem anderen Passwort-Format. In einem heterogenen Netzwerk muss das verwendete Format von allen Betriebssystemen unterstützt werden, wobei DES der kleinste gemeinsame Standard ist.

Welches Format die Server und Clients verwenden, steht in [.filename]#/etc/login.conf#:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
	[weitere Einträge]
....

In diesem Beispiel verwendet das System das Format DES. Weitere mögliche Werte sind unter anderem `blf` und `md5` (mit Blowfish bzw. MD5 verschlüsselte Passwörter). Wird auf einem Rechner das Format entsprechend der NIS-Domäne geändert, muss anschließend die Login-Capability-Datenbank neu erstellt werden:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
Das Format der schon bestehenden Passwörter wird erst aktualisiert, wenn ein Benutzer sein Passwort ändert, _nachdem_ die Datenbank neu erstellt wurde.
====

[[network-ldap]]
== Lightweight Directory Access Protocol (LDAP)

Das Lightweight Directory Access Protocol (LDAP) ist ein Protokoll der Anwendungsschicht, das verwendet wird, um Objekte mithilfe eines verteilten Verzeichnisdienstes abzurufen, zu verändern und zu authentifizieren. Betrachten Sie es als ein Telefonbuch, das homogene Informationen in mehreren hierarchischen Ebenen speichert. Es wird in Active-Directory- und OpenLDAP-Netzwerken eingesetzt, in denen Benutzer unter Verwendung eines einzigen Kontos auf diverse interne Informationen zugreifen. Beispielsweise können E-Mail-Authentifizierung, die Abfrage von Kontaktinformationen und Website-Authentifizierung über ein einzelnes Benutzerkonto aus der Datenbank des LDAP-Servers erfolgen.
Dieser Abschnitt enthält eine kompakte Anleitung, um einen LDAP-Server auf einem FreeBSD-System zu konfigurieren. Es wird vorausgesetzt, dass der Administrator bereits einen Plan erarbeitet hat, der verschiedene Punkte umfasst, unter anderem die Art der zu speichernden Informationen, wofür die Informationen verwendet werden, welche Benutzer Zugriff auf die Informationen haben und wie die Informationen vor unbefugtem Zugriff geschützt werden.

=== LDAP Terminologie und Struktur

LDAP verwendet mehrere Begriffe, die Sie verstehen sollten, bevor Sie mit der Konfiguration beginnen. Alle Verzeichniseinträge bestehen aus einer Gruppe von _Attributen_. Jede Attributgruppe enthält einen eindeutigen Bezeichner, der als Distinguished Name (DN) bekannt ist. Dieser setzt sich normalerweise aus mehreren anderen Attributen, wie dem Relative Distinguished Name (RDN), zusammen. Wie bei Verzeichnissen gibt es auch hier absolute und relative Pfade. Betrachten Sie den DN als absoluten Pfad und den RDN als relativen Pfad.

Beispielsweise könnte ein LDAP-Eintrag wie folgt aussehen. Dieses Beispiel sucht nach dem Eintrag für das angegebene Benutzerkonto (`uid`), die Organisationseinheit (`ou`) und die Organisation (`o`):

[source,shell]
....
% ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base <uid=trhodes,ou=users,o=example.com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

Die Einträge in diesem Beispiel zeigen die Werte für die Attribute `dn`, `mail`, `cn`, `uid` und `telephoneNumber`. Das Attribut `cn` ist der RDN.

Weitere Informationen über LDAP und dessen Terminologie finden Sie unter http://www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/intro.html].
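Ein solcher Eintrag könnte ursprünglich aus einer LDIF-Datei wie der folgenden angelegt worden sein. Dies ist eine hypothetische Skizze: die Objektklasse `inetOrgPerson` und das Attribut `sn` sind hier Annahmen, die nicht aus der obigen Ausgabe hervorgehen:

[.programlisting]
....
dn: uid=trhodes,ou=users,o=example.com
objectClass: inetOrgPerson
cn: Tom Rhodes
sn: Rhodes
uid: trhodes
mail: trhodes@example.com
telephoneNumber: (123) 456-7890
....

Eine solche Datei ließe sich beispielsweise mit `ldapadd -x -D _Bind-DN_ -W -f eintrag.ldif` in das Verzeichnis übernehmen.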
[[ldap-config]]
=== Konfiguration eines LDAP-Servers

FreeBSD integriert keinen LDAP-Server. Beginnen Sie die Konfiguration mit der Installation des Ports oder Pakets package:net/openldap-server[]:

[source,shell]
....
# pkg install openldap-server
....

Im extref:{linux-users}[Paket,#software] ist eine große Anzahl an Optionen aktiviert. Mit dem Befehl `pkg info openldap-server` können diese überprüft werden. Falls die Optionen nicht ausreichend sind (weil bspw. SQL-Unterstützung benötigt wird), sollten Sie in Betracht ziehen, den Port mit dem entsprechenden Framework neu zu übersetzen.

Während der Installation wird für die Daten das Verzeichnis [.filename]#/var/db/openldap-data# erstellt. Das Verzeichnis für die Ablage der Zertifikate muss manuell angelegt werden:

[source,shell]
....
# mkdir /usr/local/etc/openldap/private
....

Im nächsten Schritt wird die Zertifizierungsstelle konfiguriert. Die folgenden Befehle müssen in [.filename]#/usr/local/etc/openldap/private# ausgeführt werden. Dies ist wichtig, da die Dateiberechtigungen restriktiv gesetzt werden und Benutzer keinen direkten Zugriff auf diese Daten haben sollten. Weitere Informationen über Zertifikate und deren Parameter finden Sie in crossref:security[openssl,"OpenSSL"]. Geben Sie folgenden Befehl ein, um die Zertifizierungsstelle zu erstellen, und folgen Sie den Anweisungen:

[source,shell]
....
# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt
....

Diese Einträge sind frei wählbar, _mit Ausnahme_ von _Common Name_. Hier muss etwas anderes als der Hostname des Systems eingetragen werden. Wenn ein selbstsigniertes Zertifikat verwendet wird, stellen Sie dem Hostnamen einfach das Präfix `CA` für die Zertifizierungsstelle voran.

Die nächste Aufgabe besteht darin, eine Zertifikatsregistrierungsanforderung (CSR) sowie einen privaten Schlüssel zu erstellen. Geben Sie folgenden Befehl ein und folgen Sie den Anweisungen:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout server.key -out server.csr
....

Stellen Sie hierbei sicher, dass `Common Name` richtig eingetragen wird. Die Zertifikatsregistrierungsanforderung muss mit dem Schlüssel der Zertifizierungsstelle unterschrieben werden, um als gültiges Zertifikat verwendet zu werden:

[source,shell]
....
# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial
....

Der letzte Schritt für die Erstellung der Zertifikate besteht darin, die Client-Zertifikate zu erstellen und zu signieren:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout client.key -out client.csr
# openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key
....

Achten Sie wieder auf das Attribut `Common Name`. Stellen Sie außerdem sicher, dass bei diesem Verfahren acht (8) neue Dateien erzeugt worden sind.

Der Daemon, der den OpenLDAP-Server ausführt, heißt [.filename]#slapd#. Die Konfiguration erfolgt über [.filename]#slapd.ldif#. Die alte [.filename]#slapd.conf# wird von OpenLDAP nicht mehr verwendet.

http://www.openldap.org/doc/admin24/slapdconf2.html[Konfigurationsbeispiele] für [.filename]#slapd.ldif# finden sich auch in [.filename]#/usr/local/etc/openldap/slapd.ldif.sample#. Optionen sind in man:slapd-config[5] dokumentiert. Jeder Abschnitt in [.filename]#slapd.ldif# wird, wie alle anderen LDAP-Attributgruppen, durch einen DN eindeutig identifiziert. Achten Sie darauf, dass keine Leerzeilen zwischen der Anweisung `dn:` und dem gewünschten Ende des Abschnitts verbleiben. Im folgenden Beispiel wird TLS verwendet, um einen sicheren Kanal zu implementieren. Der erste Abschnitt stellt die globale Konfiguration dar:

[.programlisting]
....
#
# See slapd-config(5) for details on configuration options.
# This file should NOT be world readable.
#
dn: cn=config
objectClass: olcGlobal
cn: config
#
#
# Define global ACLs to disable default read access.
#
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
olcTLSCertificateFile: /usr/local/etc/openldap/server.crt
olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key
olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt
#olcTLSCipherSuite: HIGH
olcTLSProtocolMin: 3.1
olcTLSVerifyClient: never
....

Hier müssen die Zertifizierungsstelle, das Serverzertifikat und der private Schlüssel des Servers angegeben werden. Es wird empfohlen, den Clients die Wahl der Sicherheits-Chiffre zu überlassen und die Option `olcTLSCipherSuite` wegzulassen (sie ist inkompatibel mit anderen TLS-Clients als [.filename]#openssl#). Mit der Option `olcTLSProtocolMin` verlangt der Server eine minimale Sicherheitsstufe; diese Option wird empfohlen. Während die Verifizierung für den Server verpflichtend ist, ist sie es für den Client nicht: `olcTLSVerifyClient: never`.

Der zweite Abschnitt behandelt die Backend-Module und kann wie folgt konfiguriert werden:

[.programlisting]
....
#
# Load dynamic backend modules:
#
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/local/libexec/openldap
olcModuleload: back_mdb.la
#olcModuleload: back_bdb.la
#olcModuleload: back_hdb.la
#olcModuleload: back_ldap.la
#olcModuleload: back_passwd.la
#olcModuleload: back_shell.la
....

Der dritte Abschnitt widmet sich dem Laden der benötigten ldif-Schemata, die von den Datenbanken verwendet werden sollen. Diese Dateien sind essentiell.

[.programlisting]
....
dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
include: file:///usr/local/etc/openldap/schema/core.ldif
include: file:///usr/local/etc/openldap/schema/cosine.ldif
include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif
include: file:///usr/local/etc/openldap/schema/nis.ldif
....

Als nächstes folgt der Abschnitt zur Frontend-Konfiguration:

[.programlisting]
....
# Frontend settings
#
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
olcAccess: to * by * read
#
# Sample global access control policy:
#       Root DSE: allow anyone to read it
#       Subschema (sub)entry DSE: allow anyone to read it
#       Other DSEs:
#               Allow self write access
#               Allow authenticated users read access
#               Allow anonymous users to authenticate
#
#olcAccess: to dn.base="" by * read
#olcAccess: to dn.base="cn=Subschema" by * read
#olcAccess: to *
#       by self write
#       by users read
#       by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
#
olcPasswordHash: {SSHA}
# {SSHA} is already the default for olcPasswordHash
....

Ein weiterer Abschnitt ist dem Konfigurations-Backend gewidmet. Der einzige Weg, später auf die OpenLDAP-Serverkonfiguration zuzugreifen, führt über den globalen Superuser.

[.programlisting]
....
dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: to * by * none
olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U
....

Der voreingestellte Benutzername für den Administrator lautet `cn=config`. Geben Sie `slappasswd` in einer Shell ein, wählen Sie ein Passwort und verwenden Sie dessen Hash in `olcRootPW`. Wird diese Option jetzt nicht angegeben, kann nach dem Import von [.filename]#slapd.ldif# niemand mehr den Abschnitt _global configuration_ ändern.

Der letzte Abschnitt befasst sich mit dem Datenbank-Backend:

[.programlisting]
....
#######################################################################
# LMDB database definitions
#######################################################################
#
dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcDbMaxSize: 1073741824
olcSuffix: dc=domain,dc=example
olcRootDN: cn=mdbadmin,dc=domain,dc=example
# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd-config(5) for details.
# Use of strong authentication encouraged.
olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
olcDbDirectory: /var/db/openldap-data
# Indices to maintain
olcDbIndex: objectClass eq
....

Diese Datenbank enthält den _eigentlichen Inhalt_ des LDAP-Verzeichnisses. Neben `mdb` sind weitere Backend-Typen verfügbar. Der Superuser dieser Datenbank (nicht zu verwechseln mit dem globalen) wird hier konfiguriert: ein Benutzername in `olcRootDN` und der Passworthash in `olcRootPW`; `slappasswd` kann wie zuvor benutzt werden.

Dieses http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;hb=HEAD[Repository] enthält vier Beispiele für [.filename]#slapd.ldif#. Lesen Sie diese Seite, um eine bestehende [.filename]#slapd.conf# in [.filename]#slapd.ldif# zu konvertieren. Beachten Sie, dass dies einige unbrauchbare Optionen einführen kann.

Wenn die Konfiguration abgeschlossen ist, muss [.filename]#slapd.ldif# in ein leeres Verzeichnis verschoben werden. Folgendes ist die empfohlene Vorgehensweise:

[source,shell]
....
# mkdir /usr/local/etc/openldap/slapd.d/
....

Importieren Sie die Konfigurationsdatenbank:

[source,shell]
....
# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif
....
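Ob der Import funktioniert hat, lässt sich grob daran erkennen, dass `slapadd` unterhalb von [.filename]#slapd.d/# LDIF-Dateien angelegt hat. Die folgende Skizze zählt diese Dateien; der Pfad entspricht dem oben verwendeten Verzeichnis:

```shell
# Skizze: pruefen, ob der Import LDIF-Dateien in slapd.d/ erzeugt hat
# (Pfad wie im Text oben; bei anderen Praefixen entsprechend anpassen)
conf_dir=/usr/local/etc/openldap/slapd.d

count_ldif_files() {
    find "$1" -name '*.ldif' | wc -l
}

if [ -d "$conf_dir" ]; then
    echo "Importierte LDIF-Dateien: $(count_ldif_files "$conf_dir")"
fi
```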
Starten Sie den [.filename]#slapd#-Daemon:

[source,shell]
....
# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/
....

Die Option `-d` kann, wie in man:slapd[8] beschrieben, zur Fehlersuche benutzt werden. Stellen Sie sicher, dass der Server läuft und korrekt arbeitet:

[source,shell]
....
# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
# extended LDIF
#
# LDAPv3
# base <> with scope baseObject
# filter: (objectclass=*)
# requesting: namingContexts
#

#
dn:
namingContexts: dc=domain,dc=example

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

Dem Server muss noch vertraut werden. Wenn dies noch nie zuvor geschehen ist, befolgen Sie diese Anweisungen. Installieren Sie das Paket oder den Port OpenSSL:

[source,shell]
....
# pkg install openssl
....

Aus dem Verzeichnis, in dem [.filename]#ca.crt# gespeichert ist (in diesem Beispiel [.filename]#/usr/local/etc/openldap#), starten Sie:

[source,shell]
....
# c_rehash .
....

Sowohl die CA als auch das Serverzertifikat werden nun in ihren jeweiligen Rollen korrekt erkannt. Um dies zu überprüfen, führen Sie den folgenden Befehl aus dem Verzeichnis der [.filename]#server.crt# aus:

[source,shell]
....
# openssl verify -verbose -CApath . server.crt
....

Falls [.filename]#slapd# ausgeführt wurde, muss der Daemon neu gestartet werden. Wie in [.filename]#/usr/local/etc/rc.d/slapd# angegeben, müssen die folgenden Zeilen in [.filename]#/etc/rc.conf# eingefügt werden, um [.filename]#slapd# beim Booten ordnungsgemäß auszuführen:

[.programlisting]
....
slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"
slapd_cn_config="YES"
....

[.filename]#slapd# bietet beim Booten keine Möglichkeit zur Fehlersuche. Überprüfen Sie dazu [.filename]#/var/log/debug.log#, die Ausgabe von `dmesg -a` und [.filename]#/var/log/messages#.
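Bei der Fehlersuche kann es helfen, die zuletzt protokollierten slapd-Meldungen aus den genannten Logdateien herauszufiltern. Eine minimale Skizze dafür:

```shell
# Skizze: die letzten slapd-Meldungen aus den Logdateien herausfiltern
# (Logdateien wie oben genannt)
last_slapd_messages() {
    grep -i slapd "$1" | tail -n 5
}

for log in /var/log/debug.log /var/log/messages; do
    if [ -r "$log" ]; then
        last_slapd_messages "$log"
    fi
done
```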
Das folgende Beispiel fügt die Gruppe `team` und den Benutzer `john` zur LDAP-Datenbank `domain.example` hinzu, die bislang leer ist. Erstellen Sie zunächst die Datei [.filename]#domain.ldif#:

[source,shell]
....
# cat domain.ldif
dn: dc=domain,dc=example
objectClass: dcObject
objectClass: organization
o: domain.example
dc: domain

dn: ou=groups,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: groups

dn: ou=users,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: users

dn: cn=team,ou=groups,dc=domain,dc=example
objectClass: top
objectClass: posixGroup
cn: team
gidNumber: 10001

dn: uid=john,ou=users,dc=domain,dc=example
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: John McUser
uid: john
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/john/
loginShell: /usr/bin/bash
userPassword: secret
....

Weitere Informationen finden Sie in der OpenLDAP-Dokumentation. Benutzen Sie `slappasswd`, um das Passwort durch einen Hash in `userPassword` zu ersetzen. Der in `loginShell` angegebene Pfad muss auf allen Systemen existieren, auf denen `john` sich anmelden darf.

Benutzen Sie schließlich den `mdb`-Administrator, um die Datenbank zu ändern:

[source,shell]
....
# ldapadd -W -D "cn=mdbadmin,dc=domain,dc=example" -f domain.ldif
....

Änderungen im Bereich _global configuration_ können nur vom globalen Superuser vorgenommen werden. Angenommen, die Option `olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` wurde ursprünglich definiert und soll nun gelöscht werden. Dazu erstellen Sie zunächst eine Datei mit folgendem Inhalt:

[source,shell]
....
# cat global_mod
dn: cn=config
changetype: modify
delete: olcTLSCipherSuite
....

Übernehmen Sie dann die Änderungen:

[source,shell]
....
# ldapmodify -f global_mod -x -D "cn=config" -W
....

Geben Sie bei Aufforderung das im Abschnitt _configuration backend_ gewählte Passwort ein.
Der Benutzername ist nicht erforderlich: Hier repräsentiert `cn=config` den DN des zu ändernden Datenbankabschnitts. Alternativ können Sie mit `ldapmodify` eine einzelne Zeile der Datenbank löschen, mit `ldapdelete` einen ganzen Eintrag.

Wenn etwas schief geht oder der globale Superuser nicht auf das Konfigurations-Backend zugreifen kann, ist es möglich, die gesamte Konfiguration zu löschen und neu zu schreiben:

[source,shell]
....
# rm -rf /usr/local/etc/openldap/slapd.d/
....

[.filename]#slapd.ldif# kann dann bearbeitet und erneut importiert werden. Bitte folgen Sie dieser Vorgehensweise nur, wenn keine andere Lösung verfügbar ist.

Dies ist nur die Konfiguration des Servers. Auf demselben Rechner kann auch ein LDAP-Client mit eigener, separater Konfiguration betrieben werden.

[[network-dhcp]]
== Dynamic Host Configuration Protocol (DHCP)

Das Dynamic Host Configuration Protocol (DHCP) ermöglicht es einem System, sich mit einem Netzwerk zu verbinden und die für die Kommunikation mit diesem Netzwerk nötigen Informationen zu beziehen. FreeBSD verwendet den von OpenBSD stammenden `dhclient`, um die Adressinformationen zu beziehen. FreeBSD installiert keinen DHCP-Server, aber es stehen einige Server in der FreeBSD Ports-Sammlung zur Verfügung. Das DHCP-Protokoll wird vollständig im http://www.freesoft.org/CIE/RFC/2131/[RFC 2131] beschrieben. Eine weitere, lehrreiche Informationsquelle existiert unter http://www.isc.org/downloads/dhcp[isc.org/downloads/dhcp/].

In diesem Abschnitt wird beschrieben, wie der integrierte DHCP-Client verwendet wird. Anschließend wird erklärt, wie ein DHCP-Server zu installieren und zu konfigurieren ist.

[NOTE]
====
Unter FreeBSD wird das Gerät man:bpf[4] für den DHCP-Server und den DHCP-Client benötigt. Das Gerät ist bereits im [.filename]#GENERIC#-Kernel enthalten. Benutzer, die es vorziehen, einen angepassten Kernel zu erstellen, müssen dieses Gerät behalten, wenn DHCP verwendet wird.
Es sei darauf hingewiesen, dass [.filename]#bpf# es privilegierten Benutzern ermöglicht, einen Paket-Sniffer auf dem System auszuführen.
====

=== Einen DHCP-Client konfigurieren

Die Unterstützung für den DHCP-Client ist im Installationsprogramm von FreeBSD enthalten, sodass ein neu installiertes System automatisch die Adressinformationen des Netzwerks vom DHCP-Server erhält. In crossref:bsdinstall[bsdinstall-post,"Benutzerkonten, Zeitzone, Dienste und Sicherheitsoptionen"] finden Sie Beispiele für eine Netzwerkkonfiguration.

`dhclient` beginnt von einem Clientrechner aus über den UDP-Port 68 Konfigurationsinformationen anzufordern. Der Server antwortet auf dem UDP-Port 67, indem er dem Client eine IP-Adresse zuweist und ihm weitere relevante Informationen über das Netzwerk, wie Netzmasken, Router und DNS-Server, mitteilt. Diese Informationen werden als DHCP-Lease bezeichnet und sind nur für eine bestimmte Zeit gültig, die vom Administrator des DHCP-Servers vorgegeben wird. Dadurch fallen verwaiste IP-Adressen, deren Clients nicht mehr mit dem Netzwerk verbunden sind, automatisch an den Server zurück. DHCP-Clients können sehr viele Informationen von einem DHCP-Server erhalten. Eine ausführliche Liste finden Sie in man:dhcp-options[5].

Das Gerät [.filename]#bpf# ist im [.filename]#GENERIC#-Kernel bereits enthalten. Für die Nutzung von DHCP muss also kein angepasster Kernel erzeugt werden. In einer angepassten Kernelkonfigurationsdatei muss das Gerät jedoch enthalten sein, damit DHCP ordnungsgemäß funktioniert.

Standardmäßig läuft die DHCP-Konfiguration bei FreeBSD im Hintergrund, also _asynchron_. Andere Startskripte laufen weiter, während DHCP fertig abgearbeitet wird, was den Systemstart beschleunigt.

DHCP im Hintergrund funktioniert gut, wenn der DHCP-Server schnell auf Anfragen der Clients antwortet. Jedoch kann DHCP auf manchen Systemen eine lange Zeit benötigen, um fertig zu werden.
Falls Netzwerkdienste gestartet werden, bevor DHCP die Informationen und Netzwerkadressen gesetzt hat, werden diese fehlschlagen. Durch die Verwendung von DHCP im _synchronen_ Modus wird das Problem verhindert, da die Startskripte pausiert werden, bis die DHCP-Konfiguration abgeschlossen ist.

Diese Zeile wird in [.filename]#/etc/rc.conf# verwendet, um den asynchronen Modus zu aktivieren:

[.programlisting]
....
ifconfig_fxp0="DHCP"
....

Die Zeile kann bereits vorhanden sein, wenn bei der Installation des Systems DHCP konfiguriert wurde. Ersetzen Sie _fxp0_ durch die entsprechende Schnittstelle. Die dynamische Konfiguration von Netzwerkkarten wird in crossref:config[config-network-setup,"Einrichten von Netzwerkkarten"] beschrieben.

Um stattdessen den synchronen Modus zu verwenden, der während des Systemstarts pausiert, bis die DHCP-Konfiguration abgeschlossen ist, benutzen Sie "SYNCDHCP":

[.programlisting]
....
ifconfig_fxp0="SYNCDHCP"
....

Es stehen weitere Optionen für den Client zur Verfügung. Suchen Sie in man:rc.conf[5] nach `dhclient`, wenn Sie an Einzelheiten interessiert sind.

Der DHCP-Client verwendet die folgenden Dateien:

* [.filename]#/etc/dhclient.conf#
+
Die Konfigurationsdatei von `dhclient`. Diese Datei enthält normalerweise nur Kommentare, da die Vorgabewerte zumeist ausreichend sind. Diese Konfigurationsdatei wird in man:dhclient.conf[5] beschrieben.
* [.filename]#/sbin/dhclient#
+
Weitere Informationen über dieses Kommando finden Sie in man:dhclient[8].
* [.filename]#/sbin/dhclient-script#
+
Das FreeBSD-spezifische Konfigurationsskript des DHCP-Clients. Es wird in man:dhclient-script[8] beschrieben und kann meist unverändert übernommen werden.
* [.filename]#/var/db/dhclient.leases.interface#
+
Der DHCP-Client verfügt über eine Datenbank, die alle derzeit gültigen Leases enthält und als Logdatei erzeugt wird. Diese Datei wird in man:dhclient.leases[5] beschrieben.
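Die zuletzt bezogene Adresse lässt sich direkt aus der Lease-Datenbank auslesen. Die folgende Skizze nimmt beispielhaft die Schnittstelle _fxp0_ an und gibt die `fixed-address` des letzten Lease-Eintrags aus:

```shell
# Skizze: letzte zugewiesene IP-Adresse aus der Lease-Datenbank auslesen
# (Annahme: Schnittstelle fxp0 wie in den Beispielen dieses Abschnitts)
lease_file=/var/db/dhclient.leases.fxp0

last_lease_ip() {
    # fixed-address-Zeilen haben die Form:  fixed-address 10.0.0.5;
    awk '/fixed-address/ { gsub(/;/, "", $2); ip = $2 } END { if (ip != "") print ip }' "$1"
}

if [ -r "$lease_file" ]; then
    last_lease_ip "$lease_file"
fi
```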
[[network-dhcp-server]]
=== Einen DHCP-Server installieren und einrichten

Dieser Abschnitt beschreibt die Einrichtung eines FreeBSD-Systems als DHCP-Server. Dazu wird die DHCP-Implementation von ISC (Internet Systems Consortium) verwendet. Diese Implementation und die Dokumentation können als Port oder Paket package:net/isc-dhcp44-server[] installiert werden.

Der Port package:net/isc-dhcp44-server[] installiert eine Beispiel-Konfigurationsdatei. Kopieren Sie [.filename]#/usr/local/etc/dhcpd.conf.example# nach [.filename]#/usr/local/etc/dhcpd.conf# und nehmen Sie die Änderungen an der neuen Datei vor. Diese Konfigurationsdatei umfasst Deklarationen für Subnetze und Rechner, die den DHCP-Clients zur Verfügung gestellt werden. Die folgenden Zeilen zeigen eine Beispielkonfiguration:

[.programlisting]
....
option domain-name "example.org";<.>
option domain-name-servers ns1.example.org;<.>
option subnet-mask 255.255.255.0;<.>

default-lease-time 600;<.>
max-lease-time 72400;<.>
ddns-update-style none;<.>

subnet 10.254.239.0 netmask 255.255.255.224 {
  range 10.254.239.10 10.254.239.20;<.>
  option routers rtr-239-0-1.example.org;<.>
}

host fantasia {
  hardware ethernet 08:00:07:26:c0:a5;<.>
  fixed-address fantasia.fugue.com;<.>
}
....

<.> Diese Option beschreibt die Standardsuchdomäne, die den Clients zugewiesen wird. Weitere Informationen finden Sie in man:resolv.conf[5].
<.> Diese Option legt eine durch Kommata getrennte Liste von DNS-Servern fest, die von den Clients verwendet werden sollen. Die Server können über den Namen (FQDN) oder die IP-Adresse spezifiziert werden.
<.> Die den Clients zugewiesene Subnetzmaske.
<.> Die Voreinstellung für die Ablaufzeit des Lease in Sekunden. Ein Client kann diesen Wert in der Konfiguration überschreiben.
<.> Die maximale Zeitdauer, für die der Server Leases vergibt. Sollte ein Client eine längere Zeitspanne anfordern, wird dennoch nur der Wert `max-lease-time` zugewiesen.
<.> Die Voreinstellung `none` deaktiviert dynamische DNS-Updates.
Bei der Einstellung `interim` aktualisiert der DHCP-Server den DNS-Server, wenn ein Lease vergeben oder zurückgezogen wurde. Ändern Sie die Voreinstellung nicht, wenn der Server so konfiguriert wurde, dynamische DNS-Updates zu unterstützen.
<.> Diese Zeile erstellt einen Pool der verfügbaren IP-Adressen, die für die Zuweisung an DHCP-Clients reserviert sind. Der Bereich muss für das angegebene Netz oder Subnetz aus der vorherigen Zeile gültig sein.
<.> Legt das Standard-Gateway für das Netz oder Subnetz fest, das nach der öffnenden Klammer `{` gültig ist.
<.> Bestimmt die Hardware-MAC-Adresse eines Clients, durch die der DHCP-Server den Client erkennt, der eine Anforderung an ihn stellt.
<.> Einem Rechner soll immer die gleiche IP-Adresse zugewiesen werden. Hier ist auch ein Rechnername gültig, da der DHCP-Server den Rechnernamen auflöst, bevor er das Lease zuweist.

Die Konfigurationsdatei unterstützt viele weitere Optionen. Lesen Sie man:dhcpd.conf[5], die mit dem Server installiert wird, für Details und Beispiele.

Nachdem [.filename]#dhcpd.conf# konfiguriert ist, aktivieren Sie den DHCP-Server in [.filename]#/etc/rc.conf#:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Dabei müssen Sie `dc0` durch die Gerätedatei (mehrere Gerätedateien müssen durch Leerzeichen getrennt werden) ersetzen, die der DHCP-Server auf Anfragen von DHCP-Clients hin überwachen soll.

Starten Sie den Server mit folgendem Befehl:

[source,shell]
....
# service isc-dhcpd start
....

Künftige Änderungen an der Konfiguration des Servers erfordern, dass der Dienst `dhcpd` gestoppt und anschließend mit man:service[8] gestartet wird.

* [.filename]#/usr/local/sbin/dhcpd#
+
Weitere Informationen zu dhcpd finden Sie in man:dhcpd[8].
* [.filename]#/usr/local/etc/dhcpd.conf#
+
Die Konfigurationsdatei des Servers muss alle Informationen enthalten, die an die Clients weitergegeben werden sollen. Außerdem sind hier Informationen zur Konfiguration des Servers enthalten.
Diese Konfigurationsdatei wird in man:dhcpd.conf[5] beschrieben.
* [.filename]#/var/db/dhcpd.leases#
+
Der DHCP-Server hat eine Datenbank, die alle vergebenen Leases enthält. Diese wird als Logdatei erzeugt. man:dhcpd.leases[5] enthält eine ausführliche Beschreibung.
* [.filename]#/usr/local/sbin/dhcrelay#
+
Dieser Daemon wird in komplexen Umgebungen verwendet, in denen ein DHCP-Server eine Anfrage eines Clients an einen DHCP-Server in einem separaten Netzwerk weiterleitet. Wenn Sie diese Funktion benötigen, müssen Sie package:net/isc-dhcp44-relay[] installieren. Weitere Informationen zu diesem Thema finden Sie in man:dhcrelay[8].

[[network-dns]]
== Domain Name System (DNS)

DNS ist das für die Umwandlung von Rechnernamen in IP-Adressen zuständige Protokoll. Im Internet wird DNS durch ein komplexes System von autoritativen Root-Nameservern, Top-Level-Domain-Servern (TLD) sowie anderen kleineren Nameservern verwaltet, die individuelle Domaininformationen speichern und untereinander abgleichen. Für einfache DNS-Anfragen wird auf dem lokalen System kein Nameserver benötigt.

Die folgende Tabelle beschreibt einige mit DNS verbundene Begriffe:

.DNS-Begriffe
[cols="1,1", frame="none", options="header"]
|===
| Begriff
| Bedeutung

|Forward-DNS
|Rechnernamen in IP-Adressen umwandeln.

|Origin (Ursprung)
|Die in einer bestimmten Zonendatei beschriebene Domäne.

|Resolver
|Ein Systemprozess, durch den ein Rechner Zoneninformationen von einem Nameserver anfordert.

|Reverse-DNS
|Die Umwandlung von IP-Adressen in Rechnernamen.

|Root-Zone
|Der Beginn der Internet-Zonenhierarchie. Alle Zonen befinden sich innerhalb der Root-Zone. Dies ist analog zu einem Dateisystem, in dem sich alle Dateien und Verzeichnisse innerhalb des Wurzelverzeichnisses befinden.

|Zone
|Eine individuelle Domäne, Unterdomäne oder ein Teil von DNS, der von der gleichen Autorität verwaltet wird.
|===

Es folgen nun einige Zonenbeispiele:

* Innerhalb der Dokumentation wird die Root-Zone in der Regel mit `.` bezeichnet.
* `org.` ist eine Top-Level-Domain (TLD) innerhalb der Root-Zone.
* `example.org.` ist eine Zone innerhalb der `org.`-TLD.
* `1.168.192.in-addr.arpa.` ist die Zone mit allen IP-Adressen des Bereichs `192.168.1.*`.

Wie man an diesen Beispielen erkennen kann, befindet sich der spezifischere Teil eines Rechnernamens auf der linken Seite der Adresse. `example.org.` beschreibt einen Rechner also genauer als `org.`, während `org.` genauer als die Root-Zone ist. Jeder Teil des Rechnernamens hat Ähnlichkeiten mit einem Dateisystem, in dem etwa [.filename]#/dev# dem Wurzelverzeichnis untergeordnet ist.

=== Gründe für die Verwendung eines Nameservers

Es gibt zwei Arten von Nameservern: autoritative Nameserver sowie zwischenspeichernde (cachende, auch als auflösende bekannte) Nameserver.

Ein autoritativer Nameserver ist notwendig, wenn

* Sie anderen verbindliche DNS-Auskünfte erteilen wollen.
* eine Domain, beispielsweise `example.org`, registriert wird und den zu dieser Domain gehörenden Rechnern IP-Adressen zugewiesen werden müssen.
* ein IP-Adressblock Reverse-DNS-Einträge benötigt, um IP-Adressen in Rechnernamen auflösen zu können.
* ein Backup-Nameserver (auch Slaveserver genannt) oder ein zweiter Nameserver auf Anfragen antworten soll.

Ein cachender Nameserver ist notwendig, weil

* ein lokaler DNS-Server Daten zwischenspeichern und daher schneller auf Anfragen reagieren kann als ein entfernter Server. Wird nach `www.FreeBSD.org` gesucht, leitet der Resolver diese Anfrage an den Nameserver des ISPs weiter und nimmt danach das Ergebnis der Abfrage entgegen. Existiert ein lokaler, zwischenspeichernder DNS-Server, muss dieser die Anfrage nur einmal nach außen weitergeben. Für alle weiteren Anfragen ist dies nicht mehr nötig, da diese Information nun lokal gespeichert ist.

=== DNS-Server Konfiguration

Unbound ist im Basissystem von FreeBSD enthalten.
In der Voreinstellung bietet es nur die DNS-Auflösung für den lokalen Rechner. Obwohl das im Basissystem enthaltene Unbound konfiguriert werden kann, um Namensauflösung über den lokalen Rechner hinaus bereitzustellen, ist es empfehlenswert, für solche Anforderungen Unbound aus der FreeBSD Ports-Sammlung zu installieren.

Um Unbound zu aktivieren, fügen Sie folgende Zeile in [.filename]#/etc/rc.conf# ein:

[.programlisting]
....
local_unbound_enable="YES"
....

Alle vorhandenen Nameserver aus [.filename]#/etc/resolv.conf# werden als Forwarder in der neuen Unbound-Konfiguration benutzt.

[NOTE]
====
Wenn einer der aufgeführten Nameserver kein DNSSEC unterstützt, wird die lokale DNS-Auflösung nicht funktionieren. Testen Sie jeden Server und entfernen Sie die Server, die den Test nicht bestehen. Das folgende Kommando zeigt den Trust Tree beziehungsweise einen Fehler für den Nameserver `192.168.1.1`:
====

[source,shell]
....
# drill -S FreeBSD.org @192.168.1.1
....

Nachdem jeder Server für DNSSEC konfiguriert ist, starten Sie Unbound:

[source,shell]
....
# service local_unbound onestart
....

Dieses Kommando sorgt für die Aktualisierung von [.filename]#/etc/resolv.conf#, so dass Abfragen für DNSSEC-gesicherte Domains jetzt funktionieren. Führen Sie folgenden Befehl aus, um den DNSSEC Trust Tree für FreeBSD.org zu überprüfen:

[source,shell]
....
% drill -S FreeBSD.org
;; Number of trusted keys: 1
;; Chasing: freebsd.org. A

DNSSEC Trust tree:
freebsd.org. (A)
|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)
    |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)
    |---freebsd.org. (DS keytag: 32659 digest type: 2)
        |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)
            |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)
            |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)
            |---org. (DS keytag: 21366 digest type: 1)
            |   |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
            |       |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
            |---org. (DS keytag: 21366 digest type: 2)
                |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
                    |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
;; Chase successful
....

[[network-apache]]
== Apache HTTP-Server

Der Open-Source Apache HTTP-Server ist der am weitesten verbreitete Webserver. Dieser Webserver ist nicht im Basissystem von FreeBSD enthalten, kann aber als Paket oder Port package:www/apache24[] installiert werden.

Dieser Abschnitt beschreibt die Konfiguration der Version 2._x_ des Apache HTTP-Servers. Weiterführende Informationen und Konfigurationsanweisungen für Apache 2.X finden Sie unter http://httpd.apache.org/[httpd.apache.org].

=== Apache konfigurieren und starten

Der Apache HTTP-Server wird unter FreeBSD primär in [.filename]#/usr/local/etc/apache2x/httpd.conf# konfiguriert, wobei das _x_ die Versionsnummer darstellt. In dieser Textdatei leitet ein `#` einen Kommentar ein. Die am häufigsten verwendeten Optionen sind:

`ServerRoot "/usr/local"`::
Legt das Standardwurzelverzeichnis für die Apache-Installation fest. Binärdateien werden in die Verzeichnisse [.filename]#bin# und [.filename]#sbin# unterhalb des Serverwurzelverzeichnisses installiert, während sich Konfigurationsdateien im Unterverzeichnis [.filename]#etc/apache2x# befinden.

`ServerAdmin you@example.com`::
Die E-Mail-Adresse, an die Mitteilungen über Serverprobleme geschickt werden. Diese Adresse erscheint auf vom Server erzeugten Seiten, beispielsweise auf Fehlerseiten.

`ServerName www.example.com:80`::
Erlaubt dem Administrator, einen Rechnernamen festzulegen, den der Server an die Clients sendet. Beispielsweise könnte `www` statt des richtigen Rechnernamens verwendet werden. Wenn das System keinen eingetragenen DNS-Namen hat, kann stattdessen die IP-Adresse eingetragen werden. Lauscht der Server auf einem anderen Port, tauschen Sie die `80` gegen eine entsprechende Portnummer.

`DocumentRoot "/usr/local/www/apache2__x__/data"`::
Das Verzeichnis, in dem die Dokumente abgelegt sind.
In der Voreinstellung befinden sich alle Seiten in diesem Verzeichnis; durch symbolische Links oder Aliase lassen sich aber auch andere Orte festlegen.

Es ist empfehlenswert, eine Sicherungskopie der Apache-Konfigurationsdatei anzulegen, bevor Änderungen durchgeführt werden. Wenn die Konfiguration von Apache abgeschlossen ist, speichern Sie die Datei und überprüfen Sie die Konfiguration mit `apachectl`. Der Befehl `apachectl configtest` sollte `Syntax OK` zurückgeben.

Um den Apache beim Systemstart zu starten, fügen Sie folgende Zeile in [.filename]#/etc/rc.conf# ein:

[.programlisting]
....
apache24_enable="YES"
....

Wenn Sie während des Systemstarts weitere Parameter an den Apache übergeben wollen, können Sie diese durch eine zusätzliche Zeile in [.filename]#rc.conf# angeben:

[.programlisting]
....
apache24_flags=""
....

Wenn `apachectl` keine Konfigurationsfehler meldet, starten Sie `httpd`:

[source,shell]
....
# service apache24 start
....

Sie können den `httpd`-Dienst testen, indem Sie `http://_localhost_` in einen Browser eingeben, wobei Sie _localhost_ durch den vollqualifizierten Domainnamen der Maschine ersetzen, auf der der `httpd` läuft. Die Standard-Webseite, die angezeigt wird, ist [.filename]#/usr/local/www/apache24/data/index.html#.

Die Konfiguration von Apache kann bei nachfolgenden Änderungen an der Konfigurationsdatei auch bei laufendem `httpd` auf Fehler überprüft werden. Geben Sie dazu folgendes Kommando ein:

[source,shell]
....
# service apache24 configtest
....

[NOTE]
====
Es ist wichtig zu beachten, dass `configtest` kein man:rc[8]-Standard ist und somit nicht zwingend mit anderen man:rc[8]-Startskripten funktioniert.
====

=== Virtual Hosting

Virtual Hosting ermöglicht es, mehrere Webseiten auf einem Apache-Server laufen zu lassen. Die virtuellen Hosts können _IP-basiert_ oder _namensbasiert_ sein. IP-basiertes virtual Hosting verwendet eine IP-Adresse für jede Webseite.
Beim namensbasierten virtual Hosting wird der HTTP/1.1-Header der Clients dazu verwendet, den Rechnernamen zu bestimmen.
Dadurch wird es möglich, mehrere Domains unter der gleichen IP-Adresse zu betreiben.

Damit der Apache namensbasierte virtuelle Domains verwalten kann, fügen Sie für jede Webseite einen separaten `VirtualHost`-Block ein.
Wenn der Webserver beispielsweise `www.domain.tld` heißt und die virtuelle Domain `www.someotherdomain.tld` eingerichtet werden soll, ergänzen Sie [.filename]#httpd.conf# um folgende Einträge:

[.programlisting]
....
<VirtualHost *>
    ServerName www.domain.tld
    DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
    ServerName www.someotherdomain.tld
    DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

Setzen Sie für jeden virtuellen Host die entsprechenden Werte für `ServerName` und `DocumentRoot`.

Ausführliche Informationen zum Einrichten von virtuellen Hosts finden Sie in der offiziellen Apache-Dokumentation unter http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Häufig verwendete Apache-Module

Apache verwendet Module, die den Server um zusätzliche Funktionen erweitern.
Eine vollständige Auflistung der zur Verfügung stehenden Module und Konfigurationsdetails finden Sie unter http://httpd.apache.org/docs/current/mod/[http://httpd.apache.org/docs/current/mod/].

In FreeBSD können einige Module mit dem Port package:www/apache24[] kompiliert werden.
Geben Sie in [.filename]#/usr/ports/www/apache24# `make config` ein, um zu sehen, welche Module zur Verfügung stehen und welche Module in der Voreinstellung aktiviert sind.
Wenn ein Modul nicht zusammen mit dem Port kompiliert wird, bietet die Ports-Sammlung die Möglichkeit, viele Module zu installieren.
Dieser Abschnitt beschreibt drei der am häufigsten verwendeten Module.

==== SSL-Unterstützung

Früher erforderte die Unterstützung von SSL innerhalb von Apache ein separates Modul namens [.filename]#mod_ssl#.
Dies ist nicht mehr der Fall und die Installation des Apache-Webservers wird standardmäßig mit SSL-Unterstützung ausgeliefert.
Ein Beispiel, wie Sie SSL-Unterstützung für einen Webserver aktivieren können, finden Sie in der Datei [.filename]#httpd-ssl.conf# im Verzeichnis [.filename]#/usr/local/etc/apache24/extra#.
In diesem Verzeichnis befindet sich auch eine Beispieldatei namens [.filename]#ssl.conf-sample#.
Es wird empfohlen, beide Dateien zu überprüfen, um sichere Webseiten auf dem Apache-Webserver einzurichten.

Nachdem die Konfiguration von SSL abgeschlossen ist, muss bei der folgenden Zeile in [.filename]#httpd.conf# das Kommentarzeichen entfernt werden, um die Änderungen beim nächsten Neustart oder erneuten Laden der Konfiguration zu aktivieren:

[.programlisting]
....
#Include etc/apache24/extra/httpd-ssl.conf
....

[WARNING]
====
SSL in den Versionen 2 und 3 hat bekannte Schwachstellen.
Es wird dringend empfohlen, TLS in den Versionen 1.2 und 1.3 anstelle der älteren SSL-Optionen zu aktivieren.
Dies kann durch die folgenden Einstellungen in [.filename]#ssl.conf# erreicht werden:
====

[.programlisting]
....
SSLProtocol all -SSLv3 -SSLv2 +TLSv1.2 +TLSv1.3
SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
....

Um die Konfiguration von SSL im Webserver abzuschließen, entfernen Sie den Kommentar in der folgenden Zeile, um sicherzustellen, dass die Konfiguration bei einem Neustart oder beim erneuten Laden der Konfiguration von Apache übernommen wird:

[.programlisting]
....
# Secure (SSL/TLS) connections
Include etc/apache24/extra/httpd-ssl.conf
....

Diese Zeilen dürfen in [.filename]#httpd.conf# ebenfalls nicht auskommentiert sein, damit SSL in Apache vollständig unterstützt wird:

[.programlisting]
....
LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so
LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
LoadModule ssl_module libexec/apache24/mod_ssl.so
....
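Solange noch kein Zertifikat einer Zertifizierungsstelle vorliegt, kann zum Testen ein selbstsigniertes Zertifikat verwendet werden. Das folgende Kommando ist nur eine Skizze: Dateinamen, Gültigkeitsdauer und der Common Name sind frei gewählte Annahmen, und Browser zeigen bei selbstsignierten Zertifikaten weiterhin Warnungen an:

[source,shell]
....
# openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
    -subj "/CN=www.example.com" \
    -keyout /usr/local/etc/apache24/server.key \
    -out /usr/local/etc/apache24/server.crt
....

Die gewählten Pfade müssen zu den Direktiven `SSLCertificateFile` und `SSLCertificateKeyFile` in [.filename]#httpd-ssl.conf# passen.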
Der nächste Schritt ist die Kooperation mit einer Zertifizierungsstelle, um die entsprechenden Zertifikate auf dem System installieren zu lassen.
Dadurch wird eine Vertrauenskette für die Webseite etabliert und jegliche Warnungen vor selbstsignierten Zertifikaten verhindert.

==== [.filename]#mod_perl#

Das Modul [.filename]#mod_perl# macht es möglich, vollständig in Perl geschriebene Apache-Module zu erzeugen.
Da der Perl-Interpreter in den Server eingebettet wird, muss weder ein externer Interpreter noch Perl zusätzlich aufgerufen werden.

[.filename]#mod_perl# wird über den Port oder das Paket package:www/mod_perl2[] installiert.
Dokumentation für dieses Modul finden Sie unter http://perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/index.html].

==== [.filename]#mod_php#

_PHP: Hypertext Preprocessor_ (PHP) ist eine vielseitig verwendbare Skriptsprache, die besonders für die Web-Entwicklung geeignet ist.
PHP kann in HTML eingebettet werden und ähnelt von der Syntax her Sprachen wie C, Java(TM) und Perl.
Das Hauptanliegen von PHP ist es, Web-Entwicklern die rasche Erstellung von dynamisch erzeugten Internetseiten zu ermöglichen.

Damit PHP und weitere in PHP geschriebene Funktionen unterstützt werden, muss das entsprechende Paket installiert werden.
Sie können mit `pkg` die Paketdatenbank nach allen unterstützten PHP-Versionen durchsuchen:

[source,shell]
....
# pkg search php
....

Die Ausgabe ist eine Liste mit Versionen und Funktionen des jeweiligen Pakets.
Die Komponenten sind vollständig modular, d.h. die Funktionen werden durch die Installation des entsprechenden Pakets aktiviert.
Geben Sie folgenden Befehl ein, um PHP-Version 7.4 für Apache zu installieren:

[source,shell]
....
# pkg install mod_php74
....

Falls irgendwelche Pakete Abhängigkeiten besitzen, werden diese zusätzlichen Pakete ebenfalls installiert.

Standardmäßig ist PHP nicht aktiviert.
Die folgenden Zeilen müssen in der Apache-Konfigurationsdatei unterhalb von [.filename]#/usr/local/etc/apache24# hinzugefügt werden, um PHP zu aktivieren:

[.programlisting]
....
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>
....

Zusätzlich muss auch der `DirectoryIndex` in der Konfigurationsdatei aktualisiert werden und Apache muss entweder neu gestartet oder die Konfiguration neu geladen werden, damit die Änderungen wirksam werden.

Mit `pkg` kann die Unterstützung für viele weitere PHP-Funktionen installiert werden.
Um beispielsweise die Unterstützung für XML oder SSL zu erhalten, installieren Sie die entsprechenden Pakete:

[source,shell]
....
# pkg install php74-xml php74-openssl
....

Wie zuvor muss die Konfiguration von Apache neu geladen werden, damit die Änderungen wirksam werden.
Dies gilt auch für Fälle, in denen lediglich ein Modul installiert wurde.
Geben Sie folgenden Befehl ein, um einen geordneten Neustart durchzuführen und die Konfiguration neu zu laden:

[source,shell]
....
# apachectl graceful
....

Sobald die Installation abgeschlossen ist, gibt es zwei Möglichkeiten, um eine Liste der installierten PHP-Module und Informationen über die Umgebung der Installation zu erhalten.
Die erste Möglichkeit besteht darin, die vollständige PHP-Binärdatei zu installieren und den Befehl auszuführen, um die Informationen zu erhalten:

[source,shell]
....
# pkg install php74
....

[source,shell]
....
# php -i | less
....

Da die Ausgabe des Befehls sehr umfangreich ist, ist die Weiterleitung an einen Pager, wie beispielsweise `more` oder `less`, sinnvoll.

Um Änderungen an der globalen Konfiguration von PHP vorzunehmen, gibt es schließlich eine gut dokumentierte Datei, die unter [.filename]#/usr/local/etc/php.ini# installiert wird.
Zum Zeitpunkt der Installation existiert diese Datei noch nicht, da zwei Versionen zur Auswahl stehen: [.filename]#php.ini-development# und [.filename]#php.ini-production#.
Diese Dateien sind Ansatzpunkte, die Administratoren bei der Implementierung unterstützen sollen.

=== Dynamische Webseiten

Neben mod_perl und mod_php stehen noch weitere Sprachen zur Erstellung von dynamischen Inhalten zur Verfügung.
Dazu gehören auch Django und Ruby on Rails.

==== Django

Bei Django handelt es sich um ein unter der BSD-Lizenz verfügbares Framework zur schnellen Erstellung von mächtigen Internet-Applikationen.
Es beinhaltet einen objekt-relationalen Mapper (wodurch Datentypen als Python-Objekte entwickelt werden können) sowie eine API für den dynamischen Datenbankzugriff auf diese Objekte, ohne dass Entwickler jemals SQL-Code schreiben müssen.
Zusätzlich existiert ein umfangreiches Template-System, wodurch die Programmlogik von der HTML-Präsentation getrennt werden kann.

Django setzt das Modul mod_python und eine SQL-Datenbank voraus.
In FreeBSD wird bei der Installation von package:www/py-django[] automatisch [.filename]#mod_python# installiert.
Als Datenbanken werden PostgreSQL, MySQL und SQLite unterstützt, wobei SQLite die Voreinstellung ist.
Wenn Sie die Datenbank ändern möchten, geben Sie in [.filename]#/usr/ports/www/py-django# `make config` ein und installieren Sie den Port neu.

Nachdem Django installiert ist, benötigt die Anwendung ein Projektverzeichnis und die Apache-Konfiguration, um den eingebetteten Python-Interpreter zu nutzen.
Dieser Interpreter wird verwendet, um die Anwendung für spezifische URLs der Seite aufzurufen.
Damit Apache Anfragen für bestimmte URLs an die Web-Applikation übergeben kann, müssen Sie den vollständigen Pfad zum Projektverzeichnis in [.filename]#httpd.conf# festlegen:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/pfad/zu/den/django/paketen/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonAutoReload On
    PythonDebug On
</Location>
....
Weitere Informationen zur Verwendung von Django finden Sie unter https://docs.djangoproject.com/en/1.6/[https://docs.djangoproject.com/en/1.6/].

==== Ruby on Rails

Ruby on Rails ist ein weiteres, als Open Source verfügbares Webframework.
Es bietet einen kompletten Entwicklungsstack und erlaubt es Webentwicklern, umfangreiche und mächtige Applikationen in kurzer Zeit zu programmieren.
Unter FreeBSD kann das Framework über den Port oder das Paket package:www/rubygem-rails[] installiert werden.

Weitere Informationen zur Verwendung von Ruby on Rails finden Sie unter http://rubyonrails.org/documentation[http://rubyonrails.org/documentation].

[[network-ftp]]
== File Transfer Protocol (FTP)

Das File Transfer Protocol (FTP) ermöglicht auf einfache Art und Weise den Dateiaustausch mit einem FTP-Server.
Der FTP-Server ftpd ist bei FreeBSD bereits im Basissystem enthalten.

FreeBSD verwendet mehrere Konfigurationsdateien, um den Zugriff auf den FTP-Server zu kontrollieren.
Dieser Abschnitt fasst diese Dateien zusammen.
In man:ftpd[8] finden Sie weitere Informationen über den integrierten FTP-Server.

=== Konfiguration

Der wichtigste Punkt ist hier die Entscheidung darüber, welche Benutzer auf den FTP-Server zugreifen dürfen.
Ein FreeBSD-System verfügt über diverse Systembenutzerkonten, die jedoch nicht auf den FTP-Server zugreifen sollen.
Die Datei [.filename]#/etc/ftpusers# enthält alle Benutzer, die vom FTP-Zugriff ausgeschlossen sind.
In der Voreinstellung gilt dies auch für die gerade erwähnten Systembenutzerkonten.
Sie können über diese Datei weitere Benutzer vom FTP-Zugriff ausschließen.

In einigen Fällen kann es wünschenswert sein, den Zugang für manche Benutzer einzuschränken, ohne dabei FTP komplett zu verbieten.
Dazu passen Sie [.filename]#/etc/ftpchroot#, wie in man:ftpchroot[5] beschrieben, entsprechend an.
Diese Datei enthält Benutzer und Gruppen sowie die für sie geltenden Einschränkungen für FTP.
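Eine [.filename]#/etc/ftpchroot# könnte beispielsweise so aussehen; die hier verwendeten Benutzer- und Gruppennamen sind frei gewählte Annahmen, das genaue Format beschreibt man:ftpchroot[5]:

[.programlisting]
....
# Benutzer 'webuser' per chroot auf sein Heimatverzeichnis beschraenken
webuser
# Alle Mitglieder der Gruppe 'ftpusers' ebenfalls einschraenken
@ftpusers
....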
Um anonymen FTP-Zugriff auf dem Server zu aktivieren, muss ein Benutzer `ftp` auf dem FreeBSD-System angelegt werden.
Danach können sich Benutzer mit dem Benutzernamen `ftp` oder `anonymous` am FTP-Server anmelden.
Das Passwort ist dabei beliebig, allerdings wird dazu in der Regel eine E-Mail-Adresse verwendet.
Meldet sich ein anonymer Benutzer an, aktiviert der FTP-Server man:chroot[2], um den Zugriff auf das Heimatverzeichnis des Benutzers `ftp` zu beschränken.

Es gibt zwei Textdateien, deren Inhalt den FTP-Clients bei der Anmeldung angezeigt wird.
Der Inhalt von [.filename]#/etc/ftpwelcome# wird angezeigt, bevor der Login-Prompt erscheint.
Nach einer erfolgreichen Anmeldung wird der Inhalt von [.filename]#/etc/ftpmotd# angezeigt.
Beachten Sie aber, dass es sich dabei um einen Pfad relativ zur Umgebung des anzumeldenden Benutzers handelt.
Bei einer anonymen Anmeldung würde also der Inhalt von [.filename]#~ftp/etc/ftpmotd# angezeigt.

Sobald der FTP-Server konfiguriert ist, setzen Sie die entsprechende Variable in [.filename]#/etc/rc.conf#, damit der Dienst beim Booten gestartet wird:

[.programlisting]
....
ftpd_enable="YES"
....

Starten Sie den Dienst:

[source,shell]
....
# service ftpd start
....

Testen Sie die Verbindung zum FTP-Server, indem Sie folgendes eingeben:

[source,shell]
....
% ftp localhost
....

=== Wartung

Der ftpd-Daemon verwendet man:syslog[3], um Protokolldateien zu erstellen.
In der Voreinstellung werden alle FTP betreffenden Nachrichten nach [.filename]#/var/log/xferlog# geschrieben.
Dies lässt sich aber durch das Einfügen der folgenden Zeile in [.filename]#/etc/syslog.conf# ändern:

[.programlisting]
....
ftp.info      /var/log/xferlog
....

[NOTE]
====
Beachten Sie, dass mit dem Betrieb eines anonymen FTP-Servers verschiedene Sicherheitsrisiken verbunden sind.
Problematisch ist hier vor allem die Erlaubnis zum anonymen Upload von Dateien.
Dadurch könnte der Server zur Verbreitung von illegaler oder nicht lizenzierter Software oder noch Schlimmerem missbraucht werden.
Wenn anonyme FTP-Uploads dennoch erforderlich sind, sollten Sie die Zugriffsrechte so setzen, dass solche Dateien erst nach Zustimmung eines Administrators von anderen Benutzern heruntergeladen werden können.
====

[[network-samba]]
== Datei- und Druckserver für Microsoft(R) Windows(R)-Clients (Samba)

Samba ist ein beliebtes Open Source Softwarepaket, das Datei- und Druckdienste über das SMB/CIFS-Protokoll zur Verfügung stellt.
Dieses Protokoll ist in Microsoft(R) Windows(R)-Systemen enthalten und kann über die Installation der Samba-Client-Bibliotheken in andere Betriebssysteme integriert werden.
Das Protokoll ermöglicht es Clients, auf freigegebene Daten und Drucker zuzugreifen, so als ob es sich um lokale Drucker und Festplatten handeln würde.

Unter FreeBSD können die Samba-Client-Bibliotheken über den Port oder das Paket package:net/samba410[] installiert werden.
Der Client ermöglicht es einem FreeBSD-System, auf SMB/CIFS-Freigaben in einem Microsoft(R) Windows(R)-Netzwerk zuzugreifen.

Ein FreeBSD-System kann auch als Samba-Server agieren, wenn Sie den Port oder das Paket package:net/samba410[] installieren.
Dies erlaubt es dem Administrator, SMB/CIFS-Freigaben auf dem FreeBSD-System einzurichten, auf welche dann Clients mit Microsoft(R) Windows(R) oder den Samba-Client-Bibliotheken zugreifen können.

=== Konfiguration des Servers

Samba wird in [.filename]#/usr/local/etc/smb4.conf# konfiguriert.
Diese Datei muss erstellt werden, bevor Samba benutzt werden kann.

Eine einfache [.filename]#smb4.conf#, wie hier gezeigt, stellt den Zugriff auf Verzeichnisse und Drucker für Windows(R)-Clients in einer Arbeitsgruppe (engl. Workgroup) zur Verfügung.
In aufwendigeren Installationen, in denen LDAP oder Active Directory zum Einsatz kommt, ist es einfacher, die [.filename]#smb4.conf# mit dem Werkzeug man:samba-tool[8] zu erstellen.
[.programlisting]
....
[global]
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = ExampleMachine
wins support = Yes
security = user
passdb backend = tdbsam

# Example: share /usr/src accessible only to 'developer' user
[src]
path = /usr/src
valid users = developer
writable = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755
....

==== Globale Einstellungen

Einstellungen für das Netzwerk werden in [.filename]#/usr/local/etc/smb4.conf# definiert:

`workgroup`::
Der Name der Arbeitsgruppe.

`netbios name`::
Legt den NetBIOS-Namen fest, unter dem der Samba-Server bekannt ist.
In der Regel handelt es sich dabei um den ersten Teil des DNS-Namens des Servers.

`server string`::
Legt die Beschreibung fest, die angezeigt wird, wenn mit `net view` oder anderen Netzwerkprogrammen Informationen über den Server angefordert werden.

`wins support`::
Legt fest, ob Samba als WINS-Server fungieren soll.
Aktivieren Sie die Unterstützung für WINS auf maximal einem Server im Netzwerk.

==== Samba absichern

Die wichtigsten Einstellungen in [.filename]#/usr/local/etc/smb4.conf# betreffen das zu verwendende Sicherheitsmodell sowie das Backend-Passwortformat.
Die folgenden Direktiven steuern diese Optionen:

`security`::
Die häufigsten Optionen sind `security = share` und `security = user`.
Wenn die Clients Benutzernamen verwenden, die den Benutzernamen auf dem FreeBSD-Rechner entsprechen, dann sollte die Einstellung `user level` verwendet werden.
Dies ist die Standardeinstellung.
Allerdings ist es dazu erforderlich, dass sich die Clients auf dem Rechner anmelden, bevor sie auf gemeinsame Ressourcen zugreifen können.
+
In der Einstellung `share level` müssen sich Clients nicht unter Verwendung eines gültigen Logins auf dem Rechner anmelden, bevor sie auf gemeinsame Ressourcen zugreifen können.
In früheren Samba-Versionen war dies die Standardeinstellung.
`passdb backend`::
Samba erlaubt verschiedene Backend-Authentifizierungsmodelle.
Clients können sich durch LDAP, NIS+, eine SQL-Datenbank oder eine Passwortdatei authentifizieren.
Die empfohlene Authentifizierungsmethode, `tdbsam`, ist ideal für einfache Netzwerke und wird hier vorgestellt.
Für größere oder komplexere Netzwerke wird `ldapsam` empfohlen.
`smbpasswd` war der frühere Standard und gilt mittlerweile als veraltet.

==== Samba Benutzer

Damit Windows(R)-Clients auf die Freigaben zugreifen können, müssen die FreeBSD-Benutzerkonten der `SambaSAMAccount`-Datenbank zugeordnet werden.
Für bereits vorhandene Benutzerkonten kann dazu man:pdbedit[8] benutzt werden:

[source,shell]
....
# pdbedit -a username
....

Dieser Abschnitt beschreibt lediglich die am häufigsten verwendeten Einstellungen.
Ausführliche Informationen zur Konfiguration von Samba finden Sie im http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/[Official Samba HOWTO].

=== Samba starten

Damit Samba beim Systemstart automatisch aktiviert wird, fügen Sie die folgende Zeile in [.filename]#/etc/rc.conf# ein:

[.programlisting]
....
samba_server_enable="YES"
....

Jetzt kann Samba direkt gestartet werden:

[source,shell]
....
# service samba_server start
Performing sanity check on Samba configuration: OK
Starting nmbd.
Starting smbd.
....

Samba verwendet drei Daemonen.
Sowohl nmbd als auch smbd werden durch `samba_server_enable` gestartet.
Wenn eine Namensauflösung über winbind benötigt wird, setzen Sie zusätzlich:

[.programlisting]
....
winbindd_enable="YES"
....

Samba kann jederzeit durch folgenden Befehl beendet werden:

[source,shell]
....
# service samba_server stop
....

Samba ist ein komplexes Softwarepaket mit umfassenden Funktionen, die eine weitreichende Integration von Microsoft(R) Windows(R)-Netzwerken ermöglichen.
Für eine Beschreibung dieser Zusatzfunktionen sollten Sie sich auf http://www.samba.org[http://www.samba.org] umsehen.
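Ob der Server seine Freigaben korrekt anbietet, lässt sich auch direkt vom FreeBSD-System aus prüfen, etwa mit `smbclient` aus dem Samba-Paket. Das folgende Beispiel listet alle Freigaben von `localhost` auf; der Benutzername `developer` ist aus dem obigen Konfigurationsbeispiel übernommen und muss an die eigene Umgebung angepasst werden:

[source,shell]
....
% smbclient -L //localhost -U developer
....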
[[network-ntp]]
== Die Uhrzeit mit NTP synchronisieren

Die interne Uhrzeit eines Computers ist nie ganz exakt.
Dies ist problematisch, da viele Dienste darauf angewiesen sind, dass die Computer im Netzwerk die exakte Uhrzeit übermitteln.
Die exakte Uhrzeit ist auch erforderlich, um sicherzustellen, dass die Zeitstempel der Dateien konsistent bleiben.
Das Network Time Protocol (NTP) bietet die Möglichkeit, die exakte Uhrzeit in einem Netzwerk zur Verfügung zu stellen.

FreeBSD enthält man:ntpd[8], das andere NTP-Server abfragen kann, um die Uhrzeit auf diesem Computer zu synchronisieren oder um selbst die Uhrzeit für andere Computer im Netzwerk bereitzustellen.

Dieser Abschnitt beschreibt die Konfiguration von ntpd unter FreeBSD.
Zusätzliche Dokumentation im HTML-Format finden Sie in [.filename]#/usr/share/doc/ntp/#.

=== NTP konfigurieren

FreeBSD enthält mit ntpd ein Werkzeug, das zur Synchronisation der Uhrzeit verwendet werden kann.
Die Konfiguration von ntpd erfolgt über Variablen in man:rc.conf[5] und [.filename]#/etc/ntp.conf# und wird in den folgenden Abschnitten beschrieben.

Ntpd kommuniziert über UDP mit seinen Peers.
Sämtliche Firewalls zwischen Ihrem Rechner und seinen NTP-Peers müssen so konfiguriert sein, dass UDP-Pakete auf Port 123 ein- und ausgehen können.

==== [.filename]#/etc/ntp.conf#

Ntpd liest [.filename]#/etc/ntp.conf#, um herauszufinden, welche NTP-Server abgefragt werden sollen.
Die Auswahl mehrerer NTP-Server wird empfohlen, falls einer der Server nicht erreichbar ist oder sich seine Uhr als unzuverlässig erweist.
Wenn ntpd Antworten erhält, bevorzugt es zuverlässige Server gegenüber weniger zuverlässigen.
Die abgefragten Server können lokal im Netzwerk, von einem ISP bereitgestellt oder aus einer http://support.ntp.org/bin/view/Servers/WebHome[Liste öffentlich zugänglicher NTP-Server] ausgewählt werden.
Wenn Sie einen öffentlichen NTP-Server auswählen, wählen Sie einen geografisch nahen NTP-Server und überprüfen Sie dessen Nutzungsrichtlinien.
Das Schlüsselwort `pool` wählt einen oder mehrere Server aus einem Pool von Servern aus.
Eine http://support.ntp.org/bin/view/Servers/NTPPoolServers[Liste mit öffentlich zugänglichen NTP-Pools] ist ebenfalls verfügbar, sortiert nach geografischen Gebieten.
Darüber hinaus bietet FreeBSD einen vom Projekt gespendeten Pool, `0.freebsd.pool.ntp.org`.

.Beispiel für [.filename]#/etc/ntp.conf#
[example]
====
Dies ist ein einfaches Beispiel für eine [.filename]#ntp.conf#-Datei.
Die Einträge können so übernommen werden, wie sie sind.
Die Datei enthält die notwendigen Einschränkungen für den Betrieb an einer öffentlich zugänglichen Netzwerkverbindung.

[.programlisting]
....
# Disallow ntpq control/query access.  Allow peers to be added only
# based on pool and server statements in this file.
restrict default limited kod nomodify notrap noquery nopeer
restrict source  limited kod nomodify notrap noquery

# Allow unrestricted access from localhost for queries and control.
restrict 127.0.0.1
restrict ::1

# Add a specific server.
server ntplocal.example.com iburst

# Add FreeBSD pool servers until 3-6 good servers are available.
tos minclock 3 maxclock 6
pool 0.freebsd.pool.ntp.org iburst

# Use a local leap-seconds file.
leapfile "/var/db/ntpd.leap-seconds.list"
....
====

Das Format dieser Datei ist in man:ntp.conf[5] beschrieben.
Die folgenden Erläuterungen geben einen Überblick über die Schlüsselwörter, die in dem obigen Beispiel benutzt werden.

In der Voreinstellung ist ein NTP-Server für jeden Host im Netzwerk zugänglich.
Das Schlüsselwort `restrict` steuert, welche Systeme auf den Server zugreifen dürfen.
Es werden mehrere `restrict`-Einträge unterstützt, die jeweils die vorherigen Anweisungen verfeinern.
Die im Beispiel gezeigten Werte gewähren dem lokalen System vollen Abfrage- und Kontrollzugriff, während entfernten Systemen nur die Möglichkeit gegeben wird, die Zeit abzufragen.
Weitere Details finden Sie im Abschnitt `Access Control Support` von man:ntp.conf[5].

Das Schlüsselwort `server` gibt einen einzelnen Server zur Abfrage der Zeit an.
Die Datei kann das Schlüsselwort `server` mehrmals enthalten, wobei pro Zeile jeweils ein Server aufgeführt ist.
Das Schlüsselwort `pool` gibt einen Pool von Servern an.
Ntpd fügt bei Bedarf einen oder mehrere Server aus diesem Pool hinzu, um die mit dem Wert `tos minclock` festgelegte Anzahl von Peers zu erreichen.
Das Schlüsselwort `iburst` weist ntpd an, einen Burst von acht schnellen Paketen mit dem Server auszutauschen, wenn der Kontakt zum ersten Mal hergestellt wird, um so die Systemzeit schneller zu synchronisieren.

Das Schlüsselwort `leapfile` gibt den Pfad einer Datei an, die Informationen über Schaltsekunden enthält.
Die Datei wird automatisch durch man:periodic[8] aktualisiert.
Der angegebene Pfad muss mit dem in der Variable `ntp_db_leapfile` aus [.filename]#/etc/rc.conf# übereinstimmen.

==== NTP-Einträge in [.filename]#/etc/rc.conf#

Um ntpd beim Booten zu starten, fügen Sie in [.filename]#/etc/rc.conf# den Eintrag `ntpd_enable="YES"` hinzu.
Danach kann ntpd direkt gestartet werden:

[source,shell]
....
# service ntpd start
....

Lediglich `ntpd_enable` wird benötigt, um ntpd benutzen zu können.
Die unten aufgeführten [.filename]#rc.conf#-Variablen können bei Bedarf ebenfalls verwendet werden.

Ist `ntpd_sync_on_start="YES"` konfiguriert, setzt ntpd die Uhrzeit beim Systemstart, unabhängig davon, wie hoch die Abweichung ist.
Normalerweise protokolliert ntpd eine Fehlermeldung und beendet sich selbst, wenn die Uhr um mehr als 1000 Sekunden abweicht.
Diese Option ist besonders auf Systemen ohne batteriegepufferte Echtzeituhr nützlich.
Setzen Sie `ntpd_oomprotect="YES"`, um den ntpd-Daemon davor zu schützen, beendet zu werden, wenn das System versucht, sich aus einer Out-of-Memory-Situation (OOM) zu retten.
Mit `ntpd_config=` setzen Sie den Pfad auf eine alternative [.filename]#ntp.conf#-Datei.

In `ntpd_flags=` können bei Bedarf weitere Werte enthalten sein.
Vermeiden Sie jedoch die Werte, die intern von [.filename]#/etc/rc.d/ntpd# verwaltet werden:

* `-p` (Pfad zur PID-Datei)
* `-c` (Setzen Sie stattdessen `ntpd_config=`)

==== Ntpd und der nicht privilegierte `ntpd`-Benutzer

In FreeBSD kann ntpd als nicht privilegierter Benutzer gestartet und ausgeführt werden.
Dies erfordert das Modul man:mac_ntpd[4].
Das Startskript [.filename]#/etc/rc.d/ntpd# untersucht zunächst die NTP-Konfiguration.
Wenn möglich, lädt es das `mac_ntpd`-Modul und startet dann ntpd als nicht privilegierten Benutzer `ntpd` (Benutzer-ID 123).
Um Probleme mit dem Datei- und Verzeichniszugriff zu vermeiden, startet das Skript ntpd nicht automatisch als Benutzer `ntpd`, falls die Konfiguration irgendwelche dateibezogenen Optionen enthält.

Falls einer der folgenden Werte in `ntpd_flags` vorhanden ist, muss eine manuelle Konfiguration vorgenommen werden, damit der Daemon vom `ntpd`-Benutzer ausgeführt werden kann:

* `-f` oder `--driftfile`
* `-i` oder `--jaildir`
* `-k` oder `--keyfile`
* `-l` oder `--logfile`
* `-s` oder `--statsdir`

Wenn eines der folgenden Schlüsselwörter in [.filename]#ntp.conf# vorhanden ist, muss eine manuelle Konfiguration vorgenommen werden, damit der Daemon vom `ntpd`-Benutzer ausgeführt werden kann:

* `crypto`
* `driftfile`
* `key`
* `logdir`
* `statsdir`

Um ntpd so zu konfigurieren, dass der Daemon als Benutzer `ntpd` läuft, müssen folgende Voraussetzungen erfüllt sein:

* Stellen Sie sicher, dass der `ntpd`-Benutzer Zugriff auf alle in der Konfiguration angegebenen Dateien und Verzeichnisse hat.
* Stellen Sie sicher, dass das Modul `mac_ntpd` in den Kernel geladen oder kompiliert wird. man:mac_ntpd[4] enthält weitere Details.
* Setzen Sie `ntpd_user="ntpd"` in [.filename]#/etc/rc.conf#.

=== NTP mit einer PPP-Verbindung verwenden

ntpd benötigt keine ständige Internetverbindung.
Wenn Sie sich über eine PPP-Verbindung ins Internet einwählen, sollten Sie verhindern, dass NTP-Verkehr eine Verbindung aufbauen oder aufrechterhalten kann.
Dies kann in den `filter`-Direktiven von [.filename]#/etc/ppp/ppp.conf# festgelegt werden.
Ein Beispiel:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

Weitere Informationen finden Sie im Abschnitt `PACKET FILTERING` von man:ppp[8] sowie in den Beispielen unter [.filename]#/usr/share/examples/ppp/#.

[NOTE]
====
Einige Internetprovider blockieren Ports mit niedrigen Nummern.
In solchen Fällen funktioniert NTP leider nicht, da Antworten eines NTP-Servers den Rechner nicht erreichen werden.
====

[[network-iscsi]]
== iSCSI Initiator und Target Konfiguration

iSCSI bietet die Möglichkeit, Speicherkapazitäten über ein Netzwerk zu teilen.
Im Gegensatz zu NFS, das auf Dateisystemebene arbeitet, funktioniert iSCSI auf Blockgerätebene.

In der iSCSI-Terminologie wird das System, das den Speicherplatz zur Verfügung stellt, als _Target_ bezeichnet.
Der Speicherplatz selbst kann aus einer physischen Festplatte bestehen oder auch aus einem Bereich, der mehrere Festplatten oder nur Teile einer Festplatte repräsentiert.
Wenn beispielsweise die Festplatte(n) mit ZFS formatiert sind, kann ein zvol erstellt werden, welches dann als iSCSI-Speicher verwendet werden kann.
Die Clients, die auf den iSCSI-Speicher zugreifen, werden _Initiator_ genannt.
Ihnen steht der verfügbare Speicher als rohe, nicht formatierte Festplatte zur Verfügung, die auch als LUN bezeichnet wird.
Die Gerätedateien für die Festplatten erscheinen in [.filename]#/dev/# und müssen separat formatiert und eingehangen werden.

FreeBSD enthält einen nativen, kernelbasierten iSCSI _Target_ und _Initiator_.
Dieser Abschnitt beschreibt, wie ein FreeBSD-System als Target oder Initiator konfiguriert wird.

[[network-iscsi-target]]
=== Ein iSCSI-Target konfigurieren

Um ein iSCSI-Target zu konfigurieren, erstellen Sie die Konfigurationsdatei [.filename]#/etc/ctl.conf# und fügen Sie eine Zeile in [.filename]#/etc/rc.conf# hinzu, um sicherzustellen, dass man:ctld[8] automatisch beim Booten gestartet wird.
Starten Sie dann den Daemon.

Das folgende Beispiel zeigt eine einfache [.filename]#/etc/ctl.conf#.
Eine vollständige Beschreibung dieser Datei und der verfügbaren Optionen finden Sie in man:ctl.conf[5].

[.programlisting]
....
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
    listen [::]
}

target iqn.2012-06.com.example:target0 {
    auth-group no-authentication
    portal-group pg0

    lun 0 {
        path /data/target0-0
        size 4G
    }
}
....

Der erste Eintrag definiert die Portalgruppe `pg0`.
Portalgruppen legen fest, auf welchen Netzwerk-Adressen der man:ctld[8]-Daemon Verbindungen entgegennehmen wird.
Der Eintrag `discovery-auth-group no-authentication` zeigt an, dass jeder Initiator iSCSI-Targets suchen darf, ohne sich authentifizieren zu müssen.
Die dritte und vierte Zeile konfigurieren man:ctld[8] so, dass er auf allen IPv4- (`listen 0.0.0.0`) und IPv6-Adressen (`listen [::]`) auf dem Standard-Port 3260 lauscht.

Es ist nicht zwingend notwendig, eine Portalgruppe zu definieren, da es bereits eine integrierte Portalgruppe namens `default` gibt.
In diesem Fall ist der Unterschied zwischen `default` und `pg0` der, dass bei `default` eine Authentifizierung nötig ist, während bei `pg0` die Suche nach Targets immer erlaubt ist.
Der zweite Eintrag definiert ein einzelnes Target.
Ein Target hat zwei mögliche Bedeutungen: eine Maschine, die iSCSI bereitstellt, oder eine Gruppe von LUNs.
Dieses Beispiel verwendet die letztere Bedeutung, wobei `iqn.2012-06.com.example:target0` der Name des Targets ist.
Dieser Name ist nur für Testzwecke geeignet.
Für den tatsächlichen Gebrauch ändern Sie `com.example` in einen echten, rückwärts geschriebenen Domainnamen.
`2012-06` steht für das Jahr und den Monat, an dem die Domain erworben wurde.
`target0` darf einen beliebigen Wert haben und in der Konfigurationsdatei darf eine beliebige Anzahl von Targets definiert werden.

Der Eintrag `auth-group no-authentication` erlaubt es allen Initiatoren, sich mit dem angegebenen Target zu verbinden, und `portal-group pg0` macht das Target über die Portalgruppe `pg0` erreichbar.

Die nächste Sektion definiert die LUN.
Jede LUN wird dem Initiator als separate Platte präsentiert.
Für jedes Target können mehrere LUNs definiert werden.
Jede LUN wird über eine Nummer identifiziert, wobei LUN 0 verpflichtend ist.
Die Zeile `path /data/target0-0` definiert den absoluten Pfad zu der Datei oder dem zvol für die LUN.
Der Pfad muss vorhanden sein, bevor man:ctld[8] gestartet wird.
Die zweite Zeile ist optional und gibt die Größe der LUN an.

Als nächstes fügen Sie folgende Zeile in [.filename]#/etc/rc.conf# ein, um man:ctld[8] automatisch beim Booten zu starten:

[.programlisting]
....
ctld_enable="YES"
....

Um man:ctld[8] jetzt zu starten, geben Sie dieses Kommando ein:

[source,shell]
....
# service ctld start
....

Der man:ctld[8]-Daemon liest beim Start [.filename]#/etc/ctl.conf#.
Wenn diese Datei nach dem Starten des Daemons bearbeitet wird, verwenden Sie folgenden Befehl, damit die Änderungen sofort wirksam werden:

[source,shell]
....
# service ctld reload
....
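Wie oben erwähnt, kann der Speicher für eine LUN auch aus einem ZFS-zvol bestehen. Ein solches Volume ließe sich beispielsweise wie folgt anlegen; der Poolname `tank` ist hier nur eine Annahme und muss durch einen vorhandenen Pool ersetzt werden:

[source,shell]
....
# zfs create -V 4G tank/target0-0
....

In [.filename]#ctl.conf# würde der `path`-Eintrag der LUN dann auf die Gerätedatei [.filename]#/dev/zvol/tank/target0-0# verweisen.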
==== Authentifizierung Die vorherigen Beispiele sind grundsätzlich unsicher, da keine Authentifizierung verwendet wird und jedermann vollen Zugriff auf alle Targets hat. Um für den Zugriff auf die Targets einen Benutzernamen und ein Passwort vorauszusetzen, ändern Sie die Konfigurationsdatei wie folgt: [.programlisting] .... auth-group ag0 { chap username1 secretsecret chap username2 anothersecret } portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group ag0 portal-group pg0 lun 0 { path /data/target0-0 size 4G } } .... Die Sektion `auth-group` definiert die Benutzernamen und Passwörter. Um sich mit `iqn.2012-06.com.example:target0` zu verbinden, muss ein Initiator zuerst einen Benutzernamen und ein Passwort angeben. Eine Suche nach Targets wird jedoch immer noch ohne Authentifizierung gestattet. Um auch dafür eine Authentifizierung zu verlangen, setzen Sie `discovery-auth-group` auf eine definierte `auth-group` anstelle von `no-authentication`. In der Regel wird für jeden Initiator ein einzelnes Target exportiert. In diesem Beispiel werden der Benutzername und das Passwort direkt im Target-Eintrag festgelegt: [.programlisting] .... target iqn.2012-06.com.example:target0 { portal-group pg0 chap username1 secretsecret lun 0 { path /data/target0-0 size 4G } } .... [[network-iscsi-initiator]] === Einen iSCSI-Initiator konfigurieren [NOTE] ==== Der in dieser Sektion beschriebene iSCSI-Initiator wird seit FreeBSD 10.0-RELEASE unterstützt. Lesen Sie man:iscontrol[8], wenn Sie den iSCSI-Initiator mit älteren Versionen benutzen möchten. ==== Um den Initiator zu verwenden, muss zunächst der iSCSI-Daemon gestartet sein. Der Daemon des Initiators benötigt keine Konfigurationsdatei. Um den Daemon automatisch beim Booten zu starten, fügen Sie folgende Zeile in [.filename]#/etc/rc.conf# ein: [.programlisting] .... iscsid_enable="YES" ....
Um man:iscsid[8] jetzt zu starten, geben Sie dieses Kommando ein: [source,shell] .... # service iscsid start .... Die Verbindung mit einem Target kann mit oder ohne die Konfigurationsdatei [.filename]#/etc/iscsi.conf# hergestellt werden. Dieser Abschnitt beschreibt beide Möglichkeiten. ==== Verbindung zu einem Target herstellen - ohne Konfigurationsdatei Um einen Initiator mit einem Target zu verbinden, geben Sie die IP-Adresse des Portals und den Namen des Targets an: [source,shell] .... # iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 .... Um zu überprüfen, ob die Verbindung gelungen ist, rufen Sie `iscsictl` ohne Argumente auf. Die Ausgabe sollte in etwa wie folgt aussehen: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0 .... In diesem Beispiel wurde die iSCSI-Sitzung mit der LUN [.filename]#/dev/da0# erfolgreich hergestellt. Wenn das Target `iqn.2012-06.com.example:target0` mehr als nur eine LUN exportiert, werden mehrere Gerätedateien in der Ausgabe angezeigt: [source,shell] .... Connected: da0 da1 da2 .... Alle Fehler werden auf die Ausgabe und in die Systemprotokolle geschrieben. Diese Meldung deutet beispielsweise darauf hin, dass der man:iscsid[8]-Daemon nicht ausgeführt wird: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8) .... Die folgende Meldung deutet auf ein Netzwerkproblem hin, zum Beispiel eine falsche IP-Adresse oder einen falschen Port: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.11 Connection refused .... Diese Meldung bedeutet, dass der Name des Targets falsch angegeben wurde: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Not found .... Diese Meldung bedeutet, dass das Target eine Authentifizierung erfordert: [.programlisting] ....
Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed .... Verwenden Sie diese Syntax, um einen CHAP-Benutzernamen und ein Passwort anzugeben: [source,shell] .... # iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret .... ==== Verbindung mit einem Target herstellen - mit Konfigurationsdatei Wenn Sie für die Verbindung eine Konfigurationsdatei verwenden möchten, erstellen Sie [.filename]#/etc/iscsi.conf# mit etwa folgendem Inhalt: [.programlisting] .... t0 { TargetAddress = 10.10.10.10 TargetName = iqn.2012-06.com.example:target0 AuthMethod = CHAP chapIName = user chapSecret = secretsecret } .... `t0` gibt den Namen der Sektion in der Konfigurationsdatei an. Dieser Name wird vom Initiator benutzt, um zu bestimmen, welche Konfiguration verwendet werden soll. Die anderen Einträge legen die Parameter fest, die während der Verbindung verwendet werden. `TargetAddress` und `TargetName` müssen angegeben werden, die restlichen sind optional. In diesem Beispiel werden der CHAP-Benutzername und das Passwort angegeben. Um sich mit einem bestimmten Target zu verbinden, geben Sie dessen Namen an: [source,shell] .... # iscsictl -An t0 .... Um sich stattdessen mit allen definierten Targets aus der Konfigurationsdatei zu verbinden, verwenden Sie: [source,shell] .... # iscsictl -Aa .... Damit sich der Initiator automatisch mit allen Targets aus [.filename]#/etc/iscsi.conf# verbindet, fügen Sie Folgendes in [.filename]#/etc/rc.conf# hinzu: [.programlisting] .... iscsictl_enable="YES" iscsictl_flags="-Aa" .... diff --git a/documentation/content/el/books/handbook/mac/_index.adoc b/documentation/content/el/books/handbook/mac/_index.adoc index 1e6c0b4981..9fe3d82c0e 100644 --- a/documentation/content/el/books/handbook/mac/_index.adoc +++ b/documentation/content/el/books/handbook/mac/_index.adoc @@ -1,950 +1,948 @@ --- title: Κεφάλαιο 17. Υποχρεωτικός Έλεγχος Πρόσβασης part: Μέρος III.
Διαχείριση Συστήματος prev: books/handbook/jails next: books/handbook/audit showBookMenu: true weight: 21 params: path: "/books/handbook/mac/" --- [[mac]] = Υποχρεωτικός Έλεγχος Πρόσβασης :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 17 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/mac/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[mac-synopsis]] == Σύνοψη Το FreeBSD 5.X εισήγαγε νέες επεκτάσεις ασφαλείας από το TrustedBSD project, που βασίζονται στο προσχέδιο POSIX(R).1e. Δύο από τους πιο σημαντικούς νέους μηχανισμούς ασφαλείας είναι οι Λίστες Ελέγχου Πρόσβασης (Access Control Lists, ACLs) στο σύστημα αρχείων και ο Υποχρεωτικός Έλεγχος Πρόσβασης (Mandatory Access Control, MAC). Ο Υποχρεωτικός Έλεγχος Πρόσβασης δίνει τη δυνατότητα φόρτωσης αρθρωμάτων (modules) ελέγχου τα οποία υλοποιούν νέες πολιτικές ασφαλείας. Μερικά παρέχουν προστασία σε ένα στενό υποσύνολο του συστήματος, ενδυναμώνοντας την ασφάλεια μιας συγκεκριμένης υπηρεσίας. Άλλα παρέχουν συνολική ασφάλεια προς όλες τις υπηρεσίες και το σύστημα. Ο έλεγχος ονομάζεται υποχρεωτικός από το γεγονός ότι η επιβολή γίνεται από τους διαχειριστές και το σύστημα, και δεν αφήνεται στη διακριτική ευχέρεια των χρηστών όπως γίνεται με το διακριτικό έλεγχο πρόσβασης (Discretionary Access Control, DAC, τις τυποποιημένες άδειες αρχείων και IPC του System V στο FreeBSD).
Το κεφάλαιο αυτό εστιάζει στο πλαίσιο του Υποχρεωτικού Ελέγχου Πρόσβασης (MAC Framework), και σε ένα σύνολο πρόσθετων αρθρωμάτων για πολιτικές ασφάλειας, που ενεργοποιούν διάφορους μηχανισμούς ασφάλειας. Αφού διαβάσετε αυτό το κεφάλαιο, θα ξέρετε: * Τι MAC αρθρώματα πολιτικών ασφαλείας περιλαμβάνονται αυτή τη στιγμή στο FreeBSD και τους σχετικούς μηχανισμούς τους. * Τι υλοποιούν τα MAC αρθρώματα πολιτικών ασφαλείας καθώς και τη διαφορά μεταξύ μιας χαρακτηρισμένης (labeled) και μη χαρακτηρισμένης (non-labeled) πολιτικής. * Πως να ρυθμίσετε αποδοτικά ένα σύστημα για χρήση του πλαισίου λειτουργιών MAC. * Πως να ρυθμίσετε τα διαφορετικά αρθρώματα πολιτικών ασφάλειας τα οποία περιλαμβάνονται στο πλαίσιο λειτουργιών MAC. * Πως να υλοποιήσετε ένα πιο ασφαλές περιβάλλον, χρησιμοποιώντας το πλαίσιο λειτουργιών MAC και τα παραδείγματα που φαίνονται. * Πως να ελέγξετε τη ρύθμιση του MAC για να εξασφαλίσετε ότι έχει γίνει σωστή υλοποίηση του πλαισίου λειτουργιών. Πριν διαβάσετε αυτό το κεφάλαιο, θα πρέπει: * Να κατανοείτε τις βασικές έννοιες του UNIX(R) και του FreeBSD (crossref:basics[basics,Βασικές Έννοιες στο UNIX(R)]). * Να είστε εξοικειωμένοι με τις βασικές έννοιες της ρύθμισης και μεταγλώττισης του πυρήνα (crossref:kernelconfig[kernelconfig,Ρυθμίζοντας τον Πυρήνα του FreeBSD]). * Να έχετε κάποια εξοικείωση με την ασφάλεια και πως αυτή σχετίζεται με το FreeBSD (crossref:security[security,Ασφάλεια]). [WARNING] ==== Η κακή χρήση των πληροφοριών που παρέχονται εδώ μπορεί να προκαλέσει απώλεια πρόσβασης στο σύστημα, εκνευρισμό στους χρήστες ή αδυναμία πρόσβασης στις υπηρεσίες που παρέχονται από το X11. Ακόμα πιο σημαντικό είναι ότι δεν πρέπει να βασίζεστε στο MAC για την πλήρη ασφάλιση ενός συστήματος. Το πλαίσιο λειτουργιών MAC παρέχει απλώς επιπλέον υποστήριξη σε μια υπάρχουσα πολιτική ασφαλείας. Χωρίς σωστές πρακτικές και τακτικούς ελέγχους ασφαλείας, το σύστημα δεν θα είναι ποτέ απόλυτα ασφαλές.
Θα πρέπει επίσης να σημειωθεί ότι τα παραδείγματα που περιέχονται σε αυτό το κεφάλαιο είναι ακριβώς και μόνο αυτό: παραδείγματα. Δεν συνιστάται να χρησιμοποιηθούν ακριβώς αυτές οι ρυθμίσεις σε ένα σύστημα παραγωγής. Η υλοποίηση των διάφορων αρθρωμάτων πολιτικών ασφαλείας απαιτεί αρκετή σκέψη και δοκιμές. Αν δεν κατανοείτε την ακριβή λειτουργία τους, μπορεί να βρεθείτε στη θέση να ελέγχετε ξανά ολόκληρο το σύστημα και να αλλάζετε ρυθμίσεις σε πολλά αρχεία και καταλόγους. ==== === Τι δεν Περιλαμβάνεται στο Κεφάλαιο Το κεφάλαιο αυτό καλύπτει μια ευρεία περιοχή προβλημάτων ασφαλείας που σχετίζονται με το πλαίσιο λειτουργιών MAC. Δεν θα καλυφθεί η ανάπτυξη νέων αρθρωμάτων πολιτικών ασφαλείας MAC. Ένας αριθμός από αρθρώματα που περιλαμβάνονται στο πλαίσιο MAC έχουν ειδικά χαρακτηριστικά που παρέχονται τόσο για δοκιμές όσο και για ανάπτυξη νέων αρθρωμάτων. Αυτά περιλαμβάνουν τα man:mac_test[4], man:mac_stub[4] και man:mac_none[4]. Για περισσότερες πληροφορίες σχετικά με αυτά τα αρθρώματα και τους διάφορους μηχανισμούς που παρέχουν, παρακαλούμε ανατρέξτε στις αντίστοιχες σελίδες manual. [[mac-inline-glossary]] == Key Terms in this Chapter Before reading this chapter, a few key terms must be explained. This will hopefully clear up any confusion that may occur and avoid the abrupt introduction of new terms and information. * _compartment_: A compartment is a set of programs and data to be partitioned or separated, where users are given explicit access to specific components of a system. A compartment also represents a grouping, such as a work group, department, project, or topic. Using compartments, it is possible to implement a need-to-know security policy. * _high water mark_: A high water mark policy is one which permits the raising of security levels for the purpose of accessing higher level information. In most cases, the original level is restored after the process is complete.
Currently, the FreeBSD MAC framework does not have a policy for this, but the definition is included for completeness. * _integrity_: Integrity, as a key concept, is the level of trust which can be placed on data. As the integrity of the data is elevated, so does the ability to trust that data. * _label_: A label is a security attribute which can be applied to files, directories, or other items in the system. It could be considered a confidentiality stamp; when a label is placed on a file it describes the security properties for that specific file and will only permit access by files, users, resources, etc. with a similar security setting. The meaning and interpretation of label values depends on the policy configuration: while some policies might treat a label as representing the integrity or secrecy of an object, other policies might use labels to hold rules for access. * _level_: The increased or decreased setting of a security attribute. As the level increases, its security is considered to elevate as well. * _low water mark_: A low water mark policy is one which permits lowering of the security levels for the purpose of accessing information which is less secure. In most cases, the original security level of the user is restored after the process is complete. The only security policy module in FreeBSD to use this is man:mac_lomac[4]. * _multilabel_: The `multilabel` property is a file system option which can be set in single user mode using the man:tunefs[8] utility, during the boot operation using the man:fstab[5] file, or during the creation of a new file system. This option will permit an administrator to apply different MAC labels on different objects. This option only applies to security policy modules which support labeling. * _object_: An object or system object is an entity through which information flows under the direction of a _subject_. 
This includes directories, files, fields, screens, keyboards, memory, magnetic storage, printers or any other data storage/moving device. Basically, an object is a data container or a system resource; access to an _object_ effectively means access to the data. * _policy_: A collection of rules which defines how objectives are to be achieved. A _policy_ usually documents how certain items are to be handled. This chapter will consider the term _policy_ in this context as a _security policy_; i.e. a collection of rules which will control the flow of data and information and define who will have access to that data and information. * _sensitivity_: Usually used when discussing MLS. A sensitivity level is a term used to describe how important or secret the data should be. As the sensitivity level increases, so does the importance of the secrecy, or confidentiality, of the data. * _single label_: A single label is when the entire file system uses one label to enforce access control over the flow of data. When a file system has this set, which is any time when the `multilabel` option is not set, all files will conform to the same label setting. * _subject_: a subject is any active entity that causes information to flow between _objects_; e.g. a user, user process, system process, etc. On FreeBSD, this is almost always a thread acting in a process on behalf of a user. [[mac-initial]] == Explanation of MAC With all of these new terms in mind, consider how the MAC framework augments the security of the system as a whole. The various security policy modules provided by the MAC framework could be used to protect the network and file systems, block users from accessing certain ports and sockets, and more. Perhaps the best use of the policy modules is to blend them together, by loading several security policy modules at a time for a multi-layered security environment. In a multi-layered security environment, multiple policy modules are in effect to keep security in check.
This differs from a hardening policy, which typically hardens elements of a system that are used only for specific purposes. The only downside is administrative overhead in cases of multiple file system labels, setting network access control user by user, etc. These downsides are minimal when compared to the lasting effect of the framework; for instance, the ability to pick and choose which policies are required for a specific configuration keeps performance overhead down. The reduction of support for unneeded policies can increase the overall performance of the system as well as offer flexibility of choice. A good implementation would consider the overall security requirements and effectively implement the various security policy modules offered by the framework. Thus a system utilizing MAC features should at least guarantee that a user will not be permitted to change security attributes at will; that all user utilities, programs and scripts must work within the constraints of the access rules provided by the selected security policy modules; and that total control of the MAC access rules is in the hands of the system administrator. It is the sole duty of the system administrator to carefully select the correct security policy modules. Some environments may need to limit access control over the network; in these cases, the man:mac_portacl[4], man:mac_ifoff[4] and even man:mac_biba[4] policy modules might make good starting points. In other cases, strict confidentiality of file system objects might be required. Policy modules such as man:mac_bsdextended[4] and man:mac_mls[4] exist for this purpose. Policy decisions could be made based on network configuration. Perhaps only certain users should be permitted access to facilities provided by man:ssh[1] to access the network or the Internet. The man:mac_portacl[4] module would be the policy module of choice for these situations. But what should be done in the case of file systems?
Should all access to certain directories be severed from other groups or specific users? Or should we limit user or utility access to specific files by setting certain objects as classified? In the file system case, access to objects might be considered confidential to some users, but not to others. For example, a large development team might be broken off into smaller groups of individuals. Developers in project A might not be permitted to access objects written by developers in project B. Yet they might need to access objects created by developers in project C; that is quite a situation indeed. Using the different security policy modules provided by the MAC framework, users could be divided into these groups and then given access to the appropriate areas without fear of information leakage. Thus, each security policy module has a unique way of dealing with the overall security of a system. Module selection should be based on a well thought out security policy. In many cases, the overall policy may need to be revised and reimplemented on the system. Understanding the different security policy modules offered by the MAC framework will help administrators choose the best policies for their situations. The default FreeBSD kernel does not include the option for the MAC framework; thus the following kernel option must be added before trying any of the examples or information in this chapter: [.programlisting] .... options MAC .... And the kernel will require a rebuild and a reinstall. [CAUTION] ==== While the various manual pages for MAC policy modules state that they may be built into the kernel, it is possible to lock the system out of the network and more. Implementing MAC is much like implementing a firewall; care must be taken to prevent being completely locked out of the system. The ability to revert back to a previous configuration should be considered, and the implementation of MAC remotely should be done with extreme caution.
==== [[mac-understandlabel]] == Understanding MAC Labels A MAC label is a security attribute which may be applied to subjects and objects throughout the system. When setting a label, the user must be able to comprehend what it is, exactly, that is being done. The attributes available on an object depend on the policy module loaded, and different policy modules interpret their attributes in different ways. If a label is improperly configured due to lack of comprehension, or the inability to understand the implications, the result will be the unexpected, and perhaps undesired, behavior of the system. The security label on an object is used as a part of a security access control decision by a policy. With some policies, the label by itself contains all information necessary to make a decision; in other models, the labels may be processed as part of a larger rule set. For instance, setting the label of `biba/low` on a file will represent a label maintained by the Biba security policy module, with a value of "low". A few policy modules which support the labeling feature in FreeBSD offer three specific predefined labels. These are the low, high, and equal labels. Although they enforce access control in a different manner with each policy module, you can be sure that the low label will be the lowest setting, the equal label will set the subject or object to be disabled or unaffected, and the high label will enforce the highest setting available in the Biba and MLS policy modules. Within single label file system environments, only one label may be used on objects. This will enforce one set of access permissions across the entire system and in many environments may be all that is required. There are a few cases where multiple labels may be set on objects or subjects in the file system. For those cases, the `multilabel` option may be passed to man:tunefs[8]. In the case of Biba and MLS, a numeric label may be set to indicate the precise level of hierarchical control.
This numeric level is used to partition or sort information into different groups of, say, classification, permitting access only to that group or a higher group level. In most cases the administrator will only be setting up a single label to use throughout the file system. _Hey wait, this is similar to DAC! I thought MAC gave control strictly to the administrator._ That statement still holds true, to some extent, as `root` is the one in control and who configures the policies so that users are placed in the appropriate categories/access levels. Alas, many policy modules can restrict the `root` user as well. Basic control over objects will then be released to the group, but `root` may revoke or modify the settings at any time. This is the hierarchical/clearance model covered by policies such as Biba and MLS. === Label Configuration Virtually all aspects of label policy module configuration will be performed using the base system utilities. These commands provide a simple interface for object or subject configuration or the manipulation and verification of the configuration. All configuration may be done by use of the man:setfmac[8] and man:setpmac[8] utilities. The `setfmac` command is used to set MAC labels on system objects while the `setpmac` command is used to set the labels on system subjects. Observe: [source,shell] .... # setfmac biba/high test .... If no errors occurred with the command above, a prompt will be returned. The only time these commands are not quiescent is when an error occurs, similar to the man:chmod[1] and man:chown[8] commands. In some cases this error may be a `Permission denied`, which usually occurs when the label is being set or modified on an object which is restricted. The system administrator may use the following commands to overcome this: [source,shell] .... # setfmac biba/high test Permission denied # setpmac biba/low setfmac biba/high test # getfmac test test: biba/high ....
As we see above, `setpmac` can be used to override the policy module's settings by assigning a different label to the invoked process. The `getpmac` utility is usually used with currently running processes, such as sendmail: although it takes a process ID in place of a command, the logic is extremely similar. If users attempt to manipulate a file outside their access, subject to the rules of the loaded policy modules, the `Operation not permitted` error will be displayed by the `mac_set_link` function. ==== Common Label Types For the man:mac_biba[4], man:mac_mls[4] and man:mac_lomac[4] policy modules, the ability to assign simple labels is provided. These take the form of high, equal and low. What follows is a brief description of what these labels provide: * The `low` label is considered the lowest label setting an object or subject may have. Setting this on objects or subjects will block their access to objects or subjects marked high. * The `equal` label should only be placed on objects considered to be exempt from the policy. * The `high` label grants an object or subject the highest possible setting. With respect to each policy module, each of those settings will instate a different information flow directive. Reading the proper manual pages will further explain the traits of these generic label configurations. ===== Advanced Label Configuration Numeric grade labels are used for `comparison:compartment+compartment`; thus the following: [.programlisting] .... biba/10:2+3+6(5:2+3-20:2+3+4+5+6) .... may be interpreted as: "Biba Policy Label"/"Grade 10":"Compartments 2, 3 and 6":("grade 5 ...") In this example, the first grade would be considered the "effective grade" with "effective compartments", the second grade is the low grade, and the last one is the high grade. In most configurations these settings will not be used; indeed, they are offered for more advanced configurations.
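Purely as an illustration of the label syntax (this is plain string handling, not a MAC utility), the example label above can be taken apart with POSIX shell parameter expansion to show which part is the effective grade, which the effective compartments, and which the low-high range:

[source,shell]
....
# Take the example label apart: policy/grade:compartments(low-high range)
label='biba/10:2+3+6(5:2+3-20:2+3+4+5+6)'
effective="${label#biba/}"; effective="${effective%%(*}"  # 10:2+3+6
grade="${effective%%:*}"                  # effective grade
compartments="${effective#*:}"            # effective compartments
range="${label##*(}"; range="${range%)}"  # low-high range
echo "grade=$grade compartments=$compartments range=$range"
....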
When applied to system objects, they will only have a current grade/compartments, as opposed to system subjects, as they reflect the range of available rights in the system, and network interfaces, where they are used for access control. The grade and compartments in a subject and object pair are used to construct a relationship referred to as "dominance", in which a subject dominates an object, the object dominates the subject, neither dominates the other, or both dominate each other. The "both dominate" case occurs when the two labels are equal. Due to the information flow nature of Biba, you have rights to a set of compartments, "need to know", that might correspond to projects, but objects also have a set of compartments. Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted. ==== Users and Label Settings Users themselves are required to have labels so that their files and processes may properly interact with the security policy defined on the system. This is configured through the [.filename]#login.conf# file by use of login classes. Every policy module that uses labels will implement the user class setting. An example entry containing every policy module setting is displayed below: [.programlisting] .... default:\ :copyright=/etc/COPYRIGHT:\ :welcome=/etc/motd:\ :setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\ :path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\ :manpath=/usr/shared/man /usr/local/man:\ :nologin=/usr/sbin/nologin:\ :cputime=1h30m:\ :datasize=8M:\ :vmemoryuse=100M:\ :stacksize=2M:\ :memorylocked=4M:\ :memoryuse=8M:\ :filesize=8M:\ :coredumpsize=8M:\ :openfiles=24:\ :maxproc=32:\ :priority=0:\ :requirehome:\ :passwordtime=91d:\ :umask=022:\ :ignoretime@:\ :label=partition/13,mls/5,biba/10(5-15),lomac/10[2]: .... The `label` option is used to set the user class default label which will be enforced by MAC.
Users will never be permitted to modify this value, thus it can be considered not optional in the user case. In a real configuration, however, the administrator will never wish to enable every policy module. It is recommended that the rest of this chapter be reviewed before any of this configuration is implemented. [NOTE] ==== Users may change their label after the initial login; however, this change is subject to the constraints of the policy. The example above tells the Biba policy that a process's minimum integrity is 5, its maximum is 15, but the default effective label is 10. The process will run at 10 until it chooses to change label, perhaps due to the user using the setpmac command, which will be constrained by Biba to the range set at login. ==== In all cases, after a change to [.filename]#login.conf#, the login class capability database must be rebuilt using `cap_mkdb`, and this will be reflected throughout every forthcoming example or discussion. It is useful to note that many sites may have a particularly large number of users requiring several different user classes. In-depth planning is required as this may get extremely difficult to manage. Future versions of FreeBSD will include a new way to deal with mapping users to labels; however, this will not be available until some time after FreeBSD 5.3. ==== Network Interfaces and Label Settings Labels may also be set on network interfaces to help control the flow of data across the network. In all cases they function in the same way the policies function with respect to objects. Users at high settings in `biba`, for example, will not be permitted to access network interfaces with a label of low. The `maclabel` option may be passed to `ifconfig` when setting the MAC label on network interfaces. For example: [source,shell] .... # ifconfig bge0 maclabel biba/equal .... will set the MAC label of `biba/equal` on the man:bge[4] interface.
When using a setting similar to `biba/high(low-high)`, the entire label should be quoted; otherwise an error will be returned. Each policy module which supports labeling has a tunable which may be used to disable the MAC label on network interfaces. Setting the label to `equal` will have a similar effect. Review the output from `sysctl`, the policy manual pages, or even the information found later in this chapter for those tunables. === Singlelabel or Multilabel? By default the system will use the `singlelabel` option. But what does this mean to the administrator? There are several differences which, in their own right, offer pros and cons to the flexibility in the system's security model. The `singlelabel` option permits only one label, for instance `biba/high`, to be used for each subject or object. It provides for lower administration overhead but decreases the flexibility of policies which support labeling. Many administrators may want to use the `multilabel` option in their security policy. The `multilabel` option will permit each subject or object to have its own independent MAC label in place of the standard `singlelabel` option, which will allow only one label throughout the partition. The `multilabel` and `singlelabel` options are only required for the policies which implement the labeling feature, including the Biba, Lomac, MLS and SEBSD policies. In many cases, the `multilabel` option may not need to be set at all. Consider the following situation and security model: * FreeBSD web-server using the MAC framework and a mix of the various policies. * This machine only requires one label, `biba/high`, for everything in the system. Here the file system would not require the `multilabel` option as a single label will always be in effect. * But, this machine will be a web server and should have the web server run at `biba/low` to prevent write up capabilities.
The Biba policy and how it works will be discussed later, so if the previous comment was difficult to interpret, just continue reading and return. The server could use a separate partition set at `biba/low` for most if not all of its runtime state. Much is lacking from this example, for instance the restrictions on data, configuration and user settings; however, this is just a quick example to prove the aforementioned point. If any of the non-labeling policies are to be used, then the `multilabel` option would never be required. These include the `seeotheruids`, `portacl` and `partition` policies. It should also be noted that using `multilabel` with a partition and establishing a security model based on `multilabel` functionality could open the doors for higher administrative overhead as everything in the file system would have a label. This includes directories, files, and even device nodes. The following command will set the `multilabel` flag on a file system so that it may have multiple labels. This may only be done in single user mode: [source,shell] .... # tunefs -l enable / .... This is not a requirement for the swap file system. [NOTE] ==== Some users have experienced problems with setting the `multilabel` flag on the root partition. If this is the case, please review the Troubleshooting section of this chapter. ==== [[mac-planning]] == Planning the Security Configuration Whenever a new technology is implemented, a planning phase is always a good idea. During the planning stages, an administrator should in general look at the "big picture", trying to keep in view at least the following: * The implementation requirements; * The implementation goals. For MAC installations, these include: * How to classify information and resources available on the target systems. * What sorts of information or resources to restrict access to, along with the type of restrictions that should be applied. * Which MAC module or modules will be required to achieve this goal.

While it is always possible to reconfigure and change the system resources and security settings later, it is quite often very inconvenient to search through the system and fix existing files and user accounts. Planning helps to ensure a trouble-free and efficient trusted system implementation. A trial run of the trusted system, including the configuration, is often vital and definitely beneficial _before_ a MAC implementation is used on production systems. Simply letting a system loose with MAC is asking for failure.

Different environments may have explicit needs and requirements. Establishing an in-depth and complete security profile will decrease the need for changes once the system goes live. As such, the following sections will cover the different modules available to administrators, describe their use and configuration, and in some cases provide insight on which situations they are most suitable for. For instance, a web server might roll out the man:mac_biba[4] and man:mac_bsdextended[4] policies, while for a machine with very few local users man:mac_partition[4] might be a good choice.

[[mac-modules]]
== Module Configuration

Every module included with the MAC framework may be either compiled into the kernel as noted above or loaded as a run-time kernel module. The recommended method is to add the module name to [.filename]#/boot/loader.conf# so that it will load during the initial boot operation.

The following sections will discuss the various MAC modules and cover their features. Implementing them into a specific environment will also be a consideration of this chapter. Some modules support the use of labeling, which is controlling access by enforcing a label such as "this is allowed and this is not". A label configuration file may control how files may be accessed, how network communication can be exchanged, and more.

The previous section showed how the `multilabel` flag can be set on file systems to enable per-file or per-partition access control. A single label configuration enforces only one label across the system, which is why the `tunefs` option is called `multilabel`.

[[mac-seeotheruids]]
=== The MAC seeotheruids Module

Module name: [.filename]#mac_seeotheruids.ko#

Kernel configuration line: `options MAC_SEEOTHERUIDS`

Boot option: `mac_seeotheruids_load="YES"`

The man:mac_seeotheruids[4] module mimics and extends the `security.bsd.see_other_uids` and `security.bsd.see_other_gids` sysctl tunables. This option does not require any labels to be set before configuration and can operate transparently with the other modules.

After loading the module, the following `sysctl` tunables may be used to control its features:

* `security.mac.seeotheruids.enabled` will enable the module's features and use the default settings. These default settings will deny users the ability to view processes and sockets owned by other users.
* `security.mac.seeotheruids.specificgid_enabled` will allow a certain group to be exempt from this policy. To exempt specific groups from this policy, use the `security.mac.seeotheruids.specificgid=XXX` sysctl tunable. In that setting, _XXX_ should be replaced with the numeric group ID to be exempted.
* `security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific primary groups from this policy. When using this tunable, `security.mac.seeotheruids.specificgid_enabled` may not be set.

[[mac-bsdextended]]
== The MAC bsdextended Module

Module name: [.filename]#mac_bsdextended.ko#

Kernel configuration line: `options MAC_BSDEXTENDED`

Boot option: `mac_bsdextended_load="YES"`

The man:mac_bsdextended[4] module enforces the file system firewall.
This module's policy provides an extension to the standard file system permissions model, permitting an administrator to create a firewall-like ruleset to protect files, utilities, and directories in the file system hierarchy. When access to a file system object is attempted, the list of rules is iterated until either a matching rule is located or the end is reached. This behavior may be changed by the use of the man:sysctl[8] parameter `security.mac.bsdextended.firstmatch_enabled`. Similar to other firewall modules in FreeBSD, a file containing access control rules can be created and read by the system at boot time using an man:rc.conf[5] variable.

The rule list may be entered using man:ugidfw[8], a utility with a syntax similar to that of man:ipfw[8]. More tools can be written by using the functions in the man:libugidfw[3] library.

Extreme caution should be taken when working with this module; incorrect use could block access to certain parts of the file system.

=== Examples

After the man:mac_bsdextended[4] module has been loaded, the following command may be used to list the current rule configuration:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

As expected, there are no rules defined. This means that everything is still completely accessible. To create a rule which will block all access by users but leave `root` unaffected, run the following command:

[source,shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

[NOTE]
====
In releases prior to FreeBSD 5.3, the [parameter]#add# parameter did not exist. In those cases [parameter]#set# should be used instead. See below for a command example.
====

This is a very bad idea as it will block all users from issuing even the most simple commands, such as `ls`. A more sensible list of rules might be:

[source,shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

This will block any and all access, including directory listings, to ``_user2_``'s home directory by the user `user1`. In place of `user1`, `not uid _user2_` could be passed. This will enforce the same access restrictions for all users in place of just one user.

[NOTE]
====
The `root` user will be unaffected by these changes.
====

This should provide a general idea of how the man:mac_bsdextended[4] module may be used to help fortify a file system. For more information, see the man:mac_bsdextended[4] and man:ugidfw[8] manual pages.

[[mac-ifoff]]
== The MAC ifoff Module

Module name: [.filename]#mac_ifoff.ko#

Kernel configuration line: `options MAC_IFOFF`

Boot option: `mac_ifoff_load="YES"`

The man:mac_ifoff[4] module exists solely to disable network interfaces on the fly and to keep network interfaces from being brought up during the initial system boot. It does not require any labels to be set up on the system, nor does it have a dependency on other MAC modules.

Most of the control is done through the `sysctl` tunables listed below:

* `security.mac.ifoff.lo_enabled` will enable/disable all traffic on the loopback (man:lo[4]) interface.
* `security.mac.ifoff.bpfrecv_enabled` will enable/disable all traffic on the Berkeley Packet Filter (man:bpf[4]) interface.
* `security.mac.ifoff.other_enabled` will enable/disable traffic on all other interfaces.

One of the most common uses of man:mac_ifoff[4] is network monitoring in an environment where network traffic should not be permitted during the boot sequence. Another suggested use would be to write a script which uses package:security/aide[] to automatically block network traffic if it finds new or altered files in protected directories.
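
As a concrete sketch, the tunables above could be set persistently from [.filename]#/etc/sysctl.conf# to permit loopback traffic while blocking everything else until an administrator intervenes. The values shown are illustrative choices, not defaults:

[.programlisting]
....
# /etc/sysctl.conf (illustrative): allow loopback traffic, block BPF
# reception and all other interfaces until re-enabled by hand
security.mac.ifoff.lo_enabled=1
security.mac.ifoff.bpfrecv_enabled=0
security.mac.ifoff.other_enabled=0
....

The same tunables may of course be flipped at runtime with man:sysctl[8] once the module is loaded.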

[[mac-portacl]]
== The MAC portacl Module

Module name: [.filename]#mac_portacl.ko#

Kernel configuration line: `options MAC_PORTACL`

Boot option: `mac_portacl_load="YES"`

The man:mac_portacl[4] module is used to limit binding to local TCP and UDP ports using a variety of `sysctl` variables. In essence man:mac_portacl[4] makes it possible to allow non-`root` users to bind to specified privileged ports, i.e. ports below 1024.

Once loaded, this module will enable the MAC policy on all sockets. The following tunables are available:

* `security.mac.portacl.enabled` will enable/disable the policy completely.
* `security.mac.portacl.port_high` will set the highest port number that man:mac_portacl[4] will enable protection for.
* `security.mac.portacl.suser_exempt` will, when set to a non-zero value, exempt the `root` user from this policy.
* `security.mac.portacl.rules` will specify the actual `mac_portacl` policy; see below.

The actual `mac_portacl` policy, as specified in the `security.mac.portacl.rules` sysctl, is a text string of the form `rule[,rule,...]`, with as many rules as needed. Each rule is of the form `idtype:id:protocol:port`. The [parameter]#idtype# parameter can be `uid` or `gid` and is used to interpret the [parameter]#id# parameter as either a user ID or group ID, respectively. The [parameter]#protocol# parameter is used to determine if the rule should apply to TCP or UDP by setting the parameter to `tcp` or `udp`. The final [parameter]#port# parameter is the port number to allow the specified user or group to bind to.

[NOTE]
====
Since the ruleset is interpreted directly by the kernel, only numeric values can be used for the user ID, group ID, and port parameters; i.e., user, group, and port service names cannot be used.
====

By default, on UNIX(R)-like systems, ports below 1024 can only be used by/bound to privileged processes, i.e. those run as `root`.
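
Since malformed rules strings are easy to produce, a small helper can check the grammar described above before the string is written to `security.mac.portacl.rules`. The following is a hypothetical POSIX `sh` convenience function, not part of man:mac_portacl[4] itself:

[source,shell]
....
#!/bin/sh
# Check a mac_portacl rules string of the form idtype:id:protocol:port[,...]
# where idtype is uid|gid, protocol is tcp|udp, and id/port are numeric.
# Prints one diagnostic line per malformed field; silent when all rules pass.
check_portacl_rules() {
    echo "$1" | tr ',' '\n' | while IFS=: read -r idtype id proto port; do
        case "$idtype" in uid|gid) ;; *) echo "bad idtype: $idtype" ;; esac
        case "$proto"  in tcp|udp) ;; *) echo "bad protocol: $proto" ;; esac
        case "$id"   in *[!0-9]*|"") echo "non-numeric id: $id" ;; esac
        case "$port" in *[!0-9]*|"") echo "non-numeric port: $port" ;; esac
    done
}

check_portacl_rules "uid:80:tcp:80,uid:1001:tcp:110,uid:1001:tcp:995"
check_portacl_rules "uid:www:tcp:80"   # flags the non-numeric service name
....

A string that passes this check can then be assigned with `sysctl security.mac.portacl.rules=...` as shown in the examples below.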

For man:mac_portacl[4] to allow non-privileged processes to bind to ports below 1024, this standard UNIX(R) restriction has to be disabled. This can be accomplished by setting the man:sysctl[8] variables `net.inet.ip.portrange.reservedlow` and `net.inet.ip.portrange.reservedhigh` to zero. See the examples below or review the man:mac_portacl[4] manual page for further information.

=== Examples

The following examples should illustrate the above discussion a little better:

[source,shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0 net.inet.ip.portrange.reservedhigh=0
....

First we set man:mac_portacl[4] to cover the standard privileged ports and disable the normal UNIX(R) bind restrictions.

[source,shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....

The `root` user should not be crippled by this policy, thus set `security.mac.portacl.suser_exempt` to a non-zero value. The man:mac_portacl[4] module has now been set up to behave the same way UNIX(R)-like systems behave by default.

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

Allow the user with UID 80 (normally the `www` user) to bind to port 80. This can be used to allow the `www` user to run a web server without ever having `root` privilege.

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

Permit the user with UID 1001 to bind to the TCP ports 110 ("pop3") and 995 ("pop3s"). This will permit this user to start a server that accepts connections on ports 110 and 995.

[[mac-partition]]
== The MAC partition Module

Module name: [.filename]#mac_partition.ko#

Kernel configuration line: `options MAC_PARTITION`

Boot option: `mac_partition_load="YES"`

The man:mac_partition[4] policy will drop processes into specific "partitions" based on their MAC label. Think of it as a special type of man:jail[8], though that is hardly a worthy comparison.

This is one module that should be added to man:loader.conf[5] so that it loads and enables the policy during the boot process.

Most configuration for this policy is done using the man:setpmac[8] utility, which will be explained below. The following `sysctl` tunable is available for this policy:

* `security.mac.partition.enabled` will enable the enforcement of MAC process partitions.

When this policy is enabled, users will only be permitted to see their own processes, and any others within their partition, but will not be permitted to work with utilities outside the scope of this partition. For instance, a user in the `insecure` class above will not be permitted to access `top`, as well as many other commands that must spawn a process.

To set or drop utilities into a partition label, use the `setpmac` utility:

[source,shell]
....
# setpmac partition/13 top
....

This will add the `top` command to the label set on users in the `insecure` class. Note that all processes spawned by users in the `insecure` class will stay in the `partition/13` label.

=== Examples

The following command will show the partition label and the process list:

[source,shell]
....
# ps Zax
....

This next command will allow the viewing of another user's process partition label and that user's currently running processes:

[source,shell]
....
# ps -ZU trhodes
....

[NOTE]
====
Users can see processes in ``root``'s label unless the man:mac_seeotheruids[4] policy is loaded.
====

A really crafty implementation could have all of the services disabled in [.filename]#/etc/rc.conf# and started by a script that starts them with the proper labeling set.

[NOTE]
====
The following policies support integer settings in place of the three default labels offered. These options, including their limitations, are further explained in the module manual pages.
====

[[mac-mls]]
== The MAC Multi-Level Security Module

Module name: [.filename]#mac_mls.ko#

Kernel configuration line: `options MAC_MLS`

Boot option: `mac_mls_load="YES"`

The man:mac_mls[4] policy controls access between subjects and objects in the system by enforcing a strict information flow policy.

In MLS environments, a "clearance" level is set in each subject's or object's label, along with compartments. Since these clearance or sensitivity levels can reach numbers greater than six thousand, it would be a daunting task for any system administrator to thoroughly configure each subject or object. Thankfully, three "instant" labels are already included in this policy.

These labels are `mls/low`, `mls/equal` and `mls/high`. Since these labels are described in depth in the manual page, they will only get a brief description here:

* The `mls/low` label contains a low configuration which permits it to be dominated by all other objects. Anything labeled with `mls/low` will have a low clearance level and not be permitted to access information of a higher level. In addition, this label will prevent objects of a higher clearance level from writing or passing information down to it.
* The `mls/equal` label should be placed on objects considered to be exempt from the policy.
* The `mls/high` label is the highest level of clearance possible. Objects assigned this label will hold dominance over all other objects in the system; however, they will not permit the leaking of information to objects of a lower class.

MLS provides for:

* A hierarchical security level with a set of non-hierarchical categories;
* Fixed rules: no read up, no write down (a subject can have read access to objects on its own level or below, but not above.
Similarly, a subject can have write access to objects on its own level or above, but not beneath.);
* Secrecy (preventing inappropriate disclosure of data);
* Basis for the design of systems that concurrently handle data at multiple sensitivity levels (without leaking information between secret and confidential).

The following `sysctl` tunables are available for the configuration of special services and interfaces:

* `security.mac.mls.enabled` is used to enable/disable the MLS policy.
* `security.mac.mls.ptys_equal` will label all man:pty[4] devices as `mls/equal` during creation.
* `security.mac.mls.revocation_enabled` is used to revoke access to objects after their label changes to a label of a lower grade.
* `security.mac.mls.max_compartments` is used to set the maximum number of compartment levels on objects; basically, the maximum compartment number allowed on a system.

To manipulate the MLS labels, the man:setfmac[8] command has been provided. To assign a label to an object, issue the following command:

[source,shell]
....
# setfmac mls/5 test
....

To get the MLS label for the file [.filename]#test#, issue the following command:

[source,shell]
....
# getfmac test
....

This is a summary of the MLS policy's features. Another approach is to create a master policy file in [.filename]#/etc# which specifies the MLS policy information and to feed that file into the `setfmac` command. This method will be explained after all policies are covered.

=== Planning Mandatory Sensitivity

With the Multi-Level Security Policy Module, an administrator plans for controlling the flow of sensitive information. By default, with its "no read up, no write down" nature, the system defaults everything to a low state. Everything is accessible and an administrator slowly changes this during the configuration stage, augmenting the confidentiality of the information.
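
The master-file approach mentioned above can be sketched as follows. The file name and the label assignments here are purely illustrative assumptions; the layout mirrors the path/label pairs used by the [.filename]#/etc/policy.contexts# example later in this chapter:

[.programlisting]
....
# /etc/mls.contexts - illustrative only; paths and grades are assumptions
/projects/public        mls/low
/projects/internal      mls/5
/projects/restricted    mls/high
....

Such a file could then be applied to a file system with `setfsmac -ef /etc/mls.contexts /`, in the same way the Biba contexts file is applied in the Nagios example below.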

Beyond the three basic label options above, an administrator may group users and groups as required to block the information flow between them. It might be easier to think of clearance levels in terms of familiar words, for instance classifications such as `Confidential`, `Secret`, and `Top Secret`. Some administrators might just create different groups based on project levels. Regardless of classification method, a well thought out plan must exist before implementing such a restrictive policy.

Some example situations for this security policy module could be an e-commerce web server, a file server holding critical company information, and financial institution environments. The most unlikely place would be a personal workstation with only two or three users.

[[mac-biba]]
== The MAC Biba Module

Module name: [.filename]#mac_biba.ko#

Kernel configuration line: `options MAC_BIBA`

Boot option: `mac_biba_load="YES"`

The man:mac_biba[4] module loads the MAC Biba policy. This policy works much like the MLS policy, with the exception that the rules for information flow are slightly reversed. This is said to prevent the downward flow of sensitive information, whereas the MLS policy prevents the upward flow of sensitive information; thus, much of this section can apply to both policies.

In Biba environments, an "integrity" label is set on each subject or object. These labels are made up of hierarchical grades and non-hierarchical components. As an object's or subject's grade ascends, so does its integrity.

Supported labels are `biba/low`, `biba/equal`, and `biba/high`, as explained below:

* The `biba/low` label is considered the lowest integrity an object or subject may have. Setting this on objects or subjects will block their write access to objects or subjects marked high. They still have read access, though.
* The `biba/equal` label should only be placed on objects considered to be exempt from the policy.
* The `biba/high` label will permit writing to objects set at a lower label, but will not permit reading those objects. It is recommended that this label be placed on objects that affect the integrity of the entire system.

Biba provides for:

* A hierarchical integrity level with a set of non-hierarchical integrity categories;
* Fixed rules: no write up, no read down (the opposite of MLS). A subject can have write access to objects on its own level or below, but not above. Similarly, a subject can have read access to objects on its own level or above, but not below;
* Integrity (preventing inappropriate modification of data);
* Integrity levels (instead of MLS sensitivity levels).

The following `sysctl` tunables can be used to manipulate the Biba policy:

* `security.mac.biba.enabled` may be used to enable/disable enforcement of the Biba policy on the target machine.
* `security.mac.biba.ptys_equal` may be used to disable the Biba policy on man:pty[4] devices.
* `security.mac.biba.revocation_enabled` will force the revocation of access to objects if the label is changed to dominate the subject.

To access the Biba policy setting on system objects, use the `setfmac` and `getfmac` commands:

[source,shell]
....
# setfmac biba/low test
# getfmac test
test: biba/low
....

=== Planning Mandatory Integrity

Integrity, unlike sensitivity, guarantees that the information will not be manipulated by untrusted parties. This includes information passed between subjects, objects, and both. It ensures that users will only be able to modify, and in some cases even access, information they explicitly need to.

The man:mac_biba[4] security policy module permits an administrator to specify which files and programs a user or users may see and invoke, while assuring that the programs and files are free from threats and trusted by the system for that user, or group of users.

During the initial planning phase, an administrator must be prepared to partition users into grades, levels, and areas.

Users will be blocked from accessing not only data but also programs and utilities, both before and after they start. The system will default to a high label once this policy module is enabled, and it is up to the administrator to configure the different grades and levels for users.

Instead of using clearance levels as described above, a good planning method could be based on topics. For instance, only allow developers modification access to the source code repository, source code compiler, and other development utilities, while other users would be grouped into categories such as testers, designers, or just ordinary users and would only be permitted read access.

With its natural security control, a lower integrity subject is unable to write to a higher integrity object, and a higher integrity subject cannot observe or read a lower integrity object. Setting a label at the lowest possible grade could make it inaccessible to subjects. Some prospective environments for this security policy module would include a constrained web server, a development and test machine, and a source code repository. A less useful implementation would be a personal workstation, a machine used as a router, or a network firewall.

[[mac-lomac]]
== The MAC LOMAC Module

Module name: [.filename]#mac_lomac.ko#

Kernel configuration line: `options MAC_LOMAC`

Boot option: `mac_lomac_load="YES"`

Unlike the MAC Biba policy, the man:mac_lomac[4] policy permits access to lower integrity objects only after decreasing the integrity level so as not to disrupt any integrity rules.

The MAC version of the Low-watermark integrity policy, not to be confused with the older man:lomac[4] implementation, works almost identically to Biba, with the exception of using floating labels to support subject demotion via an auxiliary grade compartment. This secondary compartment takes the form of `[auxgrade]`.

When assigning a LOMAC policy with an auxiliary grade, it should look a little bit like `lomac/10[2]`, where the number two (2) is the auxiliary grade.

The MAC LOMAC policy relies on the ubiquitous labeling of all system objects with integrity labels, permitting subjects to read from low integrity objects and then downgrading the label on the subject to prevent future writes to high integrity objects. This is the `[auxgrade]` option discussed above, thus the policy may provide for greater compatibility and require less initial configuration than Biba.

=== Examples

Like the Biba and MLS policies, the `setfmac` and `setpmac` utilities may be used to place labels on system objects:

[source,shell]
....
# setfmac /usr/home/trhodes lomac/high[low]
# getfmac /usr/home/trhodes lomac/high[low]
....

Notice that the auxiliary grade here is `low`; this is a feature provided only by the MAC LOMAC policy.

[[mac-implementing]]
== Nagios in a MAC Jail

The following demonstration will implement a secure environment using various MAC modules with properly configured policies. This is only a test and should not be considered the complete answer to everyone's security woes. Just implementing a policy and ignoring it never works and could be disastrous in a production environment.

Before beginning this process, the `multilabel` option must be set on each file system as stated at the beginning of this chapter. Not doing so will result in errors. While at it, ensure that the package:net-mngt/nagios-plugins[], package:net-mngt/nagios[], and package:www/apache13[] ports are all installed, configured, and working correctly.

=== Create an insecure User Class

Begin the procedure by adding the following user class to [.filename]#/etc/login.conf#:

[.programlisting]
....
insecure:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin /sbin /bin /usr/sbin /usr/bin /usr/local/sbin /usr/local/bin:\
	:manpath=/usr/shared/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=biba/10(10-10):
....

And add the following line to the default user class:

[.programlisting]
....
:label=biba/high:
....

Once this is completed, the following command must be issued to rebuild the database:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

=== Boot Configuration

Do not reboot yet; just add the following lines to [.filename]#/boot/loader.conf# so the required modules will load during system initialization:

[.programlisting]
....
mac_biba_load="YES"
mac_seeotheruids_load="YES"
....

=== Configure Users

Set the `root` user to the default class using:

[source,shell]
....
# pw usermod root -L default
....

All user accounts that are not `root` or system users will now require a login class. The login class is required, otherwise users will be refused access to common commands such as man:vi[1]. The following `sh` script should do the trick:

[source,shell]
....
# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \
	/etc/passwd`; do pw usermod $x -L default; done;
....

Drop the `nagios` and `www` users into the insecure class:

[source,shell]
....
# pw usermod nagios -L insecure
# pw usermod www -L insecure
....

=== Create the Contexts File

A contexts file should now be created; the following example file should be placed in [.filename]#/etc/policy.contexts#:

[.programlisting]
....
# This is the default BIBA policy for this system.
# System:
/var/run			biba/equal
/var/run/*			biba/equal
/dev				biba/equal
/dev/*				biba/equal
/var				biba/equal
/var/spool			biba/equal
/var/spool/*			biba/equal
/var/log			biba/equal
/var/log/*			biba/equal
/tmp				biba/equal
/tmp/*				biba/equal
/var/tmp			biba/equal
/var/tmp/*			biba/equal
/var/spool/mqueue		biba/equal
/var/spool/clientmqueue		biba/equal

# For Nagios:
/usr/local/etc/nagios		biba/10
/usr/local/etc/nagios/*		biba/10
/var/spool/nagios		biba/10
/var/spool/nagios/*		biba/10

# For apache
/usr/local/etc/apache		biba/10
/usr/local/etc/apache/*		biba/10
....

This policy will enforce security by setting restrictions on the flow of information. In this specific configuration, users, `root` and others, should never be allowed to access Nagios. Configuration files and processes that are a part of Nagios will be completely self-contained, or jailed.

This file may now be read into the system by issuing the following command:

[source,shell]
....
# setfsmac -ef /etc/policy.contexts /
....

[NOTE]
====
The above file system layout may be different depending on the environment; however, it must be run on every single file system.
====

The [.filename]#/etc/mac.conf# file requires the following modifications in the main section:

[.programlisting]
....
default_labels file ?biba
default_labels ifnet ?biba
default_labels process ?biba
default_labels socket ?biba
....

=== Enable Networking

Add the following line to [.filename]#/boot/loader.conf#:

[.programlisting]
....
security.mac.biba.trust_all_interfaces=1
....

And add the following to the network card configuration stored in [.filename]#rc.conf#. If the primary Internet configuration is done via DHCP, this may need to be configured manually after every system boot:

[.programlisting]
....
maclabel biba/equal
....

=== Testing the Configuration

Ensure that the web server and Nagios will not be started on system initialization, and reboot. Ensure the `root` user cannot access any of the files in the Nagios configuration directory.
If `root` can issue man:ls[1] on [.filename]#/var/spool/nagios#, then something is wrong. Otherwise a "permission denied" error should be returned.

If all seems well, Nagios, Apache, and Sendmail can now be started in a way fitting of the security policy. The following commands will make this happen:

[source,shell]
....
# cd /etc/mail && make stop && \
setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \
setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart
....

Double check to ensure that everything is working properly. If not, check the log files or error messages. Use the man:sysctl[8] utility to disable the man:mac_biba[4] security policy module enforcement and try starting everything again, as normal.

[NOTE]
====
The `root` user can change the security enforcement and edit the configuration files without fear. The following command will permit the degradation of the security policy to a lower grade for a newly spawned shell:

[source,shell]
....
# setpmac biba/10 csh
....

To block this from happening, force the user into a range via man:login.conf[5]. If man:setpmac[8] attempts to run a command outside of the compartment's range, an error will be returned and the command will not be executed. In this case, set `root` to `biba/high(high-high)`.
====

[[mac-userlocked]]
== User Lock Down

This example considers a relatively small storage system with fewer than fifty users. Users would have login capabilities, and be permitted to not only store data but access resources as well.

For this scenario, the man:mac_bsdextended[4] policy, mixed with man:mac_seeotheruids[4], could co-exist and block access not only to system objects but also hide user processes.

Begin by adding the following line to [.filename]#/boot/loader.conf#:

[.programlisting]
....
mac_seeotheruids_load="YES"
....

The man:mac_bsdextended[4] security policy module may be activated through the following [.filename]#rc.conf# variable:

[.programlisting]
....
ugidfw_enable="YES"
....

Default rules stored in [.filename]#/etc/rc.bsdextended# will be loaded at system initialization; however, the default entries may need modification. Since this machine is expected only to service users, everything may be left commented out except the last two entries. These will force the loading of user-owned system objects by default.

Add the required users to this machine and reboot. For testing purposes, try logging in as two different users across two consoles. Run `ps aux` to see if processes of other users are visible. Try to run man:ls[1] on another user's home directory; it should fail.

Do not try to test with the `root` user unless the specific ``sysctl``s have been modified to block super user access.

[NOTE]
====
When a new user is added, their man:mac_bsdextended[4] rule will not be in the ruleset list. To update the ruleset quickly, simply unload the security policy module and reload it again using the man:kldunload[8] and man:kldload[8] utilities.
====

[[mac-troubleshoot]]
== Troubleshooting the MAC Framework

During the development stage, a few users reported problems with normal configuration. Some of these problems are listed below:

=== The `multilabel` option cannot be enabled on [.filename]#/#

The `multilabel` flag does not stay enabled on my root ([.filename]#/#) partition!

It seems that one out of every fifty users has this problem; indeed, we had this problem during our initial configuration. Further observation of this so-called "bug" has led me to believe that it is a result of either incorrect documentation or misinterpretation of the documentation. Regardless of why it happened, the following steps may be taken to resolve it:

[.procedure]
. Edit [.filename]#/etc/fstab# and set the root partition to `ro` for read-only.
. Reboot into single user mode.
. Run `tunefs -l enable` on [.filename]#/#.
. Reboot the system into normal mode.
Run `mount -urw` [.filename]#/# and change the `ro` back to `rw` in [.filename]#/etc/fstab#, then reboot the system again. . Double-check the output from `mount` to ensure that `multilabel` has been properly set on the root file system. === Cannot start an X11 server after MAC After establishing a secure environment with MAC, I am no longer able to start X! This could be caused by the MAC `partition` policy or by a mislabeling in one of the MAC labeling policies. To debug, try the following: [.procedure] . Check the error message; if the user is in the `insecure` class, the `partition` policy may be the culprit. Try setting the user's class back to the `default` class and rebuild the database with the `cap_mkdb` command. If this does not alleviate the problem, go to step two. . Double-check the label policies. Ensure that the policies are set correctly for the user in question, the X11 application, and the [.filename]#/dev# entries. . If neither of these resolves the problem, send the error message and a description of your environment to the TrustedBSD discussion lists located at the http://www.TrustedBSD.org[TrustedBSD] website or to the {freebsd-questions} mailing list. === Error: man:_secure_path[3] cannot stat [.filename]#.login_conf# When I attempt to switch from `root` to another user in the system, the error message `_secure_path: unable to stat .login_conf` appears. This message is usually shown when the user has a higher label setting than that of the user whom they are attempting to become. For instance, a user on the system, `joe`, has a default label of `biba/low`. The `root` user, who has a label of `biba/high`, cannot view ``joe``'s home directory. This happens regardless of whether `root` has used the `su` command to become `joe` or not. In this scenario, the Biba integrity model will not permit `root` to view objects set at a lower integrity level. === The `root` username is broken! In normal or even single user mode, the `root` user is not recognized.
The `whoami` command returns 0 (zero) and `su` returns `who are you?`. What could be going on? This can happen if a labeling policy has been disabled, either via man:sysctl[8] or because the policy module was unloaded. If the policy is being disabled or has been temporarily disabled, then the login capabilities database needs to be reconfigured with the `label` option removed. Double-check the [.filename]#login.conf# file to ensure that all `label` options have been removed and rebuild the database with the `cap_mkdb` command. This may also happen if a policy restricts access to the [.filename]#master.passwd# file or database. This is usually caused by an administrator altering the file under a label which conflicts with the general policy being used by the system. In these cases, the user information would be read by the system and access would be blocked as the file has inherited the new label. Disable the policy via man:sysctl[8] and everything should return to normal. diff --git a/documentation/content/el/books/handbook/network-servers/_index.adoc b/documentation/content/el/books/handbook/network-servers/_index.adoc index ff66734be6..1c5f46ce1c 100644 --- a/documentation/content/el/books/handbook/network-servers/_index.adoc +++ b/documentation/content/el/books/handbook/network-servers/_index.adoc @@ -1,2356 +1,2355 @@ --- title: Κεφάλαιο 29. Εξυπηρετητές Δικτύου part: Μέρος IV.
Δικτυακές Επικοινωνίες prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 34 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Εξυπηρετητές Δικτύου :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 29 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Σύνοψη Το κεφάλαιο αυτό καλύπτει ορισμένες από τις πιο συχνά χρησιμοποιούμενες δικτυακές υπηρεσίες των συστημάτων UNIX(R). Θα παρουσιάσουμε την εγκατάσταση, ρύθμιση, έλεγχο και συντήρηση πολλών διαφορετικών τύπων δικτυακών υπηρεσιών. Σε όλο το κεφάλαιο, για τη δική σας διευκόλυνση, υπάρχουν παραδείγματα διαφόρων αρχείων ρυθμίσεων. Αφού διαβάσετε αυτό το κεφάλαιο, θα ξέρετε: * Πως να διαχειρίζεστε την υπηρεσία inetd. * Πως να ρυθμίσετε ένα δικτυακό σύστημα αρχείων. * Πως να ρυθμίσετε ένα εξυπηρετητή δικτυακών πληροφοριών για το διαμοιρασμό λογαριασμών χρηστών. * Πως να χρησιμοποιήσετε το DHCP για την αυτόματη ρύθμιση των παραμέτρων του δικτύου. * Πως να ρυθμίσετε ένα εξυπηρετητή ονομασίας περιοχών (DNS). * Πως να ρυθμίσετε τον εξυπηρετητή ιστοσελίδων Apache. * Πως να ρυθμίσετε ένα εξυπηρετητή μεταφοράς αρχείων (FTP). * Πως να ρυθμίσετε ένα εξυπηρετητή αρχείων και εκτυπωτών για πελάτες Windows(R) με χρήση της εφαρμογής Samba. 
* Πως να συγχρονίσετε την ημερομηνία και την ώρα, και να ρυθμίσετε ένα εξυπηρετητή ώρας με τη βοήθεια του NTP πρωτοκόλλου. Πριν διαβάσετε αυτό το κεφάλαιο, θα πρέπει: * Να κατανοείτε τις βασικές έννοιες των αρχείων script [.filename]#/etc/rc#. * Να είστε εξοικειωμένοι με τη βασική ορολογία των δικτύων. * Να γνωρίζετε πως να εγκαταστήσετε πρόσθετο λογισμικό τρίτου κατασκευαστή (crossref:ports[ports,Εγκατάσταση Εφαρμογών: Πακέτα και Ports]). [[network-inetd]] == The inetd "Super-Server" [[network-inetd-overview]] === Overview man:inetd[8] is sometimes referred to as the "Internet Super-Server" because it manages connections for several services. When a connection is received by inetd, it determines which program the connection is destined for, spawns the particular process, and delegates the socket to it (the program is invoked with the service socket as its standard input, output, and error descriptors). Running inetd for servers that are not heavily used can reduce the overall system load when compared to running each daemon individually in stand-alone mode. Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled directly, such as chargen, auth, and daytime. This section will cover the basics of configuring inetd through its command-line options and its configuration file, [.filename]#/etc/inetd.conf#. [[network-inetd-settings]] === Settings inetd is initialized through the man:rc[8] system. The `inetd_enable` option is set to `NO` by default, but may be turned on by sysinstall during installation, depending on the configuration chosen by the user. Placing: [.programlisting] .... inetd_enable="YES" .... or [.programlisting] .... inetd_enable="NO" .... into [.filename]#/etc/rc.conf# will enable or disable inetd starting at boot time. The command: [.programlisting] .... /etc/rc.d/inetd rcvar .... can be run to display the current effective setting.
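As a sketch of what to expect (the exact output format varies between FreeBSD versions), querying the rcvar on a system where inetd has already been enabled might look like this:

[source,shell]
....
# /etc/rc.d/inetd rcvar
# inetd
inetd_enable="YES"
....

If `inetd_enable` is still `NO`, edit [.filename]#/etc/rc.conf# as shown above before starting the service.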
Additionally, different command-line options can be passed to inetd via the `inetd_flags` option. [[network-inetd-cmdline]] === Command-Line Options Like most server daemons, inetd has a number of options that can be passed to it in order to modify its behaviour. The full list of options reads: `inetd [-d] [-l] [-w] [-W] [-c maximum] [-C rate] [-a address | hostname] [-p filename] [-R rate] [-s maximum] [configuration file]` Options can be passed to inetd using the `inetd_flags` option in [.filename]#/etc/rc.conf#. By default, `inetd_flags` is set to `-wW -C 60`, which turns on TCP wrapping for inetd's services and prevents any single IP address from requesting any service more than 60 times in any given minute. Novice users may be pleased to note that these parameters usually do not need to be modified, although we mention the rate-limiting options below as they may be useful should you find that you are receiving an excessive number of connections. A full list of options can be found in the man:inetd[8] manual. -c maximum:: Specify the default maximum number of simultaneous invocations of each service; the default is unlimited. May be overridden on a per-service basis with the `max-child` parameter. -C rate:: Specify the default maximum number of times a service can be invoked from a single IP address in one minute; the default is unlimited. May be overridden on a per-service basis with the `max-connections-per-ip-per-minute` parameter. -R rate:: Specify the maximum number of times a service can be invoked in one minute; the default is 256. A rate of 0 allows an unlimited number of invocations. -s maximum:: Specify the maximum number of times a service can be invoked from a single IP address at any one time; the default is unlimited. May be overridden on a per-service basis with the `max-child-per-ip` parameter. [[network-inetd-conf]] === [.filename]#inetd.conf# Configuration of inetd is done via the file [.filename]#/etc/inetd.conf#.
When a modification is made to [.filename]#/etc/inetd.conf#, inetd can be forced to re-read its configuration file by running the command: [[network-inetd-reread]] .Reloading the inetd configuration file [example] ==== [source,shell] .... # /etc/rc.d/inetd reload .... ==== Each line of the configuration file specifies an individual daemon. Comments in the file are preceded by a "#". The format of each entry in [.filename]##/etc/inetd.conf## is as follows: [.programlisting] .... service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments .... An example entry for the man:ftpd[8] daemon using IPv4 might read: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... service-name:: This is the service name of the particular daemon. It must correspond to a service listed in [.filename]#/etc/services#. This determines which port inetd must listen to. If a new service is being created, it must be placed in [.filename]#/etc/services# first. socket-type:: Either `stream`, `dgram`, `raw`, or `seqpacket`. `stream` must be used for connection-based, TCP daemons, while `dgram` is used for daemons utilizing the UDP transport protocol. protocol:: One of the following: + [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Protocol | Explanation |tcp, tcp4 |TCP IPv4 |udp, udp4 |UDP IPv4 |tcp6 |TCP IPv6 |udp6 |UDP IPv6 |tcp46 |Both TCP IPv4 and v6 |udp46 |Both UDP IPv4 and v6 |=== {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]:: `wait|nowait` indicates whether the daemon invoked from inetd is able to handle its own socket or not. `dgram` socket types must use the `wait` option, while stream socket daemons, which are usually multi-threaded, should use `nowait`. `wait` usually hands off multiple sockets to a single daemon, while `nowait` spawns a child daemon for each new socket. 
+ The maximum number of child daemons inetd may spawn can be set using the `max-child` option. If a limit of ten instances of a particular daemon is needed, a `/10` would be placed after `nowait`. Specifying `/0` allows an unlimited number of children. + In addition to `max-child`, two other options which limit the maximum connections from a single place to a particular daemon can be enabled. `max-connections-per-ip-per-minute` limits the number of connections from any particular IP address per minute, e.g. a value of ten would limit any particular IP address connecting to a particular service to ten attempts per minute. `max-child-per-ip` limits the number of children that can be started on behalf of any single IP address at any moment. These options are useful to prevent intentional or unintentional excessive resource consumption and Denial of Service (DoS) attacks against a machine. + In this field, either `wait` or `nowait` is mandatory. `max-child`, `max-connections-per-ip-per-minute` and `max-child-per-ip` are optional. + A stream-type multi-threaded daemon without any `max-child`, `max-connections-per-ip-per-minute` or `max-child-per-ip` limits would simply be: `nowait`. + The same daemon with a maximum limit of ten daemons would read: `nowait/10`. + The same setup with a limit of twenty connections per IP address per minute and a maximum total limit of ten child daemons would read: `nowait/10/20`. + These options are utilized by the default settings of the man:fingerd[8] daemon, as seen here: + [.programlisting] .... finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -s .... + Finally, an example of this field with a maximum of 100 children in total, with a maximum of 5 for any one IP address, would read: `nowait/100/0/5`. user:: This is the username that the particular daemon should run as. Most commonly, daemons run as the `root` user.
For security purposes, it is common to find some servers running as the `daemon` user, or as the least privileged `nobody` user. server-program:: The full path of the daemon to be executed when a connection is received. If the daemon is a service provided by inetd internally, then `internal` should be used. server-program-arguments:: This works in conjunction with `server-program` by specifying the arguments, starting with `argv[0]`, passed to the daemon on invocation. If `mydaemon -d` is the command line, `mydaemon -d` would be the value of `server-program-arguments`. Again, if the daemon is an internal service, use `internal` here. [[network-inetd-security]] === Security Depending on the choices made at install time, many of inetd's services may be enabled by default. If there is no apparent need for a particular daemon, consider disabling it. Place a "#" in front of the daemon in question in [.filename]##/etc/inetd.conf##, and then <<network-inetd-reread,reload the configuration file>>. Some daemons, such as fingerd, may not be desired at all because they provide information that may be useful to an attacker. Some daemons are not security-conscious and have long, or non-existent, timeouts for connection attempts. This allows an attacker to slowly send connections to a particular daemon, thus saturating available resources. It may be a good idea to place `max-connections-per-ip-per-minute`, `max-child` or `max-child-per-ip` limitations on certain daemons if you find that you have too many connections. By default, TCP wrapping is turned on. Consult the man:hosts_access[5] manual page for more information on placing TCP restrictions on various inetd invoked daemons. [[network-inetd-misc]] === Miscellaneous daytime, time, echo, discard, chargen, and auth are all internally provided services of inetd. The auth service provides identity network services, and is configurable to a certain degree, whilst the others are simply on or off. Consult the man:inetd[8] manual page for more in-depth information.
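To tie the fields described above together, here is a sketch of how one of inetd's internal services could be enabled in [.filename]#/etc/inetd.conf#; the daytime entry shown here is normally shipped commented out, and uses `internal` in both the server-program and server-program-arguments positions:

[.programlisting]
....
daytime stream tcp   nowait  root    internal
....

After uncommenting or adding such a line, reload inetd as shown earlier for the change to take effect.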
[[network-nfs]] == Network File System (NFS) Among the many different file systems that FreeBSD supports is the Network File System, also known as NFS. NFS allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files. Some of the most notable benefits that NFS can provide are: * Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network. * There is no need for users to have separate home directories on every network machine. Home directories could be set up on the NFS server and made available throughout the network. * Storage devices such as floppy disks, CDROM drives, and Zip(R) drives can be used by other machines on the network. This may reduce the number of removable media drives throughout the network. === How NFS Works NFS consists of at least two main parts: a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly a few processes have to be configured and running. The server has to be running the following daemons: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Daemon | Description |nfsd |The NFS daemon which services requests from the NFS clients. |mountd |The NFS mount daemon which carries out the requests that man:nfsd[8] passes on to it. |rpcbind | This daemon allows NFS clients to discover which port the NFS server is using. |=== The client can also run a daemon, known as nfsiod. The nfsiod daemon services the requests from the NFS server. This is optional, and improves performance, but is not required for normal and correct operation. See the man:nfsiod[8] manual page for more information. [[network-configuring-nfs]] === Configuring NFS NFS configuration is a relatively straightforward process. 
The processes that need to be running can all start at boot time with a few modifications to your [.filename]#/etc/rc.conf# file. On the NFS server, make sure that the following options are configured in the [.filename]#/etc/rc.conf# file: [.programlisting] .... rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" .... mountd runs automatically whenever the NFS server is enabled. On the client, make sure this option is present in [.filename]#/etc/rc.conf#: [.programlisting] .... nfs_client_enable="YES" .... The [.filename]#/etc/exports# file specifies which file systems NFS should export (sometimes referred to as "share"). Each line in [.filename]#/etc/exports# specifies a file system to be exported and which machines have access to that file system. Along with what machines have access to that file system, access options may also be specified. There are many such options that can be used in this file but only a few will be mentioned here. You can easily discover other options by reading over the man:exports[5] manual page. Here are a few example [.filename]#/etc/exports# entries: The following examples give an idea of how to export file systems, although the settings may be different depending on your environment and network configuration. For instance, the following entry exports the [.filename]#/cdrom# directory to three example machines that have the same domain name as the server (hence the lack of a domain name for each) or that have entries in your [.filename]#/etc/hosts# file. The `-ro` flag makes the exported file system read-only. With this flag, the remote system will not be able to write any changes to the exported file system. [.programlisting] .... /cdrom -ro host1 host2 host3 .... The following line exports [.filename]#/home# to three hosts by IP address. This is a useful setup if you have a private network without a DNS server configured.
Optionally the [.filename]#/etc/hosts# file could be configured for internal hostnames; please review man:hosts[5] for more information. The `-alldirs` flag allows the subdirectories to be mount points. In other words, it will not mount the subdirectories but will permit the client to mount only the directories that are required. [.programlisting] .... /home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... The following line exports [.filename]#/a# so that two clients from different domains may access the file system. The `-maproot=root` flag allows the `root` user on the remote system to write data on the exported file system as `root`. If the `-maproot=root` flag is not specified, then even if a user has `root` access on the remote system, he will not be able to modify files on the exported file system. [.programlisting] .... /a -maproot=root host.example.com box.example.org .... In order for a client to access an exported file system, the client must have permission to do so. Make sure the client is listed in your [.filename]#/etc/exports# file. In [.filename]#/etc/exports#, each line represents the export information for one file system to one host. A remote host can only be specified once per file system, and may only have one default entry. For example, assume that [.filename]#/usr# is a single file system. The following [.filename]#/etc/exports# would be invalid: [.programlisting] .... # Invalid when /usr is one file system /usr/src client /usr/ports client .... One file system, [.filename]#/usr#, has two lines specifying exports to the same host, `client`. The correct format for this situation is: [.programlisting] .... /usr/src /usr/ports client .... The properties of one file system exported to a given host must all occur on one line. Lines without a client specified are treated as a single host. This limits how you can export file systems, but for most people this is not an issue.
The following is an example of a valid export list, where [.filename]#/usr# and [.filename]#/exports# are local file systems: [.programlisting] .... # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... The mountd daemon must be forced to recheck the [.filename]#/etc/exports# file whenever it has been modified, so the changes can take effect. This can be accomplished either by sending a HUP signal to the running daemon: [source,shell] .... # kill -HUP `cat /var/run/mountd.pid` .... or by invoking the `mountd` man:rc[8] script with the appropriate parameter: [source,shell] .... # /etc/rc.d/mountd onereload .... Please refer to crossref:config[configtuning-rcd,Χρησιμοποιώντας Το Σύστημα rc Στο FreeBSD] for more information about using rc scripts. Alternatively, a reboot will make FreeBSD set everything up properly. A reboot is not necessary though. Executing the following commands as `root` should start everything up. On the NFS server: [source,shell] .... # rpcbind # nfsd -u -t -n 4 # mountd -r .... On the NFS client: [source,shell] .... # nfsiod -n 4 .... Now everything should be ready to actually mount a remote file system. In these examples the server's name will be `server` and the client's name will be `client`. If you only want to temporarily mount a remote file system or would rather test the configuration, just execute a command like this as `root` on the client: [source,shell] .... # mount server:/home /mnt .... This will mount the [.filename]#/home# directory on the server at [.filename]#/mnt# on the client. If everything is set up correctly you should be able to enter [.filename]#/mnt# on the client and see all the files that are on the server. 
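Assuming the example `server` and `client` names used above, one way to confirm that the export is visible and the mount succeeded is with man:showmount[8] and man:mount[8]. The transcript below is only a sketch of what these commands might print; exact output depends on your exports and FreeBSD version:

[source,shell]
....
# showmount -e server
Exports list on server:
/home                              client
# mount -t nfs
server:/home on /mnt (nfs)
....

If `showmount` reports an RPC error, verify that rpcbind, nfsd, and mountd are running on the server.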
If you want to automatically mount a remote file system each time the computer boots, add the file system to the [.filename]#/etc/fstab# file. Here is an example: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... The man:fstab[5] manual page lists all the available options. === Locking Some applications (e.g. mutt) require file locking to operate correctly. In the case of NFS, rpc.lockd can be used for file locking. To enable it, add the following to the [.filename]#/etc/rc.conf# file on both client and server (it is assumed that the NFS client and server are configured already): [.programlisting] .... rpc_lockd_enable="YES" rpc_statd_enable="YES" .... Start the application by using: [source,shell] .... # /etc/rc.d/nfslocking start .... If real locking between the NFS clients and NFS server is not required, it is possible to let the NFS client do locking locally by passing `-L` to man:mount_nfs[8]. Refer to the man:mount_nfs[8] manual page for further details. === Practical Uses NFS has many practical uses. Some of the more common ones are listed below: * Set several machines to share a CDROM or other media among them. This is cheaper and often a more convenient method to install software on multiple machines. * On large networks, it might be more convenient to configure a central NFS server in which to store all the user home directories. These home directories can then be exported to the network so that users would always have the same home directory, regardless of which workstation they log in to. * Several machines could have a common [.filename]#/usr/ports/distfiles# directory. That way, when you need to install a port on several machines, you can quickly access the source without downloading it on each machine. [[network-amd]] === Automatic Mounts with amd man:amd[8] (the automatic mounter daemon) automatically mounts a remote file system whenever a file or directory within that file system is accessed. 
Filesystems that are inactive for a period of time will also be automatically unmounted by amd. Using amd provides a simple alternative to permanent mounts, as permanent mounts are usually listed in [.filename]#/etc/fstab#. amd operates by attaching itself as an NFS server to the [.filename]#/host# and [.filename]#/net# directories. When a file is accessed within one of these directories, amd looks up the corresponding remote mount and automatically mounts it. [.filename]#/net# is used to mount an exported file system from an IP address, while [.filename]#/host# is used to mount an export from a remote hostname. An access to a file within [.filename]#/host/foobar/usr# would tell amd to attempt to mount the [.filename]#/usr# export on the host `foobar`. .Mounting an Export with amd [example] ==== You can view the available mounts of a remote host with the `showmount` command. For example, to view the mounts of a host named `foobar`, you can use: [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /host/foobar/usr .... ==== As seen in the example, the `showmount` shows [.filename]#/usr# as an export. When changing directories to [.filename]#/host/foobar/usr#, amd attempts to resolve the hostname `foobar` and automatically mount the desired export. amd can be started by the startup scripts by placing the following lines in [.filename]#/etc/rc.conf#: [.programlisting] .... amd_enable="YES" .... Additionally, custom flags can be passed to amd from the `amd_flags` option. By default, `amd_flags` is set to: [.programlisting] .... amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" .... The [.filename]#/etc/amd.map# file defines the default options that exports are mounted with. The [.filename]#/etc/amd.conf# file defines some of the more advanced features of amd. Consult the man:amd[8] and man:amd.conf[8] manual pages for more information. 
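Just as [.filename]#/host# works with hostnames, exports can be reached through [.filename]#/net# by IP address. Using a hypothetical address of 10.10.10.1 for the host `foobar` from the example above, the same [.filename]#/usr# export could be mounted automatically with:

[source,shell]
....
% cd /net/10.10.10.1/usr
....

amd resolves the path component after [.filename]#/net# as an IP address and mounts the requested export on demand, exactly as it does for [.filename]#/host#.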
[[network-nfs-integration]] === Problems Integrating with Other Systems Certain Ethernet adapters for ISA PC systems have limitations which can lead to serious network problems, particularly with NFS. This difficulty is not specific to FreeBSD, but FreeBSD systems are affected by it. The problem nearly always occurs when (FreeBSD) PC systems are networked with high-performance workstations, such as those made by Silicon Graphics, Inc., and Sun Microsystems, Inc. The NFS mount will work fine, and some operations may succeed, but suddenly the server will seem to become unresponsive to the client, even though requests to and from other systems continue to be processed. This happens to the client system, whether the client is the FreeBSD system or the workstation. On many systems, there is no way to shut down the client gracefully once this problem has manifested itself. The only solution is often to reset the client, because the NFS situation cannot be resolved. Though the "correct" solution is to get a higher performance and capacity Ethernet adapter for the FreeBSD system, there is a simple workaround that will allow satisfactory operation. If the FreeBSD system is the _server_, include the option `-w=1024` on the mount from the client. If the FreeBSD system is the _client_, then mount the NFS file system with the option `-r=1024`. These options may be specified using the fourth field of the [.filename]#fstab# entry on the client for automatic mounts, or by using the `-o` parameter of the man:mount[8] command for manual mounts. It should be noted that there is a different problem, sometimes mistaken for this one, when the NFS servers and clients are on different networks. If that is the case, make _certain_ that your routers are routing the necessary UDP information, or you will not get anywhere, no matter what else you are doing. 
In the following examples, `fastws` is the host (interface) name of a high-performance workstation, and `freebox` is the host (interface) name of a FreeBSD system with a lower-performance Ethernet adapter. Also, [.filename]#/sharedfs# will be the exported NFS file system (see man:exports[5]), and [.filename]#/project# will be the mount point on the client for the exported file system. In all cases, note that additional options, such as `hard` or `soft` and `bg` may be desirable in your application. Examples for the FreeBSD system (`freebox`) as the client in [.filename]#/etc/fstab# on `freebox`: [.programlisting] .... fastws:/sharedfs /project nfs rw,-r=1024 0 0 .... As a manual mount command on `freebox`: [source,shell] .... # mount -t nfs -o -r=1024 fastws:/sharedfs /project .... Examples for the FreeBSD system as the server in [.filename]#/etc/fstab# on `fastws`: [.programlisting] .... freebox:/sharedfs /project nfs rw,-w=1024 0 0 .... As a manual mount command on `fastws`: [source,shell] .... # mount -t nfs -o -w=1024 freebox:/sharedfs /project .... Nearly any 16-bit Ethernet adapter will allow operation without the above restrictions on the read or write size. For anyone who cares, here is what happens when the failure occurs, which also explains why it is unrecoverable. NFS typically works with a "block" size of 8 K (though it may do fragments of smaller sizes). Since the maximum Ethernet packet is around 1500 bytes, the NFS "block" gets split into multiple Ethernet packets, even though it is still a single unit to the upper-level code, and must be received, assembled, and _acknowledged_ as a unit. The high-performance workstations can pump out the packets which comprise the NFS unit one right after the other, just as close together as the standard allows. 
On the smaller, lower capacity cards, the later packets overrun the earlier packets of the same unit before they can be transferred to the host, and the unit as a whole cannot be reconstructed or acknowledged. As a result, the workstation will time out and try again, but it will try again with the entire 8 K unit, and the process will be repeated, ad infinitum. By keeping the unit size below the Ethernet packet size limitation, we ensure that any complete Ethernet packet received can be acknowledged individually, avoiding the deadlock situation. Overruns may still occur when a high-performance workstation is slamming data out to a PC system, but with the better cards, such overruns are not guaranteed on NFS "units". When an overrun occurs, the units affected will be retransmitted, and there will be a fair chance that they will be received, assembled, and acknowledged. [[network-nis]] == Network Information System (NIS/YP) === What Is It? NIS, which stands for Network Information Services, was developed by Sun Microsystems to centralize administration of UNIX(R) (originally SunOS(TM)) systems. It has now essentially become an industry standard; all major UNIX(R)-like systems (Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, FreeBSD, etc.) support NIS. NIS was formerly known as Yellow Pages, but because of trademark issues, Sun changed the name. The old term (and yp) is still often seen and used. It is an RPC-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location. It is similar to the Windows NT(R) domain system; although the internal implementations of the two are not at all similar, the basic functionality can be compared.
=== Terms/Processes You Should Know

There are several terms and several important user processes that you will come across when attempting to implement NIS on FreeBSD, whether you are trying to create an NIS server or act as an NIS client:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term
| Description

|NIS domainname
|An NIS master server and all of its clients (including its slave servers) have a NIS domainname. Similar to a Windows NT(R) domain name, the NIS domainname does not have anything to do with DNS.

|rpcbind
|Must be running in order to enable RPC (Remote Procedure Call, a network protocol used by NIS). If rpcbind is not running, it will be impossible to run an NIS server, or to act as an NIS client.

|ypbind
|"Binds" an NIS client to its NIS server. It will take the NIS domainname from the system, and using RPC, connect to the server. ypbind is the core of client-server communication in an NIS environment; if ypbind dies on a client machine, it will not be able to access the NIS server.

|ypserv
|Should only be running on NIS servers; this is the NIS server process itself. If man:ypserv[8] dies, then the server will no longer be able to respond to NIS requests (hopefully, there is a slave server to take over for it). There are some implementations of NIS (but not the FreeBSD one) that do not try to reconnect to another server if the server it used before dies. Often, the only thing that helps in this case is to restart the server process (or even the whole server) or the ypbind process on the client.

|rpc.yppasswdd
|Another process that should only be running on NIS master servers; this is a daemon that will allow NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the NIS master server and change their passwords there.
|===

=== How Does It Work?

There are three types of hosts in an NIS environment: master servers, slave servers, and clients.
Servers act as a central repository for host configuration information. Master servers hold the authoritative copy of this information, while slave servers mirror this information for redundancy. Clients rely on the servers to provide this information to them.

Information in many files can be shared in this manner. The [.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead.

==== Machine Types

* An _NIS master server_. This server, analogous to a Windows NT(R) primary domain controller, maintains the files used by all of the NIS clients. The [.filename]#passwd#, [.filename]#group#, and other various files used by the NIS clients live on the master server.
+
[NOTE]
====
It is possible for one machine to be an NIS master server for more than one NIS domain. However, this will not be covered in this introduction, which assumes a relatively small-scale NIS environment.
====
* _NIS slave servers_. Similar to the Windows NT(R) backup domain controllers, NIS slave servers maintain copies of the NIS master's data files. NIS slave servers provide the redundancy needed in important environments. They also help to balance the load of the master server: NIS clients always attach to the NIS server whose response they get first, and this includes replies from slave servers.
* _NIS clients_. NIS clients, like most Windows NT(R) workstations, authenticate against the NIS server (or the Windows NT(R) domain controller in the case of Windows NT(R) workstations) to log on.

=== Using NIS/YP

This section will deal with setting up a sample NIS environment.

==== Planning

Let us assume that you are the administrator of a small university lab.
This lab, which consists of 15 FreeBSD machines, currently has no centralized point of administration; each machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These files are kept in sync with each other only through manual intervention; currently, when you add a user to the lab, you must run `adduser` on all 15 machines. Clearly, this has to change, so you have decided to convert the lab to use NIS, using two of the machines as servers. Therefore, the configuration of the lab now looks something like: [.informaltable] [cols="1,1,1", frame="none", options="header"] |=== | Machine name | IP address | Machine role |`ellington` |`10.0.0.2` |NIS master |`coltrane` |`10.0.0.3` |NIS slave |`basie` |`10.0.0.4` |Faculty workstation |`bird` |`10.0.0.5` |Client machine |`cli[1-11]` |`10.0.0.[6-17]` |Other client machines |=== If you are setting up a NIS scheme for the first time, it is a good idea to think through how you want to go about it. No matter what the size of your network, there are a few decisions that need to be made. ===== Choosing a NIS Domain Name This might not be the "domainname" that you are used to. It is more accurately called the "NIS domainname". When a client broadcasts its requests for info, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domainname as the name for a group of hosts that are related in some way. Some organizations choose to use their Internet domainname for their NIS domainname. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domainname should be unique within your network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the "acme-art" NIS domain. For this example, assume you have chosen the name `test-domain`. 
However, some operating systems (notably SunOS(TM)) use their NIS domain name as their Internet domain name. If one or more machines on your network have this restriction, you _must_ use the Internet domain name as your NIS domain name.

===== Physical Server Requirements

There are several things to keep in mind when choosing a machine to use as a NIS server. One of the unfortunate things about NIS is the level of dependency the clients have on the server. If a client cannot contact the server for its NIS domain, very often the machine becomes unusable. The lack of user and group information causes most systems to temporarily freeze up. With this in mind, you should make sure to choose a machine that is not prone to being rebooted regularly, and not one that might be used for development. The NIS server should ideally be a standalone machine whose sole purpose in life is to be an NIS server. If you have a network that is not very heavily used, it is acceptable to put the NIS server on a machine running other services; just keep in mind that if the NIS server becomes unavailable, it will affect _all_ of your NIS clients adversely.

==== NIS Servers

The canonical copies of all NIS information are stored on a single machine called the NIS master server. The databases used to store the information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/[domainname]# where [.filename]#[domainname]# is the name of the NIS domain being served. A single NIS server can support several domains at once; therefore, it is possible to have several such directories, one for each supported domain. Each domain will have its own independent set of maps.

NIS master and slave servers handle all NIS requests with the `ypserv` daemon. `ypserv` is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file and transmitting data from the database back to the client.
===== Setting Up a NIS Master Server

Setting up a master NIS server can be relatively straightforward, depending on your needs. FreeBSD comes with support for NIS out-of-the-box. All you need to do is add the following lines to [.filename]#/etc/rc.conf#, and FreeBSD will do the rest for you.

[.procedure]
====
[.programlisting]
....
nisdomainname="test-domain"
....

This line will set the NIS domainname to `test-domain` upon network setup (e.g. after reboot).

[.programlisting]
....
nis_server_enable="YES"
....

This will tell FreeBSD to start up the NIS server processes when the networking is next brought up.

[.programlisting]
....
nis_yppasswdd_enable="YES"
....

This will enable the `rpc.yppasswdd` daemon which, as mentioned above, will allow users to change their NIS password from a client machine.
====

[NOTE]
====
Depending on your NIS setup, you may need to add further entries. See the <>, below, for details.
====

Now, all you have to do is to run the command `/etc/netstart` as superuser. It will set up everything for you, using the values you defined in [.filename]#/etc/rc.conf#.

===== Initializing the NIS Maps

The _NIS maps_ are database files that are kept in the [.filename]#/var/yp# directory. They are generated from configuration files in the [.filename]#/etc# directory of the NIS master, with one exception: the [.filename]#/etc/master.passwd# file. This is for a good reason: you do not want to propagate the passwords of your `root` and other administrative accounts to all the servers in the NIS domain. Therefore, before initializing the NIS maps, you should:

[source,shell]
....
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
....

You should remove all entries regarding system accounts (`bin`, `tty`, `kmem`, `games`, etc.), as well as any accounts that you do not want to be propagated to the NIS clients (for example `root` and any other UID 0 (superuser) accounts).
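As an illustration of this pruning step, the following sketch filters a passwd-format file by UID instead of deleting entries by hand. The sample file names and the UID cutoff of 1000 are assumptions made for the demonstration, not part of the NIS tools; adjust the cutoff to whatever your site uses for regular user accounts, and review the result before using it.

```shell
# Hypothetical demonstration: keep only regular user accounts (UID >= 1000)
# from a master.passwd-style file. File names and cutoff are examples only.
cat > master.passwd.sample <<'EOF'
root:*:0:0::0:0:The super-user:/root:/bin/csh
bin:*:3:7::0:0:Binaries Commands and Source:/:/sbin/nologin
jsmith:*:1001:1001::0:0:John Smith:/home/jsmith:/bin/sh
EOF

# Field 3 of a master.passwd entry is the UID; drop system and UID 0 entries.
awk -F: '$3 >= 1000' master.passwd.sample > master.passwd.pruned
cat master.passwd.pruned
```

After checking the output, the pruned file would take the place of the hand-edited copy in [.filename]#/var/yp#.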
[NOTE]
====
Make sure the [.filename]#/var/yp/master.passwd# is neither group nor world readable (mode 600)! Use the `chmod` command, if appropriate.
====

When you have finished, it is time to initialize the NIS maps! FreeBSD includes a script named `ypinit` to do this for you (see its manual page for more information). Note that this script is available on most UNIX(R) operating systems, but not all. On Digital UNIX/Compaq Tru64 UNIX it is called `ypsetup`. Because we are generating maps for an NIS master, we are going to pass the `-m` option to `ypinit`. To generate the NIS maps, assuming you already performed the steps above, run:

[source,shell]
....
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server   :  ellington
next host to add:  coltrane
next host to add:  ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct?  [y/n: y] y

[..output from map generation..]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

`ypinit` should have created [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. When created, this file assumes that you are operating in a single server NIS environment with only FreeBSD machines. Since `test-domain` has a slave server as well, you must edit [.filename]#/var/yp/Makefile#:

[source,shell]
....
ellington# vi /var/yp/Makefile
....

You should comment out the line that says

[.programlisting]
....
NOPUSH = "True"
....

(if it is not commented out already).

===== Setting Up a NIS Slave Server

Setting up an NIS slave server is even simpler than setting up the master. Log on to the slave server and edit the file [.filename]#/etc/rc.conf# as you did before. The only difference is that we now must use the `-s` option when running `ypinit`. The `-s` option requires that the name of the NIS master be passed to it as well, so our command line looks like:

[source,shell]
....
coltrane# ypinit -s ellington test-domain

Server Type: SLAVE Domain: test-domain Master: ellington

Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n] n

Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.

Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred

coltrane has been setup as an YP slave server without any errors.
Don't forget to update map ypservers on ellington.
....

You should now have a directory called [.filename]#/var/yp/test-domain#. Copies of the NIS master server's maps should be in this directory. You will need to make sure that these stay updated. The following [.filename]#/etc/crontab# entries on your slave servers should do the job:

[.programlisting]
....
20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuid
....

These two lines force the slave to sync its maps with the maps on the master server. These entries are not mandatory, since the master server attempts to ensure that any changes to its NIS maps are communicated to its slaves; however, because password information is vital to systems depending on the server, it is a good idea to force the updates. This is more important on busy networks where map updates might not always complete.

Now, run the command `/etc/netstart` on the slave server as well, which again starts the NIS server.

==== NIS Clients

An NIS client establishes what is called a binding to a particular NIS server using the `ypbind` daemon. `ypbind` checks the system's default domain (as set by the `domainname` command), and begins broadcasting RPC requests on the local network. These requests specify the name of the domain for which `ypbind` is attempting to establish a binding.
If a server that has been configured to serve the requested domain receives one of the broadcasts, it will respond to `ypbind`, which will record the server's address. If there are several servers available (a master and several slaves, for example), `ypbind` will use the address of the first one to respond. From that point on, the client system will direct all of its NIS requests to that server. `ypbind` will occasionally "ping" the server to make sure it is still up and running. If it fails to receive a reply to one of its pings within a reasonable amount of time, `ypbind` will mark the domain as unbound and begin broadcasting again in the hopes of locating another server.

===== Setting Up a NIS Client

Setting up a FreeBSD machine to be a NIS client is fairly straightforward.

[.procedure]
====
. Edit the file [.filename]#/etc/rc.conf# and add the following lines in order to set the NIS domainname and start `ypbind` upon network startup:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....
. To import all possible password entries from the NIS server, remove all user accounts from your [.filename]#/etc/master.passwd# file and use `vipw` to add the following line to the end of the file:
+
[.programlisting]
....
+:::::::::
....
+
[NOTE]
======
This line affords anyone with a valid account in the NIS server's password maps an account on this client. There are many ways to configure your NIS client by changing this line. See the <> below for more information. For more detailed reading, see O'Reilly's _Managing NFS and NIS_.
======
+
[NOTE]
======
You should keep at least one local account (i.e. not imported via NIS) in your [.filename]#/etc/master.passwd# and this account should also be a member of the group `wheel`. If there is something wrong with NIS, this account can be used to log in remotely, become `root`, and fix things.
======
.
To import all possible group entries from the NIS server, add this line to your [.filename]#/etc/group# file:
+
[.programlisting]
....
+:*::
....
====

After completing these steps, you should be able to run `ypcat passwd` and see the NIS server's passwd map.

=== NIS Security

In general, any remote user can issue an RPC to man:ypserv[8] and retrieve the contents of your NIS maps, provided the remote user knows your domainname. To prevent such unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. At startup, man:ypserv[8] will attempt to load the securenets information from a file called [.filename]#/var/yp/securenets#.

[NOTE]
====
This path varies depending on the path specified with the `-p` option.
====

This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with "#" are considered to be comments. A sample securenets file might look like this:

[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1 255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0 255.255.240.0
....

If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the [.filename]#/var/yp/securenets# file does not exist, `ypserv` will allow connections from any host.

The `ypserv` program also has support for Wietse Venema's TCP Wrapper package. This allows the administrator to use the TCP Wrapper configuration files for access control instead of [.filename]#/var/yp/securenets#.
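The matching rule behind securenets can be sketched in a few lines of shell: a request is accepted when the client address ANDed with the entry's mask equals the entry's network field. The helper function names below are invented for this illustration; this is not ypserv's actual code.

```shell
# Illustrative sketch (not ypserv's implementation) of the securenets test:
# an address matches an entry when (address & mask) == (network & mask).
ip_to_int() {
    # Convert a dotted-quad address into a 32-bit integer.
    set -- $(echo "$1" | tr '.' ' ')
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

matches() {    # matches <client-ip> <network> <mask>
    client=$(ip_to_int "$1"); net=$(ip_to_int "$2"); mask=$(ip_to_int "$3")
    [ $(( client & mask )) -eq $(( net & mask )) ]
}

# 10.0.3.7 lies inside 10.0.0.0/255.255.240.0 (10.0.0.0 to 10.0.15.255)...
matches 10.0.3.7 10.0.0.0 255.255.240.0 && echo "request processed"
# ...while 10.0.16.1 falls outside that range and would be ignored.
matches 10.0.16.1 10.0.0.0 255.255.240.0 || echo "request ignored"
```

Running the sample entries from the file above through such a check is a quick way to convince yourself that a mask like `255.255.240.0` really covers the intended address range.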
[NOTE]
====
While both of these access control mechanisms provide some security, they, like the privileged port test, are vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at your firewall.

Servers using [.filename]#/var/yp/securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts and/or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of the client systems in question or the abandonment of [.filename]#/var/yp/securenets#.

Using [.filename]#/var/yp/securenets# on a server with such an archaic implementation of TCP/IP is a really bad idea and will lead to loss of NIS functionality for large parts of your network.

The use of the TCP Wrapper package increases the latency of your NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks or with slow NIS servers. If one or more of your client systems suffers from these symptoms, you should convert the client systems in question into NIS slave servers and force them to bind to themselves.
====

=== Barring Some Users from Logging On

In our lab, there is a machine `basie` that is supposed to be a faculty-only workstation. We do not want to take this machine out of the NIS domain, yet the [.filename]#passwd# file on the master NIS server contains accounts for both faculty and students. What can we do?

There is a way to bar specific users from logging on to a machine, even if they are present in the NIS database. To do this, all you must do is add `-username` to the end of the [.filename]#/etc/master.passwd# file on the client machine, where _username_ is the username of the user you wish to bar from logging in.
This should preferably be done using `vipw`, since `vipw` will sanity check your changes to [.filename]#/etc/master.passwd#, as well as automatically rebuild the password database when you finish editing. For example, if we wanted to bar user `bill` from logging on to `basie` we would: [source,shell] .... basie# vipw [add -bill to the end, exit] vipw: rebuilding the database... vipw: done basie# cat /etc/master.passwd root:[password]:0:0::0:0:The super-user:/root:/bin/csh toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin operator:*:2:5::0:0:System &:/:/sbin/nologin bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin news:*:8:8::0:0:News Subsystem:/:/sbin/nologin man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/sbin/nologin bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin +::::::::: -bill basie# .... [[network-netgroups]] === Using Netgroups The method shown in the previous section works reasonably well if you need special rules for a very small number of users and/or machines. On larger networks, you _will_ forget to bar some users from logging onto sensitive machines, or you may even have to modify each machine separately, thus losing the main benefit of NIS: _centralized_ administration. The NIS developers' solution for this problem is called _netgroups_. Their purpose and semantics can be compared to the normal groups used by UNIX(R) file systems. 
The main differences are the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups.

Netgroups were developed to handle large, complex networks with hundreds of users and machines. On one hand, this is a Good Thing if you are forced to deal with such a situation. On the other hand, this complexity makes it almost impossible to explain netgroups with really simple examples. The example used in the remainder of this section demonstrates this problem.

Let us assume that your successful introduction of NIS in your laboratory caught your superiors' interest. Your next job is to extend your NIS domain to cover some of the other machines on campus. The two tables contain the names of the new users and new machines as well as brief descriptions of them.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| User Name(s)
| Description

|`alpha`, `beta`
|Normal employees of the IT department

|`charlie`, `delta`
|The new apprentices of the IT department

|`echo`, `foxtrott`, `golf`, ...
|Ordinary employees

|`able`, `baker`, ...
|The current interns
|===

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Machine Name(s)
| Description

|`war`, `death`, `famine`, `pollution`
|Your most important servers. Only the IT employees are allowed to log onto these machines.

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|Less important servers. All members of the IT department are allowed to log on to these machines.

|`one`, `two`, `three`, `four`, ...
|Ordinary workstations. Only the _real_ employees are allowed to use these machines.

|`trashcan`
|A very old machine without any critical data. Even the interns are allowed to use this box.
|===

If you tried to implement these restrictions by separately blocking each user, you would have to add one `-user` line to each system's [.filename]#master.passwd# for each user who is not allowed to log on to that system.
If you forget just one entry, you could be in trouble. It may be feasible to do this correctly during the initial setup, however you _will_ eventually forget to add the lines for new users during day-to-day operations. After all, Murphy was an optimist.

Handling this situation with netgroups offers several advantages. Each user need not be handled separately; you assign a user to one or more netgroups and allow or forbid logins for all members of the netgroup. If you add a new machine, you will only have to define login restrictions for netgroups. If a new user is added, you will only have to add the user to one or more netgroups. Those changes are independent of each other: no more "for each combination of user and machine do..." If your NIS setup is planned carefully, you will only have to modify exactly one central configuration file to grant or deny access to machines.

The first step is the initialization of the NIS map netgroup. FreeBSD's man:ypinit[8] does not create this map by default, but its NIS implementation will support it once it has been created. To create an empty map, simply type

[source,shell]
....
ellington# vi /var/yp/netgroup
....

and start adding content. For our example, we need at least four netgroups: IT employees, IT apprentices, normal employees and interns.

[.programlisting]
....
IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
USERS (,echo,test-domain) (,foxtrott,test-domain) \
      (,golf,test-domain)
INTERNS (,able,test-domain) (,baker,test-domain)
....

`IT_EMP`, `IT_APP` etc. are the names of the netgroups. Each bracketed entry adds one user account to the netgroup. The three fields inside an entry are:

. The name of the host(s) where the following items are valid. If you do not specify a hostname, the entry is valid on all hosts. If you do specify a hostname, you will enter a realm of darkness, horror and utter confusion.
. The name of the account that belongs to this netgroup.
.
The NIS domain for the account. You can import accounts from other NIS domains into your netgroup if you are one of the unlucky fellows with more than one NIS domain.

Each of these fields can contain wildcards. See man:netgroup[5] for details.

[NOTE]
====
Netgroup names longer than 8 characters should not be used, especially if you have machines running other operating systems within your NIS domain. The names are case-sensitive; using capital letters for your netgroup names is an easy way to distinguish between user, machine and netgroup names.

Some NIS clients (other than FreeBSD) cannot handle netgroups with a large number of entries. For example, some older versions of SunOS(TM) start to cause trouble if a netgroup contains more than 15 _entries_. You can circumvent this limit by creating several sub-netgroups with 15 or fewer users and a real netgroup that consists of the sub-netgroups:

[.programlisting]
....
BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...]
BIGGRP2 (,joe16,domain) (,joe17,domain) [...]
BIGGRP3 (,joe31,domain) (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3
....

You can repeat this process if you need more than 225 users within a single netgroup.
====

Activating and distributing your new NIS map is easy:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost# and [.filename]#netgroup.byuser#. Use man:ypcat[1] to check if your new NIS maps are available:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command will not produce output if you have not specified host-specific netgroups. The third command can be used to get the list of netgroups for a user.

The client setup is quite simple.
To configure the server `war`, you only have to start man:vipw[8] and replace the line

[.programlisting]
....
+:::::::::
....

with

[.programlisting]
....
+@IT_EMP:::::::::
....

Now, only the data for the users defined in the netgroup `IT_EMP` is imported into ``war``'s password database and only these users are allowed to log in.

Unfortunately, this limitation also applies to the `~` function of the shell and all routines converting between user names and numerical user IDs. In other words, `cd ~user` will not work, `ls -l` will show the numerical ID instead of the username and `find . -user joe -print` will fail with `No such user`. To fix this, you will have to import all user entries _without allowing them to log in on your servers_. This can be achieved by adding another line to [.filename]#/etc/master.passwd#. This line should contain: `+:::::::::/sbin/nologin`, meaning "Import all entries but replace the shell with [.filename]#/sbin/nologin# in the imported entries". You can replace any field in the `passwd` entry by placing a default value in your [.filename]#/etc/master.passwd#.

[WARNING]
====
Make sure that the line `+:::::::::/sbin/nologin` is placed after `+@IT_EMP:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/sbin/nologin# as their login shell.
====

After this change, you will only have to change one NIS map if a new employee joins the IT department. You could use a similar approach for the less important servers by replacing the old `+:::::::::` in their local version of [.filename]#/etc/master.passwd# with something like this:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/sbin/nologin
....

The corresponding lines for the normal workstations could be:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/sbin/nologin
....

And everything would be fine until there is a policy change a few weeks later: The IT department starts hiring interns.
The IT interns are allowed to use the normal workstations and the less important servers, and the IT apprentices are allowed to log on to the main servers. You add a new netgroup `ITINTERN`, add the new IT interns to this netgroup and start to change the configuration on each and every machine... As the old saying goes: "Errors in centralized planning lead to global mess".

NIS' ability to create netgroups from other netgroups can be used to prevent situations like these. One possibility is the creation of role-based netgroups. For example, you could create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers and a third netgroup called `USERBOX` for the normal workstations. Each of these netgroups contains the netgroups that are allowed to log on to these machines. The new entries for your NIS map netgroup should look like this:

[.programlisting]
....
BIGSRV IT_EMP IT_APP
SMALLSRV IT_EMP IT_APP ITINTERN
USERBOX IT_EMP ITINTERN USERS
....

This method of defining login restrictions works reasonably well if you can define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, you will need the ability to define login restrictions on a per-machine basis.

Machine-specific netgroup definitions are the other possibility to deal with the policy change outlined above. In this scenario, the [.filename]#/etc/master.passwd# of each box contains two lines starting with "+". The first of them adds a netgroup with the accounts allowed to log on to this machine, the second one adds all other accounts with [.filename]#/sbin/nologin# as shell. It is a good idea to use the "ALL-CAPS" version of the machine name as the name of the netgroup. In other words, the lines should look like this:

[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/sbin/nologin
....
Once you have completed this task for all your machines, you will not have to modify the local versions of [.filename]#/etc/master.passwd# ever again. All further changes can be handled by modifying the NIS map. Here is an example of a possible netgroup map for this scenario with some additional goodies:

[.programlisting]
....
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1   DEPT2     DEPT3
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP    ITINTERN
USERBOX   IT_EMP  ITINTERN  USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus-machine
SECURITY  IT_EMP  (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV  (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus-machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

If you are using some kind of database to manage your user accounts, you should be able to create the first part of the map with your database's report tools. This way, new users will automatically have access to the boxes.

One last word of caution: It may not always be advisable to use machine-based netgroups. If you are deploying a couple of dozen or even hundreds of identical machines for student labs, you should use role-based netgroups instead of machine-based netgroups to keep the size of the NIS map within reasonable limits.
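Note that after every change to the netgroup source file on the NIS master, the maps have to be rebuilt before the clients see the new groups. A quick sketch, assuming the default map directory [.filename]#/var/yp# and the `test-domain` domain from this chapter:

[source,shell]
....
# cd /var/yp
# make test-domain
# ypcat -k netgroup
....

`ypcat -k` lists the netgroup map together with its keys, which is a convenient way to verify that the new definitions were actually pushed out.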
=== Important Things to Remember There are still a couple of things that you will need to do differently now that you are in an NIS environment. * Every time you wish to add a user to the lab, you must add it to the master NIS server _only_, and _you must remember to rebuild the NIS maps_. If you forget to do this, the new user will not be able to login anywhere except on the NIS master. For example, if we needed to add a new user `jsmith` to the lab, we would: + [source,shell] .... # pw useradd jsmith # cd /var/yp # make test-domain .... + You could also run `adduser jsmith` instead of `pw useradd jsmith`. * _Keep the administration accounts out of the NIS maps_. You do not want to be propagating administrative accounts and passwords to machines that will have users that should not have access to those accounts. * _Keep the NIS master and slave secure, and minimize their downtime_. If somebody either hacks or simply turns off these machines, they have effectively rendered many people without the ability to login to the lab. + This is the chief weakness of any centralized administration system. If you do not protect your NIS servers, you will have a lot of angry users! === NIS v1 Compatibility FreeBSD's ypserv has some support for serving NIS v1 clients. FreeBSD's NIS implementation only uses the NIS v2 protocol, however other implementations include support for the v1 protocol for backwards compatibility with older systems. The ypbind daemons supplied with these systems will try to establish a binding to an NIS v1 server even though they may never actually need it (and they may persist in broadcasting in search of one even after they receive a response from a v2 server). Note that while support for normal client calls is provided, this version of ypserv does not handle v1 map transfer requests; consequently, it cannot be used as a master or slave in conjunction with older NIS servers that only support the v1 protocol. 
Fortunately, there probably are not any such servers still in use today.

[[network-nis-server-is-client]]
=== NIS Servers That Are Also NIS Clients

Care must be taken when running ypserv in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again.

You can force a host to bind to a particular server by running `ypbind` with the `-S` flag. If you do not want to do this manually each time you reboot your NIS server, you can add the following lines to your [.filename]#/etc/rc.conf#:

[.programlisting]
....
nis_client_enable="YES" # run client stuff as well
nis_client_flags="-S NIS domain,server"
....

See man:ypbind[8] for further information.

=== Password Formats

One of the most common issues that people run into when trying to implement NIS is password format compatibility. If your NIS server is using DES encrypted passwords, it will only support clients that are also using DES. For example, if you have Solaris(TM) NIS clients in your network, then you will almost certainly need to use DES encrypted passwords.

To check which format your servers and clients are using, look at [.filename]#/etc/login.conf#. If the host is configured to use DES encrypted passwords, then the `default` class will contain an entry like this:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

Other possible values for the `passwd_format` capability include `blf` and `md5` (for Blowfish and MD5 encrypted passwords, respectively).
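The format of hashes already stored in [.filename]#/etc/master.passwd# can also be recognized by their prefix, which is often quicker than reading [.filename]#/etc/login.conf# on an unfamiliar machine. The hashes below are fabricated and only illustrate the shape of each format:

[.programlisting]
....
jsmith:Mp6Qd9cA5fWzQ:...                        DES (13 characters, no prefix)
jsmith:$1$spLp3vqL$PlWWdcGLO7Cms3SIBEbh4.:...   MD5 ($1$ prefix)
jsmith:$2a$04$G3s/tMOsYQNmaox5tlT...:...        Blowfish ($2a$ prefix)
....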
If you have made changes to [.filename]#/etc/login.conf#, you will also need to rebuild the login capability database, which is achieved by running the following command as `root`:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords already in [.filename]#/etc/master.passwd# will not be updated until a user changes his password for the first time _after_ the login capability database is rebuilt.
====

Next, in order to ensure that passwords are encrypted with the format that you have chosen, you should also check that the `crypt_default` in [.filename]#/etc/auth.conf# gives precedence to your chosen password format. To do this, place the format that you have chosen first in the list. For example, when using DES encrypted passwords, the entry would be:

[.programlisting]
....
crypt_default = des blf md5
....

Having followed the above steps on each of the FreeBSD-based NIS servers and clients, you can be sure that they all agree on which password format is used within your network. If you have trouble authenticating on an NIS client, this is a pretty good place to start looking for possible problems. Remember: if you want to deploy an NIS server for a heterogeneous network, you will probably have to use DES on all systems because it is the lowest common standard.

[[network-dhcp]]
== Automatic Network Configuration (DHCP)

=== What Is DHCP?

DHCP, the Dynamic Host Configuration Protocol, describes the means by which a system can connect to a network and obtain the necessary information for communication upon that network. FreeBSD versions prior to 6.0 use the ISC (Internet Software Consortium) DHCP client (man:dhclient[8]) implementation. Later versions use the OpenBSD `dhclient` taken from OpenBSD 3.7. All information here regarding `dhclient` is for use with either of the ISC or OpenBSD DHCP clients. The DHCP server is the one included in the ISC distribution.
=== What This Section Covers

This section describes both the client-side components of the ISC and OpenBSD DHCP client and the server-side components of the ISC DHCP system. The client-side program, `dhclient`, comes integrated within FreeBSD, and the server-side portion is available from the package:net/isc-dhcp3-server[] port. The man:dhclient[8], man:dhcp-options[5], and man:dhclient.conf[5] manual pages, in addition to the references below, are useful resources.

=== How It Works

When `dhclient`, the DHCP client, is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests are sent from UDP port 68 to the server's UDP port 67. The server replies to the client on UDP port 68, giving the client an IP address and other relevant network information such as netmask, router, and DNS servers. All of this information comes in the form of a DHCP "lease" and is only valid for a certain time (configured by the DHCP server maintainer). In this manner, stale IP addresses for clients no longer connected to the network can be automatically reclaimed.

DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in man:dhcp-options[5].

=== FreeBSD Integration

FreeBSD fully integrates the ISC or OpenBSD DHCP client, `dhclient` (according to the FreeBSD version you run). DHCP client support is provided within both the installer and the base system, obviating the need for detailed knowledge of network configurations on any network that runs a DHCP server. `dhclient` has been included in all FreeBSD distributions since 3.2.

DHCP is supported by sysinstall. When configuring a network interface within sysinstall, the second question asked is: "Do you want to try DHCP configuration of the interface?". Answering affirmatively will execute `dhclient`, and if successful, will fill in the network configuration information automatically.
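The exchange described above can also be triggered by hand, which is useful when testing a DHCP server. A sketch, assuming an interface named `fxp0` as in the examples in this section:

[source,shell]
....
# dhclient fxp0
# ifconfig fxp0
# cat /var/db/dhclient.leases
....

`dhclient` requests a lease for the given interface, `ifconfig` shows the address that was configured, and the leases file records the lease that was obtained.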
There are two things you must do to have your system use DHCP upon startup:

* Make sure that the [.filename]#bpf# device is compiled into your kernel. To do this, add `device bpf` to your kernel configuration file, and rebuild the kernel. For more information about building kernels, see crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].
+
The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel that is supplied with FreeBSD, so if you do not have a custom kernel, you should not need to create one in order to get DHCP working.
+
[NOTE]
====
For those who are particularly security conscious, you should be warned that [.filename]#bpf# is also the device that allows packet sniffers to work correctly (although they still have to be run as `root`). [.filename]#bpf# _is_ required to use DHCP, but if you are very sensitive about security, you probably should not add [.filename]#bpf# to your kernel merely because you expect to use DHCP at some point in the future.
====
* Edit your [.filename]#/etc/rc.conf# to include the following:
+
[.programlisting]
....
ifconfig_fxp0="DHCP"
....
+
[NOTE]
====
Be sure to replace `fxp0` with the designation for the interface that you wish to dynamically configure, as described in crossref:config[config-network-setup,Setting Up Network Interface Cards].
====
+
If you are using a different location for `dhclient`, or if you wish to pass additional flags to `dhclient`, also include the following (editing as necessary):
+
[.programlisting]
....
dhcp_program="/sbin/dhclient"
dhcp_flags=""
....

The DHCP server, dhcpd, is included as part of the package:net/isc-dhcp3-server[] port in the ports collection. This port contains the ISC DHCP server and documentation.

=== Files

* [.filename]#/etc/dhclient.conf#
+
`dhclient` requires a configuration file, [.filename]#/etc/dhclient.conf#. Typically the file contains only comments, the defaults being reasonably sane.
This configuration file is described by the man:dhclient.conf[5] manual page.

* [.filename]#/sbin/dhclient#
+
`dhclient` is statically linked and resides in [.filename]#/sbin#. The man:dhclient[8] manual page gives more information about `dhclient`.
* [.filename]#/sbin/dhclient-script#
+
`dhclient-script` is the FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any user modification to function properly.
* [.filename]#/var/db/dhclient.leases#
+
The DHCP client keeps a database of valid leases in this file, which is written as a log. man:dhclient.leases[5] gives a slightly longer description.

=== Further Reading

The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. An informational resource has also been set up at http://www.dhcp.org/[http://www.dhcp.org/].

[[network-dhcp-server]]
=== Installing and Configuring a DHCP Server

==== What This Section Covers

This section provides information on how to configure a FreeBSD system to act as a DHCP server using the ISC (Internet Software Consortium) implementation of the DHCP server. The server is not provided as part of FreeBSD, and so you will need to install the package:net/isc-dhcp3-server[] port to provide this service. See crossref:ports[ports,Installing Applications: Packages and Ports] for more information on using the Ports Collection.

==== DHCP Server Installation

In order to configure your FreeBSD system as a DHCP server, you will need to ensure that the man:bpf[4] device is compiled into your kernel. To do this, add `device bpf` to your kernel configuration file, and rebuild the kernel. For more information about building kernels, see crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].

The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel that is supplied with FreeBSD, so you do not need to create a custom kernel in order to get DHCP working.
[NOTE]
====
Those who are particularly security conscious should note that [.filename]#bpf# is also the device that allows packet sniffers to work correctly (although such programs still need privileged access). [.filename]#bpf# _is_ required to use DHCP, but if you are very sensitive about security, you probably should not include [.filename]#bpf# in your kernel purely because you expect to use DHCP at some point in the future.
====

The next thing that you will need to do is edit the sample [.filename]#dhcpd.conf# which was installed by the package:net/isc-dhcp3-server[] port. By default, this will be [.filename]#/usr/local/etc/dhcpd.conf.sample#, and you should copy this to [.filename]#/usr/local/etc/dhcpd.conf# before proceeding to make changes.

==== Configuring the DHCP Server

[.filename]#dhcpd.conf# consists of declarations regarding subnets and hosts, and is perhaps most easily explained using an example:

[.programlisting]
....
option domain-name "example.com";<.>
option domain-name-servers 192.168.4.100;<.>
option subnet-mask 255.255.255.0;<.>

default-lease-time 3600;<.>
max-lease-time 86400;<.>
ddns-update-style none;<.>

subnet 192.168.4.0 netmask 255.255.255.0 {
  range 192.168.4.129 192.168.4.254;<.>
  option routers 192.168.4.1;<.>
}

host mailhost {
  hardware ethernet 02:03:04:05:06:07;<.>
  fixed-address mailhost.example.com;<.>
}
....

<.> This option specifies the domain that will be provided to clients as the default search domain. See man:resolv.conf[5] for more information on what this means.
<.> This option specifies a comma separated list of DNS servers that the client should use.
<.> The netmask that will be provided to clients.
<.> A client may request a specific length of time that a lease will be valid. Otherwise the server will assign a lease with this expiry value (in seconds).
<.> This is the maximum length of time that the server will lease for.
Should a client request a longer lease, a lease will be issued, although it will only be valid for `max-lease-time` seconds. <.> This option specifies whether the DHCP server should attempt to update DNS when a lease is accepted or released. In the ISC implementation, this option is _required_. <.> This denotes which IP addresses should be used in the pool reserved for allocating to clients. IP addresses between, and including, the ones stated are handed out to clients. <.> Declares the default gateway that will be provided to clients. <.> The hardware MAC address of a host (so that the DHCP server can recognize a host when it makes a request). <.> Specifies that the host should always be given the same IP address. Note that using a hostname is correct here, since the DHCP server will resolve the hostname itself before returning the lease information. Once you have finished writing your [.filename]#dhcpd.conf#, you should enable the DHCP server in [.filename]#/etc/rc.conf#, i.e. by adding: [.programlisting] .... dhcpd_enable="YES" dhcpd_ifaces="dc0" .... Replace the `dc0` interface name with the interface (or interfaces, separated by whitespace) that your DHCP server should listen on for DHCP client requests. Then, you can proceed to start the server by issuing the following command: [source,shell] .... # /usr/local/etc/rc.d/isc-dhcpd.sh start .... Should you need to make changes to the configuration of your server in the future, it is important to note that sending a `SIGHUP` signal to dhcpd does _not_ result in the configuration being reloaded, as it does with most daemons. You will need to send a `SIGTERM` signal to stop the process, and then restart it using the command above. ==== Files * [.filename]#/usr/local/sbin/dhcpd# + dhcpd is statically linked and resides in [.filename]#/usr/local/sbin#. The man:dhcpd[8] manual page installed with the port gives more information about dhcpd. 
* [.filename]#/usr/local/etc/dhcpd.conf# + dhcpd requires a configuration file, [.filename]#/usr/local/etc/dhcpd.conf# before it will start providing service to clients. This file needs to contain all the information that should be provided to clients that are being serviced, along with information regarding the operation of the server. This configuration file is described by the man:dhcpd.conf[5] manual page installed by the port. * [.filename]#/var/db/dhcpd.leases# + The DHCP server keeps a database of leases it has issued in this file, which is written as a log. The manual page man:dhcpd.leases[5], installed by the port gives a slightly longer description. * [.filename]#/usr/local/sbin/dhcrelay# + dhcrelay is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. If you require this functionality, then install the package:net/isc-dhcp3-relay[] port. The man:dhcrelay[8] manual page provided with the port contains more detail. [[network-dns]] == Domain Name System (DNS) === Overview FreeBSD utilizes, by default, a version of BIND (Berkeley Internet Name Domain), which is the most common implementation of the DNS protocol. DNS is the protocol through which names are mapped to IP addresses, and vice versa. For example, a query for `www.FreeBSD.org` will receive a reply with the IP address of The FreeBSD Project's web server, whereas, a query for `ftp.FreeBSD.org` will return the IP address of the corresponding FTP machine. Likewise, the opposite can happen. A query for an IP address can resolve its hostname. It is not necessary to run a name server to perform DNS lookups on a system. FreeBSD currently comes with BIND9 DNS server software by default. Our installation provides enhanced security features, a new file system layout and automated man:chroot[8] configuration. 
DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers which host and cache individual domain information. Currently, BIND is maintained by the Internet Software Consortium http://www.isc.org/[http://www.isc.org/]. === Terminology To understand this document, some terms related to DNS must be understood. [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Term | Definition |Forward DNS |Mapping of hostnames to IP addresses. |Origin |Refers to the domain covered in a particular zone file. |named, BIND, name server |Common names for the BIND name server package within FreeBSD. |Resolver |A system process through which a machine queries a name server for zone information. |Reverse DNS |The opposite of forward DNS; mapping of IP addresses to hostnames. |Root zone |The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory. |Zone |An individual domain, subdomain, or portion of the DNS administered by the same authority. |=== Examples of zones: * `.` is the root zone. * `org.` is a Top Level Domain (TLD) under the root zone. * `example.org.` is a zone under the `org.` TLD. * `1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall under the `192.168.1.*` IP space. As one can see, the more specific part of a hostname appears to its left. For example, `example.org.` is more specific than `org.`, as `org.` is more specific than the root zone. The layout of each part of a hostname is much like a file system: the [.filename]#/dev# directory falls within the root, and so on. === Reasons to Run a Name Server Name servers usually come in two forms: an authoritative name server, and a caching name server. An authoritative name server is needed when: * One wants to serve DNS information to the world, replying authoritatively to queries. 
* A domain, such as `example.org`, is registered and IP addresses need to be assigned to hostnames under it.
* An IP address block requires reverse DNS entries (IP to hostname).
* A backup or second name server, called a slave, will reply to queries.

A caching name server is needed when:

* A local DNS server may cache and respond more quickly than querying an outside name server.

When one queries for `www.FreeBSD.org`, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Every additional query will not have to look to the outside of the local network, since the information is cached locally.

=== How It Works

In FreeBSD, the BIND daemon is called named for obvious reasons.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File
| Description

|man:named[8]
|The BIND daemon.

|man:rndc[8]
|Name server control utility.

|[.filename]#/etc/namedb#
|Directory where BIND zone information resides.

|[.filename]#/etc/namedb/named.conf#
|Configuration file of the daemon.
|===

Depending on how a given zone is configured on the server, the files related to that zone can be found in the [.filename]#master#, [.filename]#slave#, or [.filename]#dynamic# subdirectories of the [.filename]#/etc/namedb# directory. These files contain the DNS information that will be given out by the name server in response to queries.

=== Starting BIND

Since BIND is installed by default, configuring it all is relatively simple. The default named configuration is that of a basic resolving name server, run in a man:chroot[8] environment. To start the server one time with this configuration, use the following command:

[source,shell]
....
# /etc/rc.d/named forcestart
....

To ensure the named daemon is started at boot each time, put the following line into [.filename]#/etc/rc.conf#:

[.programlisting]
....
named_enable="YES"
....
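Once named is running, you can verify that it answers queries by pointing a lookup tool directly at it; for example:

[source,shell]
....
% host www.FreeBSD.org 127.0.0.1
....

man:host[1] sends the query to the server given as its second argument, here the local named, instead of the name servers listed in [.filename]#/etc/resolv.conf#. If a reply with an IP address comes back, the resolving name server is working.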
There are obviously many configuration options for [.filename]#/etc/namedb/named.conf# that are beyond the scope of this document. However, if you are interested in the startup options for named on FreeBSD, take a look at the `named_*` flags in [.filename]#/etc/defaults/rc.conf# and consult the man:rc.conf[5] manual page. The crossref:config[configtuning-rcd,Using the rc System in FreeBSD] section is also a good read.

=== Configuration Files

Configuration files for named currently reside in the [.filename]#/etc/namedb# directory and will need modification before use, unless all that is needed is a simple resolver. This is where most of the configuration will be performed.

==== Using `make-localhost`

To configure a master zone for the localhost, visit the [.filename]#/etc/namedb# directory and run the following command:

[source,shell]
....
# sh make-localhost
....

If all went well, a new file should exist in the [.filename]#master# subdirectory. The filenames should be [.filename]#localhost.rev# for the local domain name and [.filename]#localhost-v6.rev# for IPv6 configurations. Since this is the default configuration, the required information will already be present in the [.filename]#named.conf# file.

==== [.filename]#/etc/namedb/named.conf#

[.programlisting]
....
// $FreeBSD$
//
// Refer to the named.conf(5) and named(8) man pages, and the documentation
// in /usr/shared/doc/bind9 for more details.
//
// If you are going to set up an authoritative server, make sure you
// understand the hairy details of how DNS works.  Even with
// simple mistakes, you can break connectivity for affected parties,
// or cause huge amounts of useless Internet traffic.

options {
	directory	"/etc/namedb";
	pid-file	"/var/run/named/pid";
	dump-file	"/var/dump/named_dump.db";
	statistics-file	"/var/stats/named.stats";

// If named is being used only as a local resolver, this is a safe default.
// For named to be accessible to the network, comment this option, specify // the proper IP address, or delete this option. listen-on { 127.0.0.1; }; // If you have IPv6 enabled on this system, uncomment this option for // use as a local resolver. To give access to the network, specify // an IPv6 address, or the keyword "any". // listen-on-v6 { ::1; }; // In addition to the "forwarders" clause, you can force your name // server to never initiate queries of its own, but always ask its // forwarders only, by enabling the following line: // // forward only; // If you've got a DNS server around at your upstream provider, enter // its IP address here, and enable the line below. This will make you // benefit from its cache, thus reduce overall DNS traffic in the Internet. /* forwarders { 127.0.0.1; }; */ .... Just as the comment says, to benefit from an uplink's cache, `forwarders` can be enabled here. Under normal circumstances, a name server will recursively query the Internet looking at certain name servers until it finds the answer it is looking for. Having this enabled will have it query the uplink's name server (or name server provided) first, taking advantage of its cache. If the uplink name server in question is a heavily trafficked, fast name server, enabling this may be worthwhile. [WARNING] ==== `127.0.0.1` will _not_ work here. Change this IP address to a name server at your uplink. ==== [.programlisting] .... /* * If there is a firewall between you and nameservers you want * to talk to, you might need to uncomment the query-source * directive below. Previous versions of BIND always asked * questions using port 53, but BIND versions 8 and later * use a pseudo-random unprivileged UDP port by default. */ // query-source address * port 53; }; // If you enable a local name server, don't forget to enter 127.0.0.1 // first in your /etc/resolv.conf so this server will be queried. // Also, make sure to enable it in /etc/rc.conf. zone "." 
{ type hint; file "named.root"; }; zone "0.0.127.IN-ADDR.ARPA" { type master; file "master/localhost.rev"; }; // RFC 3152 zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA" { type master; file "master/localhost-v6.rev"; }; // NB: Do not use the IP addresses below, they are faked, and only // serve demonstration/documentation purposes! // // Example slave zone config entries. It can be convenient to become // a slave at least for the zone your own domain is in. Ask // your network administrator for the IP address of the responsible // primary. // // Never forget to include the reverse lookup (IN-ADDR.ARPA) zone! // (This is named after the first bytes of the IP address, in reverse // order, with ".IN-ADDR.ARPA" appended.) // // Before starting to set up a primary zone, make sure you fully // understand how DNS and BIND works. There are sometimes // non-obvious pitfalls. Setting up a slave zone is simpler. // // NB: Don't blindly enable the examples below. :-) Use actual names // and addresses instead. /* An example master zone zone "example.net" { type master; file "master/example.net"; }; */ /* An example dynamic zone key "exampleorgkey" { algorithm hmac-md5; secret "sf87HJqjkqh8ac87a02lla=="; }; zone "example.org" { type master; allow-update { key "exampleorgkey"; }; file "dynamic/example.org"; }; */ /* Examples of forward and reverse slave zones zone "example.com" { type slave; file "slave/example.com"; masters { 192.168.1.1; }; }; zone "1.168.192.in-addr.arpa" { type slave; file "slave/1.168.192.in-addr.arpa"; masters { 192.168.1.1; }; }; */ .... In [.filename]#named.conf#, these are examples of slave entries for a forward and reverse zone. For each new zone served, a new zone entry must be added to [.filename]#named.conf#. For example, the simplest zone entry for `example.org` can look like: [.programlisting] .... zone "example.org" { type master; file "master/example.org"; }; .... 
The zone is a master, as indicated by the `type` statement, holding its zone information in [.filename]#/etc/namedb/master/example.org# indicated by the `file` statement. [.programlisting] .... zone "example.org" { type slave; file "slave/example.org"; }; .... In the slave case, the zone information is transferred from the master name server for the particular zone, and saved in the file specified. If and when the master server dies or is unreachable, the slave name server will have the transferred zone information and will be able to serve it. ==== Zone Files An example master zone file for `example.org` (existing within [.filename]#/etc/namedb/master/example.org#) is as follows: [.programlisting] .... $TTL 3600 ; 1 hour example.org. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 86400 ; Minimum TTL ) ; DNS Servers IN NS ns1.example.org. IN NS ns2.example.org. ; MX Records IN MX 10 mx.example.org. IN MX 20 mail.example.org. IN A 192.168.1.1 ; Machine Names localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 ; Aliases www IN CNAME @ .... Note that every hostname ending in a "." is an exact hostname, whereas everything without a trailing "." is referenced to the origin. For example, `www` is translated into `www.origin`. In our fictitious zone file, our origin is `example.org.`, so `www` would translate to `www.example.org.` The format of a zone file follows: [.programlisting] .... recordname IN recordtype value .... The most commonly used DNS records: SOA:: start of zone authority NS:: an authoritative name server A:: a host address CNAME:: the canonical name for an alias MX:: mail exchanger PTR:: a domain name pointer (used in reverse DNS) [.programlisting] .... example.org. IN SOA ns1.example.org. admin.example.org. 
			(
			2006051501	; Serial
			10800		; Refresh after 3 hours
			3600		; Retry after 1 hour
			604800		; Expire after 1 week
			86400 )		; Minimum TTL of 1 day
....

`example.org.`::
the domain name, also the origin for this zone file.

`ns1.example.org.`::
the primary/authoritative name server for this zone.

`admin.example.org.`::
the responsible person for this zone, email address with "@" replaced. (mailto:admin@example.org[admin@example.org] becomes `admin.example.org`)

`2006051501`::
the serial number of the file. This must be incremented each time the zone file is modified. Nowadays, many admins prefer a `yyyymmddrr` format for the serial number. `2006051501` would mean last modified 05/15/2006, the latter `01` being the first time the zone file has been modified this day. The serial number is important as it alerts slave name servers for a zone when it is updated.

[.programlisting]
....
	IN	NS	ns1.example.org.
....

This is an NS entry. Every name server that is going to reply authoritatively for the zone must have one of these entries.

[.programlisting]
....
localhost	IN	A	127.0.0.1
ns1		IN	A	192.168.1.2
ns2		IN	A	192.168.1.3
mx		IN	A	192.168.1.4
mail		IN	A	192.168.1.5
....

The A record indicates machine names. As seen above, `ns1.example.org` would resolve to `192.168.1.2`.

[.programlisting]
....
	IN	A	192.168.1.1
....

This line assigns IP address `192.168.1.1` to the current origin, in this case `example.org`.

[.programlisting]
....
www	IN	CNAME	@
....

The canonical name record is usually used for giving aliases to a machine. In the example, `www` is aliased to the "master" machine, whose name equals the domain name `example.org` (`192.168.1.1`). CNAMEs can be used to provide alias hostnames, or to round robin one hostname among multiple machines.

[.programlisting]
....
	IN	MX	10	mail.example.org.
....

The MX record indicates which mail servers are responsible for handling incoming mail for the zone.
`mail.example.org` is the hostname of the mail server, and 10 is the priority of that mail server. One can have several mail servers, with priorities of 10, 20, and so on. A mail server attempting to deliver to `example.org` will first try the highest priority MX (the record with the lowest priority number), then the second highest, and so on, until the mail can be properly delivered.

For in-addr.arpa zone files (reverse DNS), the same format is used, except with PTR entries instead of A or CNAME.

[.programlisting]
....
$TTL 3600

1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. (
                        2006051501      ; Serial
                        10800           ; Refresh
                        3600            ; Retry
                        604800          ; Expire
                        3600 )          ; Minimum

        IN      NS      ns1.example.org.
        IN      NS      ns2.example.org.

1       IN      PTR     example.org.
2       IN      PTR     ns1.example.org.
3       IN      PTR     ns2.example.org.
4       IN      PTR     mx.example.org.
5       IN      PTR     mail.example.org.
....

This file gives the proper IP address to hostname mappings of our above fictitious domain.

=== Caching Name Server

A caching name server is a name server that is not authoritative for any zones. It simply performs queries of its own, and remembers the answers for later use. To set one up, just configure the name server as usual, omitting any inclusions of zones.

=== Security

Although BIND is the most common implementation of DNS, there is always the issue of security. Possible and exploitable security holes are sometimes found. While FreeBSD automatically drops named into a man:chroot[8] environment, there are several other security mechanisms in place which could help to limit possible DNS service attacks.

It is always a good idea to read http://www.cert.org/[CERT]'s security advisories and to subscribe to the {freebsd-security-notifications} to stay up to date with the current Internet and FreeBSD security issues.

[TIP]
====
If a problem arises, keeping sources up to date and having a fresh build of named would not hurt.
====

=== Further Reading

BIND/named manual pages: man:rndc[8], man:named[8], man:named.conf[8]

* http://www.isc.org/products/BIND/[Official ISC BIND Page]
* http://www.isc.org/sw/guild/bf/[Official ISC BIND Forum]
* http://www.nominum.com/getOpenSourceResource.php?id=6[BIND FAQ]
* http://www.oreilly.com/catalog/dns5/[O'Reilly DNS and BIND 5th Edition]
* link:ftp://ftp.isi.edu/in-notes/rfc1034.txt[RFC1034 - Domain Names - Concepts and Facilities]
* link:ftp://ftp.isi.edu/in-notes/rfc1035.txt[RFC1035 - Domain Names - Implementation and Specification]

[[network-apache]]
== The Apache HTTP Server

=== Synopsis

FreeBSD is used to run some of the busiest web sites in the world. The majority of web servers on the Internet use the Apache HTTP Server. Apache software packages should be included on the FreeBSD installation media you are using. If you did not install Apache when you first installed FreeBSD, then you can install it from the package:www/apache13[] or package:www/apache20[] port or package. Once Apache has been installed successfully, it must be configured.

[NOTE]
====
This section covers version 1.3.X of the Apache HTTP Server, as that is the most widely used version for FreeBSD. Apache 2.X introduces many new technologies, but these are not described in this section. For more information about Apache 2.X, please see http://httpd.apache.org/[http://httpd.apache.org/].
====

=== Configuration

On FreeBSD, the main Apache HTTP Server configuration file is [.filename]#/usr/local/etc/apache/httpd.conf#. This is a typical UNIX(R) text configuration file, with comment lines beginning with the `#` character.
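For example, a fragment of [.filename]#httpd.conf# mixing comments and directives looks like the following (the values shown here are purely illustrative, not required settings):

[.programlisting]
....
# ServerRoot: the top of the directory tree for the server's
# configuration, error, and log files.
ServerRoot "/usr/local"

# Listen: the port the server listens on.
Listen 80
....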
A complete description of all possible options is beyond the scope of this section, so only the most commonly modified configuration directives are described here.

`ServerRoot "/usr/local"`::
This specifies the default directory hierarchy for the Apache installation. Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root, and configuration files are stored in [.filename]#etc/apache#.

`ServerAdmin you@your.address`::
The address to which problems with the server should be emailed. This address appears on some server-generated pages, such as error documents.

`ServerName www.example.com`::
`ServerName` allows you to set a hostname for your server which is sent back to clients if it is different from the one that the host is configured with (for example, using `www` instead of the host's real name).

`DocumentRoot "/usr/local/www/data"`::
The directory out of which your documents will be served. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.

It is always a good idea to make backup copies of the Apache configuration file before making changes. Once you are satisfied with the initial configuration, you are ready to start running Apache.

=== Running Apache

Apache does not run from the inetd super-server as many other network servers do. It is configured to run standalone for better performance when handling incoming HTTP requests from client web browsers.
The Apache installation from the FreeBSD Ports Collection includes a helper shell script for starting, stopping, and restarting the server. To start Apache for the first time, just run:

[source,shell]
....
# /usr/local/sbin/apachectl start
....

You can stop the server at any time by typing:

[source,shell]
....
# /usr/local/sbin/apachectl stop
....

After making changes to the configuration file for any reason, you will need to restart the server:

[source,shell]
....
# /usr/local/sbin/apachectl restart
....

To restart Apache without aborting current connections, run:

[source,shell]
....
# /usr/local/sbin/apachectl graceful
....

Additional information is available in the man:apachectl[8] manual page. To launch Apache automatically at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache_enable="YES"
....

If you would like to supply additional command line options for the Apache `httpd` program started at system boot, you may specify them with an additional line in [.filename]#rc.conf#:

[.programlisting]
....
apache_flags=""
....

Now that the web server is running, you can view your web site by pointing a web browser at `http://localhost/`. The default web page that is displayed is [.filename]#/usr/local/www/data/index.html#.

=== Virtual Hosting

Apache supports two different types of virtual hosting. Name-based virtual hosting uses the HTTP/1.1 headers to figure out the hostname, which allows many different domains to share the same IP address. To set up Apache to use name-based virtual hosting, add an entry like the following to your [.filename]#httpd.conf#:

[.programlisting]
....
NameVirtualHost *
....
If your web server is named `www.domain.tld` and you would like to set up a virtual domain for `www.someotherdomain.tld`, then add the following entries to [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
ServerName www.domain.tld
DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
ServerName www.someotherdomain.tld
DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

Replace the addresses above with the addresses you want to use, and the paths with the appropriate paths to your documents. For more information about setting up virtual hosts, please consult the official Apache documentation at http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache Modules

There are many different Apache modules available, which extend and enrich the functionality of the basic server. The FreeBSD Ports Collection provides an easy way to install Apache together with some of the more popular modules.

==== mod_ssl

The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols. This module provides everything necessary to request signed certificates from trusted certificate authorities so that you can run a secure web server on FreeBSD.

If you have not yet installed Apache, a version of Apache 1.3.X that includes mod_ssl may be installed with the package:www/apache13-modssl[] port. SSL support is also available for Apache 2.X in the package:www/apache20[] port, where it is enabled by default.

==== Dynamic Websites with Perl & PHP

Over the past decade, many businesses have turned to the Internet in order to enhance their revenue and increase their exposure.
This, in turn, has increased the need for interactive web content. While some companies, such as Microsoft(R), have introduced solutions in their proprietary products, the open source community answered the call. Modern options for dynamic web content include Django, Ruby on Rails, mod_perl, and mod_php.

===== mod_perl

The Apache/Perl integration project brings together the full power of the Perl programming language and the Apache HTTP Server. With the mod_perl module, it is possible to write Apache modules entirely in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external Perl interpreter and the penalty of Perl start-up time.

mod_perl is available in a few different forms. Bear in mind that mod_perl 1.0 only works with Apache 1.3 and mod_perl 2.0 only works with Apache 2. mod_perl 1.0 is available in the package:www/mod_perl[] port, and a statically compiled version is available in package:www/apache13-modperl[]. mod_perl 2.0 is available in the package:www/mod_perl2[] port.

===== mod_php

PHP, also known as "PHP: Hypertext Preprocessor", is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of allowing web developers to write dynamically generated web pages quickly.

Apache supports PHP5. Begin by installing the package:lang/php5[] package. If the package:lang/php5[] port is being installed for the first time, the available `OPTIONS` will be displayed automatically. If a menu is not displayed, e.g.
because the package:lang/php5[] port has been installed some time in the past, the options dialog can always be brought up again by running, in the port directory:

[source,shell]
....
# make config
....

In the options dialog, check the `APACHE` option to build the mod_php loadable module for the Apache web server.

[NOTE]
====
Some sites are still using PHP4 for various reasons (e.g. compatibility issues, or web applications already deployed that require it). If mod_php4 is needed instead of mod_php5, then please use the package:lang/php4[] port. The package:lang/php4[] port supports many of the configuration and build-time options of the package:lang/php5[] port.
====

This will install and configure the modules required to support dynamic PHP applications. To confirm, check that the following sections have been added to [.filename]#/usr/local/etc/apache/httpd.conf#:

[.programlisting]
....
LoadModule php5_module        libexec/apache/libphp5.so
....

[.programlisting]
....
AddModule mod_php5.c
DirectoryIndex index.php index.html
AddType application/x-httpd-php .php
AddType application/x-httpd-php-source .phps
....

Once this is complete, a simple call to `apachectl` for a graceful restart is all that is needed to load the PHP module:

[source,shell]
....
# apachectl graceful
....

For future upgrades of PHP, the `make config` command will not be required; the selected `OPTIONS` are saved automatically by the FreeBSD Ports framework.

The PHP support in FreeBSD is extremely modular, so the base install is very limited. It is very easy, however, to add extensions using the package:lang/php5-extensions[] port. This port provides a menu-driven interface for installing PHP extensions.
Alternatively, individual extensions can be installed using the appropriate port. For instance, to add MySQL database support to PHP5, simply install the package:databases/php5-mysql[] port.

After installing a new module or extension, the Apache server must be reloaded to pick up the new configuration settings:

[source,shell]
....
# apachectl graceful
....

[[network-ftp]]
== File Transfer Protocol (FTP)

=== Synopsis

The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. FreeBSD includes an FTP server, ftpd, in the base system. This makes setting up and administering an FTP server on FreeBSD very straightforward.

=== Configuration

The most important configuration step is deciding which accounts will be allowed access to the FTP server. A normal FreeBSD system has a number of system accounts used by various daemons, but unknown users should not be allowed to log in with these accounts. The [.filename]#/etc/ftpusers# file is a list of users disallowed any FTP access. By default, it includes the aforementioned system accounts, but it is possible to add specific users that should not be allowed access to FTP.

You may want to restrict the access of certain users without preventing them completely from using FTP. This can be accomplished with the [.filename]#/etc/ftpchroot# file. This file lists users and groups subject to FTP access restrictions. The man:ftpchroot[5] manual page has all of the details, so it will not be described in detail here.
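As a brief illustration (the user and group names below are hypothetical examples, not defaults), an [.filename]#/etc/ftpchroot# that confines one user and all members of one group to their home directories might contain:

[.programlisting]
....
jsmith
@ftpusers
....

Consult man:ftpchroot[5] for the full entry syntax before relying on this format.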
If you would like to enable anonymous FTP access to your server, then you must create a user named `ftp` on your FreeBSD system. Users will then be able to log on to your FTP server with a username of `ftp` or `anonymous` and with any password (by convention, the user's email address is given as the password). The FTP server will call man:chroot[2] when an anonymous user logs in, to restrict access to only the home directory of the `ftp` user.

There are two text files that specify welcome messages to be displayed to FTP clients. The contents of the file [.filename]#/etc/ftpwelcome# will be displayed to users before they reach the login prompt. After a successful login, the contents of the file [.filename]#/etc/ftpmotd# will be displayed. Note that the path to this file is relative to the login environment, so for anonymous users the contents of [.filename]#~ftp/etc/ftpmotd# would be displayed.

Once the FTP server has been configured properly, it must be enabled in [.filename]#/etc/inetd.conf#. All that is required here is to remove the comment symbol "#" from in front of the existing ftpd line:

[.programlisting]
....
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l
....

As explained in <>, the inetd process must be reloaded after this configuration file is changed. You can now log on to your FTP server by typing:

[source,shell]
....
% ftp localhost
....

=== Maintaining

The ftpd daemon uses man:syslog[3] to log messages. By default, the system log daemon will put messages related to FTP in the [.filename]#/var/log/xferlog# file.
The location of the FTP log can be modified by changing the following line in [.filename]#/etc/syslog.conf#:

[.programlisting]
....
ftp.info      /var/log/xferlog
....

Be aware of the potential problems involved with running an anonymous FTP server. In particular, you should think twice about allowing anonymous users to upload files. If you do, you may suddenly find that your FTP server is being used to trade unlicensed commercial software, or worse, other illegal material. If you do need to allow uploads, then you should set up the permissions so that these files cannot be read by other anonymous users until they have been reviewed and approved.

[[network-samba]]
== File and Print Services for Microsoft(R) Windows(R) Clients (Samba)

=== Overview

Samba is a popular open source software package that provides file and print services for Microsoft(R) Windows(R) clients. Such clients can connect to and use FreeBSD filespace as if it were a local disk drive, or FreeBSD printers as if they were local printers.

Samba software packages should be included on your FreeBSD installation media. If you did not install Samba when you first installed FreeBSD, then you can install it from the package:net/samba3[] port or package.

=== Configuration

A default Samba configuration file is installed as [.filename]#/usr/local/etc/smb.conf.default#. This file must be copied to [.filename]#/usr/local/etc/smb.conf# and customized before Samba can be used.

The [.filename]#smb.conf# file contains runtime configuration information for Samba, such as definitions of the printers and "file system shares" that you would like to share with Windows(R) clients.
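As a sketch of what such definitions look like (the workgroup name, share name, and path below are illustrative examples only), a minimal [.filename]#smb.conf# with one read-only share might be:

[.programlisting]
....
[global]
workgroup = MYGROUP
server string = Samba Server

[public]
path = /usr/local/share/public
read only = yes
guest ok = yes
....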
The Samba package includes a web based tool called swat which provides a simple way of configuring the [.filename]#smb.conf# file.

==== Using the Samba Web Administration Tool (SWAT)

The Samba Web Administration Tool (SWAT) runs as a daemon from inetd. Therefore, the following line in [.filename]#/etc/inetd.conf# should be uncommented before swat can be used to configure Samba:

[.programlisting]
....
swat   stream  tcp     nowait/400      root    /usr/local/sbin/swat
....

As explained in <>, inetd must be reloaded after this configuration file is changed.

Once swat has been enabled in [.filename]#inetd.conf#, you can use a browser to connect to http://localhost:901[http://localhost:901]. You will first have to log on with the system `root` account.

Once you have successfully logged on to the main Samba configuration page, you can browse the system documentation, or begin by clicking on the menu:Globals[] tab. The menu:Globals[] section corresponds to the variables that are set in the `[global]` section of [.filename]#/usr/local/etc/smb.conf#.

==== Global Settings

Whether you are using swat or editing [.filename]#/usr/local/etc/smb.conf# directly, the first directives you are likely to encounter when configuring Samba are:

`workgroup`::
The NT domain name or workgroup name for the computers that will be accessing this server.

`netbios name`::
This sets the NetBIOS name by which a Samba server is known. By default, it is the same as the first component of the host's DNS name.

`server string`::
This sets the string that will be displayed with the `net view` command and some other networking tools that seek to display descriptive text about the server.

==== Security Settings

Two of the most important settings in [.filename]#/usr/local/etc/smb.conf# are the security model chosen, and the backend password format for client users. The following directives control these options:

`security`::
The two most common options here are `security = share` and `security = user`.
If your clients use usernames that are the same as their usernames on your FreeBSD machine, then you will want to use user level security. This is the default security policy, and it requires clients to first log on before they can access shared resources.
+
In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource. This was the default security model for older versions of Samba.

`passdb backend`::
Samba has several different backend authentication models. You can authenticate clients with LDAP, NIS+, an SQL database, or a modified password file. The default authentication method is `smbpasswd`, and that is all that will be covered here.

Assuming that the default `smbpasswd` backend is used, the [.filename]#/usr/local/private/smbpasswd# file must be created to allow Samba to authenticate clients. If you would like to give your UNIX(R) user accounts access from Windows(R) clients, use the following command:

[source,shell]
....
# smbpasswd -a username
....

Please see the http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/[Official Samba HOWTO] for additional information about configuration options. With the basics outlined here, you should have everything you need to start running Samba.

=== Starting Samba

The package:net/samba3[] port adds a new startup script, which can be used to control Samba. To enable this script, so that it can be used for example to start, stop, or restart Samba, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
samba_enable="YES"
....

[NOTE]
====
This will also configure Samba to automatically start at system boot time.
====

It is then possible to start Samba at any time by typing:

[source,shell]
....
# /usr/local/etc/rc.d/samba start
Starting SAMBA: removing stale tdbs :
Starting nmbd.
Starting smbd.
....
Please refer to crossref:config[configtuning-rcd,Using rc under FreeBSD] for more information about using rc scripts.

Samba actually consists of three separate daemons. You should see that both the nmbd and smbd daemons are started by the [.filename]#samba# script. If you enabled winbind name resolution services in [.filename]#smb.conf#, then you will also see that the winbindd daemon is started.

You can stop Samba at any time by typing:

[source,shell]
....
# /usr/local/etc/rc.d/samba stop
....

Samba is a complex software suite with functionality that allows broad integration with Microsoft(R) Windows(R) networks. For more information about functionality beyond the basic installation described here, please see http://www.samba.org[http://www.samba.org].

[[network-ntp]]
== Clock Synchronization with NTP

=== Synopsis

Over time, a computer's clock is prone to drift. The Network Time Protocol (NTP) is one way to ensure your clock stays accurate.

Many Internet services rely on, or greatly benefit from, computers' clocks being accurate. For example, a web server may receive requests to send a file if it has been modified since a certain time. In a local area network environment, it is essential that computers sharing files from the same file server have synchronized clocks so that file timestamps stay consistent. Services such as man:cron[8] also rely on an accurate system clock to run commands at the specified times.

FreeBSD ships with the man:ntpd[8] NTP server which can be used to query other NTP servers to set the clock on your machine, or to provide time services to other machines.
=== Choosing Appropriate NTP Servers

In order to synchronize your clock, you will need to find one or more NTP servers to use. Your network administrator or ISP may have set up an NTP server for this purpose - check their documentation to see if this is the case. There is also an http://ntp.isc.org/bin/view/Servers/WebHome[online list of publicly accessible NTP servers] which you can use to find an NTP server near you. Whichever server you choose, make sure you are familiar with its usage policy, and ask for permission to use it if required.

It is a good idea to choose several unconnected NTP servers, in case one of the servers you are using becomes unreachable or its clock is unreliable. The FreeBSD man:ntpd[8] server uses the responses it receives from other servers intelligently - it favors reliable servers over less reliable ones.

=== Configuring Your Machine

==== Basic Configuration

If you only wish to synchronize your clock when the machine boots up, you can use man:ntpdate[8]. This may be appropriate for desktop machines which are rebooted frequently and only require infrequent synchronization, but most machines should run man:ntpd[8].

Using man:ntpdate[8] at boot time is also a good idea for machines that run man:ntpd[8]. man:ntpd[8] changes the clock gradually, whereas man:ntpdate[8] sets the clock immediately, no matter how great the difference between the machine's current clock setting and the correct time.
To enable man:ntpdate[8] at boot time, add `ntpdate_enable="YES"` to [.filename]#/etc/rc.conf#. You will also need to specify all servers you wish to synchronize with, and all flags to be passed to man:ntpdate[8], in `ntpdate_flags`.

==== General Configuration

NTP is configured by the [.filename]#/etc/ntp.conf# file, in the format described in man:ntp.conf[5]. Here is a simple example:

[.programlisting]
....
server ntplocal.example.com prefer
server timeserver.example.org
server ntp2a.example.net

driftfile /var/db/ntp.drift
....

The `server` option specifies which servers are to be used, with one server listed on each line. If a server is specified with the `prefer` argument, as with `ntplocal.example.com`, that server is preferred over other servers. A response from a preferred server will be discarded if it differs significantly from the responses of the other servers; otherwise it will be used without any consideration of the other responses. The `prefer` argument is normally used for NTP servers that are known to be highly accurate, such as those with special time monitoring hardware.

The `driftfile` option specifies which file is used to store the system clock's frequency offset. The man:ntpd[8] program uses this value to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time.

The `driftfile` option also specifies which file is used to store information about previous responses from the NTP servers you are using. This file contains internal information for NTP. It should not be modified by any other process.
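The boot-time settings discussed at the start of this section can be collected in [.filename]#/etc/rc.conf#; for example (the server name here is an illustration, substitute a server appropriate for your site):

[.programlisting]
....
ntpdate_enable="YES"
ntpdate_flags="ntplocal.example.com"
....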
==== Controlling Access to Your Server

By default, your NTP server will be accessible to all hosts on the Internet. The `restrict` option in [.filename]#/etc/ntp.conf# allows you to control which machines can access your server.

If you want to deny all machines access to your NTP server, add the following line to [.filename]#/etc/ntp.conf#:

[.programlisting]
....
restrict default ignore
....

If you only want to allow machines within your own network to synchronize their clocks with your server, but ensure they are not allowed to configure the server or be used as peers to synchronize against, then add:

[.programlisting]
....
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
....

instead, where `192.168.1.0` is the IP address of your network and `255.255.255.0` is your network's netmask.

[.filename]#/etc/ntp.conf# can contain multiple `restrict` options. For more details, see the `Access Control Support` subsection of man:ntp.conf[5].

=== Running the NTP Server

To ensure the NTP server is started at boot time, add the line `ntpd_enable="YES"` to [.filename]#/etc/rc.conf#. To start the server without rebooting your machine, run man:ntpd[8], being certain to specify any additional parameters from `ntpd_flags` in [.filename]#/etc/rc.conf#. For example:

[source,shell]
....
# ntpd -p /var/run/ntpd.pid
....

=== Using ntpd with a Temporary Internet Connection

The man:ntpd[8] program does not need a permanent connection to the Internet to function properly. However, if you have a temporary connection that is configured to dial out on demand, it is a good idea to prevent NTP traffic from triggering a dial out or keeping the connection alive.
If you are using user PPP, you can use `filter` directives in [.filename]#/etc/ppp/ppp.conf#. For example:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

For more details, see the `PACKET FILTERING` section in man:ppp[8] and the examples in [.filename]#/usr/shared/examples/ppp/#.

[NOTE]
====
Some Internet access providers block low-numbered ports, preventing NTP from functioning since replies never reach your machine.
====

=== Further Information

Documentation for the NTP server can be found in [.filename]#/usr/shared/doc/ntp/# in HTML format.

--- title: Chapter 18. Mandatory Access Control part: Part III.
System Administration prev: books/handbook/jails next: books/handbook/audit description: "This chapter focuses on the MAC framework and the set of pluggable security policy modules FreeBSD provides for enabling various security mechanisms" tags: ["MAC", "labels", "security", "configuration", "nagios"] showBookMenu: true weight: 22 params: path: "/books/handbook/mac/" --- [[mac]] = Mandatory Access Control :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 18 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/mac/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[mac-synopsis]] == Synopsis FreeBSD supports security extensions based on the POSIX(R).1e draft. These security mechanisms include file system Access Control Lists (crossref:security[fs-acl,“Access Control Lists”]) and Mandatory Access Control (MAC). MAC allows access control modules to be loaded in order to implement security policies. Some modules provide protections for a narrow subset of the system, hardening a particular service. Others provide comprehensive labeled security across all subjects and objects. The mandatory part of the definition indicates that enforcement of controls is performed by administrators and the operating system. 
This is in contrast to the default security mechanism of Discretionary Access Control (DAC) where enforcement is left to the discretion of users. This chapter focuses on the MAC framework and the set of pluggable security policy modules FreeBSD provides for enabling various security mechanisms. Read this chapter to learn: * The terminology associated with the MAC framework. * The capabilities of MAC security policy modules as well as the difference between a labeled and non-labeled policy. * The considerations to take into account before configuring a system to use the MAC framework. * Which MAC security policy modules are included in FreeBSD and how to configure them. * How to implement a more secure environment using the MAC framework. * How to test the MAC configuration to ensure the framework has been properly implemented. Before reading this chapter: * Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD Basics]). * Have some familiarity with security and how it pertains to FreeBSD (crossref:security[security,Security]). [WARNING] ==== Improper MAC configuration may cause loss of system access, aggravation of users, or inability to access the features provided by Xorg. More importantly, MAC should not be relied upon to completely secure a system. The MAC framework only augments an existing security policy. Without sound security practices and regular security checks, the system will never be completely secure. The examples contained within this chapter are for demonstration purposes and the example settings should _not_ be implemented on a production system. Implementing any security policy takes a good deal of understanding, proper design, and thorough testing. ==== While this chapter covers a broad range of security issues relating to the MAC framework, the development of new MAC security policy modules will not be covered. 
A number of security policy modules included with the MAC framework have specific characteristics which are provided for both testing and new module development.
Refer to man:mac_test[4], man:mac_stub[4] and man:mac_none[4] for more information on these security policy modules and the various mechanisms they provide.

[[mac-inline-glossary]]
== Key Terms

The following key terms are used when referring to the MAC framework:

* _compartment_: a set of programs and data to be partitioned or separated, where users are given explicit access to specific components of a system.
A compartment represents a grouping, such as a work group, department, project, or topic.
Compartments make it possible to implement a need-to-know security policy.
* _integrity_: the level of trust which can be placed on data.
As the integrity of the data is elevated, so is the ability to trust that data.
* _level_: the increased or decreased setting of a security attribute.
As the level increases, so does its security.
* _label_: a security attribute which can be applied to files, directories, or other items in the system.
It could be considered a confidentiality stamp.
When a label is placed on a file, it describes the security properties of that file and will only permit access by files, users, and resources with a similar security setting.
The meaning and interpretation of label values depends on the policy configuration.
Some policies treat a label as representing the integrity or secrecy of an object, while other policies might use labels to hold rules for access.
* _multilabel_: this property is a file system option which can be set in single-user mode using man:tunefs[8], during boot using man:fstab[5], or during the creation of a new file system.
This option permits an administrator to apply different MAC labels to different objects.
This option only applies to security policy modules which support labeling.
* _single label_: a policy where the entire file system uses one label to enforce access control over the flow of data. Whenever `multilabel` is not set, all files will conform to the same label setting. * _object_: an entity through which information flows under the direction of a _subject_. This includes directories, files, fields, screens, keyboards, memory, magnetic storage, printers or any other data storage or moving device. An object is a data container or a system resource. Access to an object effectively means access to its data. * _subject_: any active entity that causes information to flow between _objects_ such as a user, user process, or system process. On FreeBSD, this is almost always a thread acting in a process on behalf of a user. * _policy_: a collection of rules which defines how objectives are to be achieved. A policy usually documents how certain items are to be handled. This chapter considers a policy to be a collection of rules which controls the flow of data and information and defines who has access to that data and information. * _high-watermark_: this type of policy permits the raising of security levels for the purpose of accessing higher level information. In most cases, the original level is restored after the process is complete. Currently, the FreeBSD MAC framework does not include this type of policy. * _low-watermark_: this type of policy permits lowering security levels for the purpose of accessing information which is less secure. In most cases, the original security level of the user is restored after the process is complete. The only security policy module in FreeBSD to use this is man:mac_lomac[4]. * _sensitivity_: usually used when discussing Multilevel Security (MLS). A sensitivity level describes how important or secret the data should be. As the sensitivity level increases, so does the importance of the secrecy, or confidentiality, of the data. 
[[mac-understandlabel]] == Understanding MAC Labels A MAC label is a security attribute which may be applied to subjects and objects throughout the system. When setting a label, the administrator must understand its implications in order to prevent unexpected or undesired behavior of the system. The attributes available on an object depend on the loaded policy module, as policy modules interpret their attributes in different ways. The security label on an object is used as a part of a security access control decision by a policy. With some policies, the label contains all of the information necessary to make a decision. In other policies, the labels may be processed as part of a larger rule set. There are two types of label policies: single label and multi label. By default, the system will use single label. The administrator should be aware of the pros and cons of each in order to implement policies which meet the requirements of the system's security model. A single label security policy only permits one label to be used for every subject or object. Since a single label policy enforces one set of access permissions across the entire system, it provides lower administration overhead, but decreases the flexibility of policies which support labeling. However, in many environments, a single label policy may be all that is required. A single label policy is somewhat similar to DAC as `root` configures the policies so that users are placed in the appropriate categories and access levels. A notable difference is that many policy modules can also restrict `root`. Basic control over objects will then be released to the group, but `root` may revoke or modify the settings at any time. When appropriate, a multi label policy can be set on a UFS file system by passing `multilabel` to man:tunefs[8]. A multi label policy permits each subject or object to have its own independent MAC label. 
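As mentioned under the `multilabel` key term, the option can also be enabled at boot through man:fstab[5].
A hypothetical [.filename]#/etc/fstab# entry for a multi label UFS file system might look like the following; the device name and mount point are assumptions for illustration only:

[.programlisting]
....
# Device        Mountpoint    FStype  Options         Dump  Pass#
/dev/ada0p5     /usr/local    ufs     rw,multilabel   2     2
....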
The decision to use a multi label or single label policy is only required for policies which implement the labeling feature, such as `biba`, `lomac`, and `mls`. Some policies, such as `seeotheruids`, `portacl` and `partition`, do not use labels at all. Using a multi label policy on a partition and establishing a multi label security model can increase administrative overhead as everything in that file system has a label. This includes directories, files, and even device nodes. The following command will set `multilabel` on the specified UFS file system. This may only be done in single-user mode and is not a requirement for the swap file system: [source,shell] .... # tunefs -l enable / .... [NOTE] ==== Some users have experienced problems with setting the `multilabel` flag on the root partition. If this is the case, please review crossref:mac[mac-troubleshoot, Troubleshooting the MAC Framework]. ==== Since the multi label policy is set on a per-file system basis, a multi label policy may not be needed if the file system layout is well designed. Consider an example security MAC model for a FreeBSD web server. This machine uses the single label, `biba/high`, for everything in the default file systems. If the web server needs to run at `biba/low` to prevent write up capabilities, it could be installed to a separate UFS [.filename]#/usr/local# file system set at `biba/low`. === Label Configuration Virtually all aspects of label policy module configuration will be performed using the base system utilities. These commands provide a simple interface for object or subject configuration or the manipulation and verification of the configuration. All configuration may be done using `setfmac`, which is used to set MAC labels on system objects, and `setpmac`, which is used to set the labels on system subjects. For example, to set the `biba` MAC label to `high` on [.filename]#test#: [source,shell] .... # setfmac biba/high test .... 
If the configuration is successful, the prompt will be returned without error.
A common error is `Permission denied`, which usually occurs when the label is being set or modified on a restricted object.
Other conditions may produce different failures.
For instance, the file may not be owned by the user attempting to relabel the object, the object may not exist, or the object may be read-only.
A mandatory policy will not allow the process to relabel the file, perhaps because of a property of the file, a property of the process, or a property of the proposed new label value.
For example, if a user running at low integrity tries to change the label of a high integrity file, or a user running at low integrity tries to change the label of a low integrity file to a high integrity label, these operations will fail.

The system administrator may use `setpmac` to override the policy module's settings by assigning a different label to the invoked process:

[source,shell]
....
# setfmac biba/high test
Permission denied
# setpmac biba/low setfmac biba/high test
# getfmac test
test: biba/high
....

For currently running processes, such as sendmail, `getpmac` is usually used instead.
This command takes a process ID (PID) in place of a command name.
If users attempt to manipulate a file they do not have access to, subject to the rules of the loaded policy modules, the `Operation not permitted` error will be displayed.

=== Predefined Labels

A few FreeBSD policy modules which support the labeling feature offer three predefined labels: `low`, `equal`, and `high`, where:

* `low` is considered the lowest label setting an object or subject may have.
Setting this on objects or subjects blocks their access to objects or subjects marked high.
* `equal` sets the subject or object to be disabled or unaffected and should only be placed on objects considered to be exempt from the policy.
* `high` grants an object or subject the highest setting available in the Biba and MLS policy modules.
Such policy modules include man:mac_biba[4], man:mac_mls[4] and man:mac_lomac[4].
Each of the predefined labels establishes a different information flow directive.
Refer to the manual page of the module to determine the traits of the generic label configurations.

=== Numeric Labels

The Biba and MLS policy modules support a numeric label which may be set to indicate the precise level of hierarchical control.
This numeric level is used to partition or sort information into different groups of classification, only permitting access to that group or a higher group level.
For example:

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....

may be interpreted as "Biba Policy Label/Grade 10: Compartments 2, 3 and 6: (grade 5 ...)".
In this example, the first grade is considered the effective grade with effective compartments, the second grade is the low grade, and the last one is the high grade.
In most configurations, such fine-grained settings are not needed as they are considered to be advanced configurations.

System objects only have a current grade and compartment.
System subjects reflect the range of available rights in the system, as do network interfaces, where labels are used for access control.

The grade and compartments in a subject and object pair are used to construct a relationship known as _dominance_, in which a subject dominates an object, the object dominates the subject, neither dominates the other, or both dominate each other.
The "both dominate" case occurs when the two labels are equal.
Due to the information flow nature of Biba, a user has rights to a set of compartments that might correspond to projects, but objects also have a set of compartments.
Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted.

=== User Labels

Users are required to have labels so that their files and processes properly interact with the security policy defined on the system.
This is configured in [.filename]#/etc/login.conf# using login classes. Every policy module that uses labels will implement the user class setting. To set the user class default label which will be enforced by MAC, add a `label` entry. An example `label` entry containing every policy module is displayed below. Note that in a real configuration, the administrator would never enable every policy module. It is recommended that the rest of this chapter be reviewed before any configuration is implemented. [.programlisting] .... default:\ - :copyright=/etc/COPYRIGHT:\ :welcome=/etc/motd:\ :setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\ :path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\ :manpath=/usr/share/man /usr/local/man:\ :nologin=/usr/sbin/nologin:\ :cputime=1h30m:\ :datasize=8M:\ :vmemoryuse=100M:\ :stacksize=2M:\ :memorylocked=4M:\ :memoryuse=8M:\ :filesize=8M:\ :coredumpsize=8M:\ :openfiles=24:\ :maxproc=32:\ :priority=0:\ :requirehome:\ :passwordtime=91d:\ :umask=022:\ :ignoretime@:\ :label=partition/13,mls/5,biba/10(5-15),lomac/10[2]: .... While users can not modify the default value, they may change their label after they login, subject to the constraints of the policy. The example above tells the Biba policy that a process's minimum integrity is `5`, its maximum is `15`, and the default effective label is `10`. The process will run at `10` until it chooses to change label, perhaps due to the user using `setpmac`, which will be constrained by Biba to the configured range. After any change to [.filename]#login.conf#, the login class capability database must be rebuilt using `cap_mkdb`. Many sites have a large number of users requiring several different user classes. In depth planning is required as this can become difficult to manage. === Network Interface Labels Labels may be set on network interfaces to help control the flow of data across the network. 
Policies using network interface labels function in the same way that policies function with respect to objects. Users at high settings in Biba, for example, will not be permitted to access network interfaces with a label of `low`. When setting the MAC label on network interfaces, `maclabel` may be passed to `ifconfig`: [source,shell] .... # ifconfig bge0 maclabel biba/equal .... This example will set the MAC label of `biba/equal` on the `bge0` interface. When using a setting similar to `biba/high(low-high)`, the entire label should be quoted to prevent an error from being returned. Each policy module which supports labeling has a tunable which may be used to disable the MAC label on network interfaces. Setting the label to `equal` will have a similar effect. Review the output of `sysctl`, the policy manual pages, and the information in the rest of this chapter for more information on those tunables. [[mac-planning]] == Planning the Security Configuration Before implementing any MAC policies, a planning phase is recommended. During the planning stages, an administrator should consider the implementation requirements and goals, such as: * How to classify information and resources available on the target systems. * Which information or resources to restrict access to along with the type of restrictions that should be applied. * Which MAC modules will be required to achieve this goal. A trial run of the trusted system and its configuration should occur _before_ a MAC implementation is used on production systems. Since different environments have different needs and requirements, establishing a complete security profile will decrease the need of changes once the system goes live. Consider how the MAC framework augments the security of the system as a whole. The various security policy modules provided by the MAC framework could be used to protect the network and file systems or to block users from accessing certain ports and sockets. 
Perhaps the best use of the policy modules is to load several security policy modules at a time in order to provide a MLS environment. This approach differs from a hardening policy, which typically hardens elements of a system which are used only for specific purposes. The downside to MLS is increased administrative overhead. The overhead is minimal when compared to the lasting effect of a framework which provides the ability to pick and choose which policies are required for a specific configuration and which keeps performance overhead down. The reduction of support for unneeded policies can increase the overall performance of the system as well as offer flexibility of choice. A good implementation would consider the overall security requirements and effectively implement the various security policy modules offered by the framework. A system utilizing MAC guarantees that a user will not be permitted to change security attributes at will. All user utilities, programs, and scripts must work within the constraints of the access rules provided by the selected security policy modules and control of the MAC access rules is in the hands of the system administrator. It is the duty of the system administrator to carefully select the correct security policy modules. For an environment that needs to limit access control over the network, the man:mac_portacl[4], man:mac_ifoff[4], and man:mac_biba[4] policy modules make good starting points. For an environment where strict confidentiality of file system objects is required, consider the man:mac_bsdextended[4] and man:mac_mls[4] policy modules. Policy decisions could be made based on network configuration. If only certain users should be permitted access to man:ssh[1], the man:mac_portacl[4] policy module is a good choice. In the case of file systems, access to objects might be considered confidential to some users, but not to others. 
As an example, a large development team might be broken up into smaller projects where developers in project A might not be permitted to access objects written by developers in project B.
Yet both projects might need to access objects created by developers in project C.
Using the different security policy modules provided by the MAC framework, users could be divided into these groups and then given access to the appropriate objects.

Each security policy module has a unique way of dealing with the overall security of a system.
Module selection should be based on a well thought out security policy which may require revision and reimplementation.
Understanding the different security policy modules offered by the MAC framework will help administrators choose the best policies for their situations.
The rest of this chapter covers the available modules, describes their use and configuration, and in some cases, provides insight on applicable situations.

[CAUTION]
====
Implementing MAC is much like implementing a firewall since care must be taken to prevent being completely locked out of the system.
The ability to revert to a previous configuration should be considered and the implementation of MAC over a remote connection should be done with extreme caution.
====

[[mac-policies]]
== Available MAC Policies

The default FreeBSD kernel includes `options MAC`.
This means that every module included with the MAC framework can be loaded with `kldload` as a run-time kernel module.
After testing the module, add the module name to [.filename]#/boot/loader.conf# so that it will load during boot.
Each module also provides a kernel option for those administrators who choose to compile their own custom kernel.

FreeBSD includes a group of policies that will cover most security requirements.
Each policy is summarized below.
The last three policies support integer settings in place of the three default labels.
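A module tested with `kldload` can be made persistent in [.filename]#/boot/loader.conf# as described above.
The particular modules listed in this sketch are only illustrative; use the boot option names given in each module's section:

[.programlisting]
....
mac_seeotheruids_load="YES"
mac_portacl_load="YES"
....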
[[mac-seeotheruids]]
=== The MAC See Other UIDs Policy

Module name: [.filename]#mac_seeotheruids.ko#

Kernel configuration line: `options MAC_SEEOTHERUIDS`

Boot option: `mac_seeotheruids_load="YES"`

The man:mac_seeotheruids[4] module extends the `security.bsd.see_other_uids` and `security.bsd.see_other_gids` `sysctl` tunables.
This option does not require any labels to be set before configuration and can operate transparently with other modules.

After loading the module, the following `sysctl` tunables may be used to control its features:

* `security.mac.seeotheruids.enabled` enables the module and implements the default settings which deny users the ability to view processes and sockets owned by other users.
* `security.mac.seeotheruids.specificgid_enabled` allows specified groups to be exempt from this policy.
To exempt specific groups, use the `security.mac.seeotheruids.specificgid=_XXX_` `sysctl` tunable, replacing _XXX_ with the numeric group ID to be exempted.
* `security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific primary groups from this policy.
When using this tunable, `security.mac.seeotheruids.specificgid_enabled` may not be set.

[[mac-bsdextended]]
=== The MAC BSD Extended Policy

Module name: [.filename]#mac_bsdextended.ko#

Kernel configuration line: `options MAC_BSDEXTENDED`

Boot option: `mac_bsdextended_load="YES"`

The man:mac_bsdextended[4] module enforces a file system firewall.
It provides an extension to the standard file system permissions model, permitting an administrator to create a firewall-like ruleset to protect files, utilities, and directories in the file system hierarchy.
When access to a file system object is attempted, the list of rules is iterated until either a matching rule is located or the end is reached.
This behavior may be changed using `security.mac.bsdextended.firstmatch_enabled`.
Similar to other firewall modules in FreeBSD, a file containing the access control rules can be created and read by the system at boot time using an man:rc.conf[5] variable.

The rule list may be entered using man:ugidfw[8], which has a syntax similar to man:ipfw[8].
More tools can be written by using the functions in the man:libugidfw[3] library.

After the man:mac_bsdextended[4] module has been loaded, the following command may be used to list the current rule configuration:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

By default, no rules are defined and everything is completely accessible.
To create a rule which blocks all access by users but leaves `root` unaffected:

[source,shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

While this rule is simple to implement, it is a very bad idea as it blocks all users from issuing any commands.
A more realistic example denies `user1` all access, including directory listings, to ``_user2_``'s home directory:

[source,shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

Instead of `user1`, `not uid _user2_` could be used in order to enforce the same access restrictions for all users.
However, the `root` user is unaffected by these rules.

[NOTE]
====
Extreme caution should be taken when working with this module as incorrect use could block access to certain parts of the file system.
====

[[mac-ifoff]]
=== The MAC Interface Silencing Policy

Module name: [.filename]#mac_ifoff.ko#

Kernel configuration line: `options MAC_IFOFF`

Boot option: `mac_ifoff_load="YES"`

The man:mac_ifoff[4] module is used to disable network interfaces on the fly and to keep network interfaces from being brought up during system boot.
It does not use labels and does not depend on any other MAC modules.
Most of this module's control is performed through these `sysctl` tunables:

* `security.mac.ifoff.lo_enabled` enables or disables all traffic on the loopback, man:lo[4], interface.
* `security.mac.ifoff.bpfrecv_enabled` enables or disables all traffic on the Berkeley Packet Filter interface, man:bpf[4].
* `security.mac.ifoff.other_enabled` enables or disables traffic on all other interfaces.

One of the most common uses of man:mac_ifoff[4] is network monitoring in an environment where network traffic should not be permitted during the boot sequence.
Another use would be to write a script which uses an application such as package:security/aide[] to automatically block network traffic if it finds new or altered files in protected directories.

[[mac-portacl]]
=== The MAC Port Access Control List Policy

Module name: [.filename]#mac_portacl.ko#

Kernel configuration line: `options MAC_PORTACL`

Boot option: `mac_portacl_load="YES"`

The man:mac_portacl[4] module is used to limit binding to local TCP and UDP ports, making it possible to allow non-`root` users to bind to specified privileged ports below 1024.

Once loaded, this module enables the MAC policy on all sockets.
The following tunables are available:

* `security.mac.portacl.enabled` enables or disables the policy completely.
* `security.mac.portacl.port_high` sets the highest port number that man:mac_portacl[4] protects.
* `security.mac.portacl.suser_exempt`, when set to a non-zero value, exempts the `root` user from this policy.
* `security.mac.portacl.rules` specifies the policy as a text string of the form `rule[,rule,...]`, with as many rules as needed, and where each rule is of the form `idtype:id:protocol:port`.
The `idtype` is either `uid` or `gid`.
The `protocol` parameter can be `tcp` or `udp`.
The `port` parameter is the port number to allow the specified user or group to bind to.
Only numeric values can be used for the user ID, group ID, and port parameters.
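As a sketch, the rule grammar described above can be made persistent in [.filename]#/etc/sysctl.conf# so it survives reboots.
The UID, GID, and port values in this fragment are illustrative assumptions, not values taken from this chapter's examples:

[.programlisting]
....
# Hypothetical portacl ruleset: let UID 80 bind TCP ports 80 and 443,
# and GID 5000 bind UDP port 53 (numeric IDs and ports only)
security.mac.portacl.rules=uid:80:tcp:80,uid:80:tcp:443,gid:5000:udp:53
....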
By default, ports below 1024 can only be used by privileged processes which run as `root`.
For man:mac_portacl[4] to allow non-privileged processes to bind to ports below 1024, set the following tunables as follows:

[source,shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0
# sysctl net.inet.ip.portrange.reservedhigh=0
....

To prevent the `root` user from being affected by this policy, set `security.mac.portacl.suser_exempt` to a non-zero value.

[source,shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....

To allow the `www` user with UID 80 to bind to port 80 without ever needing `root` privilege:

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

This next example permits the user with the UID of 1001 to bind to TCP ports 110 (POP3) and 995 (POP3s):

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

[[mac-partition]]
=== The MAC Partition Policy

Module name: [.filename]#mac_partition.ko#

Kernel configuration line: `options MAC_PARTITION`

Boot option: `mac_partition_load="YES"`

The man:mac_partition[4] policy drops processes into specific "partitions" based on their MAC label.
Most configuration for this policy is done using man:setpmac[8].
One `sysctl` tunable is available for this policy:

* `security.mac.partition.enabled` enables the enforcement of MAC process partitions.

When this policy is enabled, users will only be permitted to see their own processes, and any others within their partition, but will not be permitted to work with utilities outside the scope of this partition.
For instance, a user in the `insecure` class will not be permitted to access `top`, or many other commands that must spawn a process.

This example adds `top` to the label set on users in the `insecure` class.
All processes spawned by users in the `insecure` class will stay in the `partition/13` label.

[source,shell]
....
# setpmac partition/13 top
....
This command displays the partition label and the process list: [source,shell] .... # ps Zax .... This command displays another user's process partition label and that user's currently running processes: [source,shell] .... # ps -ZU trhodes .... [NOTE] ==== Users can see processes in ``root``'s label unless the man:mac_seeotheruids[4] policy is loaded. ==== [[mac-mls]] === The MAC Multi-Level Security Module Module name: [.filename]#mac_mls.ko# Kernel configuration line: `options MAC_MLS` Boot option: `mac_mls_load="YES"` The man:mac_mls[4] policy controls access between subjects and objects in the system by enforcing a strict information flow policy. In MLS environments, a "clearance" level is set in the label of each subject or object, along with compartments. Since these clearance levels can reach numbers greater than several thousand, it would be a daunting task to thoroughly configure every subject or object. To ease this administrative overhead, three labels are included in this policy: `mls/low`, `mls/equal`, and `mls/high`, where: * Anything labeled with `mls/low` will have a low clearance level and not be permitted to access information of a higher level. This label also prevents objects of a higher clearance level from writing or passing information to a lower level. * `mls/equal` should be placed on objects which should be exempt from the policy. * `mls/high` is the highest level of clearance possible. Objects assigned this label will hold dominance over all other objects in the system; however, they will not permit the leaking of information to objects of a lower class. MLS provides: * A hierarchical security level with a set of non-hierarchical categories. * Fixed rules of `no read up, no write down`. This means that a subject can have read access to objects on its own level or below, but not above. Similarly, a subject can have write access to objects on its own level or above, but not beneath. 
* Secrecy, or the prevention of inappropriate disclosure of data.
* A basis for the design of systems that concurrently handle data at multiple sensitivity levels without leaking information between levels such as secret and confidential.

The following `sysctl` tunables are available:

* `security.mac.mls.enabled` is used to enable or disable the MLS policy.
* `security.mac.mls.ptys_equal` labels all man:pty[4] devices as `mls/equal` during creation.
* `security.mac.mls.revocation_enabled` revokes access to objects after their label changes to a label of a lower grade.
* `security.mac.mls.max_compartments` sets the maximum number of compartment levels allowed on a system.

To manipulate MLS labels, use man:setfmac[8]. To assign a label to an object:

[source,shell]
....
# setfmac mls/5 test
....

To get the MLS label for the file [.filename]#test#:

[source,shell]
....
# getfmac test
....

Another approach is to create a master policy file in [.filename]#/etc/# which specifies the MLS policy information and to feed that file to `setfmac`.

When using the MLS policy module, the administrator's goal is to control the flow of sensitive information. The default of `block read up, block write down` starts everything in a low state: everything is accessible, and the administrator gradually raises the confidentiality of the information.

Beyond the three basic label options, an administrator may group users and groups as required to block the information flow between them. It might be easier to look at the information in clearance levels using descriptive words, such as classifications of `Confidential`, `Secret`, and `Top Secret`. Some administrators instead create different groups based on project levels. Regardless of the classification method, a well thought out plan must exist before implementing a restrictive policy.

Some example situations for the MLS policy module include an e-commerce web server, a file server holding critical company information, and financial institution environments.
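The master-file approach mentioned above can be sketched as follows. This is only an illustration: the file name [.filename]#/etc/mls.contexts# and the path-to-label assignments are hypothetical, and it is man:setfsmac[8] that applies such a specification across a file system tree, as shown later in this chapter.

[.programlisting]
....
# A hypothetical specification file, /etc/mls.contexts:
/var/db/payroll(/.*)?    mls/high
/var/www(/.*)?           mls/low
....

[source,shell]
....
# setfsmac -ef /etc/mls.contexts /
....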
[[mac-biba]]
=== The MAC Biba Module

Module name: [.filename]#mac_biba.ko#

Kernel configuration line: `options MAC_BIBA`

Boot option: `mac_biba_load="YES"`

The man:mac_biba[4] module loads the MAC Biba policy. This policy is similar to the MLS policy except that the rules for information flow are reversed: Biba prevents the upward flow of low-integrity information, whereas the MLS policy prevents the downward flow of sensitive information.

In Biba environments, an "integrity" label is set on each subject or object. These labels are made up of hierarchical grades and non-hierarchical components. As a grade ascends, so does its integrity.

Supported labels are `biba/low`, `biba/equal`, and `biba/high`, where:

* `biba/low` is considered the lowest integrity an object or subject may have. Setting this on objects or subjects blocks their write access to objects or subjects marked as `biba/high`, but will not prevent read access.
* `biba/equal` should only be placed on objects considered to be exempt from the policy.
* `biba/high` permits writing to objects set at a lower label, but does not permit reading that object. It is recommended that this label be placed on objects that affect the integrity of the entire system.

Biba provides:

* Hierarchical integrity levels with a set of non-hierarchical integrity categories.
* Fixed rules of `no write up, no read down`, the opposite of MLS. A subject can have write access to objects on its own level or below, but not above. Similarly, a subject can have read access to objects on its own level or above, but not below.
* Integrity by preventing inappropriate modification of data.
* Integrity levels instead of MLS sensitivity levels.

The following tunables can be used to manipulate the Biba policy:

* `security.mac.biba.enabled` is used to enable or disable enforcement of the Biba policy on the target machine.
* `security.mac.biba.ptys_equal` is used to disable the Biba policy on man:pty[4] devices.
* `security.mac.biba.revocation_enabled` forces the revocation of access to objects if the label is changed to dominate the subject. To access the Biba policy setting on system objects, use `setfmac` and `getfmac`: [source,shell] .... # setfmac biba/low test # getfmac test test: biba/low .... Integrity, which is different from sensitivity, is used to guarantee that information is not manipulated by untrusted parties. This includes information passed between subjects and objects. It ensures that users will only be able to modify or access information they have been given explicit access to. The man:mac_biba[4] security policy module permits an administrator to configure which files and programs a user may see and invoke while assuring that the programs and files are trusted by the system for that user. During the initial planning phase, an administrator must be prepared to partition users into grades, levels, and areas. The system will default to a high label once this policy module is enabled, and it is up to the administrator to configure the different grades and levels for users. Instead of using clearance levels, a good planning method could include topics. For instance, only allow developers modification access to the source code repository, source code compiler, and other development utilities. Other users would be grouped into other categories such as testers, designers, or end users and would only be permitted read access. A lower integrity subject is unable to write to a higher integrity subject and a higher integrity subject cannot list or read a lower integrity object. Setting a label at the lowest possible grade could make it inaccessible to subjects. Some prospective environments for this security policy module would include a constrained web server, a development and test machine, and a source code repository. A less useful implementation would be a personal workstation, a machine used as a router, or a network firewall. 
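Before committing the Biba module to [.filename]#/boot/loader.conf#, its behavior can be previewed on a test system by loading it by hand. This is a sketch, not part of the original procedure; it assumes no other labeled policy is active, in which case `getpmac` should report the default high label described above:

[source,shell]
....
# kldload mac_biba
# getpmac
biba/high
# sysctl security.mac.biba.enabled
....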
[[mac-lomac]]
=== The MAC Low-watermark Module

Module name: [.filename]#mac_lomac.ko#

Kernel configuration line: `options MAC_LOMAC`

Boot option: `mac_lomac_load="YES"`

Unlike the MAC Biba policy, the man:mac_lomac[4] policy permits access to lower integrity objects only after lowering the subject's integrity level, so that no integrity rules are violated.

The Low-watermark integrity policy works almost identically to Biba, with the exception of using floating labels to support subject demotion via an auxiliary grade compartment. This secondary compartment takes the form `[auxgrade]`. When assigning a policy with an auxiliary grade, use the syntax `lomac/10[2]`, where `2` is the auxiliary grade.

This policy relies on the ubiquitous labeling of all system objects with integrity labels, permitting subjects to read from low integrity objects and then downgrading the label on the subject to prevent future writes to high integrity objects using `[auxgrade]`. The policy may provide greater compatibility and require less initial configuration than Biba.

Like the Biba and MLS policies, `setfmac` and `setpmac` are used to place labels on system objects:

[source,shell]
....
# setfmac lomac/high[low] /usr/home/trhodes
# getfmac /usr/home/trhodes
/usr/home/trhodes: lomac/high[low]
....

The auxiliary grade `low` is a feature provided only by the MAC LOMAC policy.

[[mac-userlocked]]
== User Lock Down

This example considers a relatively small storage system with fewer than fifty users. Users will have login capabilities and are permitted to store data and access resources.

For this scenario, the man:mac_bsdextended[4] and man:mac_seeotheruids[4] policy modules could co-exist and block access to system objects while hiding user processes.

Begin by adding the following line to [.filename]#/boot/loader.conf#:

[.programlisting]
....
mac_seeotheruids_load="YES"
....

The man:mac_bsdextended[4] security policy module may be activated by adding this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
ugidfw_enable="YES" .... Default rules stored in [.filename]#/etc/rc.bsdextended# will be loaded at system initialization. However, the default entries may need modification. Since this machine is expected only to service users, everything may be left commented out except the last two lines in order to force the loading of user owned system objects by default. Add the required users to this machine and reboot. For testing purposes, try logging in as a different user across two consoles. Run `ps aux` to see if processes of other users are visible. Verify that running man:ls[1] on another user's home directory fails. Do not try to test with the `root` user unless the specific ``sysctl``s have been modified to block super user access. [NOTE] ==== When a new user is added, their man:mac_bsdextended[4] rule will not be in the ruleset list. To update the ruleset quickly, unload the security policy module and reload it again using man:kldunload[8] and man:kldload[8]. ==== [[mac-implementing]] == Nagios in a MAC Jail This section demonstrates the steps that are needed to implement the Nagios network monitoring system in a MAC environment. This is meant as an example which still requires the administrator to test that the implemented policy meets the security requirements of the network before using in a production environment. This example requires `multilabel` to be set on each file system. It also assumes that package:net-mgmt/nagios-plugins[], package:net-mgmt/nagios[], and package:www/apache22[] are all installed, configured, and working correctly before attempting the integration into the MAC framework. === Create an Insecure User Class Begin the procedure by adding the following user class to [.filename]#/etc/login.conf#: [.programlisting] .... 
insecure:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/share/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=biba/10(10-10):
....

Then, add the following line to the default user class section:

[.programlisting]
....
:label=biba/high:
....

Save the edits and issue the following command to rebuild the database:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

=== Configure Users

Set the `root` user to the default class using:

[source,shell]
....
# pw usermod root -L default
....

All user accounts other than `root` now require a login class; without one, users will be refused access to common commands. The following `sh` script assigns the default class to those accounts:

[source,shell]
....
# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \
	/etc/passwd`; do pw usermod $x -L default; done;
....

Next, drop the `nagios` and `www` accounts into the insecure class:

[source,shell]
....
# pw usermod nagios -L insecure
# pw usermod www -L insecure
....

=== Create the Contexts File

A contexts file should now be created as [.filename]#/etc/policy.contexts#:

[.programlisting]
....
# This is the default BIBA policy for this system.

# System:
/var/run(/.*)?			biba/equal
/dev(/.*)?			biba/equal
/var				biba/equal
/var/spool(/.*)?		biba/equal
/var/log(/.*)?			biba/equal
/tmp(/.*)?			biba/equal
/var/tmp(/.*)?			biba/equal
/var/spool/mqueue		biba/equal
/var/spool/clientmqueue		biba/equal

# For Nagios:
/usr/local/etc/nagios(/.*)?	biba/10
/var/spool/nagios(/.*)?		biba/10

# For apache
/usr/local/etc/apache(/.*)?	biba/10
....
This policy enforces security by setting restrictions on the flow of information. In this specific configuration, users, including `root`, should never be allowed to access Nagios. Configuration files and processes that are a part of Nagios will be completely self contained or jailed. This file will be read after running `setfsmac` on every file system. This example sets the policy on the root file system: [source,shell] .... # setfsmac -ef /etc/policy.contexts / .... Next, add these edits to the main section of [.filename]#/etc/mac.conf#: [.programlisting] .... default_labels file ?biba default_labels ifnet ?biba default_labels process ?biba default_labels socket ?biba .... === Loader Configuration To finish the configuration, add the following lines to [.filename]#/boot/loader.conf#: [.programlisting] .... mac_biba_load="YES" mac_seeotheruids_load="YES" security.mac.biba.trust_all_interfaces=1 .... And the following line to the network card configuration stored in [.filename]#/etc/rc.conf#. If the primary network configuration is done via DHCP, this may need to be configured manually after every system boot: [.programlisting] .... maclabel biba/equal .... === Testing the Configuration First, ensure that the web server and Nagios will not be started on system initialization and reboot. Ensure that `root` cannot access any of the files in the Nagios configuration directory. If `root` can list the contents of [.filename]#/var/spool/nagios#, something is wrong. Instead, a "permission denied" error should be returned. If all seems well, Nagios, Apache, and Sendmail can now be started: [source,shell] .... # cd /etc/mail && make stop && \ setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \ setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart .... Double check to ensure that everything is working properly. If not, check the log files for error messages. 
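As an additional spot check, the labels applied from the contexts file can be read back with `getfmac`; assuming the example [.filename]#/etc/policy.contexts# above was applied, the Nagios paths should report `biba/10`:

[source,shell]
....
# getfmac /usr/local/etc/nagios /var/spool/nagios
....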
If needed, use man:sysctl[8] to disable the man:mac_biba[4] security policy module and try starting everything again as usual.

[NOTE]
====
The `root` user can still change the security enforcement and edit its configuration files. The following command will permit the degradation of the security policy to a lower grade for a newly spawned shell:

[source,shell]
....
# setpmac biba/10 csh
....

To block this from happening, force the user into a range using man:login.conf[5]. If man:setpmac[8] attempts to run a command outside of the compartment's range, an error will be returned and the command will not be executed. In this case, set root to `biba/high(high-high)`.
====

[[mac-troubleshoot]]
== Troubleshooting the MAC Framework

This section discusses common configuration errors and how to resolve them.

The `multilabel` flag does not stay enabled on the root ([.filename]#/#) partition:::
The following steps may resolve this transient error:
[.procedure]
====
. Edit [.filename]#/etc/fstab# and set the root partition to `ro` for read-only.
. Reboot into single user mode.
. Run `tunefs -l enable` on [.filename]#/#.
. Reboot the system.
. Run `mount -urw` [.filename]#/#, change the `ro` back to `rw` in [.filename]#/etc/fstab#, and reboot the system again.
. Double-check the output from `mount` to ensure that `multilabel` has been properly set on the root file system.
====

After establishing a secure environment with MAC, Xorg no longer starts:::
This could be caused by the MAC `partition` policy or by a mislabeling in one of the MAC labeling policies. To debug, try the following:
[.procedure]
====
. Check the error message. If the user is in the `insecure` class, the `partition` policy may be the culprit. Try setting the user's class back to the `default` class and rebuild the database with `cap_mkdb`. If this does not alleviate the problem, go to step two.
. Double-check that the label policies are set correctly for the user, Xorg, and the [.filename]#/dev# entries.
.
If neither of these resolve the problem, send the error message and a description of the environment to the {freebsd-questions}. ==== The `_secure_path: unable to stat .login_conf` error appears::: This error can appear when a user attempts to switch from the `root` user to another user in the system. This message usually occurs when the user has a higher label setting than that of the user they are attempting to become. For instance, if `joe` has a default label of `biba/low` and `root` has a label of `biba/high`, `root` cannot view ``joe``'s home directory. This will happen whether or not `root` has used `su` to become `joe` as the Biba integrity model will not permit `root` to view objects set at a lower integrity level. The system no longer recognizes `root`::: When this occurs, `whoami` returns `0` and `su` returns `who are you?`. + This can happen if a labeling policy has been disabled by man:sysctl[8] or the policy module was unloaded. If the policy is disabled, the login capabilities database needs to be reconfigured. Double check [.filename]#/etc/login.conf# to ensure that all `label` options have been removed and rebuild the database with `cap_mkdb`. + This may also happen if a policy restricts access to [.filename]#master.passwd#. This is usually caused by an administrator altering the file under a label which conflicts with the general policy being used by the system. In these cases, the user information would be read by the system and access would be blocked as the file has inherited the new label. Disable the policy using man:sysctl[8] and everything should return to normal. 
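The recovery described above can be sketched as two commands, assuming Biba is the labeling policy in force and any `label` entries have already been removed from [.filename]#/etc/login.conf#:

[source,shell]
....
# sysctl security.mac.biba.enabled=0
# cap_mkdb /etc/login.conf
....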
diff --git a/documentation/content/en/books/handbook/mac/_index.po b/documentation/content/en/books/handbook/mac/_index.po index 20ed04eae8..1872b3e31c 100644 --- a/documentation/content/en/books/handbook/mac/_index.po +++ b/documentation/content/en/books/handbook/mac/_index.po @@ -1,2326 +1,2324 @@ # SOME DESCRIPTIVE TITLE # Copyright (C) YEAR The FreeBSD Project # This file is distributed under the same license as the FreeBSD Documentation package. # FIRST AUTHOR , YEAR. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: FreeBSD Documentation VERSION\n" "POT-Creation-Date: 2025-11-08 16:17+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "Language: \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" #. type: YAML Front Matter: description #: documentation/content/en/books/handbook/mac/_index.adoc:1 #, no-wrap msgid "This chapter focuses on the MAC framework and the set of pluggable security policy modules FreeBSD provides for enabling various security mechanisms" msgstr "" #. type: YAML Front Matter: part #: documentation/content/en/books/handbook/mac/_index.adoc:1 #, no-wrap msgid "Part III. System Administration" msgstr "" #. type: YAML Front Matter: title #: documentation/content/en/books/handbook/mac/_index.adoc:1 #, no-wrap msgid "Chapter 18. Mandatory Access Control" msgstr "" #. type: Title = #: documentation/content/en/books/handbook/mac/_index.adoc:15 #, no-wrap msgid "Mandatory Access Control" msgstr "" #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:53 #, no-wrap msgid "Synopsis" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:62 msgid "" "FreeBSD supports security extensions based on the POSIX(R).1e draft. These " "security mechanisms include file system Access Control Lists " "(crossref:security[fs-acl,“Access Control Lists”]) and Mandatory Access " "Control (MAC). 
MAC allows access control modules to be loaded in order to " "implement security policies. Some modules provide protections for a narrow " "subset of the system, hardening a particular service. Others provide " "comprehensive labeled security across all subjects and objects. The " "mandatory part of the definition indicates that enforcement of controls is " "performed by administrators and the operating system. This is in contrast " "to the default security mechanism of Discretionary Access Control (DAC) " "where enforcement is left to the discretion of users." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:64 msgid "" "This chapter focuses on the MAC framework and the set of pluggable security " "policy modules FreeBSD provides for enabling various security mechanisms." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:66 msgid "Read this chapter to learn:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:68 msgid "The terminology associated with the MAC framework." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:69 msgid "" "The capabilities of MAC security policy modules as well as the difference " "between a labeled and non-labeled policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:70 msgid "" "The considerations to take into account before configuring a system to use " "the MAC framework." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:71 msgid "" "Which MAC security policy modules are included in FreeBSD and how to " "configure them." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:72 msgid "How to implement a more secure environment using the MAC framework." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:73 msgid "" "How to test the MAC configuration to ensure the framework has been properly " "implemented." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:75 msgid "Before reading this chapter:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:77 msgid "" "Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD " "Basics])." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:78 msgid "" "Have some familiarity with security and how it pertains to FreeBSD " "(crossref:security[security,Security])." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:85 msgid "" "Improper MAC configuration may cause loss of system access, aggravation of " "users, or inability to access the features provided by Xorg. More " "importantly, MAC should not be relied upon to completely secure a system. " "The MAC framework only augments an existing security policy. Without sound " "security practices and regular security checks, the system will never be " "completely secure." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:88 msgid "" "The examples contained within this chapter are for demonstration purposes " "and the example settings should _not_ be implemented on a production " "system. Implementing any security policy takes a good deal of " "understanding, proper design, and thorough testing." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:93 msgid "" "While this chapter covers a broad range of security issues relating to the " "MAC framework, the development of new MAC security policy modules will not " "be covered. 
A number of security policy modules included with the MAC " "framework have specific characteristics which are provided for both testing " "and new module development. Refer to man:mac_test[4], man:mac_stub[4] and " "man:mac_none[4] for more information on these security policy modules and " "the various mechanisms they provide." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:95 #, no-wrap msgid "Key Terms" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:98 msgid "The following key terms are used when referring to the MAC framework:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:100 msgid "" "_compartment_: a set of programs and data to be partitioned or separated, " "where users are given explicit access to specific component of a system. A " "compartment represents a grouping, such as a work group, department, " "project, or topic. Compartments make it possible to implement a need-to-know-" "basis security policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:101 msgid "" "_integrity_: the level of trust which can be placed on data. As the " "integrity of the data is elevated, so does the ability to trust that data." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:102 msgid "" "_level_: the increased or decreased setting of a security attribute. As the " "level increases, its security is considered to elevate as well." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:103 msgid "" "_label_: a security attribute which can be applied to files, directories, or " "other items in the system. It could be considered a confidentiality stamp. " "When a label is placed on a file, it describes the security properties of " "that file and will only permit access by files, users, and resources with a " "similar security setting. 
The meaning and interpretation of label values " "depends on the policy configuration. Some policies treat a label as " "representing the integrity or secrecy of an object while other policies " "might use labels to hold rules for access." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:104 msgid "" "_multilabel_: this property is a file system option which can be set in " "single-user mode using man:tunefs[8], during boot using man:fstab[5], or " "during the creation of a new file system. This option permits an " "administrator to apply different MAC labels on different objects. This " "option only applies to security policy modules which support labeling." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:105 msgid "" "_single label_: a policy where the entire file system uses one label to " "enforce access control over the flow of data. Whenever `multilabel` is not " "set, all files will conform to the same label setting." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:106 msgid "" "_object_: an entity through which information flows under the direction of a " "_subject_. This includes directories, files, fields, screens, keyboards, " "memory, magnetic storage, printers or any other data storage or moving " "device. An object is a data container or a system resource. Access to an " "object effectively means access to its data." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:107 msgid "" "_subject_: any active entity that causes information to flow between " "_objects_ such as a user, user process, or system process. On FreeBSD, this " "is almost always a thread acting in a process on behalf of a user." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:108 msgid "" "_policy_: a collection of rules which defines how objectives are to be " "achieved. 
A policy usually documents how certain items are to be handled. " "This chapter considers a policy to be a collection of rules which controls " "the flow of data and information and defines who has access to that data and " "information." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:109 msgid "" "_high-watermark_: this type of policy permits the raising of security levels " "for the purpose of accessing higher level information. In most cases, the " "original level is restored after the process is complete. Currently, the " "FreeBSD MAC framework does not include this type of policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:110 msgid "" "_low-watermark_: this type of policy permits lowering security levels for " "the purpose of accessing information which is less secure. In most cases, " "the original security level of the user is restored after the process is " "complete. The only security policy module in FreeBSD to use this is " "man:mac_lomac[4]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:111 msgid "" "_sensitivity_: usually used when discussing Multilevel Security (MLS). A " "sensitivity level describes how important or secret the data should be. As " "the sensitivity level increases, so does the importance of the secrecy, or " "confidentiality, of the data." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:113 #, no-wrap msgid "Understanding MAC Labels" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:118 msgid "" "A MAC label is a security attribute which may be applied to subjects and " "objects throughout the system. When setting a label, the administrator must " "understand its implications in order to prevent unexpected or undesired " "behavior of the system. 
The attributes available on an object depend on the " "loaded policy module, as policy modules interpret their attributes in " "different ways." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:122 msgid "" "The security label on an object is used as a part of a security access " "control decision by a policy. With some policies, the label contains all of " "the information necessary to make a decision. In other policies, the labels " "may be processed as part of a larger rule set." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:126 msgid "" "There are two types of label policies: single label and multi label. By " "default, the system will use single label. The administrator should be " "aware of the pros and cons of each in order to implement policies which meet " "the requirements of the system's security model." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:130 msgid "" "A single label security policy only permits one label to be used for every " "subject or object. Since a single label policy enforces one set of access " "permissions across the entire system, it provides lower administration " "overhead, but decreases the flexibility of policies which support labeling. " "However, in many environments, a single label policy may be all that is " "required." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:134 msgid "" "A single label policy is somewhat similar to DAC as `root` configures the " "policies so that users are placed in the appropriate categories and access " "levels. A notable difference is that many policy modules can also restrict " "`root`. Basic control over objects will then be released to the group, but " "`root` may revoke or modify the settings at any time." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:139 msgid "" "When appropriate, a multi label policy can be set on a UFS file system by " "passing `multilabel` to man:tunefs[8]. A multi label policy permits each " "subject or object to have its own independent MAC label. The decision to " "use a multi label or single label policy is only required for policies which " "implement the labeling feature, such as `biba`, `lomac`, and `mls`. Some " "policies, such as `seeotheruids`, `portacl` and `partition`, do not use " "labels at all." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:142 msgid "" "Using a multi label policy on a partition and establishing a multi label " "security model can increase administrative overhead as everything in that " "file system has a label. This includes directories, files, and even device " "nodes." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:145 msgid "" "The following command will set `multilabel` on the specified UFS file " "system. This may only be done in single-user mode and is not a requirement " "for the swap file system:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:149 #, no-wrap msgid "# tunefs -l enable /\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:155 msgid "" "Some users have experienced problems with setting the `multilabel` flag on " "the root partition. If this is the case, please review crossref:mac[mac-" "troubleshoot, Troubleshooting the MAC Framework]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:161 msgid "" "Since the multi label policy is set on a per-file system basis, a multi " "label policy may not be needed if the file system layout is well designed. " "Consider an example security MAC model for a FreeBSD web server. 
This " "machine uses the single label, `biba/high`, for everything in the default " "file systems. If the web server needs to run at `biba/low` to prevent write " "up capabilities, it could be installed to a separate UFS [.filename]#/usr/" "local# file system set at `biba/low`." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:162 #, no-wrap msgid "Label Configuration" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:166 msgid "" "Virtually all aspects of label policy module configuration will be performed " "using the base system utilities. These commands provide a simple interface " "for object or subject configuration or the manipulation and verification of " "the configuration." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:169 msgid "" "All configuration may be done using `setfmac`, which is used to set MAC " "labels on system objects, and `setpmac`, which is used to set the labels on " "system subjects. For example, to set the `biba` MAC label to `high` on " "[.filename]#test#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:173 #, no-wrap msgid "# setfmac biba/high test\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:181 msgid "" "If the configuration is successful, the prompt will be returned without " "error. A common error is `Permission denied` which usually occurs when the " "label is being set or modified on a restricted object. Other conditions may " "produce different failures. For instance, the file may not be owned by the " "user attempting to relabel the object, the object may not exist, or the " "object may be read-only. A mandatory policy will not allow the process to " "relabel the file, maybe because of a property of the file, a property of the " "process, or a property of the proposed new label value. 
For example, if a " "user running at low integrity tries to change the label of a high integrity " "file, or a user running at low integrity tries to change the label of a low " "integrity file to a high integrity label, these operations will fail." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:183 msgid "" "The system administrator may use `setpmac` to override the policy module's " "settings by assigning a different label to the invoked process:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:191 #, no-wrap msgid "" "# setfmac biba/high test\n" "Permission denied\n" "# setpmac biba/low setfmac biba/high test\n" "# getfmac test\n" "test: biba/high\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:196 msgid "" "For currently running processes, such as sendmail, `getpmac` is usually used " "instead. This command takes a process ID (PID) in place of a command name. " "If users attempt to manipulate a file not in their access, subject to the " "rules of the loaded policy modules, the `Operation not permitted` error will " "be displayed." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:197 #, no-wrap msgid "Predefined Labels" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:200 msgid "" "A few FreeBSD policy modules which support the labeling feature offer three " "predefined labels: `low`, `equal`, and `high`, where:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:202 msgid "" "`low` is considered the lowest label setting an object or subject may have. " "Setting this on objects or subjects blocks their access to objects or " "subjects marked high." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:203 msgid "" "`equal` sets the subject or object to be disabled or unaffected and should " "only be placed on objects considered to be exempt from the policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:204 msgid "" "`high` grants an object or subject the highest setting available in the Biba " "and MLS policy modules." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:208 msgid "" "Such policy modules include man:mac_biba[4], man:mac_mls[4] and " "man:mac_lomac[4]. Each of the predefined labels establishes a different " "information flow directive. Refer to the manual page of the module to " "determine the traits of the generic label configurations." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:209 #, no-wrap msgid "Numeric Labels" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:214 msgid "" "The Biba and MLS policy modules support a numeric label which may be set to " "indicate the precise level of hierarchical control. This numeric level is " "used to partition or sort information into different groups of " "classification, only permitting access to that group or a higher group " "level. For example:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:218 #, no-wrap msgid "biba/10:2+3+6(5:2+3-20:2+3+4+5+6)\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:221 msgid "" "may be interpreted as \"Biba Policy Label/Grade 10:Compartments 2, 3 and 6: " "(grade 5 ...\")" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:224 msgid "" "In this example, the first grade would be considered the effective grade " "with effective compartments, the second grade is the low grade, and the last " "one is the high grade. In most configurations, such fine-grained settings " "are not needed as they are considered to be advanced configurations." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:227 msgid "" "System objects only have a current grade and compartment. System subjects " "reflect the range of available rights in the system, and network interfaces, " "where they are used for access control." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:232 msgid "" "The grade and compartments in a subject and object pair are used to " "construct a relationship known as _dominance_, in which a subject dominates " "an object, the object dominates the subject, neither dominates the other, or " "both dominate each other. The \"both dominate\" case occurs when the two " "labels are equal. Due to the information flow nature of Biba, a user has " "rights to a set of compartments that might correspond to projects, but " "objects also have a set of compartments. Users may have to subset their " "rights using `su` or `setpmac` in order to access objects in a compartment " "from which they are not restricted." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:233 #, no-wrap msgid "User Labels" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:238 msgid "" "Users are required to have labels so that their files and processes properly " "interact with the security policy defined on the system. This is configured " "in [.filename]#/etc/login.conf# using login classes. Every policy module " "that uses labels will implement the user class setting." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:243 msgid "" "To set the user class default label which will be enforced by MAC, add a " "`label` entry. An example `label` entry containing every policy module is " "displayed below. Note that in a real configuration, the administrator would " "never enable every policy module. It is recommended that the rest of this " "chapter be reviewed before any configuration is implemented." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:269 #, no-wrap msgid "" "default:\\\n" -"\t:copyright=/etc/COPYRIGHT:\\\n" "\t:welcome=/etc/motd:\\\n" "\t:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\\\n" "\t:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\\\n" "\t:manpath=/usr/share/man /usr/local/man:\\\n" "\t:nologin=/usr/sbin/nologin:\\\n" "\t:cputime=1h30m:\\\n" "\t:datasize=8M:\\\n" "\t:vmemoryuse=100M:\\\n" "\t:stacksize=2M:\\\n" "\t:memorylocked=4M:\\\n" "\t:memoryuse=8M:\\\n" "\t:filesize=8M:\\\n" "\t:coredumpsize=8M:\\\n" "\t:openfiles=24:\\\n" "\t:maxproc=32:\\\n" "\t:priority=0:\\\n" "\t:requirehome:\\\n" "\t:passwordtime=91d:\\\n" "\t:umask=022:\\\n" "\t:ignoretime@:\\\n" "\t:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:274 msgid "" "While users can not modify the default value, they may change their label " "after they login, subject to the constraints of the policy. The example " "above tells the Biba policy that a process's minimum integrity is `5`, its " "maximum is `15`, and the default effective label is `10`. The process will " "run at `10` until it chooses to change label, perhaps due to the user using " "`setpmac`, which will be constrained by Biba to the configured range." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:276 msgid "" "After any change to [.filename]#login.conf#, the login class capability " "database must be rebuilt using `cap_mkdb`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:279 msgid "" "Many sites have a large number of users requiring several different user " "classes. In depth planning is required as this can become difficult to " "manage." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:280 #, no-wrap msgid "Network Interface Labels" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:285 msgid "" "Labels may be set on network interfaces to help control the flow of data " "across the network. Policies using network interface labels function in the " "same way that policies function with respect to objects. Users at high " "settings in Biba, for example, will not be permitted to access network " "interfaces with a label of `low`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:287 msgid "" "When setting the MAC label on network interfaces, `maclabel` may be passed " "to `ifconfig`:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:291 #, no-wrap msgid "# ifconfig bge0 maclabel biba/equal\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:295 msgid "" "This example will set the MAC label of `biba/equal` on the `bge0` " "interface. When using a setting similar to `biba/high(low-high)`, the " "entire label should be quoted to prevent an error from being returned." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:299 msgid "" "Each policy module which supports labeling has a tunable which may be used " "to disable the MAC label on network interfaces. 
Setting the label to " "`equal` will have a similar effect. Review the output of `sysctl`, the " "policy manual pages, and the information in the rest of this chapter for " "more information on those tunables." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:301 #, no-wrap msgid "Planning the Security Configuration" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:305 msgid "" "Before implementing any MAC policies, a planning phase is recommended. " "During the planning stages, an administrator should consider the " "implementation requirements and goals, such as:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:307 msgid "" "How to classify information and resources available on the target systems." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:308 msgid "" "Which information or resources to restrict access to along with the type of " "restrictions that should be applied." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:309 msgid "Which MAC modules will be required to achieve this goal." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:312 msgid "" "A trial run of the trusted system and its configuration should occur " "_before_ a MAC implementation is used on production systems. Since " "different environments have different needs and requirements, establishing a " "complete security profile will decrease the need of changes once the system " "goes live." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:318 msgid "" "Consider how the MAC framework augments the security of the system as a " "whole. The various security policy modules provided by the MAC framework " "could be used to protect the network and file systems or to block users from " "accessing certain ports and sockets. 
Perhaps the best use of the policy " "modules is to load several security policy modules at a time in order to " "provide a MLS environment. This approach differs from a hardening policy, " "which typically hardens elements of a system which are used only for " "specific purposes. The downside to MLS is increased administrative overhead." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:322 msgid "" "The overhead is minimal when compared to the lasting effect of a framework " "which provides the ability to pick and choose which policies are required " "for a specific configuration and which keeps performance overhead down. The " "reduction of support for unneeded policies can increase the overall " "performance of the system as well as offer flexibility of choice. A good " "implementation would consider the overall security requirements and " "effectively implement the various security policy modules offered by the " "framework." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:325 msgid "" "A system utilizing MAC guarantees that a user will not be permitted to " "change security attributes at will. All user utilities, programs, and " "scripts must work within the constraints of the access rules provided by the " "selected security policy modules and control of the MAC access rules is in " "the hands of the system administrator." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:329 msgid "" "It is the duty of the system administrator to carefully select the correct " "security policy modules. For an environment that needs to limit access " "control over the network, the man:mac_portacl[4], man:mac_ifoff[4], and " "man:mac_biba[4] policy modules make good starting points. For an " "environment where strict confidentiality of file system objects is required, " "consider the man:mac_bsdextended[4] and man:mac_mls[4] policy modules." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:336 msgid "" "Policy decisions could be made based on network configuration. If only " "certain users should be permitted access to man:ssh[1], the " "man:mac_portacl[4] policy module is a good choice. In the case of file " "systems, access to objects might be considered confidential to some users, " "but not to others. As an example, a large development team might be broken " "off into smaller projects where developers in project A might not be " "permitted to access objects written by developers in project B. Yet both " "projects might need to access objects created by developers in project C. " "Using the different security policy modules provided by the MAC framework, " "users could be divided into these groups and then given access to the " "appropriate objects." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:340 msgid "" "Each security policy module has a unique way of dealing with the overall " "security of a system. Module selection should be based on a well thought " "out security policy which may require revision and reimplementation. " "Understanding the different security policy modules offered by the MAC " "framework will help administrators choose the best policies for their " "situations." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:342 msgid "" "The rest of this chapter covers the available modules, describes their use " "and configuration, and in some cases, provides insight on applicable " "situations." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:347 msgid "" "Implementing MAC is much like implementing a firewall since care must be " "taken to prevent being completely locked out of the system. 
The ability to " "revert back to a previous configuration should be considered and the " "implementation of MAC over a remote connection should be done with extreme " "caution." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:350 #, no-wrap msgid "Available MAC Policies" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:356 msgid "" "The default FreeBSD kernel includes `options MAC`. This means that every " "module included with the MAC framework can be loaded with `kldload` as a run-" "time kernel module. After testing the module, add the module name to " "[.filename]#/boot/loader.conf# so that it will load during boot. Each " "module also provides a kernel option for those administrators who choose to " "compile their own custom kernel." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:359 msgid "" "FreeBSD includes a group of policies that will cover most security " "requirements. Each policy is summarized below. The last three policies " "support integer settings in place of the three default labels." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:361 #, no-wrap msgid "The MAC See Other UIDs Policy" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:364 msgid "Module name: [.filename]#mac_seeotheruids.ko#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:366 msgid "Kernel configuration line: `options MAC_SEEOTHERUIDS`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:368 msgid "Boot option: `mac_seeotheruids_load=\"YES\"`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:371 msgid "" "The man:mac_seeotheruids[4] module extends the `security.bsd.see_other_uids` " "and `security.bsd.see_other_gids sysctl` tunables. 
This option does not " "require any labels to be set before configuration and can operate " "transparently with other modules." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:373 msgid "" "After loading the module, the following `sysctl` tunables may be used to " "control its features:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:375 msgid "" "`security.mac.seeotheruids.enabled` enables the module and implements the " "default settings which deny users the ability to view processes and sockets " "owned by other users." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:376 msgid "" "`security.mac.seeotheruids.specificgid_enabled` allows specified groups to " "be exempt from this policy. To exempt specific groups, use the " "`security.mac.seeotheruids.specificgid=_XXX_ sysctl` tunable, replacing " "_XXX_ with the numeric group ID to be exempted." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:377 msgid "" "`security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific " "primary groups from this policy. When using this tunable, " "`security.mac.seeotheruids.specificgid_enabled` may not be set." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:379 #, no-wrap msgid "The MAC BSD Extended Policy" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:382 msgid "Module name: [.filename]#mac_bsdextended.ko#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:384 msgid "Kernel configuration line: `options MAC_BSDEXTENDED`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:386 msgid "Boot option: `mac_bsdextended_load=\"YES\"`" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:392 msgid "" "The man:mac_bsdextended[4] module enforces a file system firewall. It " "provides an extension to the standard file system permissions model, " "permitting an administrator to create a firewall-like ruleset to protect " "files, utilities, and directories in the file system hierarchy. When access " "to a file system object is attempted, the list of rules is iterated until " "either a matching rule is located or the end is reached. This behavior may " "be changed using `security.mac.bsdextended.firstmatch_enabled`. Similar to " "other firewall modules in FreeBSD, a file containing the access control " "rules can be created and read by the system at boot time using an " "man:rc.conf[5] variable." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:395 msgid "" "The rule list may be entered using man:ugidfw[8] which has a syntax similar " "to man:ipfw[8]. More tools can be written by using the functions in the " "man:libugidfw[3] library." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:397 msgid "" "After the man:mac_bsdextended[4] module has been loaded, the following " "command may be used to list the current rule configuration:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:402 #, no-wrap msgid "" "# ugidfw list\n" "0 slots, 0 rules\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:406 msgid "" "By default, no rules are defined and everything is completely accessible. " "To create a rule which blocks all access by users but leaves `root` " "unaffected:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:410 #, no-wrap msgid "# ugidfw add subject not uid root new object not uid root mode n\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:414 msgid "" "While this rule is simple to implement, it is a very bad idea as it blocks " "all users from issuing any commands. A more realistic example blocks " "`user1` all access, including directory listings, to ``_user2_``'s home " "directory:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:419 #, no-wrap msgid "" "# ugidfw set 2 subject uid user1 object uid user2 mode n\n" "# ugidfw set 3 subject uid user1 object gid user2 mode n\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:423 msgid "" "Instead of `user1`, `not uid _user2_` could be used in order to enforce the " "same access restrictions for all users. However, the `root` user is " "unaffected by these rules." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:427 msgid "" "Extreme caution should be taken when working with this module as incorrect " "use could block access to certain parts of the file system." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:430 #, no-wrap msgid "The MAC Interface Silencing Policy" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:433 msgid "Module name: [.filename]#mac_ifoff.ko#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:435 msgid "Kernel configuration line: `options MAC_IFOFF`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:437 msgid "Boot option: `mac_ifoff_load=\"YES\"`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:440 msgid "" "The man:mac_ifoff[4] module is used to disable network interfaces on the fly " "and to keep network interfaces from being brought up during system boot. 
It " "does not use labels and does not depend on any other MAC modules." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:442 msgid "" "Most of this module's control is performed through these `sysctl` tunables:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:444 msgid "" "`security.mac.ifoff.lo_enabled` enables or disables all traffic on the " "loopback, man:lo[4], interface." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:445 msgid "" "`security.mac.ifoff.bpfrecv_enabled` enables or disables all traffic on the " "Berkeley Packet Filter interface, man:bpf[4]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:446 msgid "" "`security.mac.ifoff.other_enabled` enables or disables traffic on all other " "interfaces." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:449 msgid "" "One of the most common uses of man:mac_ifoff[4] is network monitoring in an " "environment where network traffic should not be permitted during the boot " "sequence. Another use would be to write a script which uses an application " "such as package:security/aide[] to automatically block network traffic if it " "finds new or altered files in protected directories." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:451 #, no-wrap msgid "The MAC Port Access Control List Policy" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:454 msgid "Module name: [.filename]#mac_portacl.ko#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:456 msgid "Kernel configuration line: `MAC_PORTACL`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:458 msgid "Boot option: `mac_portacl_load=\"YES\"`" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:460 msgid "" "The man:mac_portacl[4] module is used to limit binding to local TCP and UDP " "ports, making it possible to allow non-`root` users to bind to specified " "privileged ports below 1024." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:463 msgid "" "Once loaded, this module enables the MAC policy on all sockets. The " "following tunables are available:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:465 msgid "" "`security.mac.portacl.enabled` enables or disables the policy completely." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:466 msgid "" "`security.mac.portacl.port_high` sets the highest port number that " "man:mac_portacl[4] protects." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:467 msgid "" "`security.mac.portacl.suser_exempt`, when set to a non-zero value, exempts " "the `root` user from this policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:468 msgid "" "`security.mac.portacl.rules` specifies the policy as a text string of the " "form `rule[,rule,...]`, with as many rules as needed, and where each rule is " "of the form `idtype:id:protocol:port`. The `idtype` is either `uid` or " "`gid`. The `protocol` parameter can be `tcp` or `udp`. The `port` parameter " "is the port number to allow the specified user or group to bind to. Only " "numeric values can be used for the user ID, group ID, and port parameters." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:471 msgid "" "By default, ports below 1024 can only be used by privileged processes which " "run as `root`. For man:mac_portacl[4] to allow non-privileged processes to " "bind to ports below 1024, set the following tunables as follows:" msgstr "" #. 
type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:477 #, no-wrap msgid "" "# sysctl security.mac.portacl.port_high=1023\n" "# sysctl net.inet.ip.portrange.reservedlow=0\n" "# sysctl net.inet.ip.portrange.reservedhigh=0\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:480 msgid "" "To prevent the `root` user from being affected by this policy, set " "`security.mac.portacl.suser_exempt` to a non-zero value." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:484 #, no-wrap msgid "# sysctl security.mac.portacl.suser_exempt=1\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:487 msgid "" "To allow the `www` user with UID 80 to bind to port 80 without ever needing " "`root` privilege:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:491 #, no-wrap msgid "# sysctl security.mac.portacl.rules=uid:80:tcp:80\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:494 msgid "" "This next example permits the user with the UID of 1001 to bind to TCP ports " "110 (POP3) and 995 (POP3s):" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:498 #, no-wrap msgid "# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995\n" msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:501 #, no-wrap msgid "The MAC Partition Policy" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:504 msgid "Module name: [.filename]#mac_partition.ko#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:506 msgid "Kernel configuration line: `options MAC_PARTITION`" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:508 msgid "Boot option: `mac_partition_load=\"YES\"`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:512 msgid "" "The man:mac_partition[4] policy drops processes into specific \"partitions\" " "based on their MAC label. Most configuration for this policy is done using " "man:setpmac[8]. One `sysctl` tunable is available for this policy:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:514 msgid "" "`security.mac.partition.enabled` enables the enforcement of MAC process " "partitions." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:517 msgid "" "When this policy is enabled, users will only be permitted to see their " "processes, and any others within their partition, but will not be permitted " "to work with utilities outside the scope of this partition. For instance, a " "user in the `insecure` class will not be permitted to access `top` as well " "as many other commands that must spawn a process." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:520 msgid "" "This example adds `top` to the label set on users in the `insecure` class. " "All processes spawned by users in the `insecure` class will stay in the " "`partition/13` label." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:524 #, no-wrap msgid "# setpmac partition/13 top\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:527 msgid "This command displays the partition label and the process list:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:531 #, no-wrap msgid "# ps Zax\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:534 msgid "" "This command displays another user's process partition label and that user's " "currently running processes:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:538 #, no-wrap msgid "# ps -ZU trhodes\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:543 msgid "" "Users can see processes in ``root``'s label unless the " "man:mac_seeotheruids[4] policy is loaded." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:546 #, no-wrap msgid "The MAC Multi-Level Security Module" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:549 msgid "Module name: [.filename]#mac_mls.ko#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:551 msgid "Kernel configuration line: `options MAC_MLS`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:553 msgid "Boot option: `mac_mls_load=\"YES\"`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:555 msgid "" "The man:mac_mls[4] policy controls access between subjects and objects in " "the system by enforcing a strict information flow policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:559 msgid "" "In MLS environments, a \"clearance\" level is set in the label of each " "subject or object, along with compartments. Since these clearance levels " "can reach numbers greater than several thousand, it would be a daunting task " "to thoroughly configure every subject or object. To ease this " "administrative overhead, three labels are included in this policy: `mls/" "low`, `mls/equal`, and `mls/high`, where:" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:561 msgid "" "Anything labeled with `mls/low` will have a low clearance level and not be " "permitted to access information of a higher level. This label also prevents " "objects of a higher clearance level from writing or passing information to a " "lower level." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:562 msgid "" "`mls/equal` should be placed on objects which should be exempt from the " "policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:563 msgid "" "`mls/high` is the highest level of clearance possible. Objects assigned this " "label will hold dominance over all other objects in the system; however, " "they will not permit the leaking of information to objects of a lower class." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:565 msgid "MLS provides:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:567 msgid "" "A hierarchical security level with a set of non-hierarchical categories." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:568 msgid "" "Fixed rules of `no read up, no write down`. This means that a subject can " "have read access to objects on its own level or below, but not above. " "Similarly, a subject can have write access to objects on its own level or " "above, but not beneath." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:569 msgid "Secrecy, or the prevention of inappropriate disclosure of data." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:570 msgid "" "A basis for the design of systems that concurrently handle data at multiple " "sensitivity levels without leaking information between secret and " "confidential." msgstr "" #. 
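The fixed `no read up, no write down` rules described above can be illustrated with a toy numeric model. This is illustrative only: the kernel enforces the real policy, and real MLS labels also carry compartments, which this sketch ignores:

```shell
# Toy MLS model: a subject may read at its own level or below,
# and write at its own level or above.
can_read()  { [ "$1" -ge "$2" ] && echo allow || echo deny; }
can_write() { [ "$1" -le "$2" ] && echo allow || echo deny; }

can_read 5 3    # an mls/5 subject reading an mls/3 object
can_read 3 5    # reading up is denied
can_write 3 5   # writing up is permitted
can_write 5 3   # writing down is denied
```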
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:572 msgid "The following `sysctl` tunables are available:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:574 msgid "`security.mac.mls.enabled` is used to enable or disable the MLS policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:575 msgid "" "`security.mac.mls.ptys_equal` labels all man:pty[4] devices as `mls/equal` " "during creation." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:576 msgid "" "`security.mac.mls.revocation_enabled` revokes access to objects after their " "label changes to a label of a lower grade." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:577 msgid "" "`security.mac.mls.max_compartments` sets the maximum number of compartment " "levels allowed on a system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:579 msgid "" "To manipulate MLS labels, use man:setfmac[8]. To assign a label to an object:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:583 #, no-wrap msgid "# setfmac mls/5 test\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:586 msgid "To get the MLS label for the file [.filename]#test#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:590 #, no-wrap msgid "# getfmac test\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:593 msgid "" "Another approach is to create a master policy file in [.filename]#/etc/# " "which specifies the MLS policy information and to feed that file to " "`setfmac`." msgstr "" #. 
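The master policy file approach mentioned above could look like the following sketch. The file contents, paths, and labels are invented for illustration, and `echo` is prefixed so the loop prints the `setfmac` invocations instead of running them:

```shell
# Replay MLS label assignments from a master file (illustrative entries).
policy=$(mktemp)
cat > "$policy" <<'EOF'
mls/5    /srv/projects
mls/high /srv/payroll
EOF
out=$(while read -r label path; do
    echo setfmac "$label" "$path"   # drop `echo` to apply on FreeBSD
done < "$policy")
printf '%s\n' "$out"
rm -f "$policy"
```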
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:597 msgid "" "When using the MLS policy module, an administrator plans to control the flow " "of sensitive information. The default `block read up block write down` sets " "everything to a low state. Everything is accessible and an administrator " "slowly augments the confidentiality of the information." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:602 msgid "" "Beyond the three basic label options, an administrator may group users and " "groups as required to block the information flow between them. It might be " "easier to look at the information in clearance levels using descriptive " "words, such as classifications of `Confidential`, `Secret`, and `Top " "Secret`. Some administrators instead create different groups based on " "project levels. Regardless of the classification method, a well thought out " "plan must exist before implementing a restrictive policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:604 msgid "" "Some example situations for the MLS policy module include an e-commerce web " "server, a file server holding critical company information, and financial " "institution environments." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:606 #, no-wrap msgid "The MAC Biba Module" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:609 msgid "Module name: [.filename]#mac_biba.ko#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:611 msgid "Kernel configuration line: `options MAC_BIBA`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:613 msgid "Boot option: `mac_biba_load=\"YES\"`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:617 msgid "" "The man:mac_biba[4] module loads the MAC Biba policy. 
This policy is " "similar to the MLS policy with the exception that the rules for information " "flow are slightly reversed. This is to prevent the downward flow of " "sensitive information whereas the MLS policy prevents the upward flow of " "sensitive information." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:621 msgid "" "In Biba environments, an \"integrity\" label is set on each subject or " "object. These labels are made up of hierarchical grades and non-" "hierarchical components. As a grade ascends, so does its integrity." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:623 msgid "Supported labels are `biba/low`, `biba/equal`, and `biba/high`, where:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:625 msgid "" "`biba/low` is considered the lowest integrity an object or subject may have. " "Setting this on objects or subjects blocks their write access to objects or " "subjects marked as `biba/high`, but will not prevent read access." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:626 msgid "" "`biba/equal` should only be placed on objects considered to be exempt from " "the policy." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:627 msgid "" "`biba/high` permits writing to objects set at a lower label, but does not " "permit reading that object. It is recommended that this label be placed on " "objects that affect the integrity of the entire system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:629 msgid "Biba provides:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:631 msgid "" "Hierarchical integrity levels with a set of non-hierarchical integrity " "categories." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:632 msgid "" "Fixed rules are `no write up, no read down`, the opposite of MLS. A subject " "can have write access to objects on its own level or below, but not above. " "Similarly, a subject can have read access to objects on its own level or " "above, but not below." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:633 msgid "Integrity by preventing inappropriate modification of data." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:634 msgid "Integrity levels instead of MLS sensitivity levels." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:636 msgid "The following tunables can be used to manipulate the Biba policy:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:638 msgid "" "`security.mac.biba.enabled` is used to enable or disable enforcement of the " "Biba policy on the target machine." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:639 msgid "" "`security.mac.biba.ptys_equal` is used to disable the Biba policy on " "man:pty[4] devices." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:640 msgid "" "`security.mac.biba.revocation_enabled` forces the revocation of access to " "objects if the label is changed to dominate the subject." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:642 msgid "" "To access the Biba policy setting on system objects, use `setfmac` and " "`getfmac`:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:648 #, no-wrap msgid "" "# setfmac biba/low test\n" "# getfmac test\n" "test: biba/low\n" msgstr "" #. 
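`getfmac` prints its result as `file: label`, as in the `test: biba/low` example above. A sketch that verifies a file carries an expected label by parsing output of that shape (the output string is simulated here; on FreeBSD, substitute `out=$(getfmac test)`):

```shell
# Verify a Biba label by parsing getfmac-style "file: label" output.
out="test: biba/low"          # simulated; real use: out=$(getfmac test)
label=${out##*: }             # strip everything up to the last ": "
if [ "$label" = "biba/low" ]; then
    echo "label ok"
else
    echo "label mismatch: $label"
fi
```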
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:654 msgid "" "Integrity, which is different from sensitivity, is used to guarantee that " "information is not manipulated by untrusted parties. This includes " "information passed between subjects and objects. It ensures that users will " "only be able to modify or access information they have been given explicit " "access to. The man:mac_biba[4] security policy module permits an " "administrator to configure which files and programs a user may see and " "invoke while assuring that the programs and files are trusted by the system " "for that user." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:660 msgid "" "During the initial planning phase, an administrator must be prepared to " "partition users into grades, levels, and areas. The system will default to " "a high label once this policy module is enabled, and it is up to the " "administrator to configure the different grades and levels for users. " "Instead of using clearance levels, a good planning method could include " "topics. For instance, only allow developers modification access to the " "source code repository, source code compiler, and other development " "utilities. Other users would be grouped into other categories such as " "testers, designers, or end users and would only be permitted read access." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:665 msgid "" "A lower integrity subject is unable to write to a higher integrity subject " "and a higher integrity subject cannot list or read a lower integrity " "object. Setting a label at the lowest possible grade could make it " "inaccessible to subjects. Some prospective environments for this security " "policy module would include a constrained web server, a development and test " "machine, and a source code repository. 
A less useful implementation would " "be a personal workstation, a machine used as a router, or a network firewall." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:667 #, no-wrap msgid "The MAC Low-watermark Module" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:670 msgid "Module name: [.filename]#mac_lomac.ko#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:672 msgid "Kernel configuration line: `options MAC_LOMAC`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:674 msgid "Boot option: `mac_lomac_load=\"YES\"`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:676 msgid "" "Unlike the MAC Biba policy, the man:mac_lomac[4] policy permits access to " "lower integrity objects only after decreasing the integrity level to not " "disrupt any integrity rules." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:680 msgid "" "The Low-watermark integrity policy works almost identically to Biba, with " "the exception of using floating labels to support subject demotion via an " "auxiliary grade compartment. This secondary compartment takes the form " "`[auxgrade]`. When assigning a policy with an auxiliary grade, use the " "syntax `lomac/10[2]`, where `2` is the auxiliary grade." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:683 msgid "" "This policy relies on the ubiquitous labeling of all system objects with " "integrity labels, permitting subjects to read from low integrity objects and " "then downgrading the label on the subject to prevent future writes to high " "integrity objects using `[auxgrade]`. The policy may provide greater " "compatibility and require less initial configuration than Biba." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:685 msgid "" "Like the Biba and MLS policies, `setfmac` and `setpmac` are used to place " "labels on system objects:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:690 #, no-wrap msgid "" "# setfmac /usr/home/trhodes lomac/high[low]\n" "# getfmac /usr/home/trhodes lomac/high[low]\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:693 msgid "" "The auxiliary grade `low` is a feature provided only by the MACLOMAC policy." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:695 #, no-wrap msgid "User Lock Down" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:699 msgid "" "This example considers a relatively small storage system with fewer than " "fifty users. Users will have login capabilities and are permitted to store " "data and access resources." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:701 msgid "" "For this scenario, the man:mac_bsdextended[4] and man:mac_seeotheruids[4] " "policy modules could co-exist and block access to system objects while " "hiding user processes." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:703 msgid "Begin by adding the following line to [.filename]#/boot/loader.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:707 #, no-wrap msgid "mac_seeotheruids_load=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:710 msgid "" "The man:mac_bsdextended[4] security policy module may be activated by adding " "this line to [.filename]#/etc/rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:714 #, no-wrap msgid "ugidfw_enable=\"YES\"\n" msgstr "" #. 
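Before rebooting into the locked-down configuration, it can be worth confirming that both knobs are actually in place. In this sketch, temporary files stand in for [.filename]#/boot/loader.conf# and [.filename]#/etc/rc.conf# so it can run anywhere:

```shell
# Check that the lockdown settings are present (temp files stand in for
# /boot/loader.conf and /etc/rc.conf in this illustration).
loader=$(mktemp); rc=$(mktemp)
echo 'mac_seeotheruids_load="YES"' > "$loader"
echo 'ugidfw_enable="YES"'         > "$rc"
if grep -q '^mac_seeotheruids_load="YES"' "$loader" &&
   grep -q '^ugidfw_enable="YES"' "$rc"; then
    status="lockdown knobs set"
fi
echo "$status"
rm -f "$loader" "$rc"
```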
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:719 msgid "" "Default rules stored in [.filename]#/etc/rc.bsdextended# will be loaded at " "system initialization. However, the default entries may need modification. " "Since this machine is expected only to service users, everything may be left " "commented out except the last two lines in order to force the loading of " "user owned system objects by default." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:724 msgid "" "Add the required users to this machine and reboot. For testing purposes, " "try logging in as a different user across two consoles. Run `ps aux` to see " "if processes of other users are visible. Verify that running man:ls[1] on " "another user's home directory fails." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:726 msgid "" "Do not try to test with the `root` user unless the specific ``sysctl``s have " "been modified to block super user access." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:731 msgid "" "When a new user is added, their man:mac_bsdextended[4] rule will not be in " "the ruleset list. To update the ruleset quickly, unload the security policy " "module and reload it again using man:kldunload[8] and man:kldload[8]." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:734 #, no-wrap msgid "Nagios in a MAC Jail" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:738 msgid "" "This section demonstrates the steps that are needed to implement the Nagios " "network monitoring system in a MAC environment. This is meant as an example " "which still requires the administrator to test that the implemented policy " "meets the security requirements of the network before using in a production " "environment." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:741 msgid "" "This example requires `multilabel` to be set on each file system. It also " "assumes that package:net-mgmt/nagios-plugins[], package:net-mgmt/nagios[], " "and package:www/apache22[] are all installed, configured, and working " "correctly before attempting the integration into the MAC framework." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:742 #, no-wrap msgid "Create an Insecure User Class" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:745 msgid "" "Begin the procedure by adding the following user class to [.filename]#/etc/" "login.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:771 #, no-wrap msgid "" "insecure:\\\n" -":copyright=/etc/COPYRIGHT:\\\n" ":welcome=/etc/motd:\\\n" ":setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\\\n" ":path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin\n" ":manpath=/usr/share/man /usr/local/man:\\\n" ":nologin=/usr/sbin/nologin:\\\n" ":cputime=1h30m:\\\n" ":datasize=8M:\\\n" ":vmemoryuse=100M:\\\n" ":stacksize=2M:\\\n" ":memorylocked=4M:\\\n" ":memoryuse=8M:\\\n" ":filesize=8M:\\\n" ":coredumpsize=8M:\\\n" ":openfiles=24:\\\n" ":maxproc=32:\\\n" ":priority=0:\\\n" ":requirehome:\\\n" ":passwordtime=91d:\\\n" ":umask=022:\\\n" ":ignoretime@:\\\n" ":label=biba/10(10-10):\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:774 msgid "Then, add the following line to the default user class section:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:778 #, no-wrap msgid ":label=biba/high:\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:781 msgid "Save the edits and issue the following command to rebuild the database:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/mac/_index.adoc:785 #, no-wrap msgid "# cap_mkdb /etc/login.conf\n" msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:787 #, no-wrap msgid "Configure Users" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:790 msgid "Set the `root` user to the default class using:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:794 #, no-wrap msgid "# pw usermod root -L default\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:799 msgid "" "All user accounts that are not `root` will now require a login class. The " "login class is required, otherwise users will be refused access to common " "commands. The following `sh` script should do the trick:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:804 #, no-wrap msgid "" "# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \\\n" "\t/etc/passwd`; do pw usermod $x -L default; done;\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:807 msgid "Next, drop the `nagios` and `www` accounts into the insecure class:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:812 #, no-wrap msgid "" "# pw usermod nagios -L insecure\n" "# pw usermod www -L insecure\n" msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:814 #, no-wrap msgid "Create the Contexts File" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:817 msgid "" "A contexts file should now be created as [.filename]#/etc/policy.contexts#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:821 #, no-wrap msgid "# This is the default BIBA policy for this system.\n" msgstr "" #. type: delimited block . 
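The awk filter in the loop above selects regular accounts (UID 1001 and up, skipping `nobody` at 65534). It can be demonstrated against a small sample passwd file; the entries below are invented:

```shell
# Show which accounts the UID filter would hand to `pw usermod -L default`.
sample=$(mktemp)
cat > "$sample" <<'EOF'
root:*:0:0:Charlie Root:/root:/bin/csh
nobody:*:65534:65534:Unprivileged user:/nonexistent:/usr/sbin/nologin
alice:*:1001:1001:Alice:/home/alice:/bin/sh
bob:*:1002:1002:Bob:/home/bob:/bin/sh
EOF
selected=$(awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' "$sample")
printf '%s\n' "$selected"
rm -f "$sample"
```

Only `alice` and `bob` are selected; `root` and `nobody` fall outside the UID window, which is exactly why the loop leaves them alone.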
4 #: documentation/content/en/books/handbook/mac/_index.adoc:824 #, no-wrap msgid "" "# System:\n" "/var/run(/.*)?\t\t\tbiba/equal\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:826 #, no-wrap msgid "/dev/(/.*)?\t\t\tbiba/equal\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:829 #, no-wrap msgid "" "/var\t\t\t\tbiba/equal\n" "/var/spool(/.*)?\t\tbiba/equal\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:831 #, no-wrap msgid "/var/log(/.*)?\t\t\tbiba/equal\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:834 #, no-wrap msgid "" "/tmp(/.*)?\t\t\tbiba/equal\n" "/var/tmp(/.*)?\t\t\tbiba/equal\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:837 #, no-wrap msgid "" "/var/spool/mqueue\t\tbiba/equal\n" "/var/spool/clientmqueue\t\tbiba/equal\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:840 #, no-wrap msgid "" "# For Nagios:\n" "/usr/local/etc/nagios(/.*)?\tbiba/10\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:842 #, no-wrap msgid "/var/spool/nagios(/.*)?\t\tbiba/10\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:845 #, no-wrap msgid "" "# For apache\n" "/usr/local/etc/apache(/.*)?\tbiba/10\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:850 msgid "" "This policy enforces security by setting restrictions on the flow of " "information. In this specific configuration, users, including `root`, " "should never be allowed to access Nagios. Configuration files and processes " "that are a part of Nagios will be completely self contained or jailed." msgstr "" #. 
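Each line of the contexts file pairs a path regular expression with a label. A quick sanity check that every non-comment, non-blank line has exactly two fields can catch typos before the file is fed to `setfsmac`; the sample reproduces a few entries from the text:

```shell
# Sanity-check a policy.contexts-style file: two fields per entry.
ctx=$(mktemp)
cat > "$ctx" <<'EOF'
# This is the default BIBA policy for this system.
/var/run(/.*)?              biba/equal
/usr/local/etc/nagios(/.*)? biba/10
EOF
bad=$(awk '!/^#/ && NF && NF != 2 { print NR }' "$ctx")
[ -z "$bad" ] && echo "contexts file looks well formed"
rm -f "$ctx"
```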
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:853 msgid "" "This file will be read after running `setfsmac` on every file system. This " "example sets the policy on the root file system:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:857 #, no-wrap msgid "# setfsmac -ef /etc/policy.contexts /\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:860 msgid "" "Next, add these edits to the main section of [.filename]#/etc/mac.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:867 #, no-wrap msgid "" "default_labels file ?biba\n" "default_labels ifnet ?biba\n" "default_labels process ?biba\n" "default_labels socket ?biba\n" msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:869 #, no-wrap msgid "Loader Configuration" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:872 msgid "" "To finish the configuration, add the following lines to [.filename]#/boot/" "loader.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:878 #, no-wrap msgid "" "mac_biba_load=\"YES\"\n" "mac_seeotheruids_load=\"YES\"\n" "security.mac.biba.trust_all_interfaces=1\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:882 msgid "" "And the following line to the network card configuration stored in " "[.filename]#/etc/rc.conf#. If the primary network configuration is done via " "DHCP, this may need to be configured manually after every system boot:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:886 #, no-wrap msgid "maclabel biba/equal\n" msgstr "" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:888 #, no-wrap msgid "Testing the Configuration" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:894 msgid "" "First, ensure that the web server and Nagios will not be started on system " "initialization and reboot. Ensure that `root` cannot access any of the " "files in the Nagios configuration directory. If `root` can list the " "contents of [.filename]#/var/spool/nagios#, something is wrong. Instead, a " "\"permission denied\" error should be returned." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:896 msgid "If all seems well, Nagios, Apache, and Sendmail can now be started:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:902 #, no-wrap msgid "" "# cd /etc/mail && make stop && \\\n" "setpmac biba/equal make start && setpmac biba/10\\(10-10\\) apachectl start && \\\n" "setpmac biba/10\\(10-10\\) /usr/local/etc/rc.d/nagios.sh forcestart\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:907 msgid "" "Double check to ensure that everything is working properly. If not, check " "the log files for error messages. If needed, use man:sysctl[8] to disable " "the man:mac_biba[4] security policy module and try starting everything again " "as usual." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:912 msgid "" "The `root` user can still change the security enforcement and edit its " "configuration files. The following command will permit the degradation of " "the security policy to a lower grade for a newly spawned shell:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:916 #, no-wrap msgid "# setpmac biba/10 csh\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:921 msgid "" "To block this from happening, force the user into a range using " "man:login.conf[5]. 
If man:setpmac[8] attempts to run a command outside of " "the compartment's range, an error will be returned and the command will not " "be executed. In this case, set root to `biba/high(high-high)`." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:924 #, no-wrap msgid "Troubleshooting the MAC Framework" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:927 msgid "" "This section discusses common configuration errors and how to resolve them." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/mac/_index.adoc:928 #, no-wrap msgid "The `multilabel` flag does not stay enabled on the root ([.filename]#/#) partition" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:930 msgid "The following steps may resolve this transient error:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:934 msgid "" "Edit [.filename]#/etc/fstab# and set the root partition to `ro` for read-" "only." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:935 msgid "Reboot into single user mode." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:936 msgid "Run `tunefs -l enable` on [.filename]#/#." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:937 msgid "Reboot the system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:938 msgid "" "Run `mount -urw`[.filename]#/# and change the `ro` back to `rw` in " "[.filename]#/etc/fstab# and reboot the system again." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:939 msgid "" "Double-check the output from `mount` to ensure that `multilabel` has been " "properly set on the root file system." msgstr "" #. 
type: Labeled list #: documentation/content/en/books/handbook/mac/_index.adoc:941 #, no-wrap msgid "After establishing a secure environment with MAC, Xorg no longer starts" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:944 msgid "" "This could be caused by the MAC `partition` policy or by a mislabeling in " "one of the MAC labeling policies. To debug, try the following:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:948 msgid "" "Check the error message. If the user is in the `insecure` class, the " "`partition` policy may be the culprit. Try setting the user's class back to " "the `default` class and rebuild the database with `cap_mkdb`. If this does " "not alleviate the problem, go to step two." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:949 msgid "" "Double-check that the label policies are set correctly for the user, Xorg, " "and the [.filename]#/dev# entries." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:950 msgid "" "If neither of these resolve the problem, send the error message and a " "description of the environment to the {freebsd-questions}." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/mac/_index.adoc:952 #, no-wrap msgid "The `_secure_path: unable to stat .login_conf` error appears" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:957 msgid "" "This error can appear when a user attempts to switch from the `root` user to " "another user in the system. This message usually occurs when the user has a " "higher label setting than that of the user they are attempting to become. " "For instance, if `joe` has a default label of `biba/low` and `root` has a " "label of `biba/high`, `root` cannot view ``joe``'s home directory. 
This " "will happen whether or not `root` has used `su` to become `joe` as the Biba " "integrity model will not permit `root` to view objects set at a lower " "integrity level." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/mac/_index.adoc:958 #, no-wrap msgid "The system no longer recognizes `root`" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:960 msgid "When this occurs, `whoami` returns `0` and `su` returns `who are you?`." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:964 msgid "" "This can happen if a labeling policy has been disabled by man:sysctl[8] or " "the policy module was unloaded. If the policy is disabled, the login " "capabilities database needs to be reconfigured. Double check [.filename]#/" "etc/login.conf# to ensure that all `label` options have been removed and " "rebuild the database with `cap_mkdb`." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:968 msgid "" "This may also happen if a policy restricts access to " "[.filename]#master.passwd#. This is usually caused by an administrator " "altering the file under a label which conflicts with the general policy " "being used by the system. In these cases, the user information would be " "read by the system and access would be blocked as the file has inherited the " "new label. Disable the policy using man:sysctl[8] and everything should " "return to normal." msgstr "" diff --git a/documentation/content/en/books/handbook/network-servers/_index.adoc b/documentation/content/en/books/handbook/network-servers/_index.adoc index 0485cd03a0..4b40e09ce0 100644 --- a/documentation/content/en/books/handbook/network-servers/_index.adoc +++ b/documentation/content/en/books/handbook/network-servers/_index.adoc @@ -1,3044 +1,3043 @@ --- title: Chapter 32. Network Servers part: IV. 
Network Communication prev: books/handbook/mail next: books/handbook/firewalls description: This chapter covers some of the more frequently used network services on UNIX systems tags: ["network", "servers", "inetd", "NFS", "NIS", "LDAP", "DHCP", "DNS", "Apache HTTP", "FTP", "Samba", "NTP", "iSCSI"] showBookMenu: true weight: 37 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Network Servers :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 32 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Synopsis This chapter covers some of the more frequently used network services on UNIX(R) systems. This includes installing, configuring, testing, and maintaining many different types of network services. Example configuration files are included throughout this chapter for reference. By the end of this chapter, readers will know: * How to manage the inetd daemon. * How to set up the Network File System (NFS). * How to set up the Network Information System (NIS) for centralizing and sharing user accounts. * How to set FreeBSD up to act as an LDAP server or client. * How to set up automatic network settings using DHCP. * How to set up a Domain Name Server (DNS). * How to set up the Apache HTTP Server.
* How to set up a File Transfer Protocol (FTP) server. * How to set up a file and print server for Windows(R) clients using Samba. * How to synchronize the time and date, and set up a time server using the Network Time Protocol (NTP). * How to set up iSCSI. This chapter assumes a basic knowledge of: * [.filename]#/etc/rc# scripts. * Network terminology. * Installation of additional third-party software (crossref:ports[ports,Installing Applications: Packages and Ports]). [[network-inetd]] == The inetd Super-Server The man:inetd[8] daemon is sometimes referred to as a Super-Server because it manages connections for many services. Instead of starting multiple applications, only the inetd service needs to be started. When a connection is received for a service that is managed by inetd, it determines which program the connection is destined for, spawns a process for that program, and delegates the program a socket. Using inetd for services that are not heavily used can reduce system load, when compared to running each daemon individually in stand-alone mode. Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled internally, such as chargen, auth, time, echo, discard, and daytime. This section covers the basics of configuring inetd. [[network-inetd-conf]] === Configuration File Configuration of inetd is done by editing [.filename]#/etc/inetd.conf#. Each line of this configuration file represents an application which can be started by inetd. By default, every line starts with a comment (`+#+`), meaning that inetd is not listening for any applications. To configure inetd to listen for an application's connections, remove the `+#+` at the beginning of the line for that application. After saving the edits, configure inetd to start at system boot by editing [.filename]#/etc/rc.conf#: [.programlisting] .... inetd_enable="YES" .... To start inetd now, so that it listens for the configured service, type: [source,shell] .... 
# service inetd start .... Once inetd is started, it needs to be notified whenever a modification is made to [.filename]#/etc/inetd.conf#: [[network-inetd-reread]] .Reloading the inetd Configuration File [example] ==== [source,shell] .... # service inetd reload .... ==== Typically, the default entry for an application does not need to be edited beyond removing the `+#+`. In some situations, it may be appropriate to edit the default entry. As an example, this is the default entry for man:ftpd[8] over IPv4: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... The seven columns in an entry are as follows: [.programlisting] .... service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments .... where: service-name:: The service name of the daemon to start. It must correspond to a service listed in [.filename]#/etc/services#. This determines which port inetd listens on for incoming connections to that service. When using a custom service, it must first be added to [.filename]#/etc/services#. socket-type:: Either `stream`, `dgram`, `raw`, or `seqpacket`. Use `stream` for TCP connections and `dgram` for UDP services. protocol:: Use one of the following protocol names: + [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Protocol Name | Explanation |tcp or tcp4 |TCP IPv4 |udp or udp4 |UDP IPv4 |tcp6 |TCP IPv6 |udp6 |UDP IPv6 |tcp46 |Both TCP IPv4 and IPv6 |udp46 |Both UDP IPv4 and IPv6 |=== {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]:: In this field, `wait` or `nowait` must be specified. `max-child`, `max-connections-per-ip-per-minute` and `max-child-per-ip` are optional. + `wait|nowait` indicates whether or not the service is able to handle its own socket. `dgram` socket types must use `wait` while `stream` daemons, which are usually multi-threaded, should use `nowait`. 
`wait` usually hands off multiple sockets to a single daemon, while `nowait` spawns a child daemon for each new socket. + The maximum number of child daemons inetd may spawn is set by `max-child`. For example, to limit the daemon to ten instances, place a `/10` after `nowait`. Specifying `/0` allows an unlimited number of children. + `max-connections-per-ip-per-minute` limits the number of connections from any particular IP address per minute. Once the limit is reached, further connections from this IP address will be dropped until the end of the minute. For example, a value of `/10` would limit any particular IP address to ten connection attempts per minute. `max-child-per-ip` limits the number of child processes that can be started on behalf of any single IP address at any moment. These options can limit excessive resource consumption and help to prevent Denial of Service attacks. + An example can be seen in the default settings for man:fingerd[8]: + [.programlisting] .... finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s .... user:: The username the daemon will run as. Daemons typically run as `root`, `daemon`, or `nobody`. server-program:: The full path to the daemon. If the daemon is a service provided by inetd internally, use `internal`. server-program-arguments:: Used to specify any command arguments to be passed to the daemon on invocation. If the daemon is an internal service, use `internal`. [[network-inetd-cmdline]] === Command-Line Options Like most server daemons, inetd has a number of options that can be used to modify its behavior. By default, inetd is started with `-wW -C 60`. These options enable TCP wrappers for all services, including internal services, and prevent any IP address from requesting any service more than 60 times per minute. To change the default options which are passed to inetd, add an entry for `inetd_flags` in [.filename]#/etc/rc.conf#. If inetd is already running, restart it with `service inetd restart`.
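For example, to keep TCP wrappers enabled for all services but raise the per-address limit from 60 to 120 requests per minute, the defaults could be overridden in [.filename]#/etc/rc.conf# (the value `120` here is purely illustrative):

[.programlisting]
....
inetd_flags="-wW -C 120"
....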
The available rate limiting options are: -c maximum:: Specify the default maximum number of simultaneous invocations of each service, where the default is unlimited. May be overridden on a per-service basis by using `max-child` in [.filename]#/etc/inetd.conf#. -C rate:: Specify the default maximum number of times a service can be invoked from a single IP address per minute. May be overridden on a per-service basis by using `max-connections-per-ip-per-minute` in [.filename]#/etc/inetd.conf#. -R rate:: Specify the maximum number of times a service can be invoked in one minute, where the default is `256`. A rate of `0` allows an unlimited number. -s maximum:: Specify the maximum number of times a service can be invoked from a single IP address at any one time, where the default is unlimited. May be overridden on a per-service basis by using `max-child-per-ip` in [.filename]#/etc/inetd.conf#. Additional options are available. Refer to man:inetd[8] for the full list of options. [[network-inetd-security]] === Security Considerations Many of the daemons which can be managed by inetd are not security-conscious. Some daemons, such as fingerd, can provide information that may be useful to an attacker. Only enable the services which are needed and monitor the system for excessive connection attempts. `max-connections-per-ip-per-minute`, `max-child` and `max-child-per-ip` can be used to limit such attacks. By default, TCP wrappers are enabled. Consult man:hosts_access[5] for more information on placing TCP restrictions on various inetd invoked daemons. [[network-nfs]] == Network File System (NFS) FreeBSD supports the Network File System (NFS), which allows a server to share directories and files with clients over a network. With NFS, users and programs can access files on remote systems as if they were stored locally. NFS has many practical uses. 
Some of the more common uses include: * Data that would otherwise be duplicated on each client can be kept in a single location and accessed by clients on the network. * Several clients may need access to the [.filename]#/usr/ports/distfiles# directory. Sharing that directory allows for quick access to the source files without having to download them to each client. * On large networks, it is often more convenient to configure a central NFS server on which all user home directories are stored. Users can log into a client anywhere on the network and have access to their home directories. * Administration of NFS exports is simplified. For example, there is only one file system where security or backup policies must be set. * Removable media storage devices can be used by other machines on the network. This reduces the number of devices throughout the network and provides a centralized location to manage their security. It is often more convenient to install software on multiple machines from centralized installation media. NFS consists of a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running. These daemons must be running on the server: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Daemon | Description |nfsd |The NFS daemon which services requests from NFS clients. |mountd |The NFS mount daemon which carries out requests received from nfsd. |rpcbind |This daemon allows NFS clients to discover which port the NFS server is using. |=== Running man:nfsiod[8] on the client can improve performance, but is not required. [[network-configuring-nfs]] === Configuring the Server The file systems which the NFS server will share are specified in [.filename]#/etc/exports#. Each line in this file specifies a file system to be exported, which clients have access to that file system, and any access options.
When adding entries to this file, each exported file system, its properties, and allowed hosts must occur on a single line. If no clients are listed in the entry, then any client on the network can mount that file system. The following [.filename]#/etc/exports# entries demonstrate how to export file systems. The examples can be modified to match the file systems and client names on the reader's network. There are many options that can be used in this file, but only a few will be mentioned here. See man:exports[5] for the full list of options. This example shows how to export [.filename]#/media# to three hosts named _alpha_, _bravo_, and _charlie_: [.programlisting] .... /media -ro alpha bravo charlie .... The `-ro` flag makes the file system read-only, preventing clients from making any changes to the exported file system. This example assumes that the host names are either in DNS or in [.filename]#/etc/hosts#. Refer to man:hosts[5] if the network does not have a DNS server. The next example exports [.filename]#/usr/home# to three clients by IP address. This can be useful for networks without DNS or [.filename]#/etc/hosts# entries. The `-alldirs` flag allows subdirectories to be mount points. In other words, it will not automatically mount the subdirectories, but will permit the client to mount the directories that are required as needed. [.programlisting] .... /usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... This next example exports [.filename]#/a# so that two clients from different domains may access that file system. The `-maproot=root` allows `root` on the remote system to write data on the exported file system as `root`. If `-maproot=root` is not specified, the client's `root` user will be mapped to the server's `nobody` account and will be subject to the access limitations defined for `nobody`. [.programlisting] .... /a -maproot=root host.example.com box.example.org .... A client can only be specified once per file system.
For example, if [.filename]#/usr# is a single file system, these entries would be invalid as both entries specify the same host: [.programlisting] .... # Invalid when /usr is one file system /usr/src client /usr/ports client .... The correct format for this situation is to use one entry: [.programlisting] .... /usr/src /usr/ports client .... The following is an example of a valid export list, where [.filename]#/usr# and [.filename]#/exports# are local file systems: [.programlisting] .... # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... To enable the processes required by the NFS server at boot time, add these options to [.filename]#/etc/rc.conf#: [.programlisting] .... rpcbind_enable="YES" nfs_server_enable="YES" mountd_enable="YES" .... The server can be started now by running this command: [source,shell] .... # service nfsd start .... Whenever the NFS server is started, mountd also starts automatically. However, mountd only reads [.filename]#/etc/exports# when it is started. To make subsequent [.filename]#/etc/exports# edits take effect immediately, force mountd to reread it: [source,shell] .... # service mountd reload .... Refer to man:zfs-share[8] for a description of exporting ZFS datasets via NFS using the `sharenfs` ZFS property instead of the man:exports[5] file. Refer to man:nfsv4[4] for a description of an NFS Version 4 setup. === Configuring the Client To enable NFS clients, set this option in each client's [.filename]#/etc/rc.conf#: [.programlisting] .... nfs_client_enable="YES" .... Then, run this command on each NFS client: [source,shell] .... # service nfsclient start .... The client now has everything it needs to mount a remote file system. 
In these examples, the server's name is `server` and the client's name is `client`. To mount [.filename]#/home# on `server` to the [.filename]#/mnt# mount point on `client`: [source,shell] .... # mount server:/home /mnt .... The files and directories in [.filename]#/home# will now be available on `client`, in the [.filename]#/mnt# directory. To mount a remote file system each time the client boots, add it to [.filename]#/etc/fstab#: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... Refer to man:fstab[5] for a description of all available options. === Locking Some applications require file locking to operate correctly. To enable locking, execute the following command on both the client and server: [source,shell] .... # sysrc rpc_lockd_enable="YES" .... Then start the man:rpc.lockd[8] service: [source,shell] .... # service lockd start .... If locking is not required on the server, the NFS client can be configured to lock locally by including `-L` when running mount. Refer to man:mount_nfs[8] for further details. [[network-autofs]] === Automating Mounts with man:autofs[5] [NOTE] ==== The man:autofs[5] automount facility is supported starting with FreeBSD 10.1-RELEASE. To use the automounter functionality in older versions of FreeBSD, use man:amd[8] instead. This section only describes the man:autofs[5] automounter. ==== The man:autofs[5] facility is a common name for several components that, together, allow for automatic mounting of remote and local filesystems whenever a file or directory within that file system is accessed. It consists of the kernel component, man:autofs[5], and several userspace applications: man:automount[8], man:automountd[8] and man:autounmountd[8]. It serves as an alternative to man:amd[8] from previous FreeBSD releases. amd is still provided for backward compatibility purposes, as the two use different map formats; the one used by autofs is the same as with other SVR4 automounters, such as the ones in Solaris, MacOS X, and Linux.
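As an illustration of that shared Sun-style map format, a minimal indirect map (a hypothetical file [.filename]#/etc/auto_example#, not part of a default install) might look like this:

[.programlisting]
....
# key       options     location
backups     -intr       fileserver:/export/backups
....

together with a line such as `/mnt/example /etc/auto_example` in [.filename]#/etc/auto_master# to assign the map to a top-level mount; accessing [.filename]#/mnt/example/backups# would then trigger the NFS mount. The key, host, and path names here are assumptions for illustration only.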
The man:autofs[5] virtual filesystem is mounted on specified mountpoints by man:automount[8], usually invoked during boot. Whenever a process attempts to access a file within the man:autofs[5] mountpoint, the kernel will notify the man:automountd[8] daemon and pause the triggering process. The man:automountd[8] daemon will handle kernel requests by finding the proper map and mounting the filesystem according to it, then signal the kernel to release the blocked process. The man:autounmountd[8] daemon automatically unmounts automounted filesystems after some time, unless they are still being used. The primary autofs configuration file is [.filename]#/etc/auto_master#. It assigns individual maps to top-level mounts. For an explanation of [.filename]#auto_master# and the map syntax, refer to man:auto_master[5]. There is a special automounter map mounted on [.filename]#/net#. When a file is accessed within this directory, man:autofs[5] looks up the corresponding remote mount and automatically mounts it. For instance, an attempt to access a file within [.filename]#/net/foobar/usr# would tell man:automountd[8] to mount the [.filename]#/usr# export from the host `foobar`. .Mounting an Export with man:autofs[5] [example] ==== In this example, `showmount -e` shows the exported file systems that can be mounted from the NFS server, `foobar`: [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /net/foobar/usr .... ==== The output from `showmount` shows [.filename]#/usr# as an export. When changing directories to [.filename]#/net/foobar/usr#, man:automountd[8] intercepts the request and attempts to resolve the hostname `foobar`. If successful, man:automountd[8] automatically mounts the source export. To enable man:autofs[5] at boot time, add this line to [.filename]#/etc/rc.conf#: [.programlisting] .... autofs_enable="YES" .... Then man:autofs[5] can be started by running: [source,shell] ....
# service automount start # service automountd start # service autounmountd start .... The man:autofs[5] map format is the same as in other operating systems. Information about this format from other sources can be useful, like the http://web.archive.org/web/20160813071113/http://images.apple.com/business/docs/Autofs.pdf[Mac OS X document]. Consult the man:automount[8], man:automountd[8], man:autounmountd[8], and man:auto_master[5] manual pages for more information. [[network-nis]] == Network Information System (NIS) Network Information System (NIS) is designed to centralize administration of UNIX(R)-like systems such as Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, and FreeBSD. NIS was originally known as Yellow Pages but the name was changed due to trademark issues. This is the reason why NIS commands begin with `yp`. NIS is a Remote Procedure Call (RPC)-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location. FreeBSD uses version 2 of the NIS protocol. === NIS Terms and Processes The following table summarizes the terms and important processes used by NIS: .NIS Terminology [cols="1,1", frame="none", options="header"] |=== | Term | Description |NIS domain name |NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS. |man:rpcbind[8] |This service enables RPC and must be running in order to run an NIS server or act as an NIS client. |man:ypbind[8] |This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server. It is the core of client/server communication in an NIS environment. If this service is not running on a client machine, it will not be able to access the NIS server.
|man:ypserv[8] |This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests; hopefully, there is a slave server to take over. Some non-FreeBSD clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients. |man:rpc.yppasswdd[8] |This process only runs on NIS master servers. This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the NIS master server and change their passwords there. |=== === Machine Types There are three types of hosts in an NIS environment: * NIS master server + This server acts as a central repository for host configuration information and maintains the authoritative copy of the files used by all of the NIS clients. The [.filename]#passwd#, [.filename]#group#, and various other files used by NIS clients are stored on the master server. While it is possible for one machine to be an NIS master server for more than one NIS domain, this type of configuration will not be covered in this chapter as it assumes a relatively small-scale NIS environment. * NIS slave servers + NIS slave servers maintain copies of the NIS master's data files in order to provide redundancy. Slave servers also help to balance the load of the master server as NIS clients always attach to the NIS server which responds first. * NIS clients + NIS clients authenticate against the NIS server during log on. Information in many files can be shared using NIS. The [.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead. === Planning Considerations This section describes a sample NIS environment which consists of 15 FreeBSD machines with no centralized point of administration.
Each machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These files are kept in sync with each other only through manual intervention. Currently, when a user is added to the lab, the process must be repeated on all 15 machines. The configuration of the lab will be as follows: [.informaltable] [cols="1,1,1", frame="none", options="header"] |=== | Machine name | IP address | Machine role |`ellington` |`10.0.0.2` |NIS master |`coltrane` |`10.0.0.3` |NIS slave |`basie` |`10.0.0.4` |Faculty workstation |`bird` |`10.0.0.5` |Client machine |`cli[1-11]` |`10.0.0.[6-17]` |Other client machines |=== If this is the first time an NIS scheme is being developed, it should be thoroughly planned ahead of time. Regardless of network size, several decisions need to be made as part of the planning process. ==== Choosing a NIS Domain Name When a client broadcasts its requests for info, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts. Some organizations choose to use their Internet domain name for their NIS domain name. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within the network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the "acme-art" NIS domain. This example will use the domain name `test-domain`. However, some non-FreeBSD operating systems require the NIS domain name to be the same as the Internet domain name. If one or more machines on the network have this restriction, the Internet domain name _must_ be used as the NIS domain name. ==== Physical Server Requirements There are several things to keep in mind when choosing a machine to use as a NIS server. 
Since NIS clients depend upon the availability of the server, choose a machine that is not rebooted frequently. The NIS server should ideally be a stand-alone machine whose sole purpose is to be an NIS server. If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services. However, if the NIS server becomes unavailable, it will adversely affect all NIS clients. === Configuring the NIS Master Server The canonical copies of all NIS files are stored on the master server. The databases used to store the information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/[domainname]# where [.filename]#[domainname]# is the name of the NIS domain. Since multiple domains are supported, it is possible to have several directories, one for each domain. Each domain will have its own independent set of maps. NIS master and slave servers handle all NIS requests through man:ypserv[8]. This daemon is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client. Setting up a master NIS server can be relatively straightforward, depending on environmental needs. Since FreeBSD provides built-in NIS support, it only needs to be enabled by adding the following lines to [.filename]#/etc/rc.conf#: [.programlisting] .... nisdomainname="test-domain" <.> nis_server_enable="YES" <.> nis_yppasswdd_enable="YES" <.> .... <.> This line sets the NIS domain name to `test-domain`. <.> This automates the startup of the NIS server processes when the system boots. <.> This enables the man:rpc.yppasswdd[8] daemon so that users can change their NIS password from a client machine. Care must be taken in a multi-server domain where the server machines are also NIS clients.
It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually, all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again. A server that is also a client can be forced to bind to a particular server by adding these additional lines to [.filename]#/etc/rc.conf#: [.programlisting] .... nis_client_enable="YES" <.> nis_client_flags="-S test-domain,server" <.> .... <.> This enables running the NIS client as well. <.> This line sets the NIS domain name to `test-domain` and forces man:ypbind[8] to bind to the specified server. After saving the edits, type `/etc/netstart` to restart the network and apply the values defined in [.filename]#/etc/rc.conf#. Before initializing the NIS maps, start man:ypserv[8]: [source,shell] .... # service ypserv start .... ==== Initializing the NIS Maps NIS maps are generated from the configuration files in [.filename]#/etc# on the NIS master, with one exception: [.filename]#/etc/master.passwd#. This is to prevent the propagation of passwords to all the servers in the NIS domain. Therefore, before the NIS maps are initialized, configure the primary password files: [source,shell] .... # cp /etc/master.passwd /var/yp/master.passwd # cd /var/yp # vi master.passwd .... It is advisable to remove all entries for system accounts as well as any user accounts that do not need to be propagated to the NIS clients, such as `root` and any other administrative accounts. [NOTE] ==== Ensure that [.filename]#/var/yp/master.passwd# is neither group nor world readable by setting its permissions to `600`. ==== After completing this task, initialize the NIS maps. FreeBSD includes the man:ypinit[8] script to do this.
When generating maps for the master server, include `-m` and specify the NIS domain name: [source,shell] .... ellington# ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. At this point, we have to construct a list of this domains YP servers. rod.darktech.org is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a . master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? [y/n: y] y [..output from map generation..] NIS Map update completed. ellington has been setup as an YP master server without any errors. .... This will create [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. By default, this file assumes that the environment has a single NIS server with only FreeBSD clients. Since `test-domain` has a slave server, edit this line in [.filename]#/var/yp/Makefile# so that it begins with a comment (`+#+`): [.programlisting] .... NOPUSH = "True" .... ==== Adding New Users Every time a new user is created, the user account must be added to the master NIS server and the NIS maps rebuilt. Until this occurs, the new user will not be able to log in anywhere except on the NIS master. For example, to add the new user `jsmith` to the `test-domain` domain, run these commands on the master server: [source,shell] .... # pw useradd jsmith # cd /var/yp # make test-domain .... The user could also be added using `adduser jsmith` instead of `pw useradd jsmith`.
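Once the maps have been rebuilt, the new account can be verified from any machine bound to the domain using man:ypmatch[1] (`jsmith` being the example user added above):

[source,shell]
....
% ypmatch jsmith passwd
....

If the entry is printed, the map has propagated; `ypcat passwd` can be used to list every account exported by NIS.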
=== Setting up a NIS Slave Server To set up an NIS slave server, log on to the slave server and edit [.filename]#/etc/rc.conf# as for the master server. Do not generate any NIS maps, as these already exist on the master server. When running `ypinit` on the slave server, use `-s` (for slave) instead of `-m` (for master). This option requires the name of the NIS master in addition to the domain name, as seen in this example: [source,shell] .... coltrane# ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byname... 
ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Remember to update map ypservers on ellington. .... This will generate a directory on the slave server called [.filename]#/var/yp/test-domain# which contains copies of the NIS master server's maps. Adding these [.filename]#/etc/crontab# entries on each slave server will force the slaves to sync their maps with the maps on the master server: [.programlisting] .... 20 * * * * root /usr/libexec/ypxfr passwd.byname 21 * * * * root /usr/libexec/ypxfr passwd.byuid .... These entries are not mandatory because the master server automatically attempts to push any map changes to its slaves. However, since clients may depend upon the slave server to provide correct password information, it is recommended to force frequent password map updates. This is especially important on busy networks where map updates might not always complete. To finish the configuration, run `/etc/netstart` on the slave server in order to start the NIS services. === Setting Up an NIS Client An NIS client binds to an NIS server using man:ypbind[8]. This daemon broadcasts RPC requests on the local network. These requests specify the domain name configured on the client. If an NIS server in the same domain receives one of the broadcasts, it will respond to ypbind, which will record the server's address. 
If there are several servers available, the client will use the address of the first server to respond and will direct all of its NIS requests to that server. The client will automatically ping the server on a regular basis to make sure it is still available. If it fails to receive a reply within a reasonable amount of time, ypbind will mark the domain as unbound and begin broadcasting again in the hopes of locating another server. To configure a FreeBSD machine to be an NIS client: [.procedure] ==== . Edit [.filename]#/etc/rc.conf# and add the following lines in order to set the NIS domain name and start man:ypbind[8] during network startup: + [.programlisting] .... nisdomainname="test-domain" nis_client_enable="YES" .... . To import all possible password entries from the NIS server, use `vipw` to remove all user accounts except one from [.filename]#/etc/master.passwd#. When removing the accounts, keep in mind that at least one local account should remain and this account should be a member of `wheel`. If there is a problem with NIS, this local account can be used to log in remotely, become the superuser, and fix the problem. Before saving the edits, add the following line to the end of the file: + [.programlisting] .... +::::::::: .... + This line configures the client to provide anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure the NIS client by modifying this line. One method is described in crossref:network-servers[network-netgroups, Using Netgroups]. For more detailed reading, refer to the book `Managing NFS and NIS`, published by O'Reilly Media. . To import all possible group entries from the NIS server, add this line to [.filename]#/etc/group#: + [.programlisting] .... +:*:: .... ==== To start the NIS client immediately, execute the following commands as the superuser: [source,shell] .... # /etc/netstart # service ypbind start .... 
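To confirm which server the client has bound to, use man:ypwhich[1]. With the servers from this example, the output would resemble:

[source,shell]
....
% ypwhich
ellington
....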
After completing these steps, running `ypcat passwd` on the client should show the server's [.filename]#passwd# map. === NIS Security Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. By default, this information is stored in [.filename]#/var/yp/securenets#, unless man:ypserv[8] is started with `-p` and an alternate path. This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with `+"#"+` are considered to be comments. A sample [.filename]#securenets# might look like this: [.programlisting] .... # allow connections from local host -- mandatory 127.0.0.1 255.255.255.255 # allow connections from any host # on the 192.168.128.0 network 192.168.128.0 255.255.255.0 # allow connections from any host # between 10.0.0.0 to 10.0.15.255 # this includes the machines in the testlab 10.0.0.0 255.255.240.0 .... If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the [.filename]#securenets# does not exist, `ypserv` will allow connections from any host. crossref:security[tcpwrappers,"TCP Wrapper"] is an alternate mechanism for providing access control instead of [.filename]#securenets#. While either access control mechanism adds some security, they are both vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at the firewall. Servers using [.filename]#securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. 
Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of [.filename]#securenets#. The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves. ==== Barring Some Users In this example, the `basie` system is a faculty workstation within the NIS domain. The [.filename]#passwd# map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins. To prevent specified users from logging on to a system, even if they are present in the NIS database, use `vipw` to add `-_username_` with the correct number of colons towards the end of [.filename]#/etc/master.passwd# on the client, where _username_ is the username of a user to bar from logging in. The line with the blocked user must be before the `+` line that allows NIS users. In this example, `bill` is barred from logging on to `basie`: [source,shell] .... 
basie# cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/usr/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin
-bill:::::::::
+:::::::::
basie#
....

[[network-netgroups]]
=== Using Netgroups

Barring specified users from logging on to individual systems becomes unscalable on larger networks and quickly loses the main benefit of NIS: _centralized_ administration.

Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to UNIX(R) groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups.

To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in Tables 28.2 and 28.3:

.Additional Users
[cols="1,1", frame="none", options="header"]
|===
| User Name(s) | Description

|`alpha`, `beta`
|IT department employees

|`charlie`, `delta`
|IT department apprentices

|`echo`, `foxtrott`, `golf`, ...
|employees

|`able`, `baker`, ...
|interns |=== .Additional Systems [cols="1,1", frame="none", options="header"] |=== | Machine Name(s) | Description |`war`, `death`, `famine`, `pollution` |Only IT employees are allowed to log onto these servers. |`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth` |All members of the IT department are allowed to login onto these servers. |`one`, `two`, `three`, `four`, ... |Ordinary workstations used by employees. |`trashcan` |A very old machine without any critical data. Even interns are allowed to use this system. |=== When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines. The first step is the initialization of the NIS `netgroup` map. In FreeBSD, this map is not created by default. On the NIS master server, use an editor to create a map named [.filename]#/var/yp/netgroup#. This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns: [.programlisting] .... IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) USERS (,echo,test-domain) (,foxtrott,test-domain) \ (,golf,test-domain) INTERNS (,able,test-domain) (,baker,test-domain) .... Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of parentheses represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each group represent: . The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts. . 
The name of the account that belongs to this netgroup. . The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup. If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See man:netgroup[5] for details. Netgroup names longer than 8 characters should not be used. The names are case sensitive and using capital letters for netgroup names is an easy way to distinguish between user, machine and netgroup names. Some non-FreeBSD NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example: [.programlisting] .... BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...] BIGGRP2 (,joe16,domain) (,joe17,domain) [...] BIGGRP3 (,joe31,domain) (,joe32,domain) BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3 .... Repeat this process if more than 225 (15 times 15) users exist within a single netgroup. To activate and distribute the new NIS map: [source,shell] .... ellington# cd /var/yp ellington# make .... This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost# and [.filename]#netgroup.byuser#. Use the map key option of man:ypcat[1] to check if the new NIS maps are available: [source,shell] .... ellington% ypcat -k netgroup ellington% ypcat -k netgroup.byhost ellington% ypcat -k netgroup.byuser .... The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command only produces output if host-specific netgroups were created. The third command is used to get the list of netgroups for a user. To configure a client, use man:vipw[8] to specify the name of the netgroup. For example, on the server named `war`, replace this line: [.programlisting] .... +::::::::: .... with [.programlisting] .... +@IT_EMP::::::::: .... 
This specifies that only the users defined in the netgroup `IT_EMP` will be imported into this system's password database and only those users are allowed to log in to this system.

This configuration also applies to the `~` function of the shell and all routines which convert between user names and numerical user IDs. In other words, `cd ~_user_` will not work, `ls -l` will show the numerical ID instead of the username, and `find . -user joe -print` will fail with the message `No such user`. To fix this, import all user entries without allowing them to log in to the servers. This can be achieved by adding an extra line:

[.programlisting]
....
+:::::::::/usr/sbin/nologin
....

This line configures the client to import all entries but to replace the shell in those entries with [.filename]#/usr/sbin/nologin#.

Make sure that the extra line is placed _after_ `+@IT_EMP:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/usr/sbin/nologin# as their login shell and no one will be able to log in to the system.

To configure the less important servers, replace the old `+:::::::::` on the servers with these lines:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/usr/sbin/nologin
....

The corresponding lines for the workstations would be:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/usr/sbin/nologin
....

NIS supports the creation of netgroups from other netgroups which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups. For example, one might create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers, and a third netgroup called `USERBOX` for the workstations. Each of these netgroups contains the netgroups that are allowed to log in to these machines. The new entries for the NIS `netgroup` map would look like this:

[.programlisting]
....
BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS .... This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required. Machine-specific netgroup definitions are another possibility to deal with the policy changes. In this scenario, the [.filename]#/etc/master.passwd# of each system contains two lines starting with "+". The first line adds a netgroup with the accounts allowed to login onto this machine and the second line adds all other accounts with [.filename]#/usr/sbin/nologin# as shell. It is recommended to use the "ALL-CAPS" version of the hostname as the name of the netgroup: [.programlisting] .... +@BOXNAME::::::::: +:::::::::/usr/sbin/nologin .... Once this task is completed on all the machines, there is no longer a need to modify the local versions of [.filename]#/etc/master.passwd# ever again. All further changes can be handled by modifying the NIS map. Here is an example of a possible `netgroup` map for this scenario: [.programlisting] .... 
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1   DEPT2     DEPT3
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP    ITINTERN
USERBOX   IT_EMP  ITINTERN  USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus machine
SECURITY  IT_EMP  (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV  (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

It may not always be advisable to use machine-based netgroups. When deploying a couple of dozen or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.

=== Password Formats

NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard.

To check which format a server or client is using, look at this section of [.filename]#/etc/login.conf#:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

In this example, the system is using the DES format for password hashing.
Other possible values include `blf` for Blowfish, `md5` for MD5, `sha256` and `sha512` for SHA-256 and SHA-512 respectively. For more information and the up-to-date list of what is available on the system, consult the man:crypt[3] manpage.

If the format on a host needs to be edited to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords for existing user accounts will not be updated until each user changes their password _after_ the login capability database is rebuilt.
====

[[network-ldap]]
== Lightweight Directory Access Protocol (LDAP)

The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users to access several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base.

This section provides a quick start guide for configuring an LDAP server on a FreeBSD system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access.

=== LDAP Terminology and Structure

LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of _attributes_.
Each of these attribute sets contains a unique identifier known as a _Distinguished Name_ (DN) which is normally built from several other attributes such as the common or _Relative Distinguished Name_ (RDN). Similar to how directories have absolute and relative paths, consider a DN as an absolute path and the RDN as the relative path.

An example LDAP entry looks like the following. This example searches for the entry for the specified user account (`uid`), organizational unit (`ou`), and organization (`o`):

[source,shell]
....
% ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base <uid=trhodes,ou=users,o=example.com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

This example entry shows the values for the `dn`, `mail`, `cn`, `uid`, and `telephoneNumber` attributes. The `cn` attribute is the RDN.

More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/intro.html].

[[ldap-config]]
=== Configuring an LDAP Server

FreeBSD does not provide a built-in LDAP server. Begin the configuration by installing the package:net/openldap-server[] package or port:

[source,shell]
....
# pkg install openldap-server
....

There is a large set of default options enabled in the package. Review them by running `pkg info openldap-server`. If they are not sufficient (for example if SQL support is needed), please consider recompiling the port using the appropriate crossref:ports[ports-using,framework].

The installation creates the directory [.filename]#/var/db/openldap-data# to hold the data. The directory to store the certificates must be created:

[source,shell]
....
# mkdir /usr/local/etc/openldap/private
....
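Since this directory will hold the private keys generated in the next step, it is sensible to restrict it to `root` before continuing. This hardening step is a suggestion, not a requirement of the port:

[source,shell]
....
# chmod 700 /usr/local/etc/openldap/private
....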
The next phase is to configure the Certificate Authority. The following commands must be executed from [.filename]#/usr/local/etc/openldap/private#. This is important as the file permissions need to be restrictive and users should not have access to these files. More detailed information about certificates and their parameters can be found in crossref:security[openssl,"OpenSSL"]. To create the Certificate Authority, start with this command and follow the prompts:

[source,shell]
....
# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt
....

The entries for the prompts may be generic _except_ for the `Common Name`. This entry must be _different_ from the system hostname. If this will be a self-signed certificate, prefix the hostname with `CA` for Certificate Authority.

The next task is to create a certificate signing request and a private key. Input this command and follow the prompts:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout server.key -out server.csr
....

During the certificate generation process, be sure to correctly set the `Common Name` attribute. The Certificate Signing Request must be signed with the Certificate Authority in order to be used as a valid certificate:

[source,shell]
....
# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial
....

The final part of the certificate generation process is to generate and sign the client certificates:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout client.key -out client.csr
# openssl x509 -req -days 365 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key
....

Remember to use the same `Common Name` attribute when prompted. When finished, ensure that a total of eight (8) new files have been generated through the preceding commands.

The daemon running the OpenLDAP server is [.filename]#slapd#.
Its configuration is performed through [.filename]#slapd.ldif#: the old [.filename]#slapd.conf# has been deprecated by OpenLDAP. http://www.openldap.org/doc/admin24/slapdconf2.html[Configuration examples] for [.filename]#slapd.ldif# are available and can also be found in [.filename]#/usr/local/etc/openldap/slapd.ldif.sample#. Options are documented in slapd-config(5). Each section of [.filename]#slapd.ldif#, like all the other LDAP attribute sets, is uniquely identified through a DN. Be sure that no blank lines are left between the `dn:` statement and the desired end of the section. In the following example, TLS will be used to implement a secure channel. The first section represents the global configuration: [.programlisting] .... # # See slapd-config(5) for details on configuration options. # This file should NOT be world readable. # dn: cn=config objectClass: olcGlobal cn: config # # # Define global ACLs to disable default read access. # olcArgsFile: /var/run/openldap/slapd.args olcPidFile: /var/run/openldap/slapd.pid olcTLSCertificateFile: /usr/local/etc/openldap/server.crt olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt #olcTLSCipherSuite: HIGH olcTLSProtocolMin: 3.1 olcTLSVerifyClient: never .... The Certificate Authority, server certificate and server private key files must be specified here. It is recommended to let the clients choose the security cipher and omit option `olcTLSCipherSuite` (incompatible with TLS clients other than [.filename]#openssl#). Option `olcTLSProtocolMin` lets the server require a minimum security level: it is recommended. While verification is mandatory for the server, it is not for the client: `olcTLSVerifyClient: never`. The second section is about the backend modules and can be configured as follows: [.programlisting] .... 
#
# Load dynamic backend modules:
#
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/local/libexec/openldap
olcModuleload: back_mdb.la
#olcModuleload: back_bdb.la
#olcModuleload: back_hdb.la
#olcModuleload: back_ldap.la
#olcModuleload: back_passwd.la
#olcModuleload: back_shell.la
....

The third section is devoted to loading the needed `ldif` schemas to be used by the databases; they are essential.

[.programlisting]
....
dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
include: file:///usr/local/etc/openldap/schema/core.ldif
include: file:///usr/local/etc/openldap/schema/cosine.ldif
include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif
include: file:///usr/local/etc/openldap/schema/nis.ldif
....

Next, the frontend configuration section:

[.programlisting]
....
# Frontend settings
#
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
olcAccess: to * by * read
#
# Sample global access control policy:
#	Root DSE: allow anyone to read it
#	Subschema (sub)entry DSE: allow anyone to read it
#	Other DSEs:
#		Allow self write access
#		Allow authenticated users read access
#		Allow anonymous users to authenticate
#
#olcAccess: to dn.base="" by * read
#olcAccess: to dn.base="cn=Subschema" by * read
#olcAccess: to *
#	by self write
#	by users read
#	by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
#
olcPasswordHash: {SSHA}
# {SSHA} is already the default for olcPasswordHash
....

Another section is devoted to the _configuration backend_; the only way to later access the OpenLDAP server configuration is as a global super-user.

[.programlisting]
....
dn: olcDatabase={0}config,cn=config objectClass: olcDatabaseConfig olcDatabase: {0}config olcAccess: to * by * none olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U .... The default administrator username is `cn=config`. Type [.filename]#slappasswd# in a shell, choose a password and use its hash in `olcRootPW`. If this option is not specified now, before [.filename]#slapd.ldif# is imported, no one will be later able to modify the _global configuration_ section. The last section is about the database backend: [.programlisting] .... ####################################################################### # LMDB database definitions ####################################################################### # dn: olcDatabase=mdb,cn=config objectClass: olcDatabaseConfig objectClass: olcMdbConfig olcDatabase: mdb olcDbMaxSize: 1073741824 olcSuffix: dc=domain,dc=example olcRootDN: cn=mdbadmin,dc=domain,dc=example # Cleartext passwords, especially for the rootdn, should # be avoided. See slappasswd(8) and slapd-config(5) for details. # Use of strong authentication encouraged. olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+ # The database directory MUST exist prior to running slapd AND # should only be accessible by the slapd and slap tools. # Mode 700 recommended. olcDbDirectory: /var/db/openldap-data # Indices to maintain olcDbIndex: objectClass eq .... This database hosts the _actual contents_ of the LDAP directory. Types other than `mdb` are available. Its super-user, not to be confused with the global one, is configured here: a (possibly custom) username in `olcRootDN` and the password hash in `olcRootPW`; [.filename]#slappasswd# can be used as before. This http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;hb=HEAD[repository] contains four examples of [.filename]#slapd.ldif#. 
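As described above, [.filename]#slappasswd# produces the hashes used for `olcRootPW`. A typical interactive run looks like this; the salted `{SSHA}` output differs on every invocation, so only a placeholder is shown:

[source,shell]
....
% slappasswd
New password:
Re-enter new password:
{SSHA}...
....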
To convert an existing [.filename]#slapd.conf# into [.filename]#slapd.ldif#, refer to http://www.openldap.org/doc/admin24/slapdconf2.html[this page] (please note that this may introduce some unnecessary options).

When the configuration is completed, [.filename]#slapd.ldif# must be placed in an empty directory. It is recommended to create it as:

[source,shell]
....
# mkdir /usr/local/etc/openldap/slapd.d/
....

Import the configuration database:

[source,shell]
....
# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif
....

Start the [.filename]#slapd# daemon:

[source,shell]
....
# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/
....

Option `-d` can be used for debugging, as specified in slapd(8). To verify that the server is running and working:

[source,shell]
....
# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
# extended LDIF
#
# LDAPv3
# base <> with scope baseObject
# filter: (objectclass=*)
# requesting: namingContexts
#

#
dn:
namingContexts: dc=domain,dc=example

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

The server must still be trusted. If that has never been done before, follow these instructions. Install the OpenSSL package or port:

[source,shell]
....
# pkg install openssl
....

From the directory where [.filename]#ca.crt# is stored (in this example, [.filename]#/usr/local/etc/openldap#), run:

[source,shell]
....
# c_rehash .
....

Both the CA and the server certificate are now correctly recognized in their respective roles. To verify this, run this command from the [.filename]#server.crt# directory:

[source,shell]
....
# openssl verify -verbose -CApath . server.crt
....

If [.filename]#slapd# was running, restart it. As stated in [.filename]#/usr/local/etc/rc.d/slapd#, to properly run [.filename]#slapd# at boot the following lines must be added to [.filename]#/etc/rc.conf#:

[.programlisting]
....
slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"
slapd_cn_config="YES"
....

[.filename]#slapd# does not provide debugging at boot. Check [.filename]#/var/log/debug.log#, `dmesg -a` and [.filename]#/var/log/messages# for this purpose.

The following example adds the group `team` and the user `john` to the `domain.example` LDAP database, which is still empty. First, create the file [.filename]#domain.ldif#:

[source,shell]
....
# cat domain.ldif
dn: dc=domain,dc=example
objectClass: dcObject
objectClass: organization
o: domain.example
dc: domain

dn: ou=groups,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: groups

dn: ou=users,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: users

dn: cn=team,ou=groups,dc=domain,dc=example
objectClass: top
objectClass: posixGroup
cn: team
gidNumber: 10001

dn: uid=john,ou=users,dc=domain,dc=example
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: John McUser
uid: john
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/john/
loginShell: /usr/bin/bash
userPassword: secret
....

See the OpenLDAP documentation for more details. Use [.filename]#slappasswd# to replace the plain text password `secret` with a hash in `userPassword`. The path specified as `loginShell` must exist in all the systems where `john` is allowed to login.

Finally, use the `mdb` administrator to modify the database:

[source,shell]
....
# ldapadd -W -D "cn=mdbadmin,dc=domain,dc=example" -f domain.ldif
....

Modifications to the _global configuration_ section can only be performed by the global super-user. For example, assume that the option `olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` was initially specified and must now be deleted. First, create a file that contains the following:

[source,shell]
....
# cat global_mod
dn: cn=config
changetype: modify
delete: olcTLSCipherSuite
....
Then, apply the modifications:

[source,shell]
....
# ldapmodify -f global_mod -x -D "cn=config" -W
....

When asked, provide the password chosen in the _configuration backend_ section. The username is not required: here, `cn=config` represents the DN of the database section to be modified. Alternatively, use `ldapmodify` to delete a single line of the database, and `ldapdelete` to delete a whole entry.

If something goes wrong, or if the global super-user cannot access the configuration backend, it is possible to delete and re-write the whole configuration:

[source,shell]
....
# rm -rf /usr/local/etc/openldap/slapd.d/
....

[.filename]#slapd.ldif# can then be edited and imported again. Please follow this procedure only when no other solution is available.

This is the configuration of the server only. The same machine can also host an LDAP client, with its own separate configuration.

[[network-dhcp]]
== Dynamic Host Configuration Protocol (DHCP)

The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to a network in order to be assigned the necessary addressing information for communication on that network. FreeBSD includes the OpenBSD version of `dhclient` which is used by the client to obtain the addressing information. FreeBSD does not install a DHCP server, but several servers are available in the FreeBSD Ports Collection. The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Informational resources are also available at http://www.isc.org/downloads/dhcp/[isc.org/downloads/dhcp/].

This section describes how to use the built-in DHCP client. It then describes how to install and configure a DHCP server.

[NOTE]
====
In FreeBSD, the man:bpf[4] device is needed by both the DHCP server and DHCP client. This device is included in the [.filename]#GENERIC# kernel that is installed with FreeBSD. Users who prefer to create a custom kernel need to keep this device if DHCP is used.
It should be noted that [.filename]#bpf# also allows privileged users to run network packet sniffers on that system.
====

=== Configuring a DHCP Client

DHCP client support is included in the FreeBSD installer, making it easy to configure a newly installed system to automatically receive its networking addressing information from an existing DHCP server. Refer to crossref:bsdinstall[bsdinstall-post,"Accounts, Time Zone, Services and Hardening"] for examples of network configuration.

When `dhclient` is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests use UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as a subnet mask, default gateway, and DNS server addresses. This information is in the form of a DHCP "lease" and is valid for a configurable time. This allows stale IP addresses for clients no longer connected to the network to automatically be reused. DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in man:dhcp-options[5].

By default, when a FreeBSD system boots, its DHCP client runs in the background, or _asynchronously_. Other startup scripts continue to run while the DHCP process completes, which speeds up system startup.

Background DHCP works well when the DHCP server responds quickly to the client's requests. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has assigned the network addressing information, they will fail. Using DHCP in _synchronous_ mode prevents this problem as it pauses startup until the DHCP configuration has completed.

This line in [.filename]#/etc/rc.conf# is used to configure background or asynchronous mode:

[.programlisting]
....
ifconfig_fxp0="DHCP"
....

This line may already exist if the system was configured to use DHCP during installation.
Replace the _fxp0_ shown in these examples with the name of the interface to be dynamically configured, as described in crossref:config[config-network-setup,“Setting Up Network Interface Cards”].

To instead configure the system to use synchronous mode, and to pause during startup while DHCP completes, use "`SYNCDHCP`":

[.programlisting]
....
ifconfig_fxp0="SYNCDHCP"
....

Additional client options are available. Search for `dhclient` in man:rc.conf[5] for details.

The DHCP client uses the following files:

* [.filename]#/etc/dhclient.conf#
+
The configuration file used by `dhclient`. Typically, this file contains only comments as the defaults are suitable for most clients. This configuration file is described in man:dhclient.conf[5].

* [.filename]#/sbin/dhclient#
+
More information about the command itself can be found in man:dhclient[8].

* [.filename]#/sbin/dhclient-script#
+
The FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any user modification to function properly.

* [.filename]#/var/db/dhclient.leases.interface#
+
The DHCP client keeps a database of valid leases in this file, which is written as a log and is described in man:dhclient.leases[5].

[[network-dhcp-server]]
=== Installing and Configuring a DHCP Server

This section demonstrates how to configure a FreeBSD system to act as a DHCP server using the Internet Systems Consortium (ISC) implementation of the DHCP server. This implementation and its documentation can be installed using the package:net/isc-dhcp44-server[] package or port.

The installation of package:net/isc-dhcp44-server[] installs a sample configuration file. Copy [.filename]#/usr/local/etc/dhcpd.conf.example# to [.filename]#/usr/local/etc/dhcpd.conf# and make any edits to this new file.

The configuration file is comprised of declarations for subnets and hosts which define the information that is provided to DHCP clients.
For example, these lines configure the following:

[.programlisting]
....
option domain-name "example.org";<.>
option domain-name-servers ns1.example.org;<.>
option subnet-mask 255.255.255.0;<.>

default-lease-time 600;<.>
max-lease-time 72400;<.>
ddns-update-style none;<.>

subnet 10.254.239.0 netmask 255.255.255.224 {
  range 10.254.239.10 10.254.239.20;<.>
  option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.>
}

host fantasia {
  hardware ethernet 08:00:07:26:c0:a5;<.>
  fixed-address fantasia.fugue.com;<.>
}
....

<.> This option specifies the default search domain that will be provided to clients. Refer to man:resolv.conf[5] for more information.
<.> This option specifies a comma separated list of DNS servers that the client should use. They can be listed by their Fully Qualified Domain Names (FQDN), as seen in the example, or by their IP addresses.
<.> The subnet mask that will be provided to clients.
<.> The default lease expiry time in seconds. A client can be configured to override this value.
<.> The maximum allowed length of time, in seconds, for a lease. Should a client request a longer lease, a lease will still be issued, but it will only be valid for `max-lease-time`.
<.> The default of `none` disables dynamic DNS updates. Changing this to `interim` configures the DHCP server to update a DNS server whenever it hands out a lease so that the DNS server knows which IP addresses are associated with which computers in the network. Do not change the default setting unless the DNS server has been configured to support dynamic DNS.
<.> This line creates a pool of available IP addresses which are reserved for allocation to DHCP clients. The range of addresses must be valid for the network or subnet specified in the previous line.
<.> Declares the default gateway that is valid for the network or subnet specified before the opening `{` bracket.
<.> Specifies the hardware MAC address of a client so that the DHCP server can recognize the client when it makes a request.
<.> Specifies that this host should always be given the same IP address. Using the hostname is correct, since the DHCP server will resolve the hostname before returning the lease information.

This configuration file supports many more options. Refer to dhcpd.conf(5), installed with the server, for details and examples.

Once the configuration of [.filename]#dhcpd.conf# is complete, enable the DHCP server in [.filename]#/etc/rc.conf#:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Replace the `dc0` with the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests.

Start the server by issuing the following command:

[source,shell]
....
# service isc-dhcpd start
....

Any future changes to the configuration of the server will require the dhcpd service to be stopped and then started using man:service[8].

The DHCP server uses the following files. Note that the manual pages are installed with the server software.

* [.filename]#/usr/local/sbin/dhcpd#
+
More information about the dhcpd server can be found in dhcpd(8).

* [.filename]#/usr/local/etc/dhcpd.conf#
+
The server configuration file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server. This configuration file is described in dhcpd.conf(5).

* [.filename]#/var/db/dhcpd.leases#
+
The DHCP server keeps a database of leases it has issued in this file, which is written as a log. Refer to dhcpd.leases(5), which gives a slightly longer description.

* [.filename]#/usr/local/sbin/dhcrelay#
+
This daemon is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. If this functionality is required, install the package:net/isc-dhcp44-relay[] package or port.
The installation includes dhcrelay(8) which provides more detail.

[[network-dns]]
== Domain Name System (DNS)

Domain Name System (DNS) is the protocol through which domain names are mapped to IP addresses, and vice versa. DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information. It is not necessary to run a name server to perform DNS lookups on a system.

The following table describes some of the terms associated with DNS:

.DNS Terminology
[cols="1,1", frame="none", options="header"]
|===
| Term
| Definition

|Forward DNS
|Mapping of hostnames to IP addresses.

|Origin
|Refers to the domain covered in a particular zone file.

|Resolver
|A system process through which a machine queries a name server for zone information.

|Reverse DNS
|Mapping of IP addresses to hostnames.

|Root zone
|The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory.

|Zone
|An individual domain, subdomain, or portion of the DNS administered by the same authority.
|===

Examples of zones:

* `.` is how the root zone is usually referred to in documentation.
* `org.` is a Top Level Domain (TLD) under the root zone.
* `example.org.` is a zone under the `org.` TLD.
* `1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall under the `192.168.1.*` IP address space.

As one can see, the more specific part of a hostname appears to its left. For example, `example.org.` is more specific than `org.`, as `org.` is more specific than the root zone. The layout of each part of a hostname is much like a file system: the [.filename]#/dev# directory falls within the root, and so on.

=== Reasons to Run a Name Server

Name servers generally come in two forms: authoritative name servers, and caching (also known as resolving) name servers.
An authoritative name server is needed when:

* One wants to serve DNS information to the world, replying authoritatively to queries.
* A domain, such as `example.org`, is registered and IP addresses need to be assigned to hostnames under it.
* An IP address block requires reverse DNS entries (IP to hostname).
* A backup or second name server, called a slave, will reply to queries.

A caching name server is needed when:

* A local DNS server may cache and respond more quickly than querying an outside name server.

When one queries for `www.FreeBSD.org`, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Additional queries will not have to go outside the local network, since the information is cached locally.

=== DNS Server Configuration

Unbound is provided in the FreeBSD base system. By default, it will provide DNS resolution to the local machine only. While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the FreeBSD Ports Collection.

To enable Unbound, add the following to [.filename]#/etc/rc.conf#:

[.programlisting]
....
local_unbound_enable="YES"
....

Any existing nameservers in [.filename]#/etc/resolv.conf# will be configured as forwarders in the new Unbound configuration.

[NOTE]
====
If any of the listed nameservers do not support DNSSEC, local DNS resolution will fail. Be sure to test each nameserver and remove any that fail the test. The following command will show the trust tree or a failure for a nameserver running on `192.168.1.1`:

[source,shell]
....
% drill -S FreeBSD.org @192.168.1.1
....
====

Once each nameserver is confirmed to support DNSSEC, start Unbound:

[source,shell]
....
# service local_unbound onestart
....
This will take care of updating [.filename]#/etc/resolv.conf# so that queries for DNSSEC secured domains will now work. For example, run the following to validate the FreeBSD.org DNSSEC trust tree:

[source,shell]
....
% drill -S FreeBSD.org
;; Number of trusted keys: 1
;; Chasing: freebsd.org. A

DNSSEC Trust tree:
freebsd.org. (A)
|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)
    |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)
    |---freebsd.org. (DS keytag: 32659 digest type: 2)
        |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)
            |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)
            |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)
            |---org. (DS keytag: 21366 digest type: 1)
            |   |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
            |       |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
            |---org. (DS keytag: 21366 digest type: 2)
                |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
                    |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
;; Chase successful
....

=== Authoritative Name Server Configuration

FreeBSD does not provide authoritative name server software in the base system. Users are encouraged to install third party applications, such as the package:dns/nsd[] or package:dns/bind918[] package or port.

[[network-zeroconf]]
== Zero-configuration Networking (mDNS/DNS-SD)

https://en.wikipedia.org/wiki/Zero-configuration_networking[Zero-configuration networking] (sometimes referred to as _Zeroconf_) is a set of technologies, which simplify network configuration. The main parts of Zeroconf are:

- Link-Local Addressing providing automatic assignment of numeric network addresses.
- Multicast DNS (_mDNS_) providing automatic distribution and resolution of hostnames.
- DNS-Based Service Discovery (_DNS-SD_) providing automatic discovery of service instances.

=== Configuring and Starting Avahi

One of the popular implementations of zeroconf is https://avahi.org/[Avahi]. Avahi can be installed and configured with the following commands:

[source,shell]
....
# pkg install avahi-app nss_mdns
# grep -q '^hosts:.*\<mdns\>' /etc/nsswitch.conf || sed -i "" 's/^hosts: .*/& mdns/' /etc/nsswitch.conf
# service dbus enable
# service avahi-daemon enable
# service dbus start
# service avahi-daemon start
....

[[network-apache]]
== Apache HTTP Server

The open source Apache HTTP Server is the most widely used web server. FreeBSD does not install this web server by default, but it can be installed from the package:www/apache24[] package or port.

This section summarizes how to configure and start version 2._x_ of the Apache HTTP Server on FreeBSD. For more detailed information about Apache 2.X and its configuration directives, refer to http://httpd.apache.org/[httpd.apache.org].

=== Configuring and Starting Apache

In FreeBSD, the main Apache HTTP Server configuration file is installed as [.filename]#/usr/local/etc/apache2x/httpd.conf#, where _x_ represents the version number. This ASCII text file begins comment lines with a `+#+`. The most frequently modified directives are:

`ServerRoot "/usr/local"`::
Specifies the default directory hierarchy for the Apache installation. Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root and configuration files are stored in the [.filename]#etc/apache2x# subdirectory.

`ServerAdmin \you@example.com`::
Change this to the email address to receive problems with the server. This address also appears on some server-generated pages, such as error documents.

`ServerName www.example.com:80`::
Allows an administrator to set a hostname which is sent back to clients for the server. For example, `www` can be used instead of the actual hostname. If the system does not have a registered DNS name, enter its IP address instead. If the server will listen on an alternate port, change `80` to the alternate port number.

`DocumentRoot "/usr/local/www/apache2_x_/data"`::
The directory where documents will be served from.
By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.

It is always a good idea to make a backup copy of the default Apache configuration file before making changes. When the configuration of Apache is complete, save the file and verify the configuration using `apachectl`. Running `apachectl configtest` should return `Syntax OK`.

To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache24_enable="YES"
....

If Apache should be started with non-default options, the following line may be added to [.filename]#/etc/rc.conf# to specify the needed flags:

[.programlisting]
....
apache24_flags=""
....

If `apachectl` does not report configuration errors, start `httpd` now:

[source,shell]
....
# service apache24 start
....

The `httpd` service can be tested by entering `http://_localhost_` in a web browser, replacing _localhost_ with the fully-qualified domain name of the machine running `httpd`. The default web page that is displayed is [.filename]#/usr/local/www/apache24/data/index.html#.

The Apache configuration can be tested for errors after making subsequent configuration changes while `httpd` is running using the following command:

[source,shell]
....
# service apache24 configtest
....

[NOTE]
====
It is important to note that `configtest` is not an man:rc[8] standard, and should not be expected to work for all startup scripts.
====

=== Virtual Hosting

Virtual hosting allows multiple websites to run on one Apache server. The virtual hosts can be _IP-based_ or _name-based_. IP-based virtual hosting uses a different IP address for each website. Name-based virtual hosting uses the client's HTTP/1.1 headers to figure out the hostname, which allows the websites to share the same IP address.

To set up Apache to use name-based virtual hosting, add a `VirtualHost` block for each website.
For example, for the webserver named `www.domain.tld` with a virtual domain of `www.someotherdomain.tld`, add the following entries to [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
    ServerName www.domain.tld
    DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
    ServerName www.someotherdomain.tld
    DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

For each virtual host, replace the values for `ServerName` and `DocumentRoot` with the values to be used.

For more information about setting up virtual hosts, consult the official Apache documentation at: http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache Modules

Apache uses modules to augment the functionality provided by the basic server. Refer to http://httpd.apache.org/docs/current/mod/[http://httpd.apache.org/docs/current/mod/] for a complete listing of and the configuration details for the available modules.

In FreeBSD, some modules can be compiled with the package:www/apache24[] port. Type `make config` within [.filename]#/usr/ports/www/apache24# to see which modules are available and which are enabled by default. If the module is not compiled with the port, the FreeBSD Ports Collection provides an easy way to install many modules. This section describes three of the most commonly used modules.

==== SSL support

At one point, support for SSL inside of Apache required a secondary module called [.filename]#mod_ssl#. This is no longer the case and the default install of Apache comes with SSL built into the web server. An example of how to enable support for SSL websites is available in the installed file [.filename]#httpd-ssl.conf#, inside of the [.filename]#/usr/local/etc/apache24/extra# directory. Inside this directory is also a sample file named [.filename]#ssl.conf-sample#. It is recommended that both files be evaluated to properly set up secure websites in the Apache web server.
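As a rough orientation, a stripped-down SSL virtual host of the kind defined in [.filename]#httpd-ssl.conf# might look like the following sketch; the hostname and the certificate paths are placeholders, not files created by the port:

[.programlisting]
....
<VirtualHost _default_:443>
    ServerName www.example.com:443
    DocumentRoot "/usr/local/www/apache24/data"
    SSLEngine on
    SSLCertificateFile "/usr/local/etc/apache24/server.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache24/server.key"
</VirtualHost>
....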
After the configuration of SSL is complete, the following line must be uncommented in the main [.filename]#httpd.conf# to activate the changes on the next restart or reload of Apache:

[.programlisting]
....
#Include etc/apache24/extra/httpd-ssl.conf
....

[WARNING]
====
SSL version two and version three have known vulnerability issues. It is highly recommended TLS versions 1.2 and 1.3 be enabled in place of the older SSL options. This can be accomplished by setting the following options in the [.filename]#ssl.conf#:
====

[.programlisting]
....
SSLProtocol all -SSLv3 -SSLv2 +TLSv1.2 +TLSv1.3
SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
....

To complete the configuration of SSL in the web server, uncomment the following line to ensure that the configuration will be pulled into Apache during restart or reload:

[.programlisting]
....
# Secure (SSL/TLS) connections
Include etc/apache24/extra/httpd-ssl.conf
....

The following lines must also be uncommented in the [.filename]#httpd.conf# to fully support SSL in Apache:

[.programlisting]
....
LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so
LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
LoadModule ssl_module libexec/apache24/mod_ssl.so
....

The next step is to work with a certificate authority to have the appropriate certificates installed on the system. This will set up a chain of trust for the site and prevent any warnings of self-signed certificates.

==== [.filename]#mod_perl#

The [.filename]#mod_perl# module makes it possible to write Apache modules in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time.

The [.filename]#mod_perl# can be installed using the package:www/mod_perl2[] package or port. Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/index.html].
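As a hedged illustration, not part of the port's default configuration, loading the module and exposing the mod_perl status page might look like this in [.filename]#httpd.conf#; the `/perl-status` location is an arbitrary choice:

[.programlisting]
....
LoadModule perl_module libexec/apache24/mod_perl.so

<Location /perl-status>
    SetHandler perl-script
    PerlResponseHandler Apache2::Status
</Location>
....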
==== [.filename]#mod_php#

_PHP: Hypertext Preprocessor_ (PHP) is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of allowing web developers to write dynamically generated webpages quickly.

Support for PHP for Apache, and any other feature written in the language, can be added by installing the appropriate port.

For all supported versions, search the package database using `pkg`:

[source,shell]
....
# pkg search php
....

A list will be displayed including the versions and additional features they provide. The components are completely modular, meaning features are enabled by installing the appropriate port. To install PHP version 7.4 for Apache, issue the following command:

[source,shell]
....
# pkg install mod_php74
....

If any dependency packages need to be installed, they will be installed as well.

By default, PHP will not be enabled. The following lines will need to be added to the Apache configuration file located in [.filename]#/usr/local/etc/apache24# to make it active:

[.programlisting]
....
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>
....

In addition, the `DirectoryIndex` in the configuration file will also need to be updated and Apache will either need to be restarted or reloaded for the changes to take effect.

Support for many of the PHP features may also be installed by using `pkg`. For example, to install support for XML or SSL, install their respective ports:

[source,shell]
....
# pkg install php74-xml php74-openssl
....

As before, the Apache configuration will need to be reloaded for the changes to take effect, even in cases where it was just a module install.

To perform a graceful restart to reload the configuration, issue the following command:

[source,shell]
....
# apachectl graceful
....
Once the install is complete, there are two methods of obtaining the installed PHP support modules and the environmental information of the build. The first is to install the full PHP binary and run the command to gain the information:

[source,shell]
....
# pkg install php74
....

[source,shell]
....
# php -i | less
....

It is necessary to pass the output to a pager, such as `more` or `less`, to more easily digest the amount of output.

Finally, to make any changes to the global configuration of PHP there is a well documented file installed into [.filename]#/usr/local/etc/php.ini#. At the time of install, this file will not exist because there are two versions to choose from, one is [.filename]#php.ini-development# and the other is [.filename]#php.ini-production#. These are starting points to assist administrators in their deployment.

==== HTTP2 Support

Apache support for the HTTP2 protocol is included by default when installing the port with `pkg`. The new version of HTTP includes many improvements over the previous version, including utilizing a single connection to a website, reducing overall roundtrips of TCP connections. Also, packet header data is compressed and HTTP2 requires encryption by default.

When Apache is configured to only use HTTP2, web browsers will require secure, encrypted HTTPS connections. When Apache is configured to use both versions, HTTP1.1 will be considered a fall back option if any issues arise during the connection. While this change does require administrators to make changes, they are positive and equate to a more secure Internet for everyone. The changes are only required for sites not currently implementing SSL and TLS.

[NOTE]
====
This configuration depends on the previous sections, including TLS support. It is recommended those instructions be followed before continuing with this configuration.
====

Start the process by enabling the http2 module by uncommenting the line in [.filename]#/usr/local/etc/apache24/httpd.conf# and replacing the mpm_prefork module with mpm_event, as the former does not support HTTP2.

[.programlisting]
....
LoadModule http2_module libexec/apache24/mod_http2.so
LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so
....

[NOTE]
====
There is a separate [.filename]#mod_http2# port that is available. It exists to deliver security and bug fixes quicker than the module installed with the bundled [.filename]#apache24# port. It is not required for HTTP2 support but is available. When installed, the [.filename]#mod_h2.so# should be used in place of [.filename]#mod_http2.so# in the Apache configuration.
====

There are two methods to implement HTTP2 in Apache; one way is globally for all sites and each VirtualHost running on the system. To enable HTTP2 globally, add the following line under the ServerName directive:

[.programlisting]
....
Protocols h2 http/1.1
....

[NOTE]
====
To enable HTTP2 over plaintext, use `h2 h2c http/1.1` in the [.filename]#httpd.conf#.
====

Having the h2c here will allow plaintext HTTP2 data to pass on the system but is not recommended. In addition, using the http/1.1 here will allow fallback to the HTTP1.1 version of the protocol should it be needed by the system.

To enable HTTP2 for individual VirtualHosts, add the same line within the VirtualHost directive in either [.filename]#httpd.conf# or [.filename]#httpd-ssl.conf#.

Reload the configuration using the `apachectl reload` command and test the configuration by using either of the following methods after visiting one of the hosted pages:

[source,shell]
....
# grep "HTTP/2.0" /var/log/httpd-access.log
....

This should return something similar to the following:

[.programlisting]
....
192.168.1.205 - - [18/Oct/2020:18:34:36 -0400] "GET / HTTP/2.0" 304 -
192.0.2.205 - - [18/Oct/2020:19:19:57 -0400] "GET / HTTP/2.0" 304 -
192.0.0.205 - - [18/Oct/2020:19:20:52 -0400] "GET / HTTP/2.0" 304 -
192.0.2.205 - - [18/Oct/2020:19:23:10 -0400] "GET / HTTP/2.0" 304 -
....

The other method is using the web browser's built-in site debugger or `tcpdump`; however, using either method is beyond the scope of this document.

Support for HTTP2 reverse proxy connections is provided by the [.filename]#mod_proxy_http2.so# module. When configuring the ProxyPass or RewriteRules [P] statements, they should use h2:// for the connection.

=== Dynamic Websites

In addition to mod_perl and mod_php, other languages are available for creating dynamic web content. These include Django and Ruby on Rails.

==== Django

Django is a BSD-licensed framework designed to allow developers to write high performance, elegant web applications quickly. It provides an object-relational mapper so that data types are developed as Python objects. A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL. It also provides an extensible template system so that the logic of the application is separated from the HTML presentation.

Django depends on [.filename]#mod_python# and an SQL database engine. In FreeBSD, the package:www/py-django[] port automatically installs [.filename]#mod_python# and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite. To change the database engine, type `make config` within [.filename]#/usr/ports/www/py-django#, then install the port.

Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter. This interpreter is used to call the application for specific URLs on the site.
To configure Apache to pass requests for certain URLs to the web application, add the following to [.filename]#httpd.conf#, specifying the full path to the project directory:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/dir/to/the/django/packages/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonAutoReload On
    PythonDebug On
</Location>
....

Refer to https://docs.djangoproject.com[https://docs.djangoproject.com] for more information on how to use Django.

==== Ruby on Rails

Ruby on Rails is another open source web framework that provides a full development stack. It is optimized to make web developers more productive and capable of writing powerful applications quickly. On FreeBSD, it can be installed using the package:www/rubygem-rails[] package or port.

Refer to http://guides.rubyonrails.org[http://guides.rubyonrails.org] for more information on how to use Ruby on Rails.

[[network-ftp]]
== File Transfer Protocol (FTP)

The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. FreeBSD includes FTP server software, ftpd, in the base system.

FreeBSD provides several configuration files for controlling access to the FTP server. This section summarizes these files. Refer to man:ftpd[8] for more details about the built-in FTP server.

=== Configuration

The most important configuration step is deciding which accounts will be allowed access to the FTP server. A FreeBSD system has a number of system accounts which should not be allowed FTP access. The list of users disallowed any FTP access can be found in [.filename]#/etc/ftpusers#. By default, it includes system accounts. Additional users that should not be allowed access to FTP can be added.

In some cases it may be desirable to restrict the access of some users without preventing them completely from using FTP.
This can be accomplished by creating [.filename]#/etc/ftpchroot# as described in man:ftpchroot[5]. This file lists users and groups subject to FTP access restrictions.

To enable anonymous FTP access to the server, create a user named `ftp` on the FreeBSD system. Users will then be able to log on to the FTP server with a username of `ftp` or `anonymous`. When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password. The FTP server will call man:chroot[2] when an anonymous user logs in, to restrict access to only the home directory of the `ftp` user.

There are two text files that can be created to specify welcome messages to be displayed to FTP clients. The contents of [.filename]#/etc/ftpwelcome# will be displayed to users before they reach the login prompt. After a successful login, the contents of [.filename]#/etc/ftpmotd# will be displayed. Note that the path to this file is relative to the login environment, so the contents of [.filename]#~ftp/etc/ftpmotd# would be displayed for anonymous users.

Once the FTP server has been configured, set the appropriate variable in [.filename]#/etc/rc.conf# to start the service during boot:

[.programlisting]
....
ftpd_enable="YES"
....

To start the service now:

[source,shell]
....
# service ftpd start
....

Test the connection to the FTP server by typing:

[source,shell]
....
% ftp localhost
....

The ftpd daemon uses man:syslog[3] to log messages. By default, the system log daemon will write messages related to FTP in [.filename]#/var/log/xferlog#. The location of the FTP log can be modified by changing the following line in [.filename]#/etc/syslog.conf#:

[.programlisting]
....
ftp.info      /var/log/xferlog
....

[NOTE]
====
Be aware of the potential problems involved with running an anonymous FTP server. In particular, think twice about allowing anonymous users to upload files.
It may turn out that the FTP site becomes a forum for the trade of unlicensed commercial software or worse. If anonymous FTP uploads are required, then verify the permissions so that these files cannot be read by other anonymous users until they have been reviewed by an administrator.
====

[[network-samba]]
== File and Print Services for Microsoft(R) Windows(R) Clients (Samba)

Samba is a popular open source software package that provides file and print services using the SMB/CIFS protocol. This protocol is built into Microsoft(R) Windows(R) systems. It can be added to non-Microsoft(R) Windows(R) systems by installing the Samba client libraries. The protocol allows clients to access shared data and printers. These shares can be mapped as a local disk drive, and shared printers can be used as if they were local printers.

On FreeBSD, the Samba client libraries can be installed using the package:net/samba416[] port or package. The client provides the ability for a FreeBSD system to access SMB/CIFS shares in a Microsoft(R) Windows(R) network.

A FreeBSD system can also be configured to act as a Samba server by installing the same package:net/samba416[] port or package. This allows the administrator to create SMB/CIFS shares on the FreeBSD system which can be accessed by clients running Microsoft(R) Windows(R) or the Samba client libraries.

=== Server Configuration

Samba is configured in [.filename]#/usr/local/etc/smb4.conf#. This file must be created before Samba can be used.

A simple [.filename]#smb4.conf# to share directories and printers with Windows(R) clients in a workgroup is shown here. For more complex setups involving LDAP or Active Directory, it is easier to use man:samba-tool[8] to create the initial [.filename]#smb4.conf#.

[.programlisting]
....
[global]
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = ExampleMachine
wins support = Yes
security = user
passdb backend = tdbsam

# Example: share /usr/src accessible only to 'developer' user
[src]
path = /usr/src
valid users = developer
writable = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755
....

==== Global Settings

Settings that describe the network are added in [.filename]#/usr/local/etc/smb4.conf#:

`workgroup`::
The name of the workgroup to be served.

`netbios name`::
The NetBIOS name by which a Samba server is known. By default, it is the same as the first component of the host's DNS name.

`server string`::
The string that will be displayed in the output of `net view` and some other networking tools that seek to display descriptive text about the server.

`wins support`::
Whether Samba will act as a WINS server. Do not enable support for WINS on more than one server on the network.

==== Security Settings

The most important settings in [.filename]#/usr/local/etc/smb4.conf# are the security model and the backend password format. These directives control the options:

`security`::
If the clients use usernames that are the same as their usernames on the FreeBSD machine, user level security should be used. `security = user` is the default security policy and it requires clients to first log on before they can access shared resources.
+
Refer to man:smb.conf[5] to learn about other supported settings for the `security` option.

`passdb backend`::
Samba has several different backend authentication models. Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file. The recommended authentication method, `tdbsam`, is ideal for simple networks and is covered here. For larger or more complex networks, `ldapsam` is recommended. `smbpasswd` was the former default and is now obsolete.
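As a further illustration of how the share-level directives combine, a read-only share restricted to members of a group could be declared as follows. This is only a sketch: the share name, path, and `webteam` group are hypothetical and not part of the base configuration.

[.programlisting]
....
# Hypothetical example: expose /usr/local/www read-only
# to members of the 'webteam' group (the '@' prefix means a group)
[www]
path = /usr/local/www
valid users = @webteam
read only = yes
browsable = yes
guest ok = no
....

The same directives shown in the `[src]` example above apply; only their values differ per share.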
==== Samba Users

FreeBSD user accounts must be mapped to the `SambaSAMAccount` database for Windows(R) clients to access the share. Map existing FreeBSD user accounts using man:pdbedit[8]:

[source,shell]
....
# pdbedit -a -u username
....

This section has only mentioned the most commonly used settings. Refer to the https://wiki.samba.org[Official Samba Wiki] for additional information about the available configuration options.

=== Starting Samba

To enable Samba at boot time, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
samba_server_enable="YES"
....

To start Samba now:

[source,shell]
....
# service samba_server start
Performing sanity check on Samba configuration: OK
Starting nmbd.
Starting smbd.
....

Samba consists of three separate daemons. Both the nmbd and smbd daemons are started by `samba_server_enable`. If winbind name resolution is also required, set:

[.programlisting]
....
winbindd_enable="YES"
....

Samba can be stopped at any time by typing:

[source,shell]
....
# service samba_server stop
....

Samba is a complex software suite with functionality that allows broad integration with Microsoft(R) Windows(R) networks. For more information about functionality beyond the basic configuration described here, refer to https://www.samba.org[https://www.samba.org].

[[network-ntp]]
== Clock Synchronization with NTP

Over time, a computer's clock is prone to drift. This is problematic as many network services require the computers on a network to share the same accurate time. Accurate time is also needed to ensure that file timestamps stay consistent. The Network Time Protocol (NTP) is one way to provide clock accuracy in a network.

FreeBSD includes man:ntpd[8] which can be configured to query other NTP servers to synchronize the clock on that machine or to provide time services to other computers in the network.

This section describes how to configure ntpd on FreeBSD.
Further documentation can be found in [.filename]#/usr/share/doc/ntp/# in HTML format.

=== NTP Configuration

On FreeBSD, the built-in ntpd can be used to synchronize a system's clock. ntpd is configured using man:rc.conf[5] variables and [.filename]#/etc/ntp.conf#, as detailed in the following sections.

ntpd communicates with its network peers using UDP packets. Any firewalls between the machine and its NTP peers must be configured to allow UDP packets in and out on port 123.

==== The [.filename]#/etc/ntp.conf# file

ntpd reads [.filename]#/etc/ntp.conf# to determine which NTP servers to query. Choosing several NTP servers is recommended in case one of the servers becomes unreachable or its clock proves unreliable. As ntpd receives responses, it favors reliable servers over the less reliable ones. The servers which are queried can be local to the network, provided by an ISP, or selected from an http://support.ntp.org/bin/view/Servers/WebHome[online list of publicly accessible NTP servers]. When choosing a public NTP server, select one that is geographically close and review its usage policy.

The `pool` configuration keyword selects one or more servers from a pool of servers. An http://support.ntp.org/bin/view/Servers/NTPPoolServers[online list of publicly accessible NTP pools] is available, organized by geographic area. In addition, FreeBSD provides a project-sponsored pool, `0.freebsd.pool.ntp.org`.

.Sample [.filename]#/etc/ntp.conf#
[example]
====
This is a simple example of an [.filename]#ntp.conf# file. It can safely be used as-is; it contains the recommended `restrict` options for operation on a publicly-accessible network connection.

[.programlisting]
....
# Disallow ntpq control/query access. Allow peers to be added only
# based on pool and server statements in this file.
restrict default limited kod nomodify notrap noquery nopeer
restrict source  limited kod nomodify notrap noquery

# Allow unrestricted access from localhost for queries and control.
restrict 127.0.0.1
restrict ::1

# Add a specific server.
server ntplocal.example.com iburst

# Add FreeBSD pool servers until 3-6 good servers are available.
tos minclock 3 maxclock 6
pool 0.freebsd.pool.ntp.org iburst

# Use a local leap-seconds file.
leapfile "/var/db/ntpd.leap-seconds.list"
....
====

The format of this file is described in man:ntp.conf[5]. The descriptions below provide a quick overview of just the keywords used in the sample file above.

By default, an NTP server is accessible to any network host. The `restrict` keyword controls which systems can access the server. Multiple `restrict` entries are supported, each one refining the restrictions given in previous statements. The values shown in the example grant the local system full query and control access, while allowing remote systems only the ability to query the time. For more details, refer to the `Access Control Support` subsection of man:ntp.conf[5].

The `server` keyword specifies a single server to query. The file can contain multiple server keywords, with one server listed on each line. The `pool` keyword specifies a pool of servers. ntpd will add one or more servers from this pool as needed to reach the number of peers specified using the `tos minclock` value. The `iburst` keyword directs ntpd to perform a burst of eight quick packet exchanges with a server when contact is first established, to help quickly synchronize system time.

The `leapfile` keyword specifies the location of a file containing information about leap seconds. The file is updated automatically by man:periodic[8]. The file location specified by this keyword must match the location set in the `ntp_db_leapfile` variable in [.filename]#/etc/rc.conf#.

==== NTP entries in [.filename]#/etc/rc.conf#

Set `ntpd_enable=YES` to start ntpd at boot time. Once `ntpd_enable=YES` has been added to [.filename]#/etc/rc.conf#, ntpd can be started immediately without rebooting the system by typing:

[source,shell]
....
# service ntpd start
....

Only `ntpd_enable` must be set to use ntpd. The [.filename]#rc.conf# variables listed below may also be set as needed.

Set `ntpd_sync_on_start=YES` to allow ntpd to step the clock any amount, one time at startup. Normally ntpd will log an error message and exit if the clock is off by more than 1000 seconds. This option is especially useful on systems without a battery-backed realtime clock.

Set `ntpd_oomprotect=YES` to protect the ntpd daemon from being killed by the system attempting to recover from an Out Of Memory (OOM) condition.

Set `ntpd_config=` to the location of an alternate [.filename]#ntp.conf# file.

Set `ntpd_flags=` to contain any other ntpd flags as needed, but avoid using these flags which are managed internally by [.filename]#/etc/rc.d/ntpd#:

* `-p` (pid file location)
* `-c` (set `ntpd_config=` instead)

==== ntpd and the unprivileged `ntpd` user

ntpd on FreeBSD can start and run as an unprivileged user. Doing so requires the man:mac_ntpd[4] policy module. The [.filename]#/etc/rc.d/ntpd# startup script first examines the NTP configuration. If possible, it loads the `mac_ntpd` module, then starts ntpd as unprivileged user `ntpd` (user id 123). To avoid problems with file and directory access, the startup script will not automatically start ntpd as `ntpd` when the configuration contains any file-related options.

The presence of any of the following in `ntpd_flags` requires manual configuration as described below to run as the `ntpd` user:

* `-f` or `--driftfile`
* `-i` or `--jaildir`
* `-k` or `--keyfile`
* `-l` or `--logfile`
* `-s` or `--statsdir`

The presence of any of the following keywords in [.filename]#ntp.conf# requires manual configuration as described below to run as the `ntpd` user:

* crypto
* driftfile
* key
* logdir
* statsdir

To manually configure ntpd to run as user `ntpd`:

* Ensure that the `ntpd` user has access to all the files and directories specified in the configuration.
* Arrange for the `mac_ntpd` module to be loaded or compiled into the kernel. See man:mac_ntpd[4] for details.
* Set `ntpd_user="ntpd"` in [.filename]#/etc/rc.conf#

=== Using NTP with a PPP Connection

ntpd does not need a permanent connection to the Internet to function properly. However, if a PPP connection is configured to dial out on demand, NTP traffic should be prevented from triggering a dial out or keeping the connection alive. This can be configured with `filter` directives in [.filename]#/etc/ppp/ppp.conf#. For example:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

For more details, refer to the `PACKET FILTERING` section in man:ppp[8] and the examples in [.filename]#/usr/share/examples/ppp/#.

[NOTE]
====
Some Internet access providers block low-numbered ports, preventing NTP from functioning since replies never reach the machine.
====

[[network-iscsi]]
== iSCSI Initiator and Target Configuration

iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level.

In iSCSI terminology, the system that shares the storage is known as the _target_. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disk(s) are formatted with ZFS, a zvol can be created to use as the iSCSI storage.

The clients which access the iSCSI storage are called _initiators_. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in [.filename]#/dev/# and the device must be separately formatted and mounted.
FreeBSD provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a FreeBSD system as a target or an initiator.

[[network-iscsi-target]]
=== Configuring an iSCSI Target

To configure an iSCSI target, create the [.filename]#/etc/ctl.conf# configuration file, add a line to [.filename]#/etc/rc.conf# to make sure the man:ctld[8] daemon is automatically started at boot, and then start the daemon.

The following is an example of a simple [.filename]#/etc/ctl.conf# configuration file. Refer to man:ctl.conf[5] for a complete description of this file's available options.

[.programlisting]
....
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
	listen [::]
}

target iqn.2012-06.com.example:target0 {
	auth-group no-authentication
	portal-group pg0

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

The first entry defines the `pg0` portal group. Portal groups define which network addresses the man:ctld[8] daemon will listen on. The `discovery-auth-group no-authentication` entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure man:ctld[8] to listen on all IPv4 (`listen 0.0.0.0`) and IPv6 (`listen [::]`) addresses on the default port of 3260.

It is not necessary to define a portal group as there is a built-in portal group called `default`. In this case, the difference between `default` and `pg0` is that with `default`, target discovery is always denied, while with `pg0`, it is always allowed.

The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI or a named group of LUNs. This example uses the latter meaning, where `iqn.2012-06.com.example:target0` is the target name. This target name is suitable for testing purposes. For actual use, change `com.example` to the real domain name, reversed.
The `2012-06` represents the year and month of acquiring control of that domain name, and `target0` can be any value. Any number of targets can be defined in this configuration file.

The `auth-group no-authentication` line allows all initiators to connect to the specified target and `portal-group pg0` makes the target reachable through the `pg0` portal group.

The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. The `path /data/target0-0` line defines the full path to a file or zvol backing the LUN. That path must exist before starting man:ctld[8]. The second line is optional and specifies the size of the LUN.

Next, to make sure the man:ctld[8] daemon is started at boot, add this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
ctld_enable="YES"
....

To start man:ctld[8] now, run this command:

[source,shell]
....
# service ctld start
....

As the man:ctld[8] daemon is started, it reads [.filename]#/etc/ctl.conf#. If this file is edited after the daemon starts, use this command so that the changes take effect immediately:

[source,shell]
....
# service ctld reload
....

==== Authentication

The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets. To require a username and password to access targets, modify the configuration as follows:

[.programlisting]
....
auth-group ag0 {
	chap username1 secretsecret
	chap username2 anothersecret
}

portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
	listen [::]
}

target iqn.2012-06.com.example:target0 {
	auth-group ag0
	portal-group pg0
	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

The `auth-group` section defines username and password pairs. An initiator trying to connect to `iqn.2012-06.com.example:target0` must first specify a defined username and secret.
However, target discovery is still permitted without authentication. To require target discovery authentication, set `discovery-auth-group` to a defined `auth-group` name instead of `no-authentication`.

It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry:

[.programlisting]
....
target iqn.2012-06.com.example:target0 {
	portal-group pg0
	chap username1 secretsecret

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

[[network-iscsi-initiator]]
=== Configuring an iSCSI Initiator

[NOTE]
====
The iSCSI initiator described in this section is supported starting with FreeBSD 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to man:iscontrol[8].
====

The iSCSI initiator requires that the man:iscsid[8] daemon is running. This daemon does not use a configuration file. To start it automatically at boot, add this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
iscsid_enable="YES"
....

To start man:iscsid[8] now, run this command:

[source,shell]
....
# service iscsid start
....

Connecting to a target can be done with or without an [.filename]#/etc/iscsi.conf# configuration file. This section demonstrates both types of connections.

==== Connecting to a Target Without a Configuration File

To connect an initiator to a single target, specify the IP address of the portal and the name of the target:

[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0
....

To verify that the connection succeeded, run `iscsictl` without any arguments. The output should look similar to this:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Connected: da0
....

In this example, the iSCSI session was successfully established, with [.filename]#/dev/da0# representing the attached LUN.
If the `iqn.2012-06.com.example:target0` target exports more than one LUN, multiple device nodes will be shown in that section of the output:

[.programlisting]
....
Connected: da0 da1 da2
....

Any errors will be reported in the output, as well as the system logs. For example, this message usually means that the man:iscsid[8] daemon is not running:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Waiting for iscsid(8)
....

The following message suggests a networking problem, such as a wrong IP address or port:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.11     Connection refused
....

This message means that the specified target name is wrong:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Not found
....

This message means that the target requires authentication:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Authentication failed
....

To specify a CHAP username and secret, use this syntax:

[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret
....

==== Connecting to a Target with a Configuration File

To connect using a configuration file, create [.filename]#/etc/iscsi.conf# with contents like this:

[.programlisting]
....
t0 {
	TargetAddress   = 10.10.10.10
	TargetName      = iqn.2012-06.com.example:target0
	AuthMethod      = CHAP
	chapIName       = user
	chapSecret      = secretsecret
}
....

The `t0` specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The `TargetAddress` and `TargetName` are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown.

To connect to the defined target, specify the nickname:

[source,shell]
....
# iscsictl -An t0
....
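A single [.filename]#/etc/iscsi.conf# can hold several such sections, one per nickname. As a sketch, a second section for a hypothetical unauthenticated target could sit alongside `t0`; the `t1` nickname, address, and target name below are illustrative, not values from the configuration above:

[.programlisting]
....
# Hypothetical second target, no CHAP required:
# only the mandatory parameters are given.
t1 {
	TargetAddress   = 10.10.10.11
	TargetName      = iqn.2012-06.com.example:target1
}
....

Each section can then be attached individually by passing its nickname to `iscsictl -An`.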
Alternately, to connect to all targets defined in the configuration file, use: [source,shell] .... # iscsictl -Aa .... To make the initiator automatically connect to all targets in [.filename]#/etc/iscsi.conf#, add the following to [.filename]#/etc/rc.conf#: [.programlisting] .... iscsictl_enable="YES" iscsictl_flags="-Aa" .... diff --git a/documentation/content/en/books/handbook/network-servers/_index.po b/documentation/content/en/books/handbook/network-servers/_index.po index 8c8d926ee0..2e724683e7 100644 --- a/documentation/content/en/books/handbook/network-servers/_index.po +++ b/documentation/content/en/books/handbook/network-servers/_index.po @@ -1,6398 +1,6397 @@ # SOME DESCRIPTIVE TITLE # Copyright (C) YEAR The FreeBSD Project # This file is distributed under the same license as the FreeBSD Documentation package. # FIRST AUTHOR , YEAR. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: FreeBSD Documentation VERSION\n" "POT-Creation-Date: 2025-11-08 16:17+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "Language: \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" #. type: YAML Front Matter: description #: documentation/content/en/books/handbook/network-servers/_index.adoc:1 #, no-wrap msgid "This chapter covers some of the more frequently used network services on UNIX systems" msgstr "" #. type: YAML Front Matter: part #: documentation/content/en/books/handbook/network-servers/_index.adoc:1 #, no-wrap msgid "IV. Network Communication" msgstr "" #. type: YAML Front Matter: title #: documentation/content/en/books/handbook/network-servers/_index.adoc:1 #, no-wrap msgid "Chapter 32. Network Servers" msgstr "" #. type: Title = #: documentation/content/en/books/handbook/network-servers/_index.adoc:15 #, no-wrap msgid "Network Servers" msgstr "" #. 
type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:53 #, no-wrap msgid "Synopsis" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:58 msgid "" "This chapter covers some of the more frequently used network services on " "UNIX(R) systems. This includes installing, configuring, testing, and " "maintaining many different types of network services. Example configuration " "files are included throughout this chapter for reference." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:60 msgid "By the end of this chapter, readers will know:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:62 msgid "How to manage the inetd daemon." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:63 msgid "How to set up the Network File System (NFS)." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:64 msgid "" "How to set up the Network Information Server (NIS) for centralizing and " "sharing user accounts." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:65 msgid "How to set FreeBSD up to act as an LDAP server or client" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:66 msgid "How to set up automatic network settings using DHCP." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:67 msgid "How to set up a Domain Name Server (DNS)." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:68 msgid "How to set up the Apache HTTP Server." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:69 msgid "How to set up a File Transfer Protocol (FTP) server." 
msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:70 msgid "" "How to set up a file and print server for Windows(R) clients using Samba." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:71 msgid "" "How to synchronize the time and date, and set up a time server using the " "Network Time Protocol (NTP)." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:72 msgid "How to set up iSCSI." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:74 msgid "This chapter assumes a basic knowledge of:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:76 msgid "[.filename]#/etc/rc# scripts." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:77 msgid "Network terminology." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:78 msgid "" "Installation of additional third-party software " "(crossref:ports[ports,Installing Applications: Packages and Ports])." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:80 #, no-wrap msgid "The inetd Super-Server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:86 msgid "" "The man:inetd[8] daemon is sometimes referred to as a Super-Server because " "it manages connections for many services. Instead of starting multiple " "applications, only the inetd service needs to be started. When a connection " "is received for a service that is managed by inetd, it determines which " "program the connection is destined for, spawns a process for that program, " "and delegates the program a socket. 
Using inetd for services that are not " "heavily used can reduce system load, when compared to running each daemon " "individually in stand-alone mode." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:88 msgid "" "Primarily, inetd is used to spawn other daemons, but several trivial " "protocols are handled internally, such as chargen, auth, time, echo, " "discard, and daytime." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:90 msgid "This section covers the basics of configuring inetd." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:92 #, no-wrap msgid "Configuration File" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:98 msgid "" "Configuration of inetd is done by editing [.filename]#/etc/inetd.conf#. " "Each line of this configuration file represents an application which can be " "started by inetd. By default, every line starts with a comment (`+#+`), " "meaning that inetd is not listening for any applications. To configure " "inetd to listen for an application's connections, remove the `+#+` at the " "beginning of the line for that application." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:100 msgid "" "After saving the edits, configure inetd to start at system boot by editing " "[.filename]#/etc/rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:104 #, no-wrap msgid "inetd_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:107 msgid "" "To start inetd now, so that it listens for the configured service, type:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:111 #, no-wrap msgid "# service inetd start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:114 msgid "" "Once inetd is started, it needs to be notified whenever a modification is " "made to [.filename]#/etc/inetd.conf#:" msgstr "" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:116 #, no-wrap msgid "Reloading the inetd Configuration File" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:123 #, no-wrap msgid "# service inetd reload\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:129 msgid "" "Typically, the default entry for an application does not need to be edited " "beyond removing the `+#+`. In some situations, it may be appropriate to " "edit the default entry." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:131 msgid "As an example, this is the default entry for man:ftpd[8] over IPv4:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:135 #, no-wrap msgid "ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:138 msgid "The seven columns in an entry are as follows:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:148 #, no-wrap msgid "" "service-name\n" "socket-type\n" "protocol\n" "{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]\n" "user[:group][/login-class]\n" "server-program\n" "server-program-arguments\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:151 msgid "where:" msgstr "" #. 
type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:152 #, no-wrap msgid "service-name" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:157 msgid "" "The service name of the daemon to start. It must correspond to a service " "listed in [.filename]#/etc/services#. This determines which port inetd " "listens on for incoming connections to that service. When using a custom " "service, it must first be added to [.filename]#/etc/services#." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:158 #, no-wrap msgid "socket-type" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:161 msgid "" "Either `stream`, `dgram`, `raw`, or `seqpacket`. Use `stream` for TCP " "connections and `dgram` for UDP services." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:162 #, no-wrap msgid "protocol" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:164 msgid "Use one of the following protocol names:" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:169 #, no-wrap msgid "Protocol Name" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:172 #, no-wrap msgid "Explanation" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:173 #, no-wrap msgid "tcp or tcp4" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:175 #, no-wrap msgid "TCP IPv4" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:176 #, no-wrap msgid "udp or udp4" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:178 #, no-wrap msgid "UDP IPv4" msgstr "" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:179 #, no-wrap msgid "tcp6" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:181 #, no-wrap msgid "TCP IPv6" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:182 #, no-wrap msgid "udp6" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:184 #, no-wrap msgid "UDP IPv6" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:185 #, no-wrap msgid "tcp46" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:187 #, no-wrap msgid "Both TCP IPv4 and IPv6" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:188 #, no-wrap msgid "udp46" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:189 #, no-wrap msgid "Both UDP IPv4 and IPv6" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:194 msgid "" "{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-" "ip]]]:: In this field, `wait` or `nowait` must be specified. `max-child`, " "`max-connections-per-ip-per-minute` and `max-child-per-ip` are optional." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:198 msgid "" "`wait|nowait` indicates whether or not the service is able to handle its own " "socket. `dgram` socket types must use `wait` while `stream` daemons, which " "are usually multi-threaded, should use `nowait`. `wait` usually hands off " "multiple sockets to a single daemon, while `nowait` spawns a child daemon " "for each new socket." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:202 msgid "" "The maximum number of child daemons inetd may spawn is set by `max-child`. " "For example, to limit the daemon to ten instances, place a `/10` after " "`nowait`. Specifying `/0` allows an unlimited number of children." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:208 msgid "" "`max-connections-per-ip-per-minute` limits the number of connections from " "any particular IP address per minute. Once the limit is reached, further " "connections from this IP address will be dropped until the end of the " "minute. For example, a value of `/10` would limit any particular IP address " "to ten connection attempts per minute. `max-child-per-ip` limits the number " "of child processes that can be started on behalf of any single IP address at " "any moment. These options can limit excessive resource consumption and help " "to prevent Denial of Service attacks." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:210 msgid "An example can be seen in the default settings for man:fingerd[8]:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:214 #, no-wrap msgid "finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s\n" msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:216 #, no-wrap msgid "user" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:219 msgid "" "The username the daemon will run as. Daemons typically run as `root`, " "`daemon`, or `nobody`." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:220 #, no-wrap msgid "server-program" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:223 msgid "" "The full path to the daemon. If the daemon is a service provided by inetd " "internally, use `internal`." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:224 #, no-wrap msgid "server-program-arguments" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:227 msgid "" "Used to specify any command arguments to be passed to the daemon on " "invocation. If the daemon is an internal service, use `internal`." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:229 #, no-wrap msgid "Command-Line Options" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:234 msgid "" "Like most server daemons, inetd has a number of options that can be used to " "modify its behavior. By default, inetd is started with `-wW -C 60`. These " "options enable TCP wrappers for all services, including internal services, " "and prevent any IP address from requesting any service more than 60 times " "per minute." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:237 msgid "" "To change the default options which are passed to inetd, add an entry for " "`inetd_flags` in [.filename]#/etc/rc.conf#. If inetd is already running, " "restart it with `service inetd restart`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:239 msgid "The available rate limiting options are:" msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:240 #, no-wrap msgid "-c maximum" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:243 msgid "" "Specify the default maximum number of simultaneous invocations of each " "service, where the default is unlimited. May be overridden on a per-service " "basis by using `max-child` in [.filename]#/etc/inetd.conf#." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:244 #, no-wrap msgid "-C rate" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:247 msgid "" "Specify the default maximum number of times a service can be invoked from a " "single IP address per minute. May be overridden on a per-service basis by " "using `max-connections-per-ip-per-minute` in [.filename]#/etc/inetd.conf#." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:248 #, no-wrap msgid "-R rate" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:251 msgid "" "Specify the maximum number of times a service can be invoked in one minute, " "where the default is `256`. A rate of `0` allows an unlimited number." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:252 #, no-wrap msgid "-s maximum" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:255 msgid "" "Specify the maximum number of times a service can be invoked from a single " "IP address at any one time, where the default is unlimited. May be " "overridden on a per-service basis by using `max-child-per-ip` in " "[.filename]#/etc/inetd.conf#." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:257 msgid "" "Additional options are available. Refer to man:inetd[8] for the full list of " "options." msgstr "" #. 
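type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc msgid "" "As a sketch, to keep TCP wrappers enabled while raising the rate limits, the " "flags could be set with man:sysrc[8] and the daemon restarted. The limit " "values shown here are illustrative, not recommendations:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc #, no-wrap msgid "" "# sysrc inetd_flags=\"-wW -C 120 -R 512\"\n" "# service inetd restart\n" msgstr "" #. 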
type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:259 #, no-wrap msgid "Security Considerations" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:265 msgid "" "Many of the daemons which can be managed by inetd are not security-" "conscious. Some daemons, such as fingerd, can provide information that may " "be useful to an attacker. Only enable the services which are needed and " "monitor the system for excessive connection attempts. `max-connections-per-" "ip-per-minute`, `max-child` and `max-child-per-ip` can be used to limit such " "attacks." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:268 msgid "" "By default, TCP wrappers are enabled. Consult man:hosts_access[5] for more " "information on placing TCP restrictions on various inetd invoked daemons." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:270 #, no-wrap msgid "Network File System (NFS)" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:274 msgid "" "FreeBSD supports the Network File System (NFS), which allows a server to " "share directories and files with clients over a network. With NFS, users " "and programs can access files on remote systems as if they were stored " "locally." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:277 msgid "NFS has many practical uses. Some of the more common uses include:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:279 msgid "" "Data that would otherwise be duplicated on each client can be kept in a " "single location and accessed by clients on the network." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:280 msgid "" "Several clients may need access to the [.filename]#/usr/ports/distfiles# " "directory. Sharing that directory allows for quick access to the source " "files without having to download them to each client." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:281 msgid "" "On large networks, it is often more convenient to configure a central NFS " "server on which all user home directories are stored. Users can log into a " "client anywhere on the network and have access to their home directories." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:282 msgid "" "Administration of NFS exports is simplified. For example, there is only one " "file system where security or backup policies must be set." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:283 msgid "" "Removable media storage devices can be used by other machines on the " "network. This reduces the number of devices throughout the network and " "provides a centralized location to manage their security. It is often more " "convenient to install software on multiple machines from a centralized " "installation media." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:287 msgid "" "NFS consists of a server and one or more clients. The client remotely " "accesses the data that is stored on the server machine. In order for this " "to function properly, a few processes have to be configured and running." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:289 msgid "These daemons must be running on the server:" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:294 #, no-wrap msgid "Daemon" msgstr "" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:297 #: documentation/content/en/books/handbook/network-servers/_index.adoc:559 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1008 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1027 #, no-wrap msgid "Description" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:298 #, no-wrap msgid "nfsd" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:300 #, no-wrap msgid "The NFS daemon which services requests from NFS clients." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:301 #, no-wrap msgid "mountd" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:303 #, no-wrap msgid "The NFS mount daemon which carries out requests received from nfsd." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:304 #, no-wrap msgid "rpcbind" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:305 #, no-wrap msgid "This daemon allows NFS clients to discover which port the NFS server is using." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:308 msgid "" "Running man:nfsiod[8] on the client can improve performance, but is not " "required." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:310 #, no-wrap msgid "Configuring the Server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:316 msgid "" "The file systems which the NFS server will share are specified in " "[.filename]#/etc/exports#. Each line in this file specifies a file system " "to be exported, which clients have access to that file system, and any " "access options. 
When adding entries to this file, each exported file " "system, its properties, and allowed hosts must occur on a single line. If " "no clients are listed in the entry, then any client on the network can mount " "that file system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:321 msgid "" "The following [.filename]#/etc/exports# entries demonstrate how to export " "file systems. The examples can be modified to match the file systems and " "client names on the reader's network. There are many options that can be " "used in this file, but only a few will be mentioned here. See " "man:exports[5] for the full list of options." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:323 msgid "" "This example shows how to export [.filename]#/media# to three hosts named " "_alpha_, _bravo_, and _charlie_:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:327 #, no-wrap msgid "/media -ro alpha bravo charlie\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:332 msgid "" "The `-ro` flag makes the file system read-only, preventing clients from " "making any changes to the exported file system. This example assumes that " "the host names are either in DNS or in [.filename]#/etc/hosts#. Refer to " "man:hosts[5] if the network does not have a DNS server." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:337 msgid "" "The next example exports [.filename]#/home# to three clients by IP address. " "This can be useful for networks without DNS or [.filename]#/etc/hosts# " "entries. The `-alldirs` flag allows subdirectories to be mount points. In " "other words, it will not automatically mount the subdirectories, but will " "permit the client to mount the directories that are required as needed." msgstr "" #. 
type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:341 #, no-wrap msgid "/usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:346 msgid "" "This next example exports [.filename]#/a# so that two clients from different " "domains may access that file system. The `-maproot=root` allows `root` on " "the remote system to write data on the exported file system as `root`. If `-" "maproot=root` is not specified, the client's `root` user will be mapped to " "the server's `nobody` account and will be subject to the access limitations " "defined for `nobody`." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:350 #, no-wrap msgid "/a -maproot=root host.example.com box.example.org\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:354 msgid "" "A client can only be specified once per file system. For example, if " "[.filename]#/usr# is a single file system, these entries would be invalid as " "both entries specify the same host:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:360 #, no-wrap msgid "" "# Invalid when /usr is one file system\n" "/usr/src client\n" "/usr/ports client\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:363 msgid "The correct format for this situation is to use one entry:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:367 #, no-wrap msgid "/usr/src /usr/ports client\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:370 msgid "" "The following is an example of a valid export list, where [.filename]#/usr# " "and [.filename]#/exports# are local file systems:" msgstr "" #. 
type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:381 #, no-wrap msgid "" "# Export src and ports to client01 and client02, but only\n" "# client01 has root privileges on it\n" "/usr/src /usr/ports -maproot=root client01\n" "/usr/src /usr/ports client02\n" "# The client machines have root and can mount anywhere\n" "# on /exports. Anyone in the world can mount /exports/obj read-only\n" "/exports -alldirs -maproot=root client01 client02\n" "/exports/obj -ro\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:384 msgid "" "To enable the processes required by the NFS server at boot time, add these " "options to [.filename]#/etc/rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:390 #, no-wrap msgid "" "rpcbind_enable=\"YES\"\n" "nfs_server_enable=\"YES\"\n" "mountd_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:393 msgid "The server can be started now by running this command:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:397 #, no-wrap msgid "# service nfsd start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:402 msgid "" "Whenever the NFS server is started, mountd also starts automatically. " "However, mountd only reads [.filename]#/etc/exports# when it is started. To " "make subsequent [.filename]#/etc/exports# edits take effect immediately, " "force mountd to reread it:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:406 #, no-wrap msgid "# service mountd reload\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:409 msgid "" "Refer to man:zfs-share[8] for a description of exporting ZFS datasets via " "NFS using the `sharenfs` ZFS property instead of the man:exports[5] file." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:411 msgid "Refer to man:nfsv4[4] for a description of an NFS Version 4 setup." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:412 #, no-wrap msgid "Configuring the Client" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:415 msgid "" "To enable NFS clients, set this option in each client's [.filename]#/etc/" "rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:419 #, no-wrap msgid "nfs_client_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:422 msgid "Then, run this command on each NFS client:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:426 #, no-wrap msgid "# service nfsclient start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:431 msgid "" "The client now has everything it needs to mount a remote file system. In " "these examples, the server's name is `server` and the client's name is " "`client`. To mount [.filename]#/home# on `server` to the [.filename]#/mnt# " "mount point on `client`:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:435 #, no-wrap msgid "# mount server:/home /mnt\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:438 msgid "" "The files and directories in [.filename]#/home# will now be available on " "`client`, in the [.filename]#/mnt# directory." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:440 msgid "" "To mount a remote file system each time the client boots, add it to " "[.filename]#/etc/fstab#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:444 #, no-wrap msgid "server:/home\t/mnt\tnfs\trw\t0\t0\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:447 msgid "Refer to man:fstab[5] for a description of all available options." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:448 #, no-wrap msgid "Locking" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:452 msgid "" "Some applications require file locking to operate correctly. To enable " "locking, execute the following command on both the client and server:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:456 #, no-wrap msgid "# sysrc rpc_lockd_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:459 msgid "Then start the man:rpc.lockd[8] service:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:463 #, no-wrap msgid "# service lockd start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:467 msgid "" "If locking is not required on the server, the NFS client can be configured " "to lock locally by including `-L` when running mount. Refer to " "man:mount_nfs[8] for further details." msgstr "" #. 
type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:469 #, no-wrap msgid "Automating Mounts with man:autofs[5]" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:476 msgid "" "The man:autofs[5] automount facility is supported starting with FreeBSD 10.1-" "RELEASE. To use the automounter functionality in older versions of FreeBSD, " "use man:amd[8] instead. This chapter only describes the man:autofs[5] " "automounter." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:482 msgid "" "The man:autofs[5] facility is a common name for several components that, " "together, allow for automatic mounting of remote and local filesystems " "whenever a file or directory within that file system is accessed. It " "consists of the kernel component, man:autofs[5], and several userspace " "applications: man:automount[8], man:automountd[8] and man:autounmountd[8]. " "It serves as an alternative for man:amd[8] from previous FreeBSD releases. " "amd is still provided for backward compatibility purposes, as the two use " "different map formats; the one used by autofs is the same as with other SVR4 " "automounters, such as the ones in Solaris, MacOS X, and Linux." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:484 msgid "" "The man:autofs[5] virtual filesystem is mounted on specified mountpoints by " "man:automount[8], usually invoked during boot." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:488 msgid "" "Whenever a process attempts to access a file within the man:autofs[5] " "mountpoint, the kernel will notify man:automountd[8] daemon and pause the " "triggering process. 
The man:automountd[8] daemon will handle kernel " "requests by finding the proper map and mounting the filesystem according to " "it, then signal the kernel to release blocked process. The " "man:autounmountd[8] daemon automatically unmounts automounted filesystems " "after some time, unless they are still being used." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:491 msgid "" "The primary autofs configuration file is [.filename]#/etc/auto_master#. It " "assigns individual maps to top-level mounts. For an explanation of " "[.filename]#auto_master# and the map syntax, refer to man:auto_master[5]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:495 msgid "" "There is a special automounter map mounted on [.filename]#/net#. When a " "file is accessed within this directory, man:autofs[5] looks up the " "corresponding remote mount and automatically mounts it. For instance, an " "attempt to access a file within [.filename]#/net/foobar/usr# would tell " "man:automountd[8] to mount the [.filename]#/usr# export from the host " "`foobar`." msgstr "" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:496 #, no-wrap msgid "Mounting an Export with man:autofs[5]" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:500 msgid "" "In this example, `showmount -e` shows the exported file systems that can be " "mounted from the NFS server, `foobar`:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:508 #, no-wrap msgid "" "% showmount -e foobar\n" "Exports list on foobar:\n" "/usr 10.10.10.0\n" "/a 10.10.10.0\n" "% cd /net/foobar/usr\n" msgstr "" #. 
type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:515 msgid "" "The output from `showmount` shows [.filename]#/usr# as an export. When " "changing directories to [.filename]#/net/foobar/usr#, man:automountd[8] " "intercepts the request and attempts to resolve the hostname `foobar`. If " "successful, man:automountd[8] automatically mounts the source export." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:517 msgid "" "To enable man:autofs[5] at boot time, add this line to [.filename]#/etc/" "rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:521 #, no-wrap msgid "autofs_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:524 msgid "Then man:autofs[5] can be started by running:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:530 #, no-wrap msgid "" "# service automount start\n" "# service automountd start\n" "# service autounmountd start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:534 msgid "" "The man:autofs[5] map format is the same as in other operating systems. " "Information about this format from other sources can be useful, like the " "http://web.archive.org/web/20160813071113/http://images.apple.com/business/" "docs/Autofs.pdf[Mac OS X document]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:536 msgid "" "Consult the man:automount[8], man:automountd[8], man:autounmountd[8], and " "man:auto_master[5] manual pages for more information." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:538 #, no-wrap msgid "Network Information System (NIS)" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:543 msgid "" "Network Information System (NIS) is designed to centralize administration of " "UNIX(R)-like systems such as Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, " "OpenBSD, and FreeBSD. NIS was originally known as Yellow Pages but the name " "was changed due to trademark issues. This is the reason why NIS commands " "begin with `yp`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:546 msgid "" "NIS is a Remote Procedure Call (RPC)-based client/server system that allows " "a group of machines within an NIS domain to share a common set of " "configuration files. This permits a system administrator to set up NIS " "client systems with only minimal configuration data and to add, remove, or " "modify configuration data from a single location." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:548 msgid "FreeBSD uses version 2 of the NIS protocol." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:549 #, no-wrap msgid "NIS Terms and Processes" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:552 msgid "Table 28.1 summarizes the terms and important processes used by NIS:" msgstr "" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:553 #, no-wrap msgid "NIS Terminology" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:557 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1896 #, no-wrap msgid "Term" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:560 #, no-wrap msgid "NIS domain name" msgstr "" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:562 #, no-wrap msgid "NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:563 #, no-wrap msgid "man:rpcbind[8]" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:565 #, no-wrap msgid "This service enables RPC and must be running in order to run an NIS server or act as an NIS client." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:566 #, no-wrap msgid "man:ypbind[8]" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:568 #, no-wrap msgid "This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server. It is the core of client/server communication in an NIS environment. If this service is not running on a client machine, it will not be able to access the NIS server." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:569 #, no-wrap msgid "man:ypserv[8]" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:571 #, no-wrap msgid "This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so hopefully, there is a slave server to take over. Some non-FreeBSD clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:572 #, no-wrap msgid "man:rpc.yppasswdd[8]" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:573 #, no-wrap msgid "This process only runs on NIS master servers. 
This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the NIS master server and change their passwords there." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:575 #, no-wrap msgid "Machine Types" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:578 msgid "There are three types of hosts in an NIS environment:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:580 msgid "NIS master server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:584 msgid "" "This server acts as a central repository for host configuration information " "and maintains the authoritative copy of the files used by all of the NIS " "clients. The [.filename]#passwd#, [.filename]#group#, and various other " "files used by NIS clients are stored on the master server. While it is " "possible for one machine to be an NIS master server for more than one NIS " "domain, this type of configuration will not be covered in this chapter as it " "assumes a relatively small-scale NIS environment." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:585 msgid "NIS slave servers" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:588 msgid "" "NIS slave servers maintain copies of the NIS master's data files in order to " "provide redundancy. Slave servers also help to balance the load of the " "master server as NIS clients always attach to the NIS server which responds " "first." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:589 msgid "NIS clients" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:591 msgid "NIS clients authenticate against the NIS server during log on." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:595 msgid "" "Information in many files can be shared using NIS. The " "[.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# files " "are commonly shared via NIS. Whenever a process on a client needs " "information that would normally be found in these files locally, it makes a " "query to the NIS server that it is bound to instead." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:596 #, no-wrap msgid "Planning Considerations" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:602 msgid "" "This section describes a sample NIS environment which consists of 15 FreeBSD " "machines with no centralized point of administration. Each machine has its " "own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These " "files are kept in sync with each other only through manual intervention. " "Currently, when a user is added to the lab, the process must be repeated on " "all 15 machines." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:604 msgid "The configuration of the lab will be as follows:" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:609 #, no-wrap msgid "Machine name" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:610 #, no-wrap msgid "IP address" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:613 #, no-wrap msgid "Machine role" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:614 #, no-wrap msgid "`ellington`" msgstr "" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:615 #, no-wrap msgid "`10.0.0.2`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:617 #, no-wrap msgid "NIS master" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:618 #, no-wrap msgid "`coltrane`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:619 #, no-wrap msgid "`10.0.0.3`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:621 #, no-wrap msgid "NIS slave" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:622 #, no-wrap msgid "`basie`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:623 #, no-wrap msgid "`10.0.0.4`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:625 #, no-wrap msgid "Faculty workstation" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:626 #, no-wrap msgid "`bird`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:627 #, no-wrap msgid "`10.0.0.5`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:629 #, no-wrap msgid "Client machine" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:630 #, no-wrap msgid "`cli[1-11]`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:631 #, no-wrap msgid "`10.0.0.[6-17]`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:632 #, no-wrap msgid "Other client machines" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:636 msgid "" "If this is the first time an NIS scheme is being developed, it should be " "thoroughly planned ahead of time. Regardless of network size, several " "decisions need to be made as part of the planning process." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:637 #, no-wrap msgid "Choosing an NIS Domain Name" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:642 msgid "" "When a client broadcasts its requests for information, it includes the name of the " "NIS domain that it is part of. This is how multiple servers on one network " "can tell which server should answer which request. Think of the NIS domain " "name as the name for a group of hosts." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:648 msgid "" "Some organizations choose to use their Internet domain name for their NIS " "domain name. This is not recommended as it can cause confusion when trying " "to debug network problems. The NIS domain name should be unique within the " "network and it is helpful if it describes the group of machines it " "represents. For example, the Art department at Acme Inc. might be in the " "\"acme-art\" NIS domain. This example will use the domain name `test-" "domain`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:651 msgid "" "However, some non-FreeBSD operating systems require the NIS domain name to " "be the same as the Internet domain name. If one or more machines on the " "network have this restriction, the Internet domain name _must_ be used as " "the NIS domain name." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:652 #, no-wrap msgid "Physical Server Requirements" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:659 msgid "" "There are several things to keep in mind when choosing a machine to use as " "an NIS server. Since NIS clients depend upon the availability of the server, " "choose a machine that is not rebooted frequently. The NIS server should " "ideally be a standalone machine whose sole purpose is to be an NIS server. " "If the network is not heavily used, it is acceptable to put the NIS server " "on a machine running other services. However, if the NIS server becomes " "unavailable, it will adversely affect all NIS clients." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:660 #, no-wrap msgid "Configuring the NIS Master Server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:667 msgid "" "The canonical copies of all NIS files are stored on the master server. The " "databases used to store the information are called NIS maps. In FreeBSD, " "these maps are stored in [.filename]#/var/yp/[domainname]# where " "[.filename]#[domainname]# is the name of the NIS domain. Since multiple " "domains are supported, it is possible to have several directories, one for " "each domain. Each domain will have its own independent set of maps." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:670 msgid "" "NIS master and slave servers handle all NIS requests through man:ypserv[8]. " "This daemon is responsible for receiving incoming requests from NIS clients, " "translating the requested domain and map name to a path to the corresponding " "database file, and transmitting data from the database back to the client." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:673 msgid "" "Setting up a master NIS server can be relatively straightforward, depending " "on environmental needs. 
Since FreeBSD provides built-in NIS support, it " "only needs to be enabled by adding the following lines to [.filename]#/etc/" "rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:679 #, no-wrap msgid "" "nisdomainname=\"test-domain\"\t<.>\n" "nis_server_enable=\"YES\"\t\t<.>\n" "nis_yppasswdd_enable=\"YES\"\t<.>\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:682 msgid "This line sets the NIS domain name to `test-domain`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:683 msgid "" "This automates the startup of the NIS server processes when the system " "boots." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:684 msgid "" "This enables the man:rpc.yppasswdd[8] daemon so that users can change their " "NIS password from a client machine." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:689 msgid "" "Care must be taken in a multi-server domain where the server machines are " "also NIS clients. It is generally a good idea to force the servers to bind " "to themselves rather than allowing them to broadcast bind requests and " "possibly become bound to each other. Strange failure modes can result if " "one server goes down and others are dependent upon it. Eventually, all the " "clients will time out and attempt to bind to other servers, but the delay " "involved can be considerable and the failure mode is still present since the " "servers might bind to each other all over again." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:691 msgid "" "A server that is also a client can be forced to bind to a particular server " "by adding these additional lines to [.filename]#/etc/rc.conf#:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:696 #, no-wrap msgid "" "nis_client_enable=\"YES\"\t\t\t\t<.>\n" "nis_client_flags=\"-S test-domain,server\"\t<.>\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:699 msgid "This enables the NIS client processes as well." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:700 msgid "This line forces the client to bind to the specified server for the `test-domain` domain, in this case itself." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:703 msgid "" "After saving the edits, run `/etc/netstart` to restart the network and " "apply the values defined in [.filename]#/etc/rc.conf#. Before initializing " "the NIS maps, start man:ypserv[8]:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:707 #, no-wrap msgid "# service ypserv start\n" msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:710 #, no-wrap msgid "Initializing the NIS Maps" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:715 msgid "" "NIS maps are generated from the configuration files in [.filename]#/etc# on " "the NIS master, with one exception: [.filename]#/etc/master.passwd#. This " "is to prevent the propagation of passwords to all the servers in the NIS " "domain. Therefore, before the NIS maps are initialized, configure the " "primary password files:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:721 #, no-wrap msgid "" "# cp /etc/master.passwd /var/yp/master.passwd\n" "# cd /var/yp\n" "# vi master.passwd\n" msgstr "" #. 
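The pruning step that follows (removing system accounts from the staged copy and tightening its permissions) can be sketched in shell. This is only an illustration, not the handbook's exact procedure: it assumes system accounts occupy UIDs below 1000 (adjust for your site) and operates on temporary files instead of [.filename]#/var/yp/master.passwd# so it is safe to run anywhere:

```shell
# Illustrative only: filter a staged master.passwd copy, keeping accounts
# whose uid (third colon-separated field) is 1000 or higher, then make the
# result readable by the owner alone.  The sample entries are made up.
src=$(mktemp); dst=$(mktemp)
cat > "$src" <<'EOF'
root:*:0:0::0:0:Charlie &:/root:/bin/csh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
jsmith:*:1001:1001::0:0:John Smith:/home/jsmith:/bin/sh
EOF
awk -F: '$3 >= 1000' "$src" > "$dst"
chmod 600 "$dst"            # neither group nor world readable
cat "$dst"
```

Only the `jsmith` line survives the filter; `root` and `daemon` are dropped.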
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:724 msgid "" "It is advisable to remove all entries for system accounts as well as any " "user accounts that do not need to be propagated to the NIS clients, such as " "`root` and any other administrative accounts." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:728 msgid "" "Ensure that [.filename]#/var/yp/master.passwd# is neither group nor world " "readable by setting its permissions to `600`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:733 msgid "" "After completing this task, initialize the NIS maps. FreeBSD includes the " "man:ypinit[8] script to do this. When generating maps for the master " "server, include `-m` and specify the NIS domain name:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:754 #, no-wrap msgid "" "ellington# ypinit -m test-domain\n" "Server Type: MASTER Domain: test-domain\n" "Creating an YP server will require that you answer a few questions.\n" "Questions will all be asked at the beginning of the procedure.\n" "Do you want this procedure to quit on non-fatal errors? [y/n: n] n\n" "Ok, please remember to go back and redo manually whatever fails.\n" "If not, something might not work.\n" "At this point, we have to construct a list of this domains YP servers.\n" "rod.darktech.org is already known as master server.\n" "Please continue to add any slave servers, one per line. When you are\n" "done with the list, type a .\n" "master server : ellington\n" "next host to add: coltrane\n" "next host to add: ^D\n" "The current list of NIS servers looks like this:\n" "ellington\n" "coltrane\n" "Is this correct? [y/n: y] y\n" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:756 #, no-wrap msgid "[..output from map generation..]\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:759 #, no-wrap msgid "" "NIS Map update completed.\n" "ellington has been setup as an YP master server without any errors.\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:764 msgid "" "This will create [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/" "Makefile.dist#. By default, this file assumes that the environment has a " "single NIS server with only FreeBSD clients. Since `test-domain` has a " "slave server, edit this line in [.filename]#/var/yp/Makefile# so that it " "begins with a comment (`+#+`):" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:768 #, no-wrap msgid "NOPUSH = \"True\"\n" msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:771 #, no-wrap msgid "Adding New Users" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:776 msgid "" "Every time a new user is created, the user account must be added to the " "master NIS server and the NIS maps rebuilt. Until this occurs, the new user " "will not be able to log in anywhere except on the NIS master. For example, " "to add the new user `jsmith` to the `test-domain` domain, run these commands " "on the master server:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:782 #, no-wrap msgid "" "# pw useradd jsmith\n" "# cd /var/yp\n" "# make test-domain\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:785 msgid "" "The user could also be added using `adduser jsmith` instead of `pw useradd " "jsmith`." msgstr "" #. 
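The [.filename]#/var/yp/Makefile# edit described above (commenting out the `NOPUSH` line because `test-domain` has a slave server) can also be scripted. A hedged sketch, operating on a scratch copy rather than the real Makefile so it is safe to run:

```shell
# Illustrative sketch: prefix the NOPUSH line with a comment marker, the
# way the text describes, using a temporary file as a stand-in for
# /var/yp/Makefile.
mk=$(mktemp)
printf 'NOPUSH = "True"\n' > "$mk"
sed 's/^NOPUSH/#NOPUSH/' "$mk" > "$mk.new" && mv "$mk.new" "$mk"
cat "$mk"
```

After the edit the line reads `#NOPUSH = "True"`, so the master will push map changes to its slaves.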
type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:786 #, no-wrap msgid "Setting up a NIS Slave Server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:792 msgid "" "To set up an NIS slave server, log on to the slave server and edit " "[.filename]#/etc/rc.conf# as for the master server. Do not generate any NIS " "maps, as these already exist on the master server. When running `ypinit` on " "the slave server, use `-s` (for slave) instead of `-m` (for master). This " "option requires the name of the NIS master in addition to the domain name, " "as seen in this example:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:796 #, no-wrap msgid "coltrane# ypinit -s ellington test-domain\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:798 #, no-wrap msgid "Server Type: SLAVE Domain: test-domain Master: ellington\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:801 #, no-wrap msgid "" "Creating an YP server will require that you answer a few questions.\n" "Questions will all be asked at the beginning of the procedure.\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:803 #, no-wrap msgid "Do you want this procedure to quit on non-fatal errors? [y/n: n] n\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:848 #, no-wrap msgid "" "Ok, please remember to go back and redo manually whatever fails.\n" "If not, something might not work.\n" "There will be no further questions. 
The remainder of the procedure\n" "should take a few minutes, to copy the databases from ellington.\n" "Transferring netgroup...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netgroup.byuser...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netgroup.byhost...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring master.passwd.byuid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring passwd.byuid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring passwd.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring group.bygid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring group.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring services.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring rpc.bynumber...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring rpc.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring protocols.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring master.passwd.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring networks.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring networks.byaddr...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netid.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring hosts.byaddr...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring protocols.bynumber...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring ypservers...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring hosts.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:851 #, no-wrap msgid "" "coltrane has been setup as an YP slave server without any errors.\n" "Remember to update map ypservers on ellington.\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:855 msgid "" "This will generate a directory on the slave server called [.filename]#/var/" "yp/test-domain# which contains copies of the NIS master server's maps. " "Adding these [.filename]#/etc/crontab# entries on each slave server will " "force the slaves to sync their maps with the maps on the master server:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:860 #, no-wrap msgid "" "20 * * * * root /usr/libexec/ypxfr passwd.byname\n" "21 * * * * root /usr/libexec/ypxfr passwd.byuid\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:865 msgid "" "These entries are not mandatory because the master server automatically " "attempts to push any map changes to its slaves. However, since clients may " "depend upon the slave server to provide correct password information, it is " "recommended to force frequent password map updates. This is especially " "important on busy networks where map updates might not always complete." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:867 msgid "" "To finish the configuration, run `/etc/netstart` on the slave server in " "order to start the NIS services." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:868 #, no-wrap msgid "Setting Up an NIS Client" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:877 msgid "" "An NIS client binds to an NIS server using man:ypbind[8]. This daemon " "broadcasts RPC requests on the local network. 
These requests specify the " "domain name configured on the client. If an NIS server in the same domain " "receives one of the broadcasts, it will respond to ypbind, which will record " "the server's address. If there are several servers available, the client " "will use the address of the first server to respond and will direct all of " "its NIS requests to that server. The client will automatically ping the " "server on a regular basis to make sure it is still available. If it fails " "to receive a reply within a reasonable amount of time, ypbind will mark the " "domain as unbound and begin broadcasting again in the hopes of locating " "another server." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:879 msgid "To configure a FreeBSD machine to be an NIS client:" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:883 msgid "" "Edit [.filename]#/etc/rc.conf# and add the following lines in order to set " "the NIS domain name and start man:ypbind[8] during network startup:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:888 #, no-wrap msgid "" "nisdomainname=\"test-domain\"\n" "nis_client_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:891 msgid "" "To import all possible password entries from the NIS server, use `vipw` to " "remove all user accounts except one from [.filename]#/etc/master.passwd#. " "When removing the accounts, keep in mind that at least one local account " "should remain and this account should be a member of `wheel`. If there is a " "problem with NIS, this local account can be used to log in remotely, become " "the superuser, and fix the problem. Before saving the edits, add the " "following line to the end of the file:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:895 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1116 #, no-wrap msgid "+:::::::::\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:901 msgid "" "This line configures the client to provide anyone with a valid account in " "the NIS server's password maps an account on the client. There are many " "ways to configure the NIS client by modifying this line. One method is " "described in crossref:network-servers[network-netgroups, Using Netgroups]. " "For more detailed reading, refer to the book `Managing NFS and NIS`, " "published by O'Reilly Media." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:902 msgid "" "To import all possible group entries from the NIS server, add this line to " "[.filename]#/etc/group#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:906 #, no-wrap msgid "+:*::\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:910 msgid "" "To start the NIS client immediately, execute the following commands as the " "superuser:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:915 #, no-wrap msgid "" "# /etc/netstart\n" "# service ypbind start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:918 msgid "" "After completing these steps, running `ypcat passwd` on the client should " "show the server's [.filename]#passwd# map." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:919 #, no-wrap msgid "NIS Security" msgstr "" #. 
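The securenets rules discussed in this section reduce to a bitwise test per address octet: a client matches an entry when its address ANDed with the mask equals the network ANDed with the mask. A minimal stand-alone sketch of that test (not man:ypserv[8]'s actual code; the helper names are made up):

```shell
# Illustrative sketch of securenets-style matching.  ypserv(8) performs
# the real check; this only demonstrates the network/mask arithmetic.
octet() { echo "$1" | cut -d. -f"$2"; }
match_rule() {            # match_rule <addr> <network> <mask> -> allow/deny
  for i in 1 2 3 4; do
    a=$(octet "$1" "$i"); n=$(octet "$2" "$i"); m=$(octet "$3" "$i")
    if [ $(( a & m )) -ne $(( n & m )) ]; then echo deny; return 0; fi
  done
  echo allow
}
match_rule 10.0.3.7 10.0.0.0 255.255.240.0          # inside 10.0.0.0/20
match_rule 192.168.1.9 192.168.128.0 255.255.255.0  # outside that rule
```

The first call prints `allow`, the second `deny`.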
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:927 msgid "" "Since RPC is a broadcast-based service, any system running ypbind within the " "same domain can retrieve the contents of the NIS maps. To prevent " "unauthorized transactions, man:ypserv[8] supports a feature called " "\"securenets\" which can be used to restrict access to a given set of " "hosts. By default, this information is stored in [.filename]#/var/yp/" "securenets#, unless man:ypserv[8] is started with `-p` and an alternate " "path. This file contains entries that consist of a network specification " "and a network mask separated by white space. Lines starting with `+\"#\"+` " "are considered to be comments. A sample [.filename]#securenets# might look " "like this:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:939 #, no-wrap msgid "" "# allow connections from local host -- mandatory\n" "127.0.0.1 255.255.255.255\n" "# allow connections from any host\n" "# on the 192.168.128.0 network\n" "192.168.128.0 255.255.255.0\n" "# allow connections from any host\n" "# between 10.0.0.0 to 10.0.15.255\n" "# this includes the machines in the testlab\n" "10.0.0.0 255.255.240.0\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:944 msgid "" "If man:ypserv[8] receives a request from an address that matches one of " "these rules, it will process the request normally. If the address fails to " "match a rule, the request will be ignored and a warning message will be " "logged. If the [.filename]#securenets# file does not exist, `ypserv` will allow " "connections from any host." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:948 msgid "" "crossref:security[tcpwrappers,\"TCP Wrapper\"] is an alternate mechanism for " "providing access control instead of [.filename]#securenets#. 
While either " "access control mechanism adds some security, they are both vulnerable to " "\"IP spoofing\" attacks. All NIS-related traffic should be blocked at the " "firewall." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:952 msgid "" "Servers using [.filename]#securenets# may fail to serve legitimate NIS " "clients with archaic TCP/IP implementations. Some of these implementations " "set all host bits to zero when doing broadcasts or fail to observe the " "subnet mask when calculating the broadcast address. While some of these " "problems can be fixed by changing the client configuration, other problems " "may force the retirement of these client systems or the abandonment of " "[.filename]#securenets#." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:956 msgid "" "The use of TCP Wrapper increases the latency of the NIS server. The " "additional delay may be long enough to cause timeouts in client programs, " "especially in busy networks with slow NIS servers. If one or more clients " "suffer from latency, convert those clients into NIS slave servers and force " "them to bind to themselves." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:957 #, no-wrap msgid "Barring Some Users" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:962 msgid "" "In this example, the `basie` system is a faculty workstation within the NIS " "domain. The [.filename]#passwd# map on the master NIS server contains " "accounts for both faculty and students. This section demonstrates how to " "allow faculty logins on this system while refusing student logins." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:966 msgid "" "To prevent specified users from logging on to a system, even if they are " "present in the NIS database, use `vipw` to add `-_username_` with the " "correct number of colons towards the end of [.filename]#/etc/master.passwd# " "on the client, where _username_ is the username of a user to bar from " "logging in. The line with the blocked user must be before the `+` line that " "allows NIS users. In this example, `bill` is barred from logging on to " "`basie`:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:987 #, no-wrap msgid "" "basie# cat /etc/master.passwd\n" "root:[password]:0:0::0:0:The super-user:/root:/bin/csh\n" "toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh\n" "daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin\n" "operator:*:2:5::0:0:System &:/:/usr/sbin/nologin\n" "bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin\n" "tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin\n" "kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin\n" "games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin\n" "news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin\n" "man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/usr/sbin/nologin\n" "bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin\n" "uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico\n" "xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin\n" "pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin\n" "nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin\n" "-bill:::::::::\n" "+:::::::::\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:989 #, no-wrap msgid "basie#\n" msgstr "" #. 
type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:993 #, no-wrap msgid "Using Netgroups" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:996 msgid "" "Barring specified users from logging on to individual systems becomes " "unscalable on larger networks and quickly loses the main benefit of NIS: " "_centralized_ administration." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:999 msgid "" "Netgroups were developed to handle large, complex networks with hundreds of " "users and machines. Their use is comparable to UNIX(R) groups, where the " "main difference is the lack of a numeric ID and the ability to define a " "netgroup by including both user accounts and other netgroups." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1001 msgid "" "To expand on the example used in this chapter, the NIS domain will be " "extended to add the users and systems shown in Tables 28.2 and 28.3:" msgstr "" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:1002 #, no-wrap msgid "Additional Users" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1006 #, no-wrap msgid "User Name(s)" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1009 #, no-wrap msgid "`alpha`, `beta`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1011 #, no-wrap msgid "IT department employees" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1012 #, no-wrap msgid "`charlie`, `delta`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1014 #, no-wrap msgid "IT department apprentices" msgstr "" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1015 #, no-wrap msgid "`echo`, `foxtrott`, `golf`, ..." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1017 #, no-wrap msgid "employees" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1018 #, no-wrap msgid "`able`, `baker`, ..." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1019 #, no-wrap msgid "interns" msgstr "" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:1021 #, no-wrap msgid "Additional Systems" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1025 #, no-wrap msgid "Machine Name(s)" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1028 #, no-wrap msgid "`war`, `death`, `famine`, `pollution`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1030 #, no-wrap msgid "Only IT employees are allowed to log onto these servers." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1031 #, no-wrap msgid "`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1033 #, no-wrap msgid "All members of the IT department are allowed to login onto these servers." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1034 #, no-wrap msgid "`one`, `two`, `three`, `four`, ..." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1036 #, no-wrap msgid "Ordinary workstations used by employees." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1037 #, no-wrap msgid "`trashcan`" msgstr "" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1038 #, no-wrap msgid "A very old machine without any critical data. Even interns are allowed to use this system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1044 msgid "" "When using netgroups to configure this scenario, each user is assigned to " "one or more netgroups and logins are then allowed or forbidden for all " "members of the netgroup. When adding a new machine, login restrictions must " "be defined for all netgroups. When a new user is added, the account must be " "added to one or more netgroups. If the NIS setup is planned carefully, only " "one central configuration file needs modification to grant or deny access to " "machines." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1048 msgid "" "The first step is the initialization of the NIS `netgroup` map. In FreeBSD, " "this map is not created by default. On the NIS master server, use an editor " "to create a map named [.filename]#/var/yp/netgroup#." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1050 msgid "" "This example creates four netgroups to represent IT employees, IT " "apprentices, employees, and interns:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1058 #, no-wrap msgid "" "IT_EMP (,alpha,test-domain) (,beta,test-domain)\n" "IT_APP (,charlie,test-domain) (,delta,test-domain)\n" "USERS (,echo,test-domain) (,foxtrott,test-domain) \\\n" " (,golf,test-domain)\n" "INTERNS (,able,test-domain) (,baker,test-domain)\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1063 msgid "" "Each entry configures a netgroup. The first column in an entry is the name " "of the netgroup. 
Each set of parentheses represents either a group of one " "or more users or the name of another netgroup. When specifying a user, the " "three comma-delimited fields inside each group represent:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1065 msgid "" "The name of the host(s) where the other fields representing the user are " "valid. If a hostname is not specified, the entry is valid on all hosts." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1066 msgid "The name of the account that belongs to this netgroup." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1067 msgid "" "The NIS domain for the account. Accounts may be imported from other NIS " "domains into a netgroup." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1071 msgid "" "If a group contains multiple users, separate each user with whitespace. " "Additionally, each field may contain wildcards. See man:netgroup[5] for " "details." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1074 msgid "" "Netgroup names longer than 8 characters should not be used. The names are " "case sensitive and using capital letters for netgroup names is an easy way " "to distinguish between user, machine and netgroup names." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1077 msgid "" "Some non-FreeBSD NIS clients cannot handle netgroups containing more than 15 " "entries. This limit may be circumvented by creating several sub-netgroups " "with 15 users or fewer and a real netgroup consisting of the sub-netgroups, " "as seen in this example:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1084 #, no-wrap msgid "" "BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...]\n" "BIGGRP2 (,joe16,domain) (,joe17,domain) [...]\n" "BIGGRP3 (,joe31,domain) (,joe32,domain)\n" "BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1087 msgid "" "Repeat this process if more than 225 (15 times 15) users exist within a " "single netgroup." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1089 msgid "To activate and distribute the new NIS map:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1094 #, no-wrap msgid "" "ellington# cd /var/yp\n" "ellington# make\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1098 msgid "" "This will generate the three NIS maps [.filename]#netgroup#, " "[.filename]#netgroup.byhost# and [.filename]#netgroup.byuser#. Use the map " "key option of man:ypcat[1] to check if the new NIS maps are available:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1104 #, no-wrap msgid "" "ellington% ypcat -k netgroup\n" "ellington% ypcat -k netgroup.byhost\n" "ellington% ypcat -k netgroup.byuser\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1109 msgid "" "The output of the first command should resemble the contents of [.filename]#/" "var/yp/netgroup#. The second command only produces output if host-specific " "netgroups were created. The third command is used to get the list of " "netgroups for a user." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1112 msgid "" "To configure a client, use man:vipw[8] to specify the name of the netgroup. 
" "For example, on the server named `war`, replace this line:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1119 msgid "with" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1123 #, no-wrap msgid "+@IT_EMP:::::::::\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1126 msgid "" "This specifies that only the users defined in the netgroup `IT_EMP` will be " "imported into this system's password database and only those users are " "allowed to login to this system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1131 msgid "" "This configuration also applies to the `~` function of the shell and all " "routines which convert between user names and numerical user IDs. In other " "words, `cd ~_user_` will not work, `ls -l` will show the numerical ID " "instead of the username, and `find . -user joe -print` will fail with the " "message `No such user`. To fix this, import all user entries without " "allowing them to login into the servers. This can be achieved by adding an " "extra line:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1135 #, no-wrap msgid "+:::::::::/usr/sbin/nologin\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1138 msgid "" "This line configures the client to import all entries but to replace the " "shell in those entries with [.filename]#/usr/sbin/nologin#." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1141 msgid "" "Make sure that extra line is placed _after_ `+@IT_EMP:::::::::`. Otherwise, " "all user accounts imported from NIS will have [.filename]#/usr/sbin/nologin# " "as their login shell and no one will be able to login to the system." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1143 msgid "" "To configure the less important servers, replace the old `+:::::::::` on the " "servers with these lines:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1149 #, no-wrap msgid "" "+@IT_EMP:::::::::\n" "+@IT_APP:::::::::\n" "+:::::::::/usr/sbin/nologin\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1152 msgid "The corresponding lines for the workstations would be:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1158 #, no-wrap msgid "" "+@IT_EMP:::::::::\n" "+@USERS:::::::::\n" "+:::::::::/usr/sbin/nologin\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1165 msgid "" "NIS supports the creation of netgroups from other netgroups which can be " "useful if the policy regarding user access changes. One possibility is the " "creation of role-based netgroups. For example, one might create a netgroup " "called `BIGSRV` to define the login restrictions for the important servers, " "another netgroup called `SMALLSRV` for the less important servers, and a " "third netgroup called `USERBOX` for the workstations. Each of these " "netgroups contains the netgroups that are allowed to log onto these " "machines. The new entries for the NIS `netgroup` map would look like this:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1171 #, no-wrap msgid "" "BIGSRV IT_EMP IT_APP\n" "SMALLSRV IT_EMP IT_APP ITINTERN\n" "USERBOX IT_EMP ITINTERN USERS\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1176 msgid "" "This method of defining login restrictions works reasonably well when it is " "possible to define groups of machines with identical restrictions. " "Unfortunately, this is the exception and not the rule. Most of the time, " "the ability to define login restrictions on a per-machine basis is required." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1181 msgid "" "Machine-specific netgroup definitions are another possibility to deal with " "the policy changes. In this scenario, the [.filename]#/etc/master.passwd# " "of each system contains two lines starting with \"+\". The first line adds " "a netgroup with the accounts allowed to login onto this machine and the " "second line adds all other accounts with [.filename]#/usr/sbin/nologin# as " "shell. It is recommended to use the \"ALL-CAPS\" version of the hostname as " "the name of the netgroup:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1186 #, no-wrap msgid "" "+@BOXNAME:::::::::\n" "+:::::::::/usr/sbin/nologin\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1191 msgid "" "Once this task is completed on all the machines, there is no longer a need " "to modify the local versions of [.filename]#/etc/master.passwd# ever again. " "All further changes can be handled by modifying the NIS map. Here is an " "example of a possible `netgroup` map for this scenario:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1229 #, no-wrap msgid "" "# Define groups of users first\n" "IT_EMP (,alpha,test-domain) (,beta,test-domain)\n" "IT_APP (,charlie,test-domain) (,delta,test-domain)\n" "DEPT1 (,echo,test-domain) (,foxtrott,test-domain)\n" "DEPT2 (,golf,test-domain) (,hotel,test-domain)\n" "DEPT3 (,india,test-domain) (,juliet,test-domain)\n" "ITINTERN (,kilo,test-domain) (,lima,test-domain)\n" "D_INTERNS (,able,test-domain) (,baker,test-domain)\n" "#\n" "# Now, define some groups based on roles\n" "USERS DEPT1 DEPT2 DEPT3\n" "BIGSRV IT_EMP IT_APP\n" "SMALLSRV IT_EMP IT_APP ITINTERN\n" "USERBOX IT_EMP ITINTERN USERS\n" "#\n" "# And a group for special tasks\n" "# Allow echo and golf to access our anti-virus-machine\n" "SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain)\n" "#\n" "# machine-based netgroups\n" "# Our main servers\n" "WAR BIGSRV\n" "FAMINE BIGSRV\n" "# User india needs access to this server\n" "POLLUTION BIGSRV (,india,test-domain)\n" "#\n" "# This one is really important and needs more access restrictions\n" "DEATH IT_EMP\n" "#\n" "# The anti-virus-machine mentioned above\n" "ONE SECURITY\n" "#\n" "# Restrict a machine to a single user\n" "TWO (,hotel,test-domain)\n" "# [...more groups to follow]\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1233 msgid "" "It may not always be advisable to use machine-based netgroups. When " "deploying a couple of dozen or hundreds of systems, role-based netgroups " "instead of machine-based netgroups may be used to keep the size of the NIS " "map within reasonable limits." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1234 #, no-wrap msgid "Password Formats" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1239 msgid "" "NIS requires that all hosts within an NIS domain use the same format for " "encrypting passwords. If users have trouble authenticating on an NIS " "client, it may be due to a differing password format. In a heterogeneous " "network, the format must be supported by all operating systems, where DES is " "the lowest common standard." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1241 msgid "" "To check which format a server or client is using, look at this section of " "[.filename]#/etc/login.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1248 #, no-wrap msgid "" "default:\\\n" "\t:passwd_format=des:\\\n" "\t[Further entries elided]\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1253 msgid "" "In this example, the system is using the DES format for password hashing. " "Other possible values include `blf` for Blowfish, `md5` for MD5, `sha256` " "and `sha512` for SHA-256 and SHA-512, respectively. For more information and " "the up-to-date list of what is available on the system, consult the " "man:crypt[3] manpage." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1255 msgid "" "If the format on a host needs to be edited to match the one being used in " "the NIS domain, the login capability database must be rebuilt after saving " "the change:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1259 #, no-wrap msgid "# cap_mkdb /etc/login.conf\n" msgstr "" #. 
type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1264 msgid "" "The format of passwords for existing user accounts will not be updated until " "each user changes their password _after_ the login capability database is " "rebuilt." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:1267 #, no-wrap msgid "Lightweight Directory Access Protocol (LDAP)" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1273 msgid "" "The Lightweight Directory Access Protocol (LDAP) is an application layer " "protocol used to access, modify, and authenticate objects using a " "distributed directory information service. Think of it as a phone or record " "book which stores several levels of hierarchical, homogeneous information. " "It is used in Active Directory and OpenLDAP networks and allows users to " "access several levels of internal information utilizing a single " "account. For example, email authentication, pulling employee contact " "information, and internal website authentication might all make use of a " "single user account in the LDAP server's record base." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1276 msgid "" "This section provides a quick start guide for configuring an LDAP server on " "a FreeBSD system. It assumes that the administrator already has a design " "plan which includes the type of information to store, what that information " "will be used for, which users should have access to that information, and " "how to secure this information from unauthorized access." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1277 #, no-wrap msgid "LDAP Terminology and Structure" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1283 msgid "" "LDAP uses several terms which should be understood before starting the " "configuration. All directory entries consist of a group of _attributes_. " "Each of these attribute sets contains a unique identifier known as a " "_Distinguished Name_ (DN) which is normally built from several other " "attributes such as the common or _Relative Distinguished Name_ (RDN). " "Similar to how directories have absolute and relative paths, consider a DN " "as an absolute path and the RDN as the relative path." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1286 msgid "" "An example LDAP entry looks like the following. This example searches for " "the entry for the specified user account (`uid`), organizational unit " "(`ou`), and organization (`o`):" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1297 #, no-wrap msgid "" "% ldapsearch -xb \"uid=trhodes,ou=users,o=example.com\"\n" "# extended LDIF\n" "#\n" "# LDAPv3\n" "# base with scope subtree\n" "# filter: (objectclass=*)\n" "# requesting: ALL\n" "#\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1304 #, no-wrap msgid "" "# trhodes, users, example.com\n" "dn: uid=trhodes,ou=users,o=example.com\n" "mail: trhodes@example.com\n" "cn: Tom Rhodes\n" "uid: trhodes\n" "telephoneNumber: (123) 456-7890\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1308 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1582 #, no-wrap msgid "" "# search result\n" "search: 2\n" "result: 0 Success\n" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1311 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1585 #, no-wrap msgid "" "# numResponses: 2\n" "# numEntries: 1\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1315 msgid "" "This example entry shows the values for the `dn`, `mail`, `cn`, `uid`, and " "`telephoneNumber` attributes. The `cn` attribute is the RDN." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1317 msgid "" "More information about LDAP and its terminology can be found at http://" "www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/" "intro.html]." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1319 #, no-wrap msgid "Configuring an LDAP Server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1323 msgid "" "FreeBSD does not provide a built-in LDAP server. Begin the configuration by " "installing the package:net/openldap-server[] package or port:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1327 #, no-wrap msgid "# pkg install openldap-server\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1332 msgid "" "There is a large set of default options enabled in the extref:{linux-users}" "[package, software]. Review them by running `pkg info openldap-server`. If " "they are not sufficient (for example if SQL support is needed), please " "consider recompiling the port using the appropriate crossref:ports[ports-" "using,framework]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1335 msgid "" "The installation creates the directory [.filename]#/var/db/openldap-data# to " "hold the data. 
The directory to store the certificates must be created:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1339 #, no-wrap msgid "# mkdir /usr/local/etc/openldap/private\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1346 msgid "" "The next phase is to configure the Certificate Authority. The following " "commands must be executed from [.filename]#/usr/local/etc/openldap/" "private#. This is important as the file permissions need to be restrictive " "and users should not have access to these files. More detailed information " "about certificates and their parameters can be found in " "crossref:security[openssl,\"OpenSSL\"]. To create the Certificate " "Authority, start with this command and follow the prompts:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1350 #, no-wrap msgid "# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1355 msgid "" "The entries for the prompts may be generic _except_ for the `Common Name`. " "This entry must be _different_ than the system hostname. If this will be a " "self signed certificate, prefix the hostname with `CA` for Certificate " "Authority." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1358 msgid "" "The next task is to create a certificate signing request and a private key. " "Input this command and follow the prompts:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1362 #, no-wrap msgid "# openssl req -days 365 -nodes -new -keyout server.key -out server.csr\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1366 msgid "" "During the certificate generation process, be sure to correctly set the " "`Common Name` attribute. The Certificate Signing Request must be signed " "with the Certificate Authority in order to be used as a valid certificate:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1370 #, no-wrap msgid "# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1373 msgid "" "The final part of the certificate generation process is to generate and sign " "the client certificates:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1378 #, no-wrap msgid "" "# openssl req -days 365 -nodes -new -keyout client.key -out client.csr\n" "# openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1382 msgid "" "Remember to use the same `Common Name` attribute when prompted. When " "finished, ensure that a total of eight (8) new files have been generated " "through the preceding commands." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1385 msgid "" "The daemon running the OpenLDAP server is [.filename]#slapd#. Its " "configuration is performed through [.filename]#slapd.ldif#: the old " "[.filename]#slapd.conf# has been deprecated by OpenLDAP." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1392 msgid "" "http://www.openldap.org/doc/admin24/slapdconf2.html[Configuration examples] " "for [.filename]#slapd.ldif# are available and can also be found in " "[.filename]#/usr/local/etc/openldap/slapd.ldif.sample#. Options are " "documented in slapd-config(5). Each section of [.filename]#slapd.ldif#, " "like all the other LDAP attribute sets, is uniquely identified through a " "DN. Be sure that no blank lines are left between the `dn:` statement and " "the desired end of the section. In the following example, TLS will be used " "to implement a secure channel. The first section represents the global " "configuration:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1414 #, no-wrap msgid "" "#\n" "# See slapd-config(5) for details on configuration options.\n" "# This file should NOT be world readable.\n" "#\n" "dn: cn=config\n" "objectClass: olcGlobal\n" "cn: config\n" "#\n" "#\n" "# Define global ACLs to disable default read access.\n" "#\n" "olcArgsFile: /var/run/openldap/slapd.args\n" "olcPidFile: /var/run/openldap/slapd.pid\n" "olcTLSCertificateFile: /usr/local/etc/openldap/server.crt\n" "olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key\n" "olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt\n" "#olcTLSCipherSuite: HIGH\n" "olcTLSProtocolMin: 3.1\n" "olcTLSVerifyClient: never\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1419 msgid "" "The Certificate Authority, server certificate and server private key files " "must be specified here. It is recommended to let the clients choose the " "security cipher and omit option `olcTLSCipherSuite` (incompatible with TLS " "clients other than [.filename]#openssl#). Option `olcTLSProtocolMin` lets " "the server require a minimum security level: it is recommended. 
While " "verification is mandatory for the server, it is not for the client: " "`olcTLSVerifyClient: never`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1421 msgid "" "The second section is about the backend modules and can be configured as " "follows:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1437 #, no-wrap msgid "" "#\n" "# Load dynamic backend modules:\n" "#\n" "dn: cn=module,cn=config\n" "objectClass: olcModuleList\n" "cn: module\n" "olcModulepath:\t/usr/local/libexec/openldap\n" "olcModuleload:\tback_mdb.la\n" "#olcModuleload:\tback_bdb.la\n" "#olcModuleload:\tback_hdb.la\n" "#olcModuleload:\tback_ldap.la\n" "#olcModuleload:\tback_passwd.la\n" "#olcModuleload:\tback_shell.la\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1440 msgid "" "The third section is devoted to load the needed `ldif` schemas to be used by " "the databases: they are essential." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1446 #, no-wrap msgid "" "dn: cn=schema,cn=config\n" "objectClass: olcSchemaConfig\n" "cn: schema\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1451 #, no-wrap msgid "" "include: file:///usr/local/etc/openldap/schema/core.ldif\n" "include: file:///usr/local/etc/openldap/schema/cosine.ldif\n" "include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif\n" "include: file:///usr/local/etc/openldap/schema/nis.ldif\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1454 msgid "Next, the frontend configuration section:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1488 #, no-wrap msgid "" "# Frontend settings\n" "#\n" "dn: olcDatabase={-1}frontend,cn=config\n" "objectClass: olcDatabaseConfig\n" "objectClass: olcFrontendConfig\n" "olcDatabase: {-1}frontend\n" "olcAccess: to * by * read\n" "#\n" "# Sample global access control policy:\n" "#\tRoot DSE: allow anyone to read it\n" "#\tSubschema (sub)entry DSE: allow anyone to read it\n" "#\tOther DSEs:\n" "#\t\tAllow self write access\n" "#\t\tAllow authenticated users read access\n" "#\t\tAllow anonymous users to authenticate\n" "#\n" "#olcAccess: to dn.base=\"\" by * read\n" "#olcAccess: to dn.base=\"cn=Subschema\" by * read\n" "#olcAccess: to *\n" "#\tby self write\n" "#\tby users read\n" "#\tby anonymous auth\n" "#\n" "# if no access controls are present, the default policy\n" "# allows anyone and everyone to read anything but restricts\n" "# updates to rootdn. (e.g., \"access to * by * read\")\n" "#\n" "# rootdn can always read and write EVERYTHING!\n" "#\n" "olcPasswordHash: {SSHA}\n" "# {SSHA} is already the default for olcPasswordHash\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1491 msgid "" "Another section is devoted to the _configuration backend_; the only way to " "later access the OpenLDAP server configuration is as a global super-user." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1499 #, no-wrap msgid "" "dn: olcDatabase={0}config,cn=config\n" "objectClass: olcDatabaseConfig\n" "olcDatabase: {0}config\n" "olcAccess: to * by * none\n" "olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1504 msgid "" "The default administrator username is `cn=config`. Type " "[.filename]#slappasswd# in a shell, choose a password and use its hash in " "`olcRootPW`. 
If this option is not specified now, before " "[.filename]#slapd.ldif# is imported, no one will later be able to modify " "the _global configuration_ section." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1506 msgid "The last section is about the database backend:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1530 #, no-wrap msgid "" "#######################################################################\n" "# LMDB database definitions\n" "#######################################################################\n" "#\n" "dn: olcDatabase=mdb,cn=config\n" "objectClass: olcDatabaseConfig\n" "objectClass: olcMdbConfig\n" "olcDatabase: mdb\n" "olcDbMaxSize: 1073741824\n" "olcSuffix: dc=domain,dc=example\n" "olcRootDN: cn=mdbadmin,dc=domain,dc=example\n" "# Cleartext passwords, especially for the rootdn, should\n" "# be avoided. See slappasswd(8) and slapd-config(5) for details.\n" "# Use of strong authentication encouraged.\n" "olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+\n" "# The database directory MUST exist prior to running slapd AND\n" "# should only be accessible by the slapd and slap tools.\n" "# Mode 700 recommended.\n" "olcDbDirectory:\t/var/db/openldap-data\n" "# Indices to maintain\n" "olcDbIndex: objectClass eq\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1535 msgid "" "This database hosts the _actual contents_ of the LDAP directory. Types " "other than `mdb` are available. Its super-user, not to be confused with the " "global one, is configured here: a (possibly custom) username in `olcRootDN` " "and the password hash in `olcRootPW`; [.filename]#slappasswd# can be used as " "before." msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1538 msgid "" "This http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/" "data/regressions/" "its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;hb=HEAD[repository] " "contains four examples of [.filename]#slapd.ldif#. To convert an existing " "[.filename]#slapd.conf# into [.filename]#slapd.ldif#, refer to http://" "www.openldap.org/doc/admin24/slapdconf2.html[this page] (please note that " "this may introduce some unneeded options)." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1541 msgid "" "When the configuration is completed, [.filename]#slapd.ldif# must be placed " "in an empty directory. It is recommended to create it as:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1545 #, no-wrap msgid "# mkdir /usr/local/etc/openldap/slapd.d/\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1548 msgid "Import the configuration database:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1552 #, no-wrap msgid "# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1555 msgid "Start the [.filename]#slapd# daemon:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1559 #, no-wrap msgid "# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1563 msgid "" "Option `-d` can be used for debugging, as specified in slapd(8). To verify " "that the server is running and working:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1574 #, no-wrap msgid "" "# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts\n" "# extended LDIF\n" "#\n" "# LDAPv3\n" "# base <> with scope baseObject\n" "# filter: (objectclass=*)\n" "# requesting: namingContexts\n" "#\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1578 #, no-wrap msgid "" "#\n" "dn:\n" "namingContexts: dc=domain,dc=example\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1590 msgid "" "The server must still be trusted. If that has never been done before, " "follow these instructions. Install the OpenSSL package or port:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1594 #, no-wrap msgid "# pkg install openssl\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1597 msgid "" "From the directory where [.filename]#ca.crt# is stored (in this example, " "[.filename]#/usr/local/etc/openldap#), run:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1601 #, no-wrap msgid "# c_rehash .\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1605 msgid "" "Both the CA and the server certificate are now correctly recognized in their " "respective roles. To verify this, run this command from the " "[.filename]#server.crt# directory:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1609 #, no-wrap msgid "# openssl verify -verbose -CApath . server.crt\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1613 msgid "" "If [.filename]#slapd# was running, restart it. 
As stated in [.filename]#/" "usr/local/etc/rc.d/slapd#, to properly run [.filename]#slapd# at boot the " "following lines must be added to [.filename]#/etc/rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1621 #, no-wrap msgid "" "slapd_enable=\"YES\"\n" "slapd_flags='-h \"ldapi://%2fvar%2frun%2fopenldap%2fldapi/\n" "ldap://0.0.0.0/\"'\n" "slapd_sockets=\"/var/run/openldap/ldapi\"\n" "slapd_cn_config=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1625 msgid "" "[.filename]#slapd# does not provide debugging at boot. Check [.filename]#/" "var/log/debug.log#, [.filename]#dmesg -a# and [.filename]#/var/log/messages# " "for this purpose." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1628 msgid "" "The following example adds the group `team` and the user `john` to the " "`domain.example` LDAP database, which is still empty. First, create the " "file [.filename]#domain.ldif#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1637 #, no-wrap msgid "" "# cat domain.ldif\n" "dn: dc=domain,dc=example\n" "objectClass: dcObject\n" "objectClass: organization\n" "o: domain.example\n" "dc: domain\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1642 #, no-wrap msgid "" "dn: ou=groups,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: organizationalunit\n" "ou: groups\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1647 #, no-wrap msgid "" "dn: ou=users,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: organizationalunit\n" "ou: users\n" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1653 #, no-wrap msgid "" "dn: cn=team,ou=groups,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: posixGroup\n" "cn: team\n" "gidNumber: 10001\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1666 #, no-wrap msgid "" "dn: uid=john,ou=users,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: account\n" "objectClass: posixAccount\n" "objectClass: shadowAccount\n" "cn: John McUser\n" "uid: john\n" "uidNumber: 10001\n" "gidNumber: 10001\n" "homeDirectory: /home/john/\n" "loginShell: /usr/bin/bash\n" "userPassword: secret\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1672 msgid "" "See the OpenLDAP documentation for more details. Use " "[.filename]#slappasswd# to replace the plain text password `secret` with a " "hash in `userPassword`. The path specified as `loginShell` must exist in " "all the systems where `john` is allowed to login. Finally, use the `mdb` " "administrator to modify the database:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1676 #, no-wrap msgid "# ldapadd -W -D \"cn=mdbadmin,dc=domain,dc=example\" -f domain.ldif\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1681 msgid "" "Modifications to the _global configuration_ section can only be performed by " "the global super-user. For example, assume that the option " "`olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` was initially specified and must now " "be deleted. First, create a file that contains the following:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1688 #, no-wrap msgid "" "# cat global_mod\n" "dn: cn=config\n" "changetype: modify\n" "delete: olcTLSCipherSuite\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1691 msgid "Then, apply the modifications:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1695 #, no-wrap msgid "# ldapmodify -f global_mod -x -D \"cn=config\" -W\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1700 msgid "" "When asked, provide the password chosen in the _configuration backend_ " "section. The username is not required: here, `cn=config` represents the DN " "of the database section to be modified. Alternatively, use `ldapmodify` to " "delete a single line of the database, or `ldapdelete` to delete a whole " "entry." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1702 msgid "" "If something goes wrong, or if the global super-user cannot access the " "configuration backend, it is possible to delete and re-write the whole " "configuration:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1706 #, no-wrap msgid "# rm -rf /usr/local/etc/openldap/slapd.d/\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1710 msgid "" "[.filename]#slapd.ldif# can then be edited and imported again. Please " "follow this procedure only when no other solution is available." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1713 msgid "" "This is the configuration of the server only. The same machine can also " "host an LDAP client, with its own separate configuration." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:1715 #, no-wrap msgid "Dynamic Host Configuration Protocol (DHCP)" msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1722 msgid "" "The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to " "a network in order to be assigned the necessary addressing information for " "communication on that network. FreeBSD includes the OpenBSD version of " "`dhclient` which is used by the client to obtain the addressing " "information. FreeBSD does not install a DHCP server, but several servers " "are available in the FreeBSD Ports Collection. The DHCP protocol is fully " "described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Informational " "resources are also available at http://www.isc.org/downloads/dhcp/[isc.org/" "downloads/dhcp/]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1725 msgid "" "This section describes how to use the built-in DHCP client. It then " "describes how to install and configure a DHCP server." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1731 msgid "" "In FreeBSD, the man:bpf[4] device is needed by both the DHCP server and DHCP " "client. This device is included in the [.filename]#GENERIC# kernel that is " "installed with FreeBSD. Users who prefer to create a custom kernel need to " "keep this device if DHCP is used." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1733 msgid "" "It should be noted that [.filename]#bpf# also allows privileged users to run " "network packet sniffers on that system." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1736 #, no-wrap msgid "Configuring a DHCP Client" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1740 msgid "" "DHCP client support is included in the FreeBSD installer, making it easy to " "configure a newly installed system to automatically receive its networking " "addressing information from an existing DHCP server. Refer to " "crossref:bsdinstall[bsdinstall-post,\"Accounts, Time Zone, Services and " "Hardening\"] for examples of network configuration." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1748 msgid "" "When `dhclient` is executed on the client machine, it begins broadcasting " "requests for configuration information. By default, these requests are " "sent from UDP port 68 to the server's UDP port 67, and the server replies " "to the client on UDP port 68, giving the client an IP address and other " "relevant network information such as a subnet mask, default gateway, and " "DNS server addresses. This information is in the form of a DHCP \"lease\" " "and is valid for a configurable time. This allows stale IP addresses for " "clients no longer connected to the network to automatically be reused. " "DHCP clients can obtain a great deal of information from the server. An " "exhaustive list may be found in man:dhcp-options[5]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1751 msgid "" "By default, when a FreeBSD system boots, its DHCP client runs in the " "background, or _asynchronously_. Other startup scripts continue to run " "while the DHCP process completes, which speeds up system startup." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1756 msgid "" "Background DHCP works well when the DHCP server responds quickly to the " "client's requests. However, DHCP may take a long time to complete on some " "systems. If network services attempt to run before DHCP has assigned the " "network addressing information, they will fail. 
Using DHCP in _synchronous_ " "mode prevents this problem as it pauses startup until the DHCP configuration " "has completed." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1758 msgid "" "This line in [.filename]#/etc/rc.conf# is used to configure background or " "asynchronous mode:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1762 #, no-wrap msgid "ifconfig_fxp0=\"DHCP\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1766 msgid "" "This line may already exist if the system was configured to use DHCP during " "installation. Replace the _fxp0_ shown in these examples with the name of " "the interface to be dynamically configured, as described in " "crossref:config[config-network-setup,“Setting Up Network Interface Cards”]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1768 msgid "" "To instead configure the system to use synchronous mode, and to pause during " "startup while DHCP completes, use \"`SYNCDHCP`\":" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1772 #, no-wrap msgid "ifconfig_fxp0=\"SYNCDHCP\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1776 msgid "" "Additional client options are available. Search for `dhclient` in " "man:rc.conf[5] for details." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1778 msgid "The DHCP client uses the following files:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1780 msgid "[.filename]#/etc/dhclient.conf#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1784 msgid "" "The configuration file used by `dhclient`. 
Typically, this file contains " "only comments as the defaults are suitable for most clients. This " "configuration file is described in man:dhclient.conf[5]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1785 msgid "[.filename]#/sbin/dhclient#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1787 msgid "" "More information about the command itself can be found in man:dhclient[8]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1788 msgid "[.filename]#/sbin/dhclient-script#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1791 msgid "" "The FreeBSD-specific DHCP client configuration script. It is described in " "man:dhclient-script[8], but should not need any user modification to " "function properly." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1792 msgid "[.filename]#/var/db/dhclient.leases.interface#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1794 msgid "" "The DHCP client keeps a database of valid leases in this file, which is " "written as a log and is described in man:dhclient.leases[5]." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1797 #, no-wrap msgid "Installing and Configuring a DHCP Server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1801 msgid "" "This section demonstrates how to configure a FreeBSD system to act as a DHCP " "server using the Internet Systems Consortium (ISC) implementation of the " "DHCP server. This implementation and its documentation can be installed " "using the package:net/isc-dhcp44-server[] package or port." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1804 msgid "" "The installation of package:net/isc-dhcp44-server[] installs a sample " "configuration file. Copy [.filename]#/usr/local/etc/dhcpd.conf.example# to " "[.filename]#/usr/local/etc/dhcpd.conf# and make any edits to this new file." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1807 msgid "" "The configuration file consists of declarations for subnets and hosts " "which define the information that is provided to DHCP clients. For " "example, these lines configure the following:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1813 #, no-wrap msgid "" "option domain-name \"example.org\";<.>\n" "option domain-name-servers ns1.example.org;<.>\n" "option subnet-mask 255.255.255.0;<.>\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1817 #, no-wrap msgid "" "default-lease-time 600;<.>\n" "max-lease-time 72400;<.>\n" "ddns-update-style none;<.>\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1822 #, no-wrap msgid "" "subnet 10.254.239.0 netmask 255.255.255.224 {\n" " range 10.254.239.10 10.254.239.20;<.>\n" " option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.>\n" "}\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1827 #, no-wrap msgid "" "host fantasia {\n" " hardware ethernet 08:00:07:26:c0:a5;<.>\n" " fixed-address fantasia.fugue.com;<.>\n" "}\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1830 msgid "" "This option specifies the default search domain that will be provided to " "clients. Refer to man:resolv.conf[5] for more information." msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1831 msgid "" "This option specifies a comma separated list of DNS servers that the client " "should use. They can be listed by their Fully Qualified Domain Names (FQDN), " "as seen in the example, or by their IP addresses." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1832 msgid "The subnet mask that will be provided to clients." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1833 msgid "" "The default lease expiry time in seconds. A client can be configured to " "override this value." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1834 msgid "" "The maximum allowed length of time, in seconds, for a lease. Should a client " "request a longer lease, a lease will still be issued, but it will only be " "valid for `max-lease-time`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1835 msgid "" "The default of `none` disables dynamic DNS updates. Changing this to " "`interim` configures the DHCP server to update a DNS server whenever it " "hands out a lease so that the DNS server knows which IP addresses are " "associated with which computers in the network. Do not change the default " "setting unless the DNS server has been configured to support dynamic DNS." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1836 msgid "" "This line creates a pool of available IP addresses which are reserved for " "allocation to DHCP clients. The range of addresses must be valid for the " "network or subnet specified in the previous line." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1837 msgid "" "Declares the default gateway that is valid for the network or subnet " "specified before the opening `{` bracket." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1838 msgid "" "Specifies the hardware MAC address of a client so that the DHCP server can " "recognize the client when it makes a request." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1839 msgid "" "Specifies that this host should always be given the same IP address. Using " "the hostname is correct, since the DHCP server will resolve the hostname " "before returning the lease information." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1842 msgid "" "This configuration file supports many more options. Refer to dhcpd.conf(5), " "installed with the server, for details and examples." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1844 msgid "" "Once the configuration of [.filename]#dhcpd.conf# is complete, enable the " "DHCP server in [.filename]#/etc/rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1849 #, no-wrap msgid "" "dhcpd_enable=\"YES\"\n" "dhcpd_ifaces=\"dc0\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1852 msgid "" "Replace the `dc0` with the interface (or interfaces, separated by " "whitespace) that the DHCP server should listen on for DHCP client requests." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1854 msgid "Start the server by issuing the following command:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1858 #, no-wrap msgid "# service isc-dhcpd start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1861 msgid "" "Any future changes to the configuration of the server will require the dhcpd " "service to be stopped and then started using man:service[8]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1864 msgid "" "The DHCP server uses the following files. Note that the manual pages are " "installed with the server software." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1866 msgid "[.filename]#/usr/local/sbin/dhcpd#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1868 msgid "More information about the dhcpd server can be found in dhcpd(8)." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1869 msgid "[.filename]#/usr/local/etc/dhcpd.conf#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1872 msgid "" "The server configuration file needs to contain all the information that " "should be provided to clients, along with information regarding the " "operation of the server. This configuration file is described in " "dhcpd.conf(5)." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1873 msgid "[.filename]#/var/db/dhcpd.leases#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1876 msgid "" "The DHCP server keeps a database of leases it has issued in this file, which " "is written as a log. Refer to dhcpd.leases(5), which gives a slightly " "longer description." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1877 msgid "[.filename]#/usr/local/sbin/dhcrelay#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1881 msgid "" "This daemon is used in advanced environments where one DHCP server forwards " "a request from a client to another DHCP server on a separate network. If " "this functionality is required, install the package:net/isc-dhcp44-relay[] " "package or port. The installation includes dhcrelay(8) which provides more " "detail." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:1884 #, no-wrap msgid "Domain Name System (DNS)" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1889 msgid "" "Domain Name System (DNS) is the protocol through which domain names are " "mapped to IP addresses, and vice versa. DNS is coordinated across the " "Internet through a somewhat complex system of authoritative root, Top Level " "Domain (TLD), and other smaller-scale name servers, which host and cache " "individual domain information. It is not necessary to run a name server to " "perform DNS lookups on a system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1891 msgid "The following table describes some of the terms associated with DNS:" msgstr "" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:1892 #, no-wrap msgid "DNS Terminology" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1898 #, no-wrap msgid "Definition" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1899 #, no-wrap msgid "Forward DNS" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1901 #, no-wrap msgid "Mapping of hostnames to IP addresses." 
msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1902 #, no-wrap msgid "Origin" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1904 #, no-wrap msgid "Refers to the domain covered in a particular zone file." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1905 #, no-wrap msgid "Resolver" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1907 #, no-wrap msgid "A system process through which a machine queries a name server for zone information." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1908 #, no-wrap msgid "Reverse DNS" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1910 #, no-wrap msgid "Mapping of IP addresses to hostnames." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1911 #, no-wrap msgid "Root zone" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1913 #, no-wrap msgid "The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory." msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1914 #, no-wrap msgid "Zone" msgstr "" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1915 #, no-wrap msgid "An individual domain, subdomain, or portion of the DNS administered by the same authority." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1918 msgid "Examples of zones:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1920 msgid "`.` is how the root zone is usually referred to in documentation." msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1921 msgid "`org.` is a Top Level Domain (TLD) under the root zone." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1922 msgid "`example.org.` is a zone under the `org.` TLD." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1923 msgid "" "`1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall " "under the `192.168.1.*` IP address space." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1927 msgid "" "As one can see, the more specific part of a hostname appears to its left. " "For example, `example.org.` is more specific than `org.`, just as `org.` is " "more specific than the root zone. The layout of each part of a hostname is " "much like a file system: the [.filename]#/dev# directory falls within the " "root, and so on." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1928 #, no-wrap msgid "Reasons to Run a Name Server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1931 msgid "" "Name servers generally come in two forms: authoritative name servers, and " "caching (also known as resolving) name servers." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1933 msgid "An authoritative name server is needed when:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1935 msgid "" "One wants to serve DNS information to the world, replying authoritatively to " "queries." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1936 msgid "" "A domain, such as `example.org`, is registered and IP addresses need to be " "assigned to hostnames under it." msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1937 msgid "An IP address block requires reverse DNS entries (IP to hostname)." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1938 msgid "A backup or second name server, called a slave, will reply to queries." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1940 msgid "A caching name server is needed when:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1942 msgid "" "A local DNS server may cache and respond more quickly than querying an " "outside name server." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1946 msgid "" "When one queries for `www.FreeBSD.org`, the resolver usually queries the " "uplink ISP's name server, and retrieves the reply. With a local, caching " "DNS server, the query only has to be made once to the outside world by the " "caching DNS server. Additional queries will not have to go outside the " "local network, since the information is cached locally." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1947 #, no-wrap msgid "DNS Server Configuration" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1952 msgid "" "Unbound is provided in the FreeBSD base system. By default, it will provide " "DNS resolution to the local machine only. While the base system package can " "be configured to provide resolution services beyond the local machine, it is " "recommended that such requirements be addressed by installing Unbound from " "the FreeBSD Ports Collection." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1954 msgid "To enable Unbound, add the following to [.filename]#/etc/rc.conf#:" msgstr "" #. 
type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1958 #, no-wrap msgid "local_unbound_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1961 msgid "" "Any existing nameservers in [.filename]#/etc/resolv.conf# will be configured " "as forwarders in the new Unbound configuration." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1967 msgid "" "If any of the listed nameservers do not support DNSSEC, local DNS resolution " "will fail. Be sure to test each nameserver and remove any that fail the " "test. The following command will show the trust tree or a failure for a " "nameserver running on `192.168.1.1`:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1971 #, no-wrap msgid "% drill -S FreeBSD.org @192.168.1.1\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1975 msgid "Once each nameserver is confirmed to support DNSSEC, start Unbound:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1979 #, no-wrap msgid "# service local_unbound onestart\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1983 msgid "" "This will take care of updating [.filename]#/etc/resolv.conf# so that " "queries for DNSSEC secured domains will now work. For example, run the " "following to validate the FreeBSD.org DNSSEC trust tree:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1989 #, no-wrap msgid "" "% drill -S FreeBSD.org\n" ";; Number of trusted keys: 1\n" ";; Chasing: freebsd.org. A\n" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2005 #, no-wrap msgid "" "DNSSEC Trust tree:\n" "freebsd.org. (A)\n" "|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)\n" " |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)\n" " |---freebsd.org. (DS keytag: 32659 digest type: 2)\n" " |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)\n" " |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)\n" " |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)\n" " |---org. (DS keytag: 21366 digest type: 1)\n" " | |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)\n" " | |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)\n" " |---org. (DS keytag: 21366 digest type: 2)\n" " |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)\n" " |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)\n" ";; Chase successful\n" msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2007 #, no-wrap msgid "Authoritative Name Server Configuration" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2010 msgid "" "FreeBSD does not provide authoritative name server software in the base " "system. Users are encouraged to install third party applications, like " "package:dns/nsd[] or package:dns/bind918[] package or port." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2012 #, no-wrap msgid "Zero-configuration Networking (mDNS/DNS-SD)" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2016 msgid "" "https://en.wikipedia.org/wiki/Zero-configuration_networking[Zero-" "configuration networking] (sometimes referred to as _Zeroconf_) is a set of " "technologies, which simplify network configuration. The main parts of " "Zeroconf are:" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2018 msgid "" "Link-Local Addressing providing automatic assignment of numeric network " "addresses." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2019 msgid "" "Multicast DNS (_mDNS_) providing automatic distribution and resolution of " "hostnames." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2020 msgid "" "DNS-Based Service Discovery (_DNS-SD_) providing automatic discovery of " "service instances." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2021 #, no-wrap msgid "Configuring and Starting Avahi" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2025 msgid "" "One of the popular implementations of zeroconf is https://avahi.org/" "[Avahi]. Avahi can be installed and configured with the following commands:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2034 #, no-wrap msgid "" "# pkg install avahi-app nss_mdns\n" "# grep -q '^hosts:.*\\<mdns\\>' /etc/nsswitch.conf || sed -i \"\" 's/^hosts: .*/& mdns/' /etc/nsswitch.conf\n" "# service dbus enable\n" "# service avahi-daemon enable\n" "# service dbus start\n" "# service avahi-daemon start\n" msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2037 #, no-wrap msgid "Apache HTTP Server" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2041 msgid "" "The open source Apache HTTP Server is the most widely used web server. " "FreeBSD does not install this web server by default, but it can be installed " "from the package:www/apache24[] package or port." msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2044 msgid "" "This section summarizes how to configure and start version 2._x_ of the " "Apache HTTP Server on FreeBSD. For more detailed information about Apache " "2.X and its configuration directives, refer to http://httpd.apache.org/" "[httpd.apache.org]." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2045 #, no-wrap msgid "Configuring and Starting Apache" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2050 msgid "" "In FreeBSD, the main Apache HTTP Server configuration file is installed as " "[.filename]#/usr/local/etc/apache2x/httpd.conf#, where _x_ represents the " "version number. This ASCII text file begins comment lines with a `+#+`. " "The most frequently modified directives are:" msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2051 #, no-wrap msgid "`ServerRoot \"/usr/local\"`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2054 msgid "" "Specifies the default directory hierarchy for the Apache installation. " "Binaries are stored in the [.filename]#bin# and [.filename]#sbin# " "subdirectories of the server root and configuration files are stored in the " "[.filename]#etc/apache2x# subdirectory." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2055 #, no-wrap msgid "`ServerAdmin \\you@example.com`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2058 msgid "" "Change this to the email address to receive problems with the server. This " "address also appears on some server-generated pages, such as error documents." msgstr "" #. 
type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2059 #, no-wrap msgid "`ServerName www.example.com:80`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2064 msgid "" "Allows an administrator to set a hostname which is sent back to clients for " "the server. For example, `www` can be used instead of the actual hostname. " "If the system does not have a registered DNS name, enter its IP address " "instead. If the server will listen on an alternate port, change `80` to " "the alternate port number." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2065 #, no-wrap msgid "`DocumentRoot \"/usr/local/www/apache2_x_/data\"`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2068 msgid "" "The directory where documents will be served from. By default, all requests " "are taken from this directory, but symbolic links and aliases may be used to " "point to other locations." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2072 msgid "" "It is always a good idea to make a backup copy of the default Apache " "configuration file before making changes. When the configuration of Apache " "is complete, save the file and verify the configuration using `apachectl`. " "Running `apachectl configtest` should return `Syntax OK`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2074 msgid "" "To launch Apache at system startup, add the following line to [.filename]#/" "etc/rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2078 #, no-wrap msgid "apache24_enable=\"YES\"\n" msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2081 msgid "" "If Apache should be started with non-default options, the following line may " "be added to [.filename]#/etc/rc.conf# to specify the needed flags:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2085 #, no-wrap msgid "apache24_flags=\"\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2088 msgid "If apachectl does not report configuration errors, start `httpd` now:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2092 #, no-wrap msgid "# service apache24 start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2096 msgid "" "The `httpd` service can be tested by entering `http://_localhost_` in a web " "browser, replacing _localhost_ with the fully-qualified domain name of the " "machine running `httpd`. The default web page that is displayed is " "[.filename]#/usr/local/www/apache24/data/index.html#." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2098 msgid "" "The Apache configuration can be tested for errors after making subsequent " "configuration changes while `httpd` is running using the following command:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2102 #, no-wrap msgid "# service apache24 configtest\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2107 msgid "" "It is important to note that `configtest` is not a man:rc[8] standard, and " "should not be expected to work for all startup scripts." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2109 #, no-wrap msgid "Virtual Hosting" msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2115 msgid "" "Virtual hosting allows multiple websites to run on one Apache server. The " "virtual hosts can be _IP-based_ or _name-based_. IP-based virtual hosting " "uses a different IP address for each website. Name-based virtual hosting " "uses the client's HTTP/1.1 headers to figure out the hostname, which allows " "the websites to share the same IP address." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2118 msgid "" "To set up Apache to use name-based virtual hosting, add a `VirtualHost` block " "for each website. For example, for the webserver named `www.domain.tld` " "with a virtual domain of `www.someotherdomain.tld`, add the following " "entries to [.filename]#httpd.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2125 #, no-wrap msgid "" "<VirtualHost *:80>\n" " ServerName www.domain.tld\n" " DocumentRoot /www/domain.tld\n" "</VirtualHost>\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2130 #, no-wrap msgid "" "<VirtualHost *:80>\n" " ServerName www.someotherdomain.tld\n" " DocumentRoot /www/someotherdomain.tld\n" "</VirtualHost>\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2133 msgid "" "For each virtual host, replace the values for `ServerName` and " "`DocumentRoot` with the values to be used." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2135 msgid "" "For more information about setting up virtual hosts, consult the official " "Apache documentation at: http://httpd.apache.org/docs/vhosts/[http://" "httpd.apache.org/docs/vhosts/]." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2136 #, no-wrap msgid "Apache Modules" msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2140 msgid "" "Apache uses modules to augment the functionality provided by the basic " "server. Refer to http://httpd.apache.org/docs/current/mod/[http://" "httpd.apache.org/docs/current/mod/] for a complete listing of and the " "configuration details for the available modules." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2145 msgid "" "In FreeBSD, some modules can be compiled with the package:www/apache24[] " "port. Type `make config` within [.filename]#/usr/ports/www/apache24# to see " "which modules are available and which are enabled by default. If the module " "is not compiled with the port, the FreeBSD Ports Collection provides an easy " "way to install many modules. This section describes three of the most " "commonly used modules." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2146 #, no-wrap msgid "SSL support" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2153 msgid "" "At one point, support for SSL inside of Apache required a secondary module " "called [.filename]#mod_ssl#. This is no longer the case and the default " "install of Apache comes with SSL built into the web server. An example of " "how to enable support for SSL websites is available in the installed file, " "[.filename]#httpd-ssl.conf# inside of the [.filename]#/usr/local/etc/" "apache24/extra# directory. Inside this directory is also a sample file " "named [.filename]#ssl.conf-sample#. It is recommended that both files be " "evaluated to properly set up secure websites in the Apache web server." msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2155 msgid "" "After the configuration of SSL is complete, the following line must be " "uncommented in the main [.filename]#httpd.conf# to activate the changes on " "the next restart or reload of Apache:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2159 #, no-wrap msgid "#Include etc/apache24/extra/httpd-ssl.conf\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2166 msgid "" "SSL version two and version three have known vulnerability issues. It is " "highly recommended TLS versions 1.2 and 1.3 be enabled in place of the older " "SSL options. This can be accomplished by setting the following options in " "the [.filename]#ssl.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2173 #, no-wrap msgid "" "SSLProtocol all -SSLv3 -SSLv2 +TLSv1.2 +TLSv1.3\n" "SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2176 msgid "" "To complete the configuration of SSL in the web server, uncomment the " "following line to ensure that the configuration will be pulled into Apache " "during restart or reload:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2181 #, no-wrap msgid "" "# Secure (SSL/TLS) connections\n" "Include etc/apache24/extra/httpd-ssl.conf\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2184 msgid "" "The following lines must also be uncommented in the [.filename]#httpd.conf# " "to fully support SSL in Apache:" msgstr "" #. type: delimited block .
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2190 #, no-wrap msgid "" "LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so\n" "LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so\n" "LoadModule ssl_module libexec/apache24/mod_ssl.so\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2194 msgid "" "The next step is to work with a certificate authority to have the " "appropriate certificates installed on the system. This will set up a chain " "of trust for the site and prevent any warnings of self-signed certificates." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2195 #, no-wrap msgid "[.filename]#mod_perl#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2199 msgid "" "The [.filename]#mod_perl# module makes it possible to write Apache modules " "in Perl. In addition, the persistent interpreter embedded in the server " "avoids the overhead of starting an external interpreter and the penalty of " "Perl start-up time." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2202 msgid "" "The [.filename]#mod_perl# can be installed using the package:www/mod_perl2[] " "package or port. Documentation for using this module can be found at http://" "perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/" "index.html]." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2203 #, no-wrap msgid "[.filename]#mod_php#" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2207 msgid "" "_PHP: Hypertext Preprocessor_ (PHP) is a general-purpose scripting language " "that is especially suited for web development. 
Capable of being embedded " "into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of " "allowing web developers to write dynamically generated webpages quickly." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2209 msgid "" "Support for PHP for Apache and any other feature written in the language, " "can be added by installing the appropriate port." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2211 msgid "For all supported versions, search the package database using `pkg`:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2215 #, no-wrap msgid "# pkg search php\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2220 msgid "" "A list will be displayed including the versions and additional features they " "provide. The components are completely modular, meaning features are " "enabled by installing the appropriate port. To install PHP version 7.4 for " "Apache, issue the following command:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2224 #, no-wrap msgid "# pkg install mod_php74\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2227 msgid "" "If any dependency packages need to be installed, they will be installed as " "well." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2230 msgid "" "By default, PHP will not be enabled. The following lines will need to be " "added to the Apache configuration file located in [.filename]#/usr/local/etc/" "apache24# to make it active:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2239 #, no-wrap msgid "" "<FilesMatch \"\\.php$\">\n" " SetHandler application/x-httpd-php\n" "</FilesMatch>\n" "<FilesMatch \"\\.phps$\">\n" " SetHandler application/x-httpd-php-source\n" "</FilesMatch>\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2242 msgid "" "In addition, the `DirectoryIndex` in the configuration file will also need " "to be updated and Apache will either need to be restarted or reloaded for " "the changes to take effect." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2245 msgid "" "Support for many of the PHP features may also be installed by using `pkg`. " "For example, to install support for XML or SSL, install their respective " "ports:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2249 #, no-wrap msgid "# pkg install php74-xml php74-openssl\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2252 msgid "" "As before, the Apache configuration will need to be reloaded for the changes " "to take effect, even in cases where it was just a module install." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2254 msgid "" "To perform a graceful restart to reload the configuration, issue the " "following command:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2258 #, no-wrap msgid "# apachectl graceful\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2262 msgid "" "Once the install is complete, there are two methods of obtaining the " "installed PHP support modules and the environmental information of the " "build. The first is to install the full PHP binary and run the command " "to obtain the information:" msgstr "" #. type: delimited block .
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2266 #, no-wrap msgid "# pkg install php74\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2272 #, no-wrap msgid "# php -i | less\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2275 msgid "" "It is necessary to pass the output to a pager, such as `more` or `less`, to " "more easily digest the amount of output." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2279 msgid "" "Finally, to make any changes to the global configuration of PHP there is a " "well documented file installed into [.filename]#/usr/local/etc/php.ini#. At " "the time of install, this file will not exist because there are two versions " "to choose from, one is [.filename]#php.ini-development# and the other is " "[.filename]#php.ini-production#. These are starting points to assist " "administrators in their deployment." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2280 #, no-wrap msgid "HTTP2 Support" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2285 msgid "" "Apache support for the HTTP2 protocol is included by default when installing " "the port with `pkg`. The new version of HTTP includes many improvements " "over the previous version, including utilizing a single connection to a " "website, reducing overall roundtrips of TCP connections. Also, packet " "header data is compressed and HTTP2 requires encryption by default." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2288 msgid "" "When Apache is configured to only use HTTP2, web browsers will require " "secure, encrypted HTTPS connections.
When Apache is configured to use both " "versions, HTTP1.1 will be considered a fallback option if any issues arise " "during the connection." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2291 msgid "" "While this change does require administrators to make changes, they are " "positive and equate to a more secure Internet for everyone. The changes are " "only required for sites not currently implementing SSL and TLS." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2296 msgid "" "This configuration depends on the previous sections, including TLS support. " "It is recommended those instructions be followed before continuing with this " "configuration." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2299 msgid "" "Start the process by enabling the http2 module by uncommenting the line in " "[.filename]#/usr/local/etc/apache24/httpd.conf# and replacing the mpm_prefork " "module with mpm_event, as the former does not support HTTP2." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2304 #, no-wrap msgid "" "LoadModule http2_module libexec/apache24/mod_http2.so\n" "LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2312 msgid "" "There is a separate [.filename]#mod_http2# port that is available. It " "exists to deliver security and bug fixes quicker than the module installed " "with the bundled [.filename]#apache24# port. It is not required for HTTP2 " "support but is available. When installed, the [.filename]#mod_h2.so# should " "be used in place of [.filename]#mod_http2.so# in the Apache configuration." msgstr "" #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2316 msgid "" "There are two methods to implement HTTP2 in Apache: globally for all sites " "and each VirtualHost running on the system, or for individual VirtualHosts. " "To enable HTTP2 globally, add the following line under the ServerName " "directive:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2320 #, no-wrap msgid "Protocols h2 http/1.1\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2325 msgid "" "To enable HTTP2 over plaintext, use `h2 h2c http/1.1` in the " "[.filename]#httpd.conf#." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2329 msgid "" "Having the h2c here will allow plaintext HTTP2 data to pass on the system " "but is not recommended. In addition, using the http/1.1 here will allow " "fallback to the HTTP1.1 version of the protocol should it be needed by the " "system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2331 msgid "" "To enable HTTP2 for individual VirtualHosts, add the same line within the " "VirtualHost directive in either [.filename]#httpd.conf# or [.filename]#httpd-" "ssl.conf#." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2333 msgid "" "Reload the configuration using the `apachectl`[parameter]#reload# command " "and test the configuration by using either of the following methods after " "visiting one of the hosted pages:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2337 #, no-wrap msgid "# grep \"HTTP/2.0\" /var/log/httpd-access.log\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2340 msgid "This should return something similar to the following:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2347 #, no-wrap msgid "" "192.168.1.205 - - [18/Oct/2020:18:34:36 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.2.205 - - [18/Oct/2020:19:19:57 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.0.205 - - [18/Oct/2020:19:20:52 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.2.205 - - [18/Oct/2020:19:23:10 -0400] \"GET / HTTP/2.0\" 304 -\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2350 msgid "" "The other method is using the web browser's built-in site debugger or " "`tcpdump`; however, using either method is beyond the scope of this document." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2353 msgid "" "Support for HTTP2 reverse proxy connections is provided by the " "[.filename]#mod_proxy_http2.so# module. When configuring the ProxyPass or " "RewriteRules [P] statements, they should use h2:// for the connection." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2354 #, no-wrap msgid "Dynamic Websites" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2358 msgid "" "In addition to mod_perl and mod_php, other languages are available for " "creating dynamic web content. These include Django and Ruby on Rails." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2359 #, no-wrap msgid "Django" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2365 msgid "" "Django is a BSD-licensed framework designed to allow developers to write " "high-performance, elegant web applications quickly. 
It provides an object-" "relational mapper so that data types are developed as Python objects. A " "rich dynamic database-access API is provided for those objects without the " "developer ever having to write SQL. It also provides an extensible template " "system so that the logic of the application is separated from the HTML " "presentation." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2369 msgid "" "Django depends on [.filename]#mod_python# and an SQL database engine. In " "FreeBSD, the package:www/py-django[] port automatically installs " "[.filename]#mod_python# and supports the PostgreSQL, MySQL, or SQLite " "databases, with the default being SQLite. To change the database engine, " "type `make config` within [.filename]#/usr/ports/www/py-django#, then " "install the port." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2372 msgid "" "Once Django is installed, the application will need a project directory " "along with the Apache configuration in order to use the embedded Python " "interpreter. This interpreter is used to call the application for specific " "URLs on the site." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2374 msgid "" "To configure Apache to pass requests for certain URLs to the web " "application, add the following to [.filename]#httpd.conf#, specifying the " "full path to the project directory:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2385 #, no-wrap msgid "" "<Location \"/\">\n" " SetHandler python-program\n" " PythonPath \"['/dir/to/the/django/packages/'] + sys.path\"\n" " PythonHandler django.core.handlers.modpython\n" " SetEnv DJANGO_SETTINGS_MODULE mysite.settings\n" " PythonAutoReload On\n" " PythonDebug On\n" "</Location>\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2388 msgid "" "Refer to https://docs.djangoproject.com[https://docs.djangoproject.com] for " "more information on how to use Django." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2389 #, no-wrap msgid "Ruby on Rails" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2394 msgid "" "Ruby on Rails is another open source web framework that provides a full " "development stack. It is optimized to make web developers more productive " "and capable of writing powerful applications quickly. On FreeBSD, it can be " "installed using the package:www/rubygem-rails[] package or port." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2396 msgid "" "Refer to http://guides.rubyonrails.org[http://guides.rubyonrails.org] for " "more information on how to use Ruby on Rails." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2398 #, no-wrap msgid "File Transfer Protocol (FTP)" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2402 msgid "" "The File Transfer Protocol (FTP) provides users with a simple way to " "transfer files to and from an FTP server. FreeBSD includes FTP server " "software, ftpd, in the base system." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2406 msgid "" "FreeBSD provides several configuration files for controlling access to the " "FTP server. This section summarizes these files. Refer to man:ftpd[8] for " "more details about the built-in FTP server." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2407 #, no-wrap msgid "Configuration" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2413 msgid "" "The most important configuration step is deciding which accounts will be " "allowed access to the FTP server. A FreeBSD system has a number of system " "accounts which should not be allowed FTP access. The list of users " "disallowed any FTP access can be found in [.filename]#/etc/ftpusers#. By " "default, it includes system accounts. Additional users that should not be " "allowed access to FTP can be added." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2417 msgid "" "In some cases it may be desirable to restrict the access of some users " "without preventing them completely from using FTP. This can be accomplished " "by creating [.filename]#/etc/ftpchroot# as described in man:ftpchroot[5]. " "This file lists users and groups subject to FTP access restrictions." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2422 msgid "" "To enable anonymous FTP access to the server, create a user named `ftp` on " "the FreeBSD system. Users will then be able to log on to the FTP server " "with a username of `ftp` or `anonymous`. When prompted for the password, " "any input will be accepted, but by convention, an email address should be " "used as the password. The FTP server will call man:chroot[2] when an " "anonymous user logs in, to restrict access to only the home directory of the " "`ftp` user." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2427 msgid "" "There are two text files that can be created to specify welcome messages to " "be displayed to FTP clients. The contents of [.filename]#/etc/ftpwelcome# " "will be displayed to users before they reach the login prompt. After a " "successful login, the contents of [.filename]#/etc/ftpmotd# will be " "displayed. 
Note that the path to this file is relative to the login " "environment, so the contents of [.filename]#~ftp/etc/ftpmotd# would be " "displayed for anonymous users." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2429 msgid "" "Once the FTP server has been configured, set the appropriate variable in " "[.filename]#/etc/rc.conf# to start the service during boot:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2433 #, no-wrap msgid "ftpd_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2436 msgid "To start the service now:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2440 #, no-wrap msgid "# service ftpd start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2443 msgid "Test the connection to the FTP server by typing:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2447 #, no-wrap msgid "% ftp localhost\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2452 msgid "" "The ftpd daemon uses man:syslog[3] to log messages. By default, the system " "log daemon will write messages related to FTP in [.filename]#/var/log/" "xferlog#. The location of the FTP log can be modified by changing the " "following line in [.filename]#/etc/syslog.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2456 #, no-wrap msgid "ftp.info /var/log/xferlog\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2465 msgid "" "Be aware of the potential problems involved with running an anonymous FTP " "server. 
In particular, think twice about allowing anonymous users to upload " "files. It may turn out that the FTP site becomes a forum for the trade of " "unlicensed commercial software or worse. If anonymous FTP uploads are " "required, then verify the permissions so that these files cannot be read by " "other anonymous users until they have been reviewed by an administrator." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2468 #, no-wrap msgid "File and Print Services for Microsoft(R) Windows(R) Clients (Samba)" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2475 msgid "" "Samba is a popular open source software package that provides file and print " "services using the SMB/CIFS protocol. This protocol is built into " "Microsoft(R) Windows(R) systems. It can be added to non-Microsoft(R) " "Windows(R) systems by installing the Samba client libraries. The protocol " "allows clients to access shared data and printers. These shares can be " "mapped as a local disk drive and shared printers can be used as if they were " "local printers." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2478 msgid "" "On FreeBSD, the Samba client libraries can be installed using the " "package:net/samba416[] port or package. The client provides the ability for " "a FreeBSD system to access SMB/CIFS shares in a Microsoft(R) Windows(R) " "network." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2481 msgid "" "A FreeBSD system can also be configured to act as a Samba server by " "installing the same package:net/samba416[] port or package. This allows the " "administrator to create SMB/CIFS shares on the FreeBSD system which can be " "accessed by clients running Microsoft(R) Windows(R) or the Samba client " "libraries." msgstr "" #. 
type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2482 #, no-wrap msgid "Server Configuration" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2486 msgid "" "Samba is configured in [.filename]#/usr/local/etc/smb4.conf#. This file " "must be created before Samba can be used." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2489 msgid "" "A simple [.filename]#smb4.conf# to share directories and printers with " "Windows(R) clients in a workgroup is shown here. For more complex setups " "involving LDAP or Active Directory, it is easier to use man:samba-tool[8] to " "create the initial [.filename]#smb4.conf#." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2499 #, no-wrap msgid "" "[global]\n" "workgroup = WORKGROUP\n" "server string = Samba Server Version %v\n" "netbios name = ExampleMachine\n" "wins support = Yes\n" "security = user\n" "passdb backend = tdbsam\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2511 #, no-wrap msgid "" "# Example: share /usr/src accessible only to 'developer' user\n" "[src]\n" "path = /usr/src\n" "valid users = developer\n" "writable = yes\n" "browsable = yes\n" "read only = no\n" "guest ok = no\n" "public = no\n" "create mask = 0666\n" "directory mask = 0755\n" msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2514 #, no-wrap msgid "Global Settings" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2517 msgid "" "Settings that describe the network are added in [.filename]#/usr/local/etc/" "smb4.conf#:" msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2518 #, no-wrap msgid "`workgroup`" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2520 msgid "The name of the workgroup to be served." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2521 #, no-wrap msgid "`netbios name`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2524 msgid "" "The NetBIOS name by which a Samba server is known. By default, it is the " "same as the first component of the host's DNS name." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2525 #, no-wrap msgid "`server string`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2527 msgid "" "The string that will be displayed in the output of `net view` and some other " "networking tools that seek to display descriptive text about the server." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2528 #, no-wrap msgid "`wins support`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2531 msgid "" "Whether Samba will act as a WINS server. Do not enable support for WINS on " "more than one server on the network." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2533 #, no-wrap msgid "Security Settings" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2537 msgid "" "The most important settings in [.filename]#/usr/local/etc/smb4.conf# are the " "security model and the backend password format. These directives control " "the options:" msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2538 #, no-wrap msgid "`security`" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2541 msgid "" "If the clients use usernames that are the same as their usernames on the " "FreeBSD machine, user level security should be used. `security = user` is " "the default security policy and it requires clients to first log on before " "they can access shared resources." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2543 msgid "" "Refer to man:smb.conf[5] to learn about other supported settings for the " "`security` option." msgstr "" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2544 #, no-wrap msgid "`passdb backend`" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2550 msgid "" "Samba has several different backend authentication models. Clients may be " "authenticated with LDAP, NIS+, an SQL database, or a modified password " "file. The recommended authentication method, `tdbsam`, is ideal for simple " "networks and is covered here. For larger or more complex networks, " "`ldapsam` is recommended. `smbpasswd` was the former default and is now " "obsolete." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2551 #, no-wrap msgid "Samba Users" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2555 msgid "" "FreeBSD user accounts must be mapped to the `SambaSAMAccount` database for " "Windows(R) clients to access the share. Map existing FreeBSD user accounts " "using man:pdbedit[8]:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2559 #, no-wrap msgid "# pdbedit -a -u username\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2563 msgid "" "This section has only mentioned the most commonly used settings. 
Refer to " "the https://wiki.samba.org[Official Samba Wiki] for additional information " "about the available configuration options." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2564 #, no-wrap msgid "Starting Samba" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2567 msgid "" "To enable Samba at boot time, add the following line to [.filename]#/etc/" "rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2571 #, no-wrap msgid "samba_server_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2574 msgid "To start Samba now:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2581 #, no-wrap msgid "" "# service samba_server start\n" "Performing sanity check on Samba configuration: OK\n" "Starting nmbd.\n" "Starting smbd.\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2586 msgid "" "Samba consists of three separate daemons. Both the nmbd and smbd daemons " "are started by `samba_server_enable`. If winbind name resolution is also " "required, set:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2590 #, no-wrap msgid "winbindd_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2593 msgid "Samba can be stopped at any time by typing:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2597 #, no-wrap msgid "# service samba_server stop\n" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2601 msgid "" "Samba is a complex software suite with functionality that allows broad " "integration with Microsoft(R) Windows(R) networks. For more information " "about functionality beyond the basic configuration described here, refer to " "https://www.samba.org[https://www.samba.org]." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2603 #, no-wrap msgid "Clock Synchronization with NTP" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2609 msgid "" "Over time, a computer's clock is prone to drift. This is problematic as " "many network services require the computers on a network to share the same " "accurate time. Accurate time is also needed to ensure that file timestamps " "stay consistent. The Network Time Protocol (NTP) is one way to provide " "clock accuracy in a network." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2611 msgid "" "FreeBSD includes man:ntpd[8] which can be configured to query other NTP " "servers to synchronize the clock on that machine or to provide time services " "to other computers in the network." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2614 msgid "" "This section describes how to configure ntpd on FreeBSD. Further " "documentation can be found in [.filename]#/usr/share/doc/ntp/# in HTML " "format." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2615 #, no-wrap msgid "NTP Configuration" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2619 msgid "" "On FreeBSD, the built-in ntpd can be used to synchronize a system's clock. 
" "ntpd is configured using man:rc.conf[5] variables and [.filename]#/etc/" "ntp.conf#, as detailed in the following sections." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2622 msgid "" "ntpd communicates with its network peers using UDP packets. Any firewalls " "between the machine and its NTP peers must be configured to allow UDP " "packets in and out on port 123." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2623 #, no-wrap msgid "The [.filename]#/etc/ntp.conf# file" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2633 msgid "" "ntpd reads [.filename]#/etc/ntp.conf# to determine which NTP servers to " "query. Choosing several NTP servers is recommended in case one of the " "servers becomes unreachable or its clock proves unreliable. As ntpd " "receives responses, it favors reliable servers over the less reliable ones. " "The servers which are queried can be local to the network, provided by an " "ISP, or selected from an http://support.ntp.org/bin/view/Servers/" "WebHome[ online list of publicly accessible NTP servers]. When choosing a " "public NTP server, select one that is geographically close and review its " "usage policy. The `pool` configuration keyword selects one or more servers " "from a pool of servers. An http://support.ntp.org/bin/view/Servers/" "NTPPoolServers[ online list of publicly accessible NTP pools] is available, " "organized by geographic area. In addition, FreeBSD provides a project-" "sponsored pool, `0.freebsd.pool.ntp.org`." msgstr "" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:2634 #, no-wrap msgid "Sample [.filename]#/etc/ntp.conf#" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2639 msgid "" "This is a simple example of an [.filename]#ntp.conf# file. 
It can safely be " "used as-is; it contains the recommended `restrict` options for operation on " "a publicly-accessible network connection." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2647 #, no-wrap msgid "" "# Disallow ntpq control/query access. Allow peers to be added only\n" "# based on pool and server statements in this file.\n" "restrict default limited kod nomodify notrap noquery nopeer\n" "restrict source limited kod nomodify notrap noquery\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2651 #, no-wrap msgid "" "# Allow unrestricted access from localhost for queries and control.\n" "restrict 127.0.0.1\n" "restrict ::1\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2654 #, no-wrap msgid "" "# Add a specific server.\n" "server ntplocal.example.com iburst\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2658 #, no-wrap msgid "" "# Add FreeBSD pool servers until 3-6 good servers are available.\n" "tos minclock 3 maxclock 6\n" "pool 0.freebsd.pool.ntp.org iburst\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2661 #, no-wrap msgid "" "# Use a local leap-seconds file.\n" "leapfile \"/var/db/ntpd.leap-seconds.list\"\n" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2667 msgid "" "The format of this file is described in man:ntp.conf[5]. The descriptions " "below provide a quick overview of just the keywords used in the sample file " "above." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2673 msgid "" "By default, an NTP server is accessible to any network host. 
The `restrict` " "keyword controls which systems can access the server. Multiple `restrict` " "entries are supported, each one refining the restrictions given in previous " "statements. The values shown in the example grant the local system full " "query and control access, while allowing remote systems only the ability to " "query the time. For more details, refer to the `Access Control Support` " "subsection of man:ntp.conf[5]." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2679 msgid "" "The `server` keyword specifies a single server to query. The file can " "contain multiple server keywords, with one server listed on each line. The " "`pool` keyword specifies a pool of servers. ntpd will add one or more " "servers from this pool as needed to reach the number of peers specified " "using the `tos minclock` value. The `iburst` keyword directs ntpd to " "perform a burst of eight quick packet exchanges with a server when contact " "is first established, to help quickly synchronize system time." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2683 msgid "" "The `leapfile` keyword specifies the location of a file containing " "information about leap seconds. The file is updated automatically by " "man:periodic[8]. The file location specified by this keyword must match the " "location set in the `ntp_db_leapfile` variable in [.filename]#/etc/rc.conf#." msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2684 #, no-wrap msgid "NTP entries in [.filename]#/etc/rc.conf#" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2688 msgid "" "Set `ntpd_enable=YES` to start ntpd at boot time. Once `ntpd_enable=YES` " "has been added to [.filename]#/etc/rc.conf#, ntpd can be started immediately " "without rebooting the system by typing:" msgstr "" #. 
type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2692 #, no-wrap msgid "# service ntpd start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2696 msgid "" "Only `ntpd_enable` must be set to use ntpd. The [.filename]#rc.conf# " "variables listed below may also be set as needed." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2700 msgid "" "Set `ntpd_sync_on_start=YES` to allow ntpd to step the clock any amount, one " "time at startup. Normally ntpd will log an error message and exit if the " "clock is off by more than 1000 seconds. This option is especially useful on " "systems without a battery-backed realtime clock." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2702 msgid "" "Set `ntpd_oomprotect=YES` to protect the ntpd daemon from being killed by " "the system attempting to recover from an Out Of Memory (OOM) condition." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2704 msgid "" "Set `ntpd_config=` to the location of an alternate [.filename]#ntp.conf# " "file." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2706 msgid "" "Set `ntpd_flags=` to contain any other ntpd flags as needed, but avoid using " "these flags which are managed internally by [.filename]#/etc/rc.d/ntpd#:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2708 msgid "`-p` (pid file location)" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2709 msgid "`-c` (set `ntpd_config=` instead)" msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2711 #, no-wrap msgid "ntpd and the unprivileged `ntpd` user" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2718 msgid "" "ntpd on FreeBSD can start and run as an unprivileged user. Doing so " "requires the man:mac_ntpd[4] policy module. The [.filename]#/etc/rc.d/ntpd# " "startup script first examines the NTP configuration. If possible, it loads " "the `mac_ntpd` module, then starts ntpd as unprivileged user `ntpd` (user id " "123). To avoid problems with file and directory access, the startup script " "will not automatically start ntpd as `ntpd` when the configuration contains " "any file-related options." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2720 msgid "" "The presence of any of the following in `ntpd_flags` requires manual " "configuration as described below to run as the `ntpd` user:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2722 msgid "-f or --driftfile" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2723 msgid "-i or --jaildir" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2724 msgid "-k or --keyfile" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2725 msgid "-l or --logfile" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2726 msgid "-s or --statsdir" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2728 msgid "" "The presence of any of the following keywords in [.filename]#ntp.conf# " "requires manual configuration as described below to run as the `ntpd` user:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2730 msgid "crypto" msgstr "" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2731 msgid "driftfile" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2732 msgid "key" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2733 msgid "logdir" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2734 msgid "statsdir" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2736 msgid "To manually configure ntpd to run as user `ntpd`:" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2738 msgid "" "Ensure that the `ntpd` user has access to all the files and directories " "specified in the configuration." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2739 msgid "" "Arrange for the `mac_ntpd` module to be loaded or compiled into the kernel. " "See man:mac_ntpd[4] for details." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2740 msgid "Set `ntpd_user=\"ntpd\"` in [.filename]#/etc/rc.conf#" msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2741 #, no-wrap msgid "Using NTP with a PPP Connection" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2746 msgid "" "ntpd does not need a permanent connection to the Internet to function " "properly. However, if a PPP connection is configured to dial out on demand, " "NTP traffic should be prevented from triggering a dial out or keeping the " "connection alive. This can be configured with `filter` directives in " "[.filename]#/etc/ppp/ppp.conf#. For example:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2757 #, no-wrap msgid "" "set filter dial 0 deny udp src eq 123\n" "# Prevent NTP traffic from initiating dial out\n" "set filter dial 1 permit 0 0\n" "set filter alive 0 deny udp src eq 123\n" "# Prevent incoming NTP traffic from keeping the connection open\n" "set filter alive 1 deny udp dst eq 123\n" "# Prevent outgoing NTP traffic from keeping the connection open\n" "set filter alive 2 permit 0/0 0/0\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2760 msgid "" "For more details, refer to the `PACKET FILTERING` section in man:ppp[8] and " "the examples in [.filename]#/usr/share/examples/ppp/#." msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2764 msgid "" "Some Internet access providers block low-numbered ports, preventing NTP from " "functioning since replies never reach the machine." msgstr "" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2767 #, no-wrap msgid "iSCSI Initiator and Target Configuration" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2771 msgid "" "iSCSI is a way to share storage over a network. Unlike NFS, which works at " "the file system level, iSCSI works at the block device level." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2775 msgid "" "In iSCSI terminology, the system that shares the storage is known as the " "_target_. The storage can be a physical disk, or an area representing " "multiple disks or a portion of a physical disk. For example, if the disk(s) " "are formatted with ZFS, a zvol can be created to use as the iSCSI storage." msgstr "" #. 
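As a sketch of the ZFS case mentioned above, a zvol could be created to back the iSCSI storage. The pool and dataset names here are hypothetical:

```
# zfs create -V 4G tank/iscsi/target0
```

The resulting volume appears as [.filename]#/dev/zvol/tank/iscsi/target0#, which can then be used as the backing path for a LUN.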
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2779 msgid "" "The clients which access the iSCSI storage are called _initiators_. To " "initiators, the storage available through iSCSI appears as a raw, " "unformatted disk known as a LUN. Device nodes for the disk appear in " "[.filename]#/dev/# and the device must be separately formatted and mounted." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2782 msgid "" "FreeBSD provides a native, kernel-based iSCSI target and initiator. This " "section describes how to configure a FreeBSD system as a target or an " "initiator." msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2784 #, no-wrap msgid "Configuring an iSCSI Target" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2787 msgid "" "To configure an iSCSI target, create the [.filename]#/etc/ctl.conf# " "configuration file, add a line to [.filename]#/etc/rc.conf# to make sure the " "man:ctld[8] daemon is automatically started at boot, and then start the " "daemon." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2790 msgid "" "The following is an example of a simple [.filename]#/etc/ctl.conf# " "configuration file. Refer to man:ctl.conf[5] for a complete description of " "this file's available options." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2798 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2876 #, no-wrap msgid "" "portal-group pg0 {\n" "\tdiscovery-auth-group no-authentication\n" "\tlisten 0.0.0.0\n" "\tlisten [::]\n" "}\n" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2802 #, no-wrap msgid "" "target iqn.2012-06.com.example:target0 {\n" "\tauth-group no-authentication\n" "\tportal-group pg0\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2808 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2906 #, no-wrap msgid "" "\tlun 0 {\n" "\t\tpath /data/target0-0\n" "\t\tsize 4G\n" "\t}\n" "}\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2814 msgid "" "The first entry defines the `pg0` portal group. Portal groups define which " "network addresses the man:ctld[8] daemon will listen on. The `discovery-" "auth-group no-authentication` entry indicates that any initiator is allowed " "to perform iSCSI target discovery without authentication. Lines three and " "four configure man:ctld[8] to listen on all IPv4 (`listen 0.0.0.0`) and IPv6 " "(`listen [::]`) addresses on the default port of 3260." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2817 msgid "" "It is not necessary to define a portal group as there is a built-in portal " "group called `default`. In this case, the difference between `default` and " "`pg0` is that with `default`, target discovery is always denied, while with " "`pg0`, it is always allowed." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2825 msgid "" "The second entry defines a single target. Target has two possible meanings: " "a machine serving iSCSI or a named group of LUNs. This example uses the " "latter meaning, where `iqn.2012-06.com.example:target0` is the target name. " "This target name is suitable for testing purposes. For actual use, change " "`com.example` to the real domain name, reversed. 
The `2012-06` represents " "the year and month of acquiring control of that domain name, and `target0` " "can be any value. Any number of targets can be defined in this " "configuration file." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2827 msgid "" "The `auth-group no-authentication` line allows all initiators to connect to " "the specified target and `portal-group pg0` makes the target reachable " "through the `pg0` portal group." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2835 msgid "" "The next section defines the LUN. To the initiator, each LUN will be " "visible as a separate disk device. Multiple LUNs can be defined for each " "target. Each LUN is identified by a number, where LUN 0 is mandatory. The " "`path /data/target0-0` line defines the full path to a file or zvol backing " "the LUN. That path must exist before starting man:ctld[8]. The second line " "is optional and specifies the size of the LUN." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2837 msgid "" "Next, to make sure the man:ctld[8] daemon is started at boot, add this line " "to [.filename]#/etc/rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2841 #, no-wrap msgid "ctld_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2844 msgid "To start man:ctld[8] now, run this command:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2848 #, no-wrap msgid "# service ctld start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2852 msgid "" "As the man:ctld[8] daemon is started, it reads [.filename]#/etc/ctl.conf#. 
" "If this file is edited after the daemon starts, use this command so that the " "changes take effect immediately:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2856 #, no-wrap msgid "# service ctld reload\n" msgstr "" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2859 #, no-wrap msgid "Authentication" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2863 msgid "" "The previous example is inherently insecure as it uses no authentication, " "granting anyone full access to all targets. To require a username and " "password to access targets, modify the configuration as follows:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2870 #, no-wrap msgid "" "auth-group ag0 {\n" "\tchap username1 secretsecret\n" "\tchap username2 anothersecret\n" "}\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2885 #, no-wrap msgid "" "target iqn.2012-06.com.example:target0 {\n" "\tauth-group ag0\n" "\tportal-group pg0\n" "\tlun 0 {\n" "\t\tpath /data/target0-0\n" "\t\tsize 4G\n" "\t}\n" "}\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2891 msgid "" "The `auth-group` section defines username and password pairs. An initiator " "trying to connect to `iqn.2012-06.com.example:target0` must first specify a " "defined username and secret. However, target discovery is still permitted " "without authentication. To require target discovery authentication, set " "`discovery-auth-group` to a defined `auth-group` name instead of `no-" "authentication`." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2894 msgid "" "It is common to define a single exported target for every initiator. 
As a " "shorthand for the syntax above, the username and password can be specified " "directly in the target entry:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2900 #, no-wrap msgid "" "target iqn.2012-06.com.example:target0 {\n" "\tportal-group pg0\n" "\tchap username1 secretsecret\n" msgstr "" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2910 #, no-wrap msgid "Configuring an iSCSI Initiator" msgstr "" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2916 msgid "" "The iSCSI initiator described in this section is supported starting with " "FreeBSD 10.0-RELEASE. To use the iSCSI initiator available in older " "versions, refer to man:iscontrol[8]." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2921 msgid "" "The iSCSI initiator requires that the man:iscsid[8] daemon is running. This " "daemon does not use a configuration file. To start it automatically at " "boot, add this line to [.filename]#/etc/rc.conf#:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2925 #, no-wrap msgid "iscsid_enable=\"YES\"\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2928 msgid "To start man:iscsid[8] now, run this command:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2932 #, no-wrap msgid "# service iscsid start\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2936 msgid "" "Connecting to a target can be done with or without an [.filename]#/etc/" "iscsi.conf# configuration file. This section demonstrates both types of " "connections." msgstr "" #. 
type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2937 #, no-wrap msgid "Connecting to a Target Without a Configuration File" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2940 msgid "" "To connect an initiator to a single target, specify the IP address of the " "portal and the name of the target:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2944 #, no-wrap msgid "# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2948 msgid "" "To verify if the connection succeeded, run `iscsictl` without any " "arguments. The output should look similar to this:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2953 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2957 msgid "" "In this example, the iSCSI session was successfully established, with " "[.filename]#/dev/da0# representing the attached LUN. If the " "`iqn.2012-06.com.example:target0` target exports more than one LUN, multiple " "device nodes will be shown in that section of the output:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2961 #, no-wrap msgid "Connected: da0 da1 da2.\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2965 msgid "" "Any errors will be reported in the output, as well as the system logs. For " "example, this message usually means that the man:iscsid[8] daemon is not " "running:" msgstr "" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2970 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8)\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2973 msgid "" "The following message suggests a networking problem, such as a wrong IP " "address or port:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2978 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.11 Connection refused\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2981 msgid "This message means that the specified target name is wrong:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2986 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Not found\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2989 msgid "This message means that the target requires authentication:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2994 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2997 msgid "To specify a CHAP username and secret, use this syntax:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3001 #, no-wrap msgid "# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret\n" msgstr "" #. 
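Because an attached LUN appears to the initiator as a raw, unformatted disk, it must be formatted and mounted separately. A hedged sketch, reusing the [.filename]#/dev/da0# device node and a mount point from the examples above:

```
# newfs /dev/da0
# mount /dev/da0 /mnt
```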
type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:3004 #, no-wrap msgid "Connecting to a Target with a Configuration File" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3007 msgid "" "To connect using a configuration file, create [.filename]#/etc/iscsi.conf# " "with contents like this:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3017 #, no-wrap msgid "" "t0 {\n" "\tTargetAddress = 10.10.10.10\n" "\tTargetName = iqn.2012-06.com.example:target0\n" "\tAuthMethod = CHAP\n" "\tchapIName = user\n" "\tchapSecret = secretsecret\n" "}\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3024 msgid "" "The `t0` specifies a nickname for the configuration file section. It will " "be used by the initiator to specify which configuration to use. The other " "lines specify the parameters to use during connection. The `TargetAddress` " "and `TargetName` are mandatory, whereas the other options are optional. In " "this example, the CHAP username and secret are shown." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3026 msgid "To connect to the defined target, specify the nickname:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3030 #, no-wrap msgid "# iscsictl -An t0\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3033 msgid "" "Alternately, to connect to all targets defined in the configuration file, " "use:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3037 #, no-wrap msgid "# iscsictl -Aa\n" msgstr "" #. 
type: Plain text
#: documentation/content/en/books/handbook/network-servers/_index.adoc:3040
msgid ""
"To make the initiator automatically connect to all targets in [.filename]#/"
"etc/iscsi.conf#, add the following to [.filename]#/etc/rc.conf#:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/network-servers/_index.adoc:3045
#, no-wrap
msgid ""
"iscsictl_enable=\"YES\"\n"
"iscsictl_flags=\"-Aa\"\n"
msgstr ""
diff --git a/documentation/content/fr/books/handbook/network-servers/_index.adoc b/documentation/content/fr/books/handbook/network-servers/_index.adoc
index 0250e6129b..2424129048 100644
--- a/documentation/content/fr/books/handbook/network-servers/_index.adoc
+++ b/documentation/content/fr/books/handbook/network-servers/_index.adoc
@@ -1,2397 +1,2396 @@
---
title: Chapter 30. Network Servers
part: Part IV. Network
prev: books/handbook/mail
next: books/handbook/firewalls
showBookMenu: true
weight: 35
params:
  path: "/books/handbook/network-servers/"
---

[[network-servers]]
= Network Servers
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 30
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/network-servers/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]

ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[network-servers-synopsis]]
== Synopsis

This chapter covers some of the network services most frequently used on

UNIX(R). It shows how to install, configure, test, and maintain several different types of network services. Example configuration files are included throughout the chapter for your benefit.

After reading this chapter, you will know:

* How to manage the inetd daemon.
* How to set up a network file system.
* How to set up a network information server to share user accounts.
* How to configure automatic network settings using DHCP.
* How to set up a domain name server.
* How to set up the Apache HTTP server.
* How to set up a File Transfer Protocol (FTP) server.
* How to set up a file and print server for Windows(R) clients using Samba.
* How to synchronize the time and date, and set up a time server, with the NTP protocol.

Before reading this chapter, you should:

* Understand the basics of the [.filename]#/etc/rc# scripts.
* Be familiar with basic network terminology.
* Know how to install third-party applications (crossref:ports[ports,Installing Applications: Packages and Ports]).

[[network-inetd]]
== The inetd "Super-Server"

[[network-inetd-overview]]
=== Overview

man:inetd[8] is sometimes referred to as the "Internet super-server" because it manages connections for several services. When a connection is received by inetd, it determines which program the connection is destined for, spawns that process, and delegates the socket to it (the program is invoked with the service socket as its standard input, output, and error descriptors). Running inetd for servers that are not used heavily can reduce the overall system load, compared to running each daemon individually in standalone mode.
inetd is used to spawn other daemons, but several trivial protocols are handled directly, such as chargen, auth, and daytime.

This section covers the basics of configuring inetd through its command-line options and its configuration file, [.filename]#/etc/inetd.conf#.

[[network-inetd-settings]]
=== Settings

inetd is initialized through the man:rc[8] system. The `inetd_enable` option is set to `NO` by default, but may be turned on by sysinstall during installation, depending on the configuration chosen by the user. Placing:

[.programlisting]
....
inetd_enable="YES"
....

or

[.programlisting]
....
inetd_enable="NO"
....

in [.filename]#/etc/rc.conf# will enable or disable the startup of inetd at boot time. The command:

[source,shell]
....
# /etc/rc.d/inetd rcvar
....

can be run to display the setting currently in effect.

Additionally, various command-line options can be passed to inetd via the `inetd_flags` option.

[[network-inetd-cmdline]]
=== Command-Line Options

Like most daemons, inetd has a number of options that can be passed at startup to modify its behavior. The full list of options takes the form:

`inetd [-d] [-l] [-w] [-W] [-c maximum] [-C rate] [-a address | hostname] [-p filename] [-R rate] [configuration file]`

Options can be passed to inetd using the `inetd_flags` setting in [.filename]#/etc/rc.conf#. By default, `inetd_flags` contains `-wW -C 60`, which turns on TCP wrapping for inetd services and prevents any single IP address from invoking a service more than 60 times per minute.
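For instance, a minimal sketch of enabling inetd with an explicitly overridden rate limit in [.filename]#/etc/rc.conf# (the value `120` is purely illustrative, not a recommendation):

[.programlisting]
....
inetd_enable="YES"
inetd_flags="-wW -C 120"
....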
Novice users may be pleased to learn that these parameters usually do not need to be modified; however, the rate-limiting options are presented below, as they can be useful if you receive an excessive number of connections. A full list of options can be found in the man:inetd[8] manual page.

-c maximum::
Specifies the default maximum number of simultaneous invocations of each service; there is no limit by default. This may be overridden on a per-service basis with the `nb-max-enfants` parameter.

-C rate::
Specifies the maximum number of times a service can be invoked from a single IP address in one minute. This may be configured differently for each service with the `nb-max-connexions-par-ip-par-minute` parameter.

-R rate::
Specifies the maximum number of times a service can be invoked in one minute; the default is 256. A rate of 0 allows an unlimited number of invocations.

-s maximum::
Specifies the maximum number of times a service can be invoked simultaneously from a single IP address; there is no limit by default. This may be overridden per service with the `max-child-per-ip` parameter.

[[network-inetd-conf]]
=== [.filename]#inetd.conf#

Configuration of inetd is done through the file [.filename]#/etc/inetd.conf#.

When [.filename]#/etc/inetd.conf# is modified, inetd can be forced to re-read its configuration file with the command:

[[network-inetd-reread]]
.Reloading the inetd configuration file
[example]
====
[source,shell]
....
# /etc/rc.d/inetd reload
....
====

Each line of the configuration file specifies a single daemon. Comments in the file are preceded by a "#". The format of each entry in [.filename]##/etc/inetd.conf## is as follows:

[.programlisting]
....
nom-du-service type-de-socket protocole {wait|nowait}[/nb-max-enfants[/nb-max-connexions-par-ip-par-minute[/nb-max-enfants-par-ip]]] utilisateur[:groupe][/classe-session] programme-serveur arguments-du-programme-serveur
....

An example entry for the man:ftpd[8] daemon using IPv4 would look like:

[.programlisting]
....
ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l
....

nom-du-service::
This is the service name of the daemon in question. It must correspond to one of the services listed in [.filename]#/etc/services#. This determines which port inetd must listen on. If a new service is created, it must first be added to [.filename]#/etc/services#.

type-de-socket::
Either `stream`, `dgram`, `raw`, or `seqpacket`. `stream` must be used for TCP daemons, while `dgram` is used for daemons using the UDP protocol.

protocole::
One of the following:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Protocol | Explanation

|tcp, tcp4
|TCP IPv4

|udp, udp4
|UDP IPv4

|tcp6
|TCP IPv6

|udp6
|UDP IPv6

|tcp46
|TCP IPv4 and v6

|udp46
|UDP IPv4 and v6
|===

{wait|nowait}[/nb-max-enfants[/nb-max-connexions-par-ip-par-minute[/nb-max-enfants-par-ip]]]::
`wait|nowait` indicates whether or not the daemon invoked by inetd is able to handle its own socket. `dgram` socket types must use `wait`, while stream socket daemons, which are usually multi-threaded, should use `nowait`. `wait` usually hands off multiple sockets to a single daemon, while `nowait` spawns a child daemon for each new socket.
+
The maximum number of daemons inetd may spawn can be set using the `nb-max-enfants` option. If a limit of ten instances of a particular daemon is needed, `/10` would be placed after `nowait`.
Specifying `/0` allows an unlimited number of children.
+
In addition to `nb-max-enfants`, two other options limiting the maximum number of connections from a single place to a particular daemon may be enabled. The `nb-max-connexions-par-ip-par-minute` option limits the number of connections per minute from any given IP address; for example, a value of ten would limit any particular IP address to ten connection attempts per minute. The `max-child-per-ip` option limits the number of children that can be started on behalf of any single IP address at any moment. These options are useful to prevent intentional or unintentional excessive consumption of a machine's resources and Denial of Service (DoS) attacks.
+
In this field, either `wait` or `nowait` is mandatory. `nb-max-enfants`, `nb-max-connexions-par-ip-par-minute`, and `max-child-per-ip` are optional.
+
A stream-type multi-threaded daemon without any `nb-max-enfants`, `nb-max-connexions-par-ip-par-minute`, or `max-child-per-ip` limits would simply be: `nowait`.
+
The same daemon with a maximum limit of ten daemons would be: `nowait/10`.
+
The same configuration with a limit of twenty connections per IP address per minute and a maximum of ten child daemons would be: `nowait/10/20`.
+
These options are used as the default values by the man:fingerd[8] daemon, as seen here:
+
[.programlisting]
....
finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -s
....
+
Finally, an example of this field with a maximum of 100 children in total, and a maximum of 5 distinct IP addresses, would be: `nowait/100/0/5`.

utilisateur::
This is the user that the daemon in question runs as. Daemons typically run as the `root` user.
For security reasons, it is common to find servers running as the `daemon` user, or as the least privileged user: `nobody`.

programme-serveur::
The full path of the daemon to be executed when a request is received. If the daemon is a service provided internally by inetd, then `internal` should be used.

arguments-programme-serveur::
This works together with `programme-serveur` by specifying the arguments, starting with `argv[0]`, passed to the daemon on invocation. If `mydaemon -d` is the command line, `mydaemon -d` would be the value of `arguments-programme-serveur`. Here too, if the daemon is an internal service, use `internal`.

[[network-inetd-security]]
=== Security

Depending on the choices made at install time, several services may be enabled by default. If there is no particular need for a given daemon, consider disabling it. Put a "#" in front of the daemon in question in [.filename]##/etc/inetd.conf##, and then <<network-inetd-reread,reload the configuration>>. Some daemons, such as fingerd, should be avoided because they can provide information useful to attackers.

Some daemons are not security-conscious and have long, or nonexistent, timeouts for connection attempts. This allows an attacker to slowly, at regular intervals, send connection requests to a particular daemon, saturating the available resources. It can be a good idea to place `nb-max-connexions-par-ip-par-minute`, `max-child`, or `nb-max-enfants` limits on certain daemons if you find that you have too many connections.

By default, TCP wrapping is turned on.
Consultez la page de manuel man:hosts_access[5] pour plus d'information sur le placement de restrictions TCP pour divers "daemon"s invoqués par inetd.

[[network-inetd-misc]]
=== Divers

daytime, time, echo, discard, chargen et auth sont des services fournis en interne par inetd.

Le service auth fournit les services réseau d'identification, et est configurable à un certain degré, alors que les autres services ne peuvent qu'être activés ou désactivés.

Consultez la page de manuel de man:inetd[8] pour plus d'informations.

[[network-nfs]]
== Système de fichiers réseau (NFS)

Parmi les différents systèmes de fichiers que FreeBSD supporte se trouve le système de fichiers réseau, connu sous le nom de NFS.
NFS permet à un système de partager des répertoires et des fichiers avec d'autres systèmes par l'intermédiaire d'un réseau.
En utilisant NFS, les utilisateurs et les programmes peuvent accéder aux fichiers sur des systèmes distants comme s'ils étaient des fichiers locaux.

Certains des avantages les plus remarquables offerts par NFS sont:

* Les stations de travail utilisent moins d'espace disque en local parce que les données utilisées en commun peuvent être stockées sur une seule machine tout en restant accessibles aux autres machines sur le réseau.
* Les utilisateurs n'ont pas besoin d'avoir un répertoire personnel sur chaque machine du réseau.
Les répertoires personnels pourront se trouver sur le serveur NFS et seront disponibles par l'intermédiaire du réseau.
* Les périphériques de stockage comme les lecteurs de disquettes, de CDROM et les lecteurs Zip(R) peuvent être utilisés par d'autres machines sur le réseau.
Cela pourra réduire le nombre de lecteurs de médias amovibles sur le réseau.

=== Comment NFS fonctionne

NFS consiste en deux éléments principaux: un serveur et un ou plusieurs clients.
Le client accède à distance aux données stockées sur la machine serveur.
Afin que tout cela fonctionne correctement, quelques processus doivent être configurés et en fonctionnement.

Sur le serveur, les "daemons" suivants doivent tourner:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Daemon
| Description

|nfsd
|Le "daemon" NFS qui répond aux requêtes des clients NFS.

|mountd
|Le "daemon" de montage NFS qui traite les requêtes que lui passe man:nfsd[8].

|rpcbind
|Ce "daemon" permet aux clients NFS de trouver le port que le serveur NFS utilise.
|===

Le client peut également faire tourner un "daemon" connu sous le nom de nfsiod.
Le "daemon" nfsiod traite les requêtes en provenance du serveur NFS.
Ceci est optionnel, améliore les performances, mais n'est pas indispensable pour une utilisation normale et correcte.
Consultez la page de manuel man:nfsiod[8] pour plus d'informations.

[[network-configuring-nfs]]
=== Configurer NFS

La configuration de NFS est une opération relativement simple.
Les processus qui doivent tourner peuvent tous être lancés au démarrage en modifiant légèrement votre fichier [.filename]#/etc/rc.conf#.

Sur le serveur NFS, assurez-vous que les options suivantes sont configurées dans le fichier [.filename]#/etc/rc.conf#:

[.programlisting]
....
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_flags="-r"
....

mountd est automatiquement exécuté dès que le serveur NFS est activé.

Sur le client, assurez-vous que cette option est présente dans le fichier [.filename]#/etc/rc.conf#:

[.programlisting]
....
nfs_client_enable="YES"
....

Le fichier [.filename]#/etc/exports# indique quels systèmes de fichiers NFS devraient être exportés (parfois on utilise le terme de "partagés").
Chaque ligne dans [.filename]#/etc/exports# précise un système de fichiers à exporter et quelles machines auront accès à ce système de fichiers.
En plus des machines qui auront accès, des options d'accès peuvent également être présentes.
Ces options sont nombreuses mais seules quelques-unes seront abordées ici.
Vous pouvez aisément découvrir d'autres options en lisant la page de manuel man:exports[5].

Voici quelques exemples d'entrées du fichier [.filename]#/etc/exports#; ils donnent une idée de la manière d'exporter des systèmes de fichiers, bien que certains paramètres puissent être différents en fonction de votre environnement et de votre configuration réseau.
Par exemple, la ligne suivante exporte le répertoire [.filename]#/cdrom# vers trois machines d'exemple qui appartiennent au même domaine que le serveur (d'où l'absence du nom de domaine pour chacune d'entre elles) ou qui ont une entrée dans votre fichier [.filename]#/etc/hosts#.
Le paramètre `-ro` limite l'accès en lecture seule au système de fichiers exporté.
Avec ce paramètre, le système distant ne pourra pas écrire sur le système de fichiers exporté.

[.programlisting]
....
/cdrom -ro host1 host2 host3
....

La ligne suivante exporte [.filename]#/home# pour les trois machines en utilisant les adresses IP.
C'est une configuration utile si vous disposez d'un réseau privé sans serveur DNS configuré.
Le fichier [.filename]#/etc/hosts# pourrait éventuellement être configuré pour les noms de machines internes, consultez la page de manuel man:hosts[5] pour plus d'information.
Le paramètre `-alldirs` autorise l'utilisation des sous-répertoires en tant que points de montage.
En d'autres termes, il ne montera pas les sous-répertoires mais autorisera le client à ne monter que les répertoires qui sont nécessaires ou désirés.

[.programlisting]
....
/home  -alldirs  10.0.0.2 10.0.0.3 10.0.0.4
....

La ligne suivante exporte [.filename]#/a# pour que deux clients d'un domaine différent puissent y accéder.
Le paramètre `-maproot=root` autorise l'utilisateur `root` du système distant à écrire des données sur le système de fichiers exporté en tant que `root`.
Si le paramètre `-maproot=root` n'est pas précisé, même si un utilisateur dispose d'un accès `root` sur le système distant, il ne pourra pas modifier de fichiers sur le système de fichiers exporté.

[.programlisting]
....
/a  -maproot=root  host.example.com box.example.org
....

Afin de pouvoir accéder à un système de fichiers exporté, le client doit avoir les permissions de le faire.
Assurez-vous que le client est mentionné dans votre fichier [.filename]#/etc/exports#.

Dans [.filename]#/etc/exports#, chaque ligne représente l'information d'exportation d'un système de fichiers vers une machine.
Une machine distante ne peut être spécifiée qu'une fois par système de fichiers, et ne devrait avoir qu'une seule entrée par défaut.
Par exemple, supposons que [.filename]#/usr# soit un seul système de fichiers.
Le fichier [.filename]#/etc/exports# suivant serait invalide:

[.programlisting]
....
# Invalide quand /usr est un système de fichiers
/usr/src   client
/usr/ports client
....

Un système de fichiers, [.filename]#/usr#, a deux lignes précisant des exportations vers la même machine, `client`.
Le format correct pour une telle situation est:

[.programlisting]
....
/usr/src /usr/ports  client
....

Les propriétés d'un système de fichiers exporté vers une machine donnée devraient apparaître sur une seule ligne.
Les lignes sans client sont traitées comme destinées à une seule machine.
Cela limite la manière dont vous pouvez exporter les systèmes de fichiers, mais pour la plupart des gens cela n'est pas un problème.

Ce qui suit est un exemple de liste d'exportation valide, où les répertoires [.filename]#/usr# et [.filename]#/exports# sont des systèmes de fichiers locaux:

[.programlisting]
....
# Exporte src et ports vers client01 et client02, mais seul
# client01 dispose des privilèges root dessus
/usr/src /usr/ports -maproot=root    client01
/usr/src /usr/ports                  client02
# Les machines clientes ont les privilèges root et peuvent monter tout
# de /exports.
# N'importe qui peut monter en lecture seule
# /exports/obj
/exports -alldirs -maproot=root    client01 client02
/exports/obj -ro
....

Le "daemon" mountd doit être forcé de relire le fichier [.filename]#/etc/exports# à chacune de ses modifications, afin que les changements puissent prendre effet.
Cela peut être effectué soit en envoyant un signal HUP au "daemon":

[source,shell]
....
# kill -HUP `cat /var/run/mountd.pid`
....

soit en invoquant la procédure man:rc[8] de `mountd` avec le paramètre approprié:

[source,shell]
....
# /etc/rc.d/mountd onereload
....

Veuillez consulter la crossref:config[configtuning-rcd,Utilisation du système rc sous FreeBSD] pour plus d'information sur l'utilisation des procédures rc.

De plus, un redémarrage permettra à FreeBSD de tout configurer proprement.
Un redémarrage n'est cependant pas nécessaire.
Exécuter les commandes suivantes en tant que `root` devrait mettre en place ce qui est nécessaire.

Sur le serveur NFS:

[source,shell]
....
# rpcbind
# nfsd -u -t -n 4
# mountd -r
....

Sur le client NFS:

[source,shell]
....
# nfsiod -n 4
....

Maintenant il devrait être possible de monter un système de fichiers distant.
Dans nos exemples, le nom du serveur sera `serveur` et le nom du client `client`.
Si vous voulez monter temporairement un système de fichiers distant, ou si vous voulez simplement tester la configuration, exécutez juste une commande comme celle-ci en tant que `root` sur le client:

[source,shell]
....
# mount serveur:/home /mnt
....

Cela montera le répertoire [.filename]#/home# situé sur le serveur au point [.filename]#/mnt# sur le client.
Si tout est correctement configuré, vous devriez être en mesure d'entrer dans le répertoire [.filename]#/mnt# sur le client et de voir tous les fichiers qui sont sur le serveur.

Si vous désirez monter automatiquement un système de fichiers distant à chaque démarrage de l'ordinateur, ajoutez le système de fichiers au fichier [.filename]#/etc/fstab#.
Voici un exemple:

[.programlisting]
....
serveur:/home  /mnt  nfs  rw  0  0
....

La page de manuel man:fstab[5] liste toutes les options disponibles.

=== Verrouillage

Certaines applications (par exemple mutt) ont besoin du verrouillage des fichiers pour fonctionner correctement.
Dans le cas du NFS, rpc.lockd peut être utilisé pour assurer le verrouillage des fichiers.
Pour l'activer, ajoutez ce qui suit au fichier [.filename]#/etc/rc.conf# sur les machines clientes et serveur (on suppose que les clients et le serveur NFS sont déjà configurés):

[.programlisting]
....
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
....

Lancez ensuite le service en utilisant:

[source,shell]
....
# /etc/rc.d/nfslocking start
....

Si un verrouillage réel n'est pas nécessaire entre les clients et le serveur NFS, il est possible de laisser le client NFS effectuer le verrouillage localement en passant l'option `-L` à man:mount_nfs[8].
Veuillez vous référer à la page de manuel man:mount_nfs[8] pour de plus amples détails.

=== Exemples pratiques d'utilisation

Il existe de nombreuses applications pratiques de NFS.
Les plus communes sont présentées ci-dessous:

* Configurer plusieurs machines pour partager un CDROM ou un autre médium.
C'est une méthode moins coûteuse et souvent plus pratique pour installer des logiciels sur de multiples machines.
* Sur les réseaux importants, il peut être plus pratique de configurer un serveur NFS central sur lequel tous les répertoires utilisateurs sont stockés.
Ces répertoires utilisateurs peuvent alors être exportés vers le réseau, et les utilisateurs retrouvent ainsi toujours le même répertoire personnel, indépendamment de la station de travail sur laquelle ils ouvrent une session.
* Plusieurs machines pourront avoir un répertoire [.filename]#/usr/ports/distfiles# commun.
De cette manière, quand vous avez besoin d'installer un logiciel porté sur plusieurs machines, vous pouvez accéder rapidement aux sources sans les télécharger sur chaque machine.
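Pour illustrer ce dernier cas, une entrée de [.filename]#/etc/fstab# sur chaque machine cliente pourrait ressembler à ceci (le nom de machine `serveur` est hypothétique; remplacez-le par celui de votre serveur NFS):

[.programlisting]
....
# monte le répertoire distfiles partagé depuis le serveur NFS
serveur:/usr/ports/distfiles  /usr/ports/distfiles  nfs  rw  0  0
....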
[[network-amd]]
=== Montages automatiques avec amd

man:amd[8] ("automatic mounter daemon" - "daemon" de montage automatique) monte automatiquement un système de fichiers distant dès que l'on accède à un fichier ou un répertoire contenu par ce système de fichiers.
Les systèmes de fichiers qui sont inactifs pendant une certaine période seront automatiquement démontés par amd.
L'utilisation d'amd offre une alternative simple aux montages permanents qui sont généralement listés dans [.filename]#/etc/fstab#.

amd opère en s'attachant comme un serveur NFS aux répertoires [.filename]#/host# et [.filename]#/net#.
Quand on accède à un fichier à l'intérieur de ces répertoires, amd recherche le montage distant correspondant et le monte automatiquement.
[.filename]#/net# est utilisé pour monter un système de fichiers exporté à partir d'une adresse IP, alors que [.filename]#/host# est utilisé pour monter un système de fichiers exporté à partir d'un nom de machine distante.

Un accès à un fichier dans [.filename]#/host/foobar/usr# demandera à amd de tenter de monter l'export [.filename]#/usr# de la machine `foobar`.

.Monter un système de fichiers exporté avec amd
[example]
====
Vous pouvez voir les systèmes de fichiers exportés par une machine distante avec la commande `showmount`.
Par exemple, pour voir les répertoires exportés par une machine appelée `foobar`, vous pouvez utiliser:

[source,shell]
....
% showmount -e foobar
Exports list on foobar:
/usr                               10.10.10.0
/a                                 10.10.10.0
% cd /host/foobar/usr
....
====

Comme on le voit dans l'exemple, `showmount` liste [.filename]#/usr# comme une exportation.
Quand on change de répertoire pour [.filename]#/host/foobar/usr#, amd tente de résoudre le nom de machine `foobar` et de monter automatiquement le système exporté désiré.

amd peut être lancé par les procédures de démarrage en ajoutant la ligne suivante dans le fichier [.filename]#/etc/rc.conf#:

[.programlisting]
....
amd_enable="YES"
....
De plus, des paramètres peuvent être passés à amd à l'aide de l'option `amd_flags`.
Par défaut, l'option `amd_flags` est positionnée à:

[.programlisting]
....
amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map"
....

Le fichier [.filename]#/etc/amd.map# définit les options par défaut avec lesquelles les systèmes exportés sont montés.
Le fichier [.filename]#/etc/amd.conf# définit certaines des fonctionnalités les plus avancées de amd.

Consultez les pages de manuel de man:amd[8] et man:amd.conf[5] pour plus d'informations.

[[network-nfs-integration]]
=== Problèmes d'intégration avec d'autres systèmes

Certaines cartes Ethernet ISA présentent des limitations qui peuvent poser de sérieux problèmes sur un réseau, en particulier avec NFS.
Ce n'est pas une particularité de FreeBSD, mais FreeBSD en est également affecté.

Ce problème se produit pratiquement à chaque fois que des systèmes (FreeBSD) PC sont sur le même réseau que des stations de travail très performantes, comme celles de Silicon Graphics, Inc. et Sun Microsystems, Inc. Les montages NFS se feront sans difficulté, et certaines opérations pourront réussir, puis soudain le serveur semblera ne plus répondre au client, bien que les requêtes vers ou en provenance d'autres systèmes continueront à être traitées normalement.
Cela se manifeste sur la machine cliente, que ce soit le système FreeBSD ou la station de travail.
Sur de nombreux systèmes, il n'est pas possible d'arrêter le client proprement une fois que ce problème apparaît.
La seule solution est souvent de réinitialiser le client parce que le problème NFS ne peut être résolu.

Bien que la solution "correcte" soit d'installer une carte Ethernet plus performante et de plus grande capacité sur le système FreeBSD, il existe une solution simple qui donnera satisfaction.
Si le système FreeBSD est le _serveur_, ajoutez l'option `-w=1024` lors du montage sur le client.
Si le système FreeBSD est le _client_, alors montez le système de fichiers NFS avec l'option `-r=1024`.
Ces options peuvent être spécifiées dans le quatrième champ de l'entrée [.filename]#fstab# sur le client pour les montages automatiques, ou en utilisant le paramètre `-o` de la commande man:mount[8] pour les montages manuels.

Il faut noter qu'il existe un problème différent, que l'on confond parfois avec le précédent, qui peut se produire lorsque les serveurs et les clients NFS sont sur des réseaux différents.
Si c'est le cas, _assurez-vous_ que vos routeurs transmettent bien les informations UDP nécessaires, ou vous n'irez nulle part, quoi que vous fassiez par ailleurs.

Dans les exemples suivants, `fastws` est le nom de la station de travail (interface) performante, et `freebox` celui d'une machine (interface) FreeBSD avec une carte Ethernet moins performante.
[.filename]#/sharedfs# est le système de fichiers NFS qui sera exporté (consulter la page de manuel man:exports[5]), et [.filename]#/project# sera le point de montage sur le client pour le système de fichiers exporté.
Dans tous les cas, des options supplémentaires, telles que `hard` ou `soft` et `bg`, seront peut-être nécessaires pour vos applications.

Exemple d'extrait du fichier [.filename]#/etc/fstab# sur `freebox` quand le système FreeBSD (`freebox`) est le client:

[.programlisting]
....
fastws:/sharedfs /project nfs rw,-r=1024 0 0
....

Commande de montage manuelle sur `freebox`:

[source,shell]
....
# mount -t nfs -o -r=1024 fastws:/sharedfs /project
....

Exemple d'extrait du fichier [.filename]#/etc/fstab# sur `fastws` quand le système FreeBSD est le serveur:

[.programlisting]
....
freebox:/sharedfs /project nfs rw,-w=1024 0 0
....

Commande de montage manuelle sur `fastws`:

[source,shell]
....
# mount -t nfs -o -w=1024 freebox:/sharedfs /project
....
Presque n'importe quelle carte Ethernet 16 bits permettra d'opérer sans l'utilisation des paramètres restrictifs précédents sur les tailles des tampons de lecture et d'écriture.

Pour ceux que cela intéresse, voici ce qui se passe quand le problème survient, ce qui explique également pourquoi ce n'est pas récupérable.
NFS travaille généralement avec une taille de "bloc" de 8 K (bien qu'il arrive qu'il les fragmente en de plus petits morceaux).
Comme la taille maximale d'un paquet Ethernet est de 1500 octets, le "bloc" NFS est divisé en plusieurs paquets Ethernet, bien qu'il soit toujours vu comme quelque chose d'unitaire par les couches supérieures du code, et doit être réceptionné, assemblé, et _acquitté_ comme tel.
Les stations de travail performantes peuvent traiter les paquets qui composent le bloc NFS les uns après les autres, pratiquement aussi rapidement que le standard le permet.
Sur les cartes les plus petites, de moindre capacité, les derniers paquets d'un même bloc écrasent les paquets précédents avant qu'ils aient pu être transmis à la machine et le bloc ne peut être réassemblé ou acquitté.
Cela a pour conséquence le dépassement du délai d'attente sur la station de travail, qui recommence alors la transmission, mais en renvoyant l'intégralité des 8 K, et ce processus se répète à l'infini.

En définissant la taille de bloc inférieure à la taille d'un paquet Ethernet, nous nous assurons que chaque paquet Ethernet complet sera acquitté individuellement, évitant ainsi la situation de blocage.

Des écrasements peuvent toujours survenir quand des stations de travail performantes surchargent un système PC de données, mais avec de meilleures cartes, de tels écrasements ne sont pas systématiques pour les "blocs" NFS.
Quand un écrasement apparaît, les blocs affectés sont retransmis, et il y a de fortes chances pour qu'ils soient reçus, assemblés et acquittés.

[[network-nis]]
== Services d'information réseau (NIS/YP)

=== Qu'est-ce que c'est?
NIS, qui signifie "Network Information Services" (services d'information réseau), fut développé par Sun Microsystems pour centraliser l'administration de systèmes UNIX(R) (à l'origine SunOS(TM)).
C'est devenu aujourd'hui un standard industriel; tous les systèmes importants de type UNIX(R) (Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, FreeBSD, etc.) supportent NIS.

NIS était appelé au départ "Yellow Pages" (pages jaunes), mais étant donné que c'était une marque déposée, Sun changea le nom.
L'ancienne appellation (et yp) est toujours rencontrée et utilisée.

C'est un système client/serveur basé sur les RPC qui permet à un groupe de machines d'un domaine NIS de partager un ensemble de fichiers de configuration communs.
Cela permet à un administrateur système de mettre en place des clients NIS avec un minimum de configuration et d'ajouter, modifier ou supprimer les informations de configuration à partir d'un unique emplacement.

C'est similaire au système de domaine Windows NT(R); bien que l'implémentation interne des deux ne soit pas du tout identique, les fonctionnalités de base sont comparables.

=== Termes/processus à connaître

Il existe plusieurs termes et processus utilisateurs que vous rencontrerez lors de la configuration de NIS sous FreeBSD, que vous vouliez mettre en place un serveur NIS ou un client NIS:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Terme
| Description

|Nom de domaine NIS
|Un serveur maître NIS et tous ses clients (y compris ses serveurs esclaves) ont un nom de domaine NIS. Similaire au nom de domaine Windows NT(R), le nom de domaine NIS n'a rien à voir avec le système DNS.

|rpcbind
|Doit tourner afin d'activer les RPC ("Remote Procedure Call", appel de procédures distantes, un protocole réseau utilisé par NIS). Si rpcbind ne tourne pas, il sera impossible de faire fonctionner un serveur NIS, ou de jouer le rôle d'un client NIS.

|ypbind
|Fait pointer un client NIS vers son serveur NIS.
Il récupérera le nom de domaine NIS auprès du système, et en utilisant les RPC, se connectera au serveur. ypbind est le coeur de la communication client-serveur dans un environnement NIS; si ypbind meurt sur une machine cliente, elle ne sera pas en mesure d'accéder au serveur NIS. |ypserv |Ne devrait tourner que sur les serveurs NIS, c'est le processus serveur en lui-même. Si man:ypserv[8] meurt, alors le serveur ne pourra plus répondre aux requêtes NIS (avec un peu de chance, un serveur esclave prendra la relève). Il existe des implémentations de NIS (mais ce n'est pas le cas de celle de FreeBSD), qui n'essayent pas de se reconnecter à un autre serveur si le serveur utilisé précédemment meurt. Souvent, la seule solution dans ce cas est de relancer le processus serveur (ou même redémarrer le serveur) ou le processus ypbind sur le client. |rpc.yppasswdd |Un autre processus qui ne devrait tourner que sur les serveurs maître NIS; c'est un "daemon" qui permettra aux clients de modifier leur mot de passe NIS. Si ce "daemon" ne tourne pas, les utilisateurs devront ouvrir une session sur le serveur maître NIS et y changer à cet endroit leur mot de passe. |=== === Comment cela fonctionne-t-il? Dans un environnement NIS il y a trois types de machines: les serveurs maîtres, les serveurs esclaves et les clients. Les serveurs centralisent les informations de configuration des machines. Les serveurs maîtres détiennent l'exemplaire de référence de ces informations, tandis que les serveurs esclaves en ont un double pour assurer la redondance. Les clients attendent des serveurs qu'ils leur fournissent ces informations. Le contenu de nombreux fichiers peut être partagé de cette manière. Les fichiers [.filename]#master.passwd#, [.filename]#group#, et [.filename]#hosts# sont fréquemment partagés par l'intermédiaire de NIS. 
A chaque fois qu'un processus d'une machine cliente a besoin d'une information qu'il trouverait normalement localement dans un de ces fichiers, il émet une requête au serveur NIS auquel il est rattaché pour obtenir cette information.

==== Type de machine

* Un _serveur NIS maître_.
Ce serveur, analogue à un contrôleur de domaine Windows NT(R) primaire, gère les fichiers utilisés par tous les clients NIS.
Les fichiers [.filename]#passwd#, [.filename]#group#, et les autres fichiers utilisés par les clients NIS résident sur le serveur maître.
+
[NOTE]
====
Il est possible pour une machine d'être un serveur NIS maître pour plus d'un domaine NIS.
Cependant, ce cas ne sera pas abordé dans cette introduction, qui suppose un environnement NIS relativement petit.
====
* Des _serveurs NIS esclaves_.
Similaires aux contrôleurs de domaine Windows NT(R) de secours, les serveurs NIS esclaves possèdent une copie des fichiers du serveur NIS maître.
Les serveurs NIS esclaves fournissent la redondance nécessaire dans les environnements importants.
Ils aident également à la répartition de la charge du serveur maître: les clients NIS s'attachent toujours au serveur NIS dont ils reçoivent la réponse en premier, y compris si c'est la réponse d'un serveur esclave.
* Des _clients NIS_.
Les clients NIS, comme la plupart des stations de travail Windows NT(R), s'identifient auprès du serveur NIS (ou du contrôleur de domaine Windows NT(R) dans le cas de stations de travail Windows NT(R)) pour l'ouverture de sessions.

=== Utiliser NIS/YP

Cette section traitera de la configuration d'un exemple d'environnement NIS.

==== Planification

Supposons que vous êtes l'administrateur d'un petit laboratoire universitaire.
Ce laboratoire dispose de 15 machines FreeBSD, et ne possède pas actuellement de point central d'administration; chaque machine a ses propres fichiers [.filename]#/etc/passwd# et [.filename]#/etc/master.passwd#.
Ces fichiers sont maintenus à jour entre eux grâce à des interventions manuelles; actuellement quand vous ajoutez un utilisateur pour le laboratoire, vous devez exécuter `adduser` sur les 15 machines. Cela doit changer, vous avez donc décidé de convertir le laboratoire à l'utilisation de NIS en utilisant deux machines comme serveurs. La configuration du laboratoire ressemble à quelque chose comme: [.informaltable] [cols="1,1,1", frame="none", options="header"] |=== | Nom de machine | Adresse IP | Rôle de la machine |`ellington` |`10.0.0.2` |Maître NIS |`coltrane` |`10.0.0.3` |Esclave NIS |`basie` |`10.0.0.4` |Station de travail |`bird` |`10.0.0.5` |Machine cliente |`cli[1-11]` |`10.0.0.[6-17]` |Autres machines clientes |=== Si vous mettez en place un système NIS pour la première fois, c'est une bonne idée de penser à ce que vous voulez en faire. Peu importe la taille de votre réseau, il y a quelques décisions à prendre. ===== Choisir un nom de domaine NIS Ce n'est pas le "nom de domaine" dont vous avez l'habitude. Il est plus exactement appelé "nom de domaine NIS". Quand un client diffuse des requêtes pour obtenir des informations, il y inclut le nom de domaine NIS auquel il appartient. C'est ainsi que plusieurs serveurs d'un même réseau peuvent savoir lequel d'entre eux doit répondre aux différentes requêtes. Pensez au nom de domaine NIS comme le nom d'un groupe de machines qui sont reliées entre elles. Certains choisissent d'utiliser leur nom de domaine Internet pour nom de domaine NIS. Ce n'est pas conseillé parce que c'est une source de confusion quand il faut résoudre un problème réseau. Le nom de domaine NIS devrait être unique sur votre réseau et est utile s'il décrit le groupe de machines qu'il représente. Par exemple, le département artistique de Acme Inc. pourrait avoir "acme-art" comme nom de domaine NIS. Pour notre exemple, nous supposerons que vous avez choisi le nom _test-domain_. 
Cependant, certains systèmes d'exploitation (notamment SunOS(TM)) utilisent leur nom de domaine NIS pour nom de domaine Internet.
Si une ou plusieurs machines sur votre réseau présentent cette restriction, vous _devez_ utiliser votre nom de domaine Internet pour nom de domaine NIS.

===== Contraintes au niveau du serveur

Il y a plusieurs choses à garder à l'esprit quand on choisit une machine destinée à être un serveur NIS.
Un des problèmes du NIS est le degré de dépendance des clients vis-à-vis du serveur.
Si un client ne peut contacter le serveur de son domaine NIS, la plupart du temps la machine n'est plus utilisable.
L'absence d'information sur les utilisateurs et les groupes bloque la plupart des systèmes.
Vous devez donc vous assurer de choisir une machine qui ne sera pas redémarrée fréquemment, ni utilisée pour du développement.
Idéalement, le serveur NIS devrait être une machine dont l'unique utilisation serait d'être un serveur NIS.
Si vous avez un réseau qui n'est pas très chargé, il peut être envisagé de mettre le serveur NIS sur une machine fournissant d'autres services, gardez juste à l'esprit que si le serveur NIS n'est pas disponible à un instant donné, cela affectera _tous_ vos clients NIS.

==== Serveurs NIS

La copie de référence de toutes les informations NIS est stockée sur une seule machine appelée serveur NIS maître.
Les bases de données utilisées pour le stockage de ces informations sont appelées tables NIS ("NIS maps").
Sous FreeBSD, ces tables se trouvent dans [.filename]#/var/yp/[domainname]#, où [.filename]#[domainname]# est le nom du domaine NIS concerné.
Un seul serveur NIS peut gérer plusieurs domaines à la fois, il peut donc y avoir plusieurs de ces répertoires, un pour chaque domaine.
Chaque domaine aura son propre jeu de tables.

Les serveurs NIS maîtres et esclaves traitent toutes les requêtes NIS à l'aide du "daemon" ypserv.
ypserv reçoit les requêtes des clients NIS, traduit le nom de domaine et le nom de table demandés en chemin d'accès à la base de données correspondante et transmet l'information de la base de données au client.

===== Configurer un serveur NIS maître

Selon vos besoins, la configuration d'un serveur NIS maître peut être relativement simple.
FreeBSD offre par défaut un support direct du NIS.
Tout ce dont vous avez besoin est d'ajouter les lignes qui suivent au fichier [.filename]#/etc/rc.conf#, et FreeBSD s'occupera du reste pour vous.

[.procedure]
====
[.programlisting]
....
nisdomainname="test-domain"
....

. Cette ligne définit le nom de domaine NIS, `test-domain`, lors de la configuration du réseau (par exemple au démarrage).
+
[.programlisting]
....
nis_server_enable="YES"
....
+
. Demandera à FreeBSD de lancer les processus du serveur NIS dès que le réseau est en fonctionnement.
+
[.programlisting]
....
nis_yppasswdd_enable="YES"
....

. Ceci activera le "daemon" rpc.yppasswdd, qui, comme mentionné précédemment, permettra aux utilisateurs de modifier leur mot de passe à partir d'une machine cliente.
====

[NOTE]
====
Selon votre configuration NIS, vous aurez peut-être à ajouter des entrées supplémentaires.
Consultez la <>, plus bas, pour plus de détails.
====

Maintenant, tout ce que vous devez faire est d'exécuter la commande `/etc/netstart` en tant que super-utilisateur.
Elle configurera tout en utilisant les valeurs que vous avez définies dans [.filename]#/etc/rc.conf#.

===== Initialisation des tables NIS

Les _tables NIS_ sont des fichiers de base de données, qui sont conservés dans le répertoire [.filename]#/var/yp#.
Elles sont générées à partir des fichiers de configuration du répertoire [.filename]#/etc# du serveur NIS maître, avec une exception: le fichier [.filename]#/etc/master.passwd#.
Et cela pour une bonne raison: vous ne voulez pas divulguer les mots de passe pour l'utilisateur `root` et autres comptes d'administration aux autres serveurs du domaine NIS.
Par conséquent, avant d'initialiser les tables NIS, vous devrez faire: [source,shell] .... # cp /etc/master.passwd /var/yp/master.passwd # cd /var/yp # vi master.passwd .... Vous devrez effacer toutes les entrées concernant les comptes système (`bin`, `tty`, `kmem`, `games`, etc.), tout comme les comptes que vous ne désirez pas propager aux clients NIS (par exemple `root` et tout autre compte avec un UID 0 (super-utilisateur)). [NOTE] ==== Assurez-vous que le fichier [.filename]#/var/yp/master.passwd# n'est pas lisible par son groupe ou le reste du monde (mode 600)! Utilisez la commande `chmod` si nécessaire. ==== Cela achevé, il est temps d'initialiser les tables NIS! FreeBSD dispose d'une procédure appelée `ypinit` pour le faire à votre place (consultez sa page de manuel pour plus d'informations). Notez que cette procédure est disponible sur la plupart des systèmes d'exploitation du type UNIX(R), mais pas tous. Sur Digital UNIX/Compaq Tru64 UNIX, elle est appelée `ypsetup`. Comme nous voulons générer les tables pour un maître NIS, nous passons l'option `-m` à `ypinit`. Pour générer les tables NIS, en supposant que vous avez effectué les étapes précédentes, lancez: [source,shell] .... ellington# ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If you don't, something might not work. At this point, we have to construct a list of this domains YP servers. rod.darktech.org is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a . master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? 
[y/n: y] y

[..output from map generation..]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

`ypinit` should have created [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. When created, this file assumes that you are operating in a single-server NIS environment with only FreeBSD machines. Since `test-domain` also has a slave server, you must edit [.filename]#/var/yp/Makefile#:

[source,shell]
....
ellington# vi /var/yp/Makefile
....

You should comment out the line

[.programlisting]
....
NOPUSH = "True"
....

(if it is not commented out already).

===== Setting Up a NIS Slave Server

Setting up a NIS slave server is even simpler than setting up the master. Log on to the slave server and edit [.filename]#/etc/rc.conf# exactly as before. The only difference is that we must now use the `-s` option with `ypinit`. The `-s` option requires the name of the NIS master server, so our command line looks like this:

[source,shell]
....
coltrane# ypinit -s ellington test-domain

Server Type: SLAVE Domain: test-domain Master: ellington

Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n]  n

Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.

Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred

coltrane has been setup as an YP slave server without any errors.
Don't forget to update map ypservers on ellington.
....

You should now have a directory called [.filename]#/var/yp/test-domain#. Copies of the NIS master server's maps should be in this directory. You will need to make sure that these stay up to date. The following [.filename]#/etc/crontab# entries on your slave servers will take care of that:

[.programlisting]
....
20 * * * *       root   /usr/libexec/ypxfr passwd.byname
21 * * * *       root   /usr/libexec/ypxfr passwd.byuid
....

These two lines force the slave to synchronize its maps with those of the master server.
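If you want other maps refreshed on a schedule as well, further entries of the same form can be added, staggered so the transfers do not all hit the master in the same minute. The map names below match those shown in the transfer output above; the exact times are only a suggestion, not a recommended configuration.

```
# Hypothetical additional entries -- adjust map names and times to your site
22 * * * *       root   /usr/libexec/ypxfr group.byname
23 * * * *       root   /usr/libexec/ypxfr group.bygid
30 */2 * * *     root   /usr/libexec/ypxfr netgroup
```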
Although these entries are not mandatory, since the master server attempts to ensure that any changes to its NIS maps are communicated to its slaves, and because password information is vital to systems depending on the server, it is a good idea to force the updates. This is all the more important on busy networks, where map updates might not always complete.

Now run the command `/etc/netstart` on the slave server, which starts the NIS server.

==== NIS Clients

A NIS client establishes a binding to a particular NIS server using the ypbind daemon. ypbind checks the system's default domain (as set by the `domainname` command) and begins broadcasting RPC requests on the local network. These requests specify the name of the domain that ypbind is attempting to bind to. If a server configured for that domain receives one of the broadcasts, it responds to ypbind, which records the server's address. If there are several servers available (a master and several slaves, for example), ypbind uses the address of the first one to respond. From that point on, the client system directs all of its NIS requests to that server. ypbind occasionally "pings" the server to make sure it is still up and running. If it fails to receive a reply within a reasonable amount of time, ypbind considers the domain unbound and begins broadcasting again in the hope of locating another server.

===== Setting Up a NIS Client

Setting up a FreeBSD machine as a NIS client is fairly straightforward.

[.procedure]
====
. Edit [.filename]#/etc/rc.conf# and add the following lines in order to set the NIS domain name and start ypbind when the network comes up:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....
+
. To import all available password entries from the NIS server, remove all user accounts from your [.filename]#/etc/master.passwd# file and use `vipw` to add the following line at the end of the file:
+
[.programlisting]
....
+:::::::::
....
+
[NOTE]
======
This line gives everyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure your NIS client by modifying this line. See the <> section below for more information. For really detailed reading, see the book `Managing NFS and NIS` from O'Reilly.
======
+
[NOTE]
======
You should keep at least one local account (i.e. not imported via NIS) in your [.filename]#/etc/master.passwd#, and this account should also be a member of the group `wheel`. If something goes wrong with NIS, this account can be used to log in remotely, become `root`, and fix things.
======
+
. To import all available group entries from the NIS server, add this line to your [.filename]#/etc/group# file:
+
[.programlisting]
....
+:*::
....
====

After completing these steps, you should be able to run `ypcat passwd` and see the NIS server's password map.

=== NIS Security

In general, any remote user can issue an RPC request to man:ypserv[8] and retrieve the contents of your NIS maps, provided the remote user knows your domain name. To prevent such unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. At startup, man:ypserv[8] attempts to load the securenets information from a file called [.filename]#/var/yp/securenets#.
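The test that man:ypserv[8] applies to each entry in this file amounts to a bitwise AND: a client address matches an entry when the address ANDed with the netmask equals the network. The following shell sketch is illustrative only (it is not ypserv source code, and the helper names are made up):

```shell
# Illustrative reproduction of the securenets matching rule:
# a client matches an entry when (client AND mask) == network.
ip_to_int() {
    # Convert a dotted-quad IPv4 address to a single integer.
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 ))
}

# matches_securenet CLIENT NETWORK MASK -> prints "yes" or "no"
matches_securenet() {
    client=$(ip_to_int "$1")
    net=$(ip_to_int "$2")
    mask=$(ip_to_int "$3")
    if [ $(( client & mask )) -eq "$net" ]; then echo yes; else echo no; fi
}
```

For example, with the `10.0.0.0 255.255.240.0` entry shown below, a request from `10.0.3.7` matches, while one from `10.0.16.1` falls outside the 10.0.0.0-10.0.15.255 range and is rejected.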
[NOTE]
====
This path may vary depending on the path specified with the `-p` option.
====

This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with "#" are considered to be comments. A sample [.filename]##securenets## file might look like this:

[.programlisting]
....
# allow connections from the local host -- mandatory
127.0.0.1     255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 and 10.0.15.255
# this includes the machines in the test lab
10.0.0.0      255.255.240.0
....

If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally. If an address fails to match a rule, the request will be ignored and a warning message will be logged. If the [.filename]#/var/yp/securenets# file does not exist, `ypserv` will allow connections from any host.

The `ypserv` program also has support for Wietse Venema's TCP Wrapper package. This allows the administrator to use the TCP Wrapper configuration files for access control instead of [.filename]#/var/yp/securenets#.

[NOTE]
====
While both of these access control mechanisms provide some security, they, like the privileged port test, are vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at your firewall.

Servers using [.filename]#/var/yp/securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts and/or fail to observe the subnet mask when calculating the broadcast address.
While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of the client systems in question or the abandonment of [.filename]#/var/yp/securenets#.

Using [.filename]#/var/yp/securenets# on a server with such an archaic TCP/IP implementation is a really bad idea and will lead to a loss of NIS functionality for large parts of your network.

The use of the TCP Wrapper package increases the latency of your NIS server. The additional delay may be long enough to cause timeouts in client programs, especially on busy networks or with slow NIS servers. If one or more of your client systems suffers from these symptoms, you should convert the client systems in question into NIS slave servers and force them to bind to themselves.
====

=== Barring Some Users from Logging On

In our lab, there is a machine `basie` that is supposed to be a faculty-only workstation. We do not want to take this machine out of the NIS domain, yet the [.filename]#passwd# file on the master NIS server contains accounts for both faculty and students. What can we do?

There is a way to bar specific users from logging on to a machine, even if they are present in the NIS database. To do this, all you need to do is add __-username__ to the end of the [.filename]#/etc/master.passwd# file on the client machine, where _username_ is the name of the user you wish to bar from logging in. This should preferably be done using `vipw`, since `vipw` will sanity-check your changes to [.filename]#/etc/master.passwd# and automatically rebuild the password database when you finish editing.
For example, if we wanted to bar user `bill` from logging on to `basie`, we would do:

[source,shell]
....
basie# vipw
[add -bill to the end, exit]
vipw: rebuilding the database...
vipw: done

basie# cat /etc/master.passwd

root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin
operator:*:2:5::0:0:System &:/:/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin
+:::::::::
-bill

basie#
....

[[network-netgroups]]
=== Using Netgroups

The method outlined in the previous section works reasonably well if you need special rules for a very small number of users and/or machines. On larger networks, you _will_ forget to bar some users from logging on to sensitive machines, or you may even have to modify each machine separately, thereby losing the main benefit of NIS: _centralized_ administration.

The NIS developers' solution for this problem is called _netgroups_. Their purpose and semantics can be compared to the normal groups used by UNIX(R) systems.
The main differences are the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups.

Netgroups were developed to handle large, complex networks with hundreds of machines and users. They are a good option if you are forced to deal with such a situation. However, their complexity makes it impossible to explain them with really simple examples. The example used in the remainder of this section demonstrates this problem.

Let us assume that your successful introduction of NIS in your laboratory caught your superiors' attention. Your next job is to extend your NIS domain to cover some of the other machines on campus. The two tables contain the names of the new users and new machines, as well as brief descriptions of each.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| User Name(s) | Description

|`alpha`, `beta`
|The IT ("Information Technology") department employees

|`charlie`, `delta`
|The new IT department apprentices

|`echo`, `foxtrott`, `golf`, ...
|The ordinary employees

|`able`, `baker`, ...
|The current interns
|===

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Machine Name(s) | Description

|`war`, `death`, `famine`, `pollution`
|Your most important servers. Only the IT employees are allowed to log on to these machines.

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|Less important servers. All members of the IT lab are allowed to log on to these machines.

|`one`, `two`, `three`, `four`, ...
|Ordinary workstations. Only the _real_ employees are allowed to use these machines.

|`trashcan`
|A very old machine without any critical data. Even the interns are allowed to use this box.
|===

If you tried to implement these restrictions by separately blocking each user, you would have to add one `-user` line to each system's [.filename]#passwd# file for each user who is not allowed to log on to that system. If you forget even one entry, you could be in trouble. It may be feasible to do this correctly during the initial setup; however, you _will_ eventually forget to add the lines for new users during day-to-day operations. After all, Murphy was an optimist.

Handling this situation with netgroups offers several advantages. Each user need not be handled separately; you assign a user to one or more netgroups and allow or forbid logins for all members of the netgroup. If you add a new machine, you only have to define the login restrictions for netgroups. These changes are independent of each other; no more "for each combination of user and machine do..." If your NIS setup is planned carefully, you only have to modify a single central configuration to grant or deny access to machines.

The first step is the initialization of the NIS netgroup map. FreeBSD's man:ypinit[8] does not create this map by default, but its NIS implementation will support it once it has been created. To create an empty map, simply type

[source,shell]
....
ellington# vi /var/yp/netgroup
....

and begin adding content. For our example, we need four netgroups: the IT department employees, the IT department apprentices, the ordinary employees, and the interns.

[.programlisting]
....
IT_EMP  (,alpha,test-domain)    (,beta,test-domain)
IT_APP  (,charlie,test-domain)  (,delta,test-domain)
USERS   (,echo,test-domain)     (,foxtrott,test-domain) \
        (,golf,test-domain)
INTERNS (,able,test-domain)     (,baker,test-domain)
....

`IT_EMP`, `IT_APP` etc. are the names of the netgroups. Each bracketed group adds one or more user accounts to the group. The three fields inside a group are:

. The name of the host(s) where the following items are valid. If you do not specify a hostname, the entry is valid on all hosts. If you do specify a hostname, you will enter a realm of darkness, horror and utter confusion.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account. You can import accounts from other NIS domains into your netgroup if you are one of the unlucky people with more than one NIS domain.

Each of these fields can contain wildcards. See man:netgroup[5] for details.

[NOTE]
====
Netgroup names longer than 8 characters should not be used, especially if you have machines running other operating systems within your NIS domain. The names are case-sensitive; using capital letters for your netgroup names is an easy way to distinguish between user, machine and netgroup names.

Some NIS clients (other than FreeBSD) cannot handle netgroups containing a large number of entries. For example, some older versions of SunOS(TM) start to cause trouble if a netgroup contains more than 15 _entries_. You can circumvent this limit by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups:

[.programlisting]
....
BIGGRP1  (,joe1,domain)  (,joe2,domain)  (,joe3,domain) [...]
BIGGRP2  (,joe16,domain)  (,joe17,domain) [...]
BIGGRP3  (,joe31,domain)  (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3
....

You can repeat this process if you need more than 225 users within a single netgroup.
====

Activating and distributing your new NIS map is easy:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost# and [.filename]#netgroup.byuser#. Use man:ypcat[1] to check whether your new NIS maps are available:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command will not produce output if you have not specified host-specific netgroups. The third command can be used to get the list of netgroups for a user.

The client setup is quite simple. To configure the server `war`, you only have to start man:vipw[8] and replace the line

[.programlisting]
....
+:::::::::
....

with

[.programlisting]
....
+@IT_EMP:::::::::
....

Now, only the data for the users defined in the netgroup `IT_EMP` is imported into `war`'s password database, and only these users are allowed to log in.

Unfortunately, this limitation also applies to the `~` function of the shell and to all routines that convert between user names and numerical user IDs. In other words, `cd ~user` will not work, `ls -l` will show the numerical ID instead of the username, and `find . -user joe -print` will fail with the message `No such user`. To fix this, you will have to import all user entries _without allowing them to log in to your servers_.
This can be done by adding another line to [.filename]#/etc/master.passwd#. This line should contain `+:::::::::/sbin/nologin`, meaning "import all entries, but replace the shell in the imported entries with [.filename]#/sbin/nologin#". You can replace any field in the `passwd` entry by placing a default value in your [.filename]#/etc/master.passwd#.

[WARNING]
====
Make sure that the line `+:::::::::/sbin/nologin` is placed after `+@IT_EMP:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/sbin/nologin# as their login shell.
====

After this change, you only have to modify one NIS map if a new employee joins the IT department. You could use a similar approach for the less important servers by replacing the old `+:::::::::` line in their local copies of [.filename]#/etc/master.passwd# with something like this:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/sbin/nologin
....

The corresponding lines for the ordinary workstations would be:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/sbin/nologin
....

And everything was fine until the policy changed a few weeks later: the IT department started hiring interns. The IT interns are allowed to use the ordinary workstations and the less important servers, and the IT apprentices are allowed to log on to the main servers. You add a new netgroup `IT_INTERN`, add the new IT interns to this netgroup, and start changing the configuration on each and every machine... As the old saying goes: "Errors in centralized planning lead to global mess."
NIS's ability to create netgroups from other netgroups can be used to prevent situations like this one. One possibility is the creation of role-based netgroups. For example, you could create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers, and a third netgroup called `USERBOX` for the ordinary workstations. Each of these netgroups contains the netgroups that are allowed to log on to these machines. The new entries for your NIS netgroup map would look like this:

[.programlisting]
....
BIGSRV    IT_EMP   IT_APP
SMALLSRV  IT_EMP   IT_APP   ITINTERN
USERBOX   IT_EMP   ITINTERN USERS
....

This method of defining login restrictions works reasonably well if you can define groups of machines with identical restrictions. Unfortunately, this is the exception, not the rule. Most of the time, you will need the ability to define login restrictions on a per-machine basis.

Machine-specific netgroup definitions are the other possibility for dealing with the policy change outlined above. In this scenario, the [.filename]#/etc/master.passwd# file of each machine contains two lines starting with "+". The first one adds a netgroup with the accounts allowed to log in to this machine, and the second one adds all other accounts with [.filename]#/sbin/nologin# as their shell. It is a good idea to use capital letters for the machine name as well as for the netgroup name. In other words, the lines in question should look like this:

[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/sbin/nologin
....
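Because netgroup triples follow a completely regular format, the user-defining portion of such a map can be generated from a plain list of group/user pairs instead of being typed by hand. The helper name and the input format below are hypothetical, a sketch rather than a standard tool:

```shell
# Hypothetical generator: turn a "NETGROUP user" list (one pair per
# line) into netgroup lines of the form
#   NETGROUP (,user1,domain) (,user2,domain) ...
make_netgroup_lines() {
    # $1 = NIS domain name, $2 = input file
    awk -v domain="$1" '
        { members[$1] = members[$1] " (," $2 "," domain ")" }
        END { for (g in members) print g members[g] }
    ' "$2"
}

# Example use:
#   make_netgroup_lines test-domain userlist.txt >> /var/yp/netgroup
```

Feeding it `DEPT1 echo` and `DEPT1 foxtrott` would produce the single line `DEPT1 (,echo,test-domain) (,foxtrott,test-domain)`, matching the syntax used throughout this section.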
Once you have completed this task for all your machines, you will never have to modify the local versions of [.filename]#/etc/master.passwd# again. All further changes can be handled by modifying the NIS map. Here is an example of a possible netgroup map for this scenario, with some goodies added:

[.programlisting]
....
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1    DEPT2    DEPT3
BIGSRV    IT_EMP   IT_APP
SMALLSRV  IT_EMP   IT_APP   ITINTERN
USERBOX   IT_EMP   ITINTERN USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus machine
SECURITY  IT_EMP   (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV   (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

If you are using some kind of database to manage your user accounts, you should be able to create the first part of the map with your database's tools. That way, new users will automatically have access to the machines.

One last word of caution: it may not always be advisable to use machine-based netgroups.
If you are deploying a couple of dozen or even hundreds of identical machines for student labs, you should use role-based netgroups instead of machine-based netgroups to keep the size of the NIS map within reasonable limits.

=== Important Things to Remember

There are still a couple of things you will need to do differently now that you are in a NIS environment.

* Every time you wish to add a user to the lab, you must add it on the master NIS server _only_, and _you must remember to rebuild the NIS maps_. If you forget to do this, the new user will not be able to log in anywhere except on the NIS master itself. For example, if we needed to add a new user `jsmith` to the lab, we would do:
+
[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....
+
You could also run `adduser jsmith` instead of `pw useradd jsmith`.
* _Keep the administration accounts out of the NIS maps_. You do not want to propagate administrative accounts and their passwords to machines that will have users who should not have access to those accounts.
* _Keep the NIS master and slaves secure, and minimize their downtime_. If somebody either hacks or simply shuts down these machines, many people will no longer be able to log in to the lab.
+
This is the chief weakness of any centralized administration system. If you do not protect your NIS servers, you will have a lot of angry users!

=== NIS v1 Compatibility

FreeBSD's ypserv offers some support for NIS v1 clients. FreeBSD's NIS implementation uses only the NIS v2 protocol; however, other implementations include support for the v1 protocol for backwards compatibility with older systems.
The ypbind daemons supplied with these systems will try to establish a binding to a NIS v1 server even though they may never actually need it (and they may keep broadcasting in search of one even after they have received a response from a v2 server). Note that while requests from normal clients are supported, this version of ypserv does not handle v1 map transfer requests; consequently, it cannot be used as a master or slave in conjunction with older NIS servers that only support the v1 protocol. Fortunately, there are probably no such servers still in use today.

[[network-nis-server-is-client]]
=== NIS Servers That Are Also NIS Clients

Care must be taken when running ypserv in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others depend on it. Eventually all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable, and the failure mode is still present since the servers might bind to each other all over again.

You can force a host to bind to a particular server by running `ypbind` with the `-S` flag. If you do not want to do this manually each time you reboot your NIS server, you can add the following lines to your [.filename]#/etc/rc.conf#:

[.programlisting]
....
nis_client_enable="YES" # run client stuff as well
nis_client_flags="-S NIS domain,server"
....

See man:ypbind[8] for further information.
=== Password Formats

One of the most common issues that people run into when trying to implement NIS is password format compatibility. If your NIS server uses DES-encrypted passwords, it will only support clients that also use DES. For example, if you have Solaris(TM) NIS clients on your network, then you will almost certainly need to use DES-encrypted passwords.

To check which format your servers and clients use, look at [.filename]#/etc/login.conf#. If the host is configured to use DES-encrypted passwords, then the `default` class will contain an entry like this:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

Other possible values for the `passwd_format` capability include `blf` and `md5` (for Blowfish and MD5 encrypted passwords, respectively).

If you have made changes to [.filename]#/etc/login.conf#, you will also need to rebuild the login capability database, which is accomplished by running the following command as `root`:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords already in [.filename]#/etc/master.passwd# will not be updated until a user changes his password for the first time _after_ the login capability database is rebuilt.
====

Next, in order to ensure that passwords are encrypted with the format that you have chosen, you should also check that the `crypt_default` entry in [.filename]#/etc/auth.conf# gives precedence to your chosen password format. For example, when using DES passwords, the entry would be:

[.programlisting]
....
crypt_default	=	des blf md5
....
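If you are unsure which format the hashes already stored in [.filename]#/etc/master.passwd# actually use, the prefix of a hash gives it away. The following helper is purely illustrative and not part of the base system:

```shell
# Illustrative helper: guess a crypt format from the hash prefix,
# using the same names as the passwd_format capability above.
hash_format() {
    case "$1" in
        '$1$'*) echo md5 ;;
        '$2'*)  echo blf ;;   # Blowfish hashes start with $2, $2a, $2b, ...
        *)      echo des ;;   # traditional DES hashes carry no prefix
    esac
}
```

For example, a hash beginning with `$1$` was produced with MD5, while a bare 13-character string is a traditional DES hash.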
En suivant les points précédents sur chaque serveur et client NIS sous FreeBSD, vous pouvez être sûr qu'ils seront tous d'accord sur le format de mot de passe utilisé dans le réseau. Si vous avez des problèmes d'authentification sur un client NIS, c'est probablement la première chose à vérifier. Rappelez-vous: si vous désirez mettre en place un serveur NIS pour un réseau hétérogène, vous devrez probablement utiliser DES sur tous les systèmes car c'est le standard le plus courant. [[network-dhcp]] == Configuration réseau automatique (DHCP) === Qu'est-ce que DHCP? DHCP, le protocole d'attribution dynamique des adresses ("Dynamic Host Configuration Protocol"), décrit les moyens par lesquels un système peut se connecter à un réseau et obtenir les informations nécessaires pour dialoguer sur ce réseau. Les versions de FreeBSD antérieures à la version 6.0 utilisent l'implémentation du client DHCP (man:dhclient[8]) de l'ISC (Internet Software Consortium). Les versions suivantes utilisent le programme `dhclient` d'OpenBSD issu d'OpenBSD 3.7. Toutes les informations données ici au sujet de `dhclient` sont valables aussi bien pour le client DHCP d'ISC que pour celui d'OpenBSD. Le serveur DHCP est celui distribué par le consortium ISC. === Ce que traite cette section Cette section décrit les composants côté client des clients DHCP d'ISC et d' OpenBSD et côté serveur du système DHCP ISC. Le programme client, `dhclient`, est intégré à FreeBSD, la partie serveur est disponible à partir du logiciel porté package:net/isc-dhcp3-server[]. Les pages de manuel man:dhclient[8], man:dhcp-options[5], et man:dhclient.conf[5], en plus des références données plus bas, sont des ressources utiles. === Comment cela fonctionne-t-il? Quand `dhclient`, le client DHCP, est exécuté sur la machine cliente, il commence à diffuser des requêtes de demandes d'information de configuration. Par défaut, ces requêtes sont effectuées sur le port UDP 68. 
Le serveur répond sur le port UDP 67, fournissant au client une adresse IP et d'autres informations réseau importantes comme le masque de sous-réseau, les routeurs, et les serveurs DNS. Toutes ces informations viennent sous la forme d'un "bail" DHCP qui est uniquement valide pendant un certain temps (configuré par l'administrateur du serveur DHCP). De cette façon, les adresses IP expirées pour les clients qui ne sont plus connectés peuvent être automatiquement récupérées. Les clients DHCP peuvent obtenir une grande quantité d'informations à partir du serveur. Une liste exhaustive est donnée dans la page de manuel man:dhcp-options[5]. === Intégration dans FreeBSD Le client DHCP ISC ou OpenBSD (en fonction de la version de FreeBSD que vous utilisez), `dhclient`, est complètement intégré à FreeBSD. Le support du client DHCP est fourni avec l'installeur et le système de base, rendant superflue toute connaissance détaillée des configurations réseau pour n'importe quel réseau utilisant un serveur DHCP. `dhclient` fait partie de toutes les versions de FreeBSD depuis la version 3.2. DHCP est supporté par sysinstall. Quand on configure une interface réseau sous sysinstall, la deuxième question posée est: "Voulez-vous tenter la configuration DHCP de l'interface?". Répondre par l'affirmative à cette question lancera `dhclient`, et en cas de succès, complétera automatiquement les informations de configuration réseau. Vous devez faire deux choses pour que votre système utilise DHCP au démarrage: * Assurez-vous que le périphérique [.filename]#bpf# est compilé dans votre noyau. Pour cela, vous devez ajouter la ligne `device bpf` à votre fichier de configuration du noyau, et recompiler le noyau. Pour plus d'informations sur la compilation de noyaux, consultez le crossref:kernelconfig[kernelconfig,Configurer le noyau de FreeBSD].
+ Le périphérique [.filename]#bpf# est déjà présent dans le noyau [.filename]#GENERIC# qui est fourni avec FreeBSD, vous ne devez donc pas créer de noyau spécifique pour faire fonctionner DHCP. + [NOTE] ==== Ceux qui sont particulièrement conscients de l'aspect sécurité devraient noter que [.filename]#bpf# est également le périphérique qui permet le fonctionnement de "renifleurs" de paquets (de tels programmes doivent être lancés sous l'utilisateur `root`). [.filename]#bpf#_est_ nécessaire pour utiliser DHCP, mais si vous êtes très sensible à la sécurité, vous ne devriez probablement pas ajouter [.filename]#bpf# à votre noyau parce que vous projetez d'utiliser DHCP dans le futur. ==== * Editez votre fichier [.filename]#/etc/rc.conf# pour y ajouter ce qui suit: + [.programlisting] .... ifconfig_fxp0="DHCP" .... + [NOTE] ==== Assurez-vous de bien remplacer `fxp0` par l'interface que vous voulez configurer de façon dynamique comme décrit dans la crossref:config[config-network-setup,Configuration des cartes réseaux]. ==== + Si vous utilisez un emplacement différent pour `dhclient`, ou si vous désirez passer des arguments supplémentaires à `dhclient`, ajoutez ce qui suit (en effectuant des modifications si nécessaire): + [.programlisting] .... dhcp_program="/sbin/dhclient" dhcp_flags="" .... Le serveur DHCP, dhcpd, fait partie du logiciel porté package:net/isc-dhcp3-server[] disponible dans le catalogue des logiciels portés. Ce logiciel porté contient le serveur DHCP ISC et sa documentation. === Fichiers * [.filename]#/etc/dhclient.conf# + `dhclient` nécessite un fichier de configuration, [.filename]#/etc/dhclient.conf#. Généralement le fichier ne contient que des commentaires, les valeurs par défaut étant suffisantes. Ce fichier de configuration est décrit par la page de manuel man:dhclient.conf[5]. * [.filename]#/sbin/dhclient# + `dhclient` est lié statiquement et réside dans le répertoire [.filename]#/sbin#. 
La page de manuel man:dhclient[8] donne beaucoup plus d'informations au sujet de `dhclient`. * [.filename]#/sbin/dhclient-script# + `dhclient-script` est la procédure de configuration du client DHCP spécifique à FreeBSD. Elle est décrite dans la page de manuel man:dhclient-script[8], mais ne devrait pas demander de modification de la part de l'utilisateur pour fonctionner correctement. * [.filename]#/var/db/dhclient.leases# + Le client DHCP conserve une base de données des baux valides, qui est écrite comme un fichier journal. La page de manuel man:dhclient.leases[5] en donne une description légèrement plus longue. === Lecture supplémentaire Le protocole DHCP est intégralement décrit dans la http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Des informations sont également disponibles à l'adresse http://www.dhcp.org/[http://www.dhcp.org/]. [[network-dhcp-server]] === Installer et configurer un serveur DHCP ==== Ce que traite cette section Cette section fournit les informations nécessaires à la configuration d'un système FreeBSD comme serveur DHCP en utilisant l'implémentation ISC (Internet Software Consortium) du serveur DHCP. Le serveur n'est pas fourni dans le système de base de FreeBSD, et vous devrez installer le logiciel porté package:net/isc-dhcp3-server[] pour bénéficier de ce service. Lisez le crossref:ports[ports,Installer des applications. les logiciels pré-compilés et les logiciels portés] pour plus d'information sur l'utilisation du catalogue des logiciels portés. ==== Installation d'un serveur DHCP Afin de configurer votre système FreeBSD en serveur DHCP, vous devrez vous assurer que le support du périphérique man:bpf[4] est compilé dans votre noyau. Pour cela ajouter la ligne `device bpf` dans votre fichier de configuration du noyau. Pour plus d'information sur la compilation de noyaux, consultez le crossref:kernelconfig[kernelconfig,Configurer le noyau de FreeBSD]. 
Le périphérique [.filename]#bpf# est déjà présent dans le noyau [.filename]#GENERIC# qui est fourni avec FreeBSD, vous ne devez donc pas créer de noyau spécifique pour faire fonctionner DHCP. [NOTE] ==== Ceux qui sont particulièrement conscients de l'aspect sécurité devraient noter que [.filename]#bpf# est également le périphérique qui permet le fonctionnement de "renifleurs" de paquets (de tels programmes nécessitent également un accès avec privilèges). [.filename]#bpf#_est_ nécessaire pour utiliser DHCP, mais si vous êtes très sensible à la sécurité, vous ne devriez probablement pas ajouter [.filename]#bpf# à votre noyau parce que vous projetez d'utiliser DHCP dans le futur. ==== Il vous reste ensuite à éditer le fichier [.filename]#dhcpd.conf# d'exemple qui a été installé par le logiciel porté package:net/isc-dhcp3-server[]. Par défaut, cela sera [.filename]#/usr/local/etc/dhcpd.conf.sample#, et vous devriez le copier vers [.filename]#/usr/local/etc/dhcpd.conf# avant de commencer vos modifications. ==== Configuration du serveur DHCP [.filename]#dhcpd.conf# est composé de déclarations concernant les masques de sous-réseaux et les machines, il est peut-être plus facile à expliquer à l'aide d'un exemple: [.programlisting] .... option domain-name "example.com"; <.> option domain-name-servers 192.168.4.100; <.> option subnet-mask 255.255.255.0; <.> default-lease-time 3600; <.> max-lease-time 86400; <.> ddns-update-style none; <.> subnet 192.168.4.0 netmask 255.255.255.0 { range 192.168.4.129 192.168.4.254; <.> option routers 192.168.4.1; <.> } host mailhost { hardware ethernet 02:03:04:05:06:07; <.> fixed-address mailhost.example.com; <.> } .... <.> Cette option spécifie le domaine qui sera donné aux clients comme domaine par défaut. Consultez la page de manuel de man:resolv.conf[5] pour plus d'information sur sa signification. <.> Cette option donne une liste, séparée par des virgules, de serveurs DNS que le client devrait utiliser. 
<.> Le masque de sous-réseau qui sera fourni aux clients. <.> Un client peut demander un bail d'une durée bien précise. Sinon, par défaut, le serveur alloue un bail avec cette durée avant expiration (en secondes). <.> C'est la durée maximale d'allocation autorisée par le serveur. Si un client demande un bail plus long, le bail sera accordé mais il ne sera valide que durant `max-lease-time` secondes. <.> Cette option indique si le serveur DHCP doit tenter de mettre à jour le DNS quand un bail est accepté ou révoqué. Dans l'implémentation ISC, cette option est _obligatoire_. <.> Ceci indique quelles adresses IP devraient être utilisées dans l'ensemble des adresses réservées aux clients. Les adresses comprises dans l'intervalle spécifié sont allouées aux clients. <.> Définit la passerelle par défaut fournie aux clients. <.> L'adresse matérielle MAC d'une machine (de manière à ce que le serveur DHCP puisse reconnaître une machine quand elle envoie une requête). <.> Indique que la machine devrait toujours se voir attribuer la même adresse IP. Notez que l'utilisation d'un nom de machine ici est correcte, puisque le serveur DHCP effectuera une résolution de nom sur le nom de la machine avant de renvoyer l'information sur le bail. Une fois l'écriture de votre fichier [.filename]#dhcpd.conf# terminée, vous devez activer le serveur DHCP dans le fichier [.filename]#/etc/rc.conf#, en ajoutant: [.programlisting] .... dhcpd_enable="YES" dhcpd_ifaces="dc0" .... Remplacez le nom de l'interface `dc0` par celui de l'interface (ou des interfaces, séparées par un espace) sur laquelle votre serveur DHCP attendra les requêtes des clients DHCP. Ensuite, vous pouvez lancer le serveur en tapant la commande suivante: [source,shell] .... # /usr/local/etc/rc.d/isc-dhcpd.sh start ....
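Pour revenir sur la directive `range` de l'exemple ci-dessus: l'intervalle 192.168.4.129 à 192.168.4.254 détermine le nombre de baux dynamiques disponibles, que l'on peut vérifier d'un simple calcul:

```shell
# Bornes de l'intervalle déclaré dans l'exemple de dhcpd.conf
first=129
last=254

# Nombre d'adresses allouables dynamiquement (bornes incluses)
echo "Adresses disponibles: $((last - first + 1))"
```

Soit 126 adresses dynamiques, les adresses .1 à .128 restant libres pour des attributions fixes comme celle de `mailhost`.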
Si vous devez, dans le futur, effectuer des changements dans la configuration de votre serveur, il est important de savoir que l'envoi d'un signal `SIGHUP` à dhcpd ne provoque _pas_ le rechargement de la configuration, contrairement à la plupart des "daemons". Vous devrez envoyer un signal `SIGTERM` pour arrêter le processus, puis le relancer en utilisant la commande ci-dessus. ==== Fichiers * [.filename]#/usr/local/sbin/dhcpd# + dhcpd est lié statiquement et réside dans le répertoire [.filename]#/usr/local/sbin#. La page de manuel man:dhcpd[8] installée avec le logiciel porté donne beaucoup plus d'informations au sujet de dhcpd. * [.filename]#/usr/local/etc/dhcpd.conf# + dhcpd nécessite un fichier de configuration, [.filename]#/usr/local/etc/dhcpd.conf#, avant de pouvoir commencer à offrir ses services aux clients. Ce fichier doit contenir toutes les informations à fournir aux clients qui seront traités, en plus des informations concernant le fonctionnement du serveur. Ce fichier de configuration est décrit par la page de manuel man:dhcpd.conf[5] installée par le logiciel porté. * [.filename]#/var/db/dhcpd.leases# + Le serveur DHCP conserve une base de données des baux qu'il a délivrés, qui est écrite comme un fichier journal. La page de manuel man:dhcpd.leases[5] installée par le logiciel porté en donne une description légèrement plus longue. * [.filename]#/usr/local/sbin/dhcrelay# + dhcrelay est utilisé dans les environnements avancés où un serveur DHCP fait suivre la requête d'un client vers un autre serveur DHCP sur un réseau séparé. Si vous avez besoin de cette fonctionnalité, installez alors le logiciel porté package:net/isc-dhcp3-server[]. La page de manuel man:dhcrelay[8] fournie avec le logiciel porté contient plus de détails. [[network-dns]] == Serveurs de noms (DNS) === Généralités FreeBSD utilise, par défaut, BIND (Berkeley Internet Name Domain), qui est l'implémentation la plus courante du protocole DNS.
Le DNS est le protocole qui effectue la correspondance entre noms et adresses IP, et inversement. Par exemple une requête pour `www.FreeBSD.org` aura pour réponse l'adresse IP du serveur Web du projet FreeBSD, et une requête pour `ftp.FreeBSD.org` renverra l'adresse IP de la machine FTP correspondante. De même, l'opposé est possible. Une requête pour une adresse IP retourne son nom de machine. Il n'est pas nécessaire de faire tourner un serveur DNS pour effectuer des requêtes DNS sur un système. FreeBSD est actuellement fourni par défaut avec le serveur DNS BIND9. Notre installation est dotée de fonctionnalités étendues au niveau de la sécurité, d'une nouvelle organisation du système de fichiers et d'une configuration en environnement man:chroot[8] automatisée. Le DNS est coordonné sur l'Internet à travers un système complexe de serveurs de noms racines faisant autorité, de domaines de premier niveau ("Top Level Domain", TLD), et d'autres serveurs de noms de plus petites tailles qui hébergent, directement ou en faisant office de "cache", l'information pour des domaines individuels. Actuellement, BIND est maintenu par l'Internet Software Consortium http://www.isc.org/[http://www.isc.org/]. === Terminologie Pour comprendre ce document, certains termes relatifs au DNS doivent être maîtrisés. [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Terme | Définition |"Forward" DNS |Correspondance noms de machine vers adresses IP. |Origine |Fait référence au domaine couvert par un fichier de zone particulier. |named, BIND, serveur de noms |Noms courants pour le serveur de noms BIND de FreeBSD. |Résolveur |Un processus système par l'intermédiaire duquel une machine contacte un serveur de noms pour obtenir des informations sur une zone. |DNS inverse |C'est l'inverse du DNS "classique" ("Forward" DNS). C'est la correspondance adresses IP vers noms de machine. |Zone racine |Début de la hiérarchie de la zone Internet.
Toutes les zones sont rattachées à la zone racine, de la même manière qu'un système de fichiers est rattaché au répertoire racine. |Zone |Un domaine individuel, un sous-domaine, ou une partie des noms administrés par un même serveur faisant autorité. |=== Exemples de zones: * `.` est la zone racine * `org.` est un domaine de premier niveau (TLD) sous la zone racine * `example.org.` est une zone sous le TLD `org.` * `1.168.192.in-addr.arpa` est une zone faisant référence à toutes les adresses IP qui appartiennent à l'espace d'adresses `192.168.1.*`. Comme on peut le remarquer, la partie la plus significative d'un nom de machine est à sa gauche. Par exemple, `example.org.` est plus spécifique que `org.`, tout comme `org.` est à son tour plus spécifique que la zone racine. La constitution de chaque partie d'un nom de machine est proche de celle d'un système de fichiers: le répertoire [.filename]#/dev# se trouve sous la racine, et ainsi de suite. === Les raisons de faire tourner un serveur de noms Les serveurs de noms se présentent généralement sous deux formes: un serveur de noms faisant autorité, et un serveur de noms cache. Un serveur de noms faisant autorité est nécessaire quand: * on désire fournir des informations DNS au reste du monde, être le serveur faisant autorité lors des réponses aux requêtes. * un domaine, comme par exemple `example.org`, est enregistré et des adresses IP doivent être assignées à des noms de machine appartenant à ce domaine. * un bloc d'adresses IP nécessite des entrées DNS inverses (IP vers nom de machine). * un second serveur de noms, ou serveur de secours, appelé esclave, doit répondre aux requêtes. Un serveur de noms cache est nécessaire quand: * un serveur de noms local peut faire office de cache et répondre plus rapidement que l'interrogation d'un serveur de noms extérieur. Quand on émet des requêtes pour `www.FreeBSD.org`, le résolveur interroge généralement le serveur de noms du fournisseur d'accès, et récupère la réponse.
Avec un serveur DNS cache local, la requête ne doit être effectuée qu'une seule fois vers le monde extérieur par le serveur DNS cache. Chaque interrogation suivante n'aura pas à être transmise en dehors du réseau local, puisque l'information est désormais disponible localement dans le cache. === Comment cela fonctionne-t-il? Sous FreeBSD, le "daemon" BIND est appelé named pour des raisons évidentes. [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Fichier | Description |man:named[8] |le "daemon" BIND |man:rndc[8] |le programme de contrôle du serveur de noms |[.filename]#/etc/namedb# |répertoire où se trouvent les informations sur les zones de BIND |[.filename]#/etc/namedb/named.conf# |le fichier de configuration du "daemon" |=== En fonction de la manière dont une zone donnée est configurée sur le serveur, les fichiers relatifs à cette zone pourront être trouvés dans les sous-répertoires [.filename]#master#, [.filename]#slave#, ou [.filename]#dynamic# du répertoire [.filename]#/etc/namedb#. Ces fichiers contiennent les informations DNS qui seront données par le serveur de noms en réponse aux requêtes. === Lancer BIND Puisque BIND est installé par défaut, sa configuration est relativement simple. La configuration par défaut de named est un serveur de noms résolveur basique, tournant dans un environnement man:chroot[8]. Pour lancer le serveur avec cette configuration, utilisez la commande suivante: [source,shell] .... # /etc/rc.d/named forcestart .... Pour s'assurer que le "daemon" named est lancé à chaque démarrage, ajoutez la ligne suivante dans [.filename]#/etc/rc.conf#: [.programlisting] .... named_enable="YES" .... Il existe, bien évidemment, de nombreuses options de configuration pour [.filename]#/etc/namedb/named.conf# qui dépassent le cadre de ce document.
Si vous êtes intéressé par les options de démarrage de named sous FreeBSD, jetez un oeil aux paramètres `named_*` dans [.filename]#/etc/defaults/rc.conf# et consultez la page de manuel man:rc.conf[5]. La section crossref:config[configtuning-rcd,Utilisation du système rc(8) sous FreeBSD] constitue également une bonne lecture. === Fichiers de configuration Les fichiers de configuration pour named se trouvent dans le répertoire [.filename]#/etc/namedb# et devront être adaptés avant toute utilisation, à moins que l'on ait besoin que d'un simple résolveur. C'est dans ce répertoire où la majeure partie de la configuration se fera. ==== Utilisation de `make-localhost` Pour configurer une zone maître, il faut se rendre dans le répertoire [.filename]#/etc/namedb/# et exécuter la commande suivante: [source,shell] .... # sh make-localhost .... Si tout s'est bien passé, un nouveau fichier devrait apparaître dans le sous-répertoire [.filename]#master#. Les noms de fichiers devraient être [.filename]#localhost.rev# pour le nom de domaine local et [.filename]#localhost-v6.rev# pour les configurations IPv6. Tout comme le fichier de configuration par défaut, les informations nécessaires seront présentes dans le fichier [.filename]#named.conf#. ==== [.filename]#/etc/namedb/named.conf# [.programlisting] .... // $FreeBSD$ // // Reportez-vous aux pages de manuel named.conf(5) et named(8), et à // la documentation se trouvant dans /usr/shared/doc/bind9 pour plus de // détails. // // Si vous devez configurer un serveur primaire, assurez-vous d'avoir // compris les détails épineux du fonctionnement du DNS. Même avec de // simples erreurs, vous pouvez rompre la connexion entre les parties // affectées, ou causer un important et inutile trafic Internet. 
options { directory "/etc/namedb"; pid-file "/var/run/named/pid"; dump-file "/var/dump/named_dump.db"; statistics-file "/var/stats/named.stats"; // Si named est utilisé uniquement en tant que résolveur local, ceci // est un bon réglage par défaut. Pour un named qui doit être // accessible à l'ensemble du réseau, commentez cette option, précisez // l'adresse IP correcte, ou supprimez cette option. listen-on { 127.0.0.1; }; // Si l'IPv6 est activé sur le système, décommentez cette option pour // une utilisation en résolveur local. Pour donner l'accès au réseau, // précisez une adresse IPv6, ou le mot-clé "any". // listen-on-v6 { ::1; }; // En plus de la clause "forwarders", vous pouvez forcer votre serveur // de noms à ne jamais être à l'origine de // requêtes, mais plutôt faire suivre les demandes en // activant la ligne suivante: // // forward only; // Si vous avez accès à un serveur de noms au niveau de // votre fournisseur d'accès, ajoutez ici son adresse IP, et // activez la ligne ci-dessous. Cela vous permettra de // bénéficier de son cache, réduisant ainsi le // trafic Internet. /* forwarders { 127.0.0.1; }; */ .... Comme les commentaires le précisent, pour bénéficier d'un cache en amont de votre connexion, le paramètre `forwarders` peut être activé. Dans des circonstances normales, un serveur de noms interrogera de façon récursive certains serveurs de noms jusqu'à obtenir la réponse à sa requête. Avec ce paramètre activé, votre serveur interrogera le serveur de noms en amont (ou le serveur de noms fourni) en premier, en bénéficiant alors de son cache. Si le serveur en question gère beaucoup de trafic, et est un serveur rapide, activer cette option peut en valoir la peine. [WARNING] ==== `127.0.0.1` ne fonctionnera _pas_ ici. Remplacez cette adresse IP par un serveur de noms en amont de votre connexion. ==== [.programlisting] .... 
/* * S'il y a un coupe-feu entre vous et les serveurs de noms * avec lesquels vous voulez communiquer, vous aurez * peut-être besoin de décommenter la directive * query-source ci-dessous. Les versions * précédentes de BIND lançaient des * requêtes à partir du port 53, mais depuis la * version 8, BIND utilise * par défaut un port pseudo-aléatoire quelconque non * réservé. */ // query-source address * port 53; }; // Si vous activez un serveur de noms local, n'oubliez pas d'entrer // 127.0.0.1 dans votre fichier /etc/resolv.conf de sorte que ce // serveur soit interrogé le premier. Assurez-vous // également de l'activer dans /etc/rc.conf. zone "." { type hint; file "named.root"; }; zone "0.0.127.IN-ADDR.ARPA" { type master; file "master/localhost.rev"; }; // RFC 3152 zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA" { type master; file "master/localhost-v6.rev"; }; // NB: N'utilisez pas les adresses IP ci-dessous, elles sont factices, // et ne servent que pour des besoins de // démonstration/documentation! // // Exemple d'entrées de configuration de zone esclave. // Il peut être pratique de devenir serveur esclave pour la // zone à laquelle appartient votre domaine. Demandez à // votre administrateur réseau l'adresse IP du serveur primaire // responsable de la zone. // // N'oubliez jamais d'inclure la résolution de la zone inverse // (IN-ADDR.ARPA)! // (Ce sont les premiers octets de l'adresse IP, en ordre inverse, // auxquels ont a ajouté ".IN-ADDR.ARPA".) // // Avant de commencer à configurer une zone primaire, il faut // être sûr que vous avez parfaitement compris comment le // DNS et BIND fonctionnent. Il apparaît parfois des pièges // peu évidents à saisir. En comparaison, configurer une // zone esclave est plus simple. // // NB: N'activez pas aveuglément les exemples ci-dessous. :-) // Utilisez des noms et des adresses réelles. 
/* Un exemple de zone maître zone "example.net" { type master; file "master/example.net"; }; */ /* Un exemple de zone dynamique key "exampleorgkey" { algorithm hmac-md5; secret "sf87HJqjkqh8ac87a02lla=="; }; zone "example.org" { type master; allow-update { key "exampleorgkey"; }; file "dynamic/example.org"; }; */ /* Exemple de zones esclaves directes et inverses zone "example.com" { type slave; file "slave/example.com"; masters { 192.168.1.1; }; }; zone "1.168.192.in-addr.arpa" { type slave; file "slave/1.168.192.in-addr.arpa"; masters { 192.168.1.1; }; }; */ .... Dans [.filename]#named.conf#, ce sont des exemples d'entrées d'un serveur esclave. Pour chaque nouvelle zone gérée, une nouvelle entrée de zone doit être ajoutée au fichier [.filename]#named.conf#. Par exemple, l'entrée de zone la plus simple possible pour `example.org` serait: [.programlisting] .... zone "example.org" { type master; file "master/example.org"; }; .... Ce sera un serveur maître pour la zone, comme indiqué par l'option `type`, conservant ses informations de zone dans le fichier [.filename]#/etc/namedb/master/example.org# comme précisé par l'option `file`. [.programlisting] .... zone "example.org" { type slave; file "slave/example.org"; }; .... Dans le cas d'un esclave, les informations concernant la zone seront transférées à partir du serveur maître pour la zone en question, et sauvegardées dans le fichier indiqué. Si le serveur maître tombe ou devient inaccessible, le serveur esclave disposera des informations de la zone transférée et sera capable de les diffuser. ==== Fichiers de zone Un exemple de fichier de zone maître pour `example.org` (défini dans [.filename]#/etc/namedb/master/example.org#) suit: [.programlisting] .... $TTL 3600 ; 1 hour example.org. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 86400 ; Minimum TTL ) ; Serveurs DNS IN NS ns1.example.org. IN NS ns2.example.org.
; Enregistrements MX IN MX 10 mx.example.org. IN MX 20 mail.example.org. IN A 192.168.1.1 ; Noms de machine localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 ; Alias www IN CNAME @ .... Notez que chaque nom de machine se terminant par un "." est un nom de machine complet, alors que tout ce qui ne se termine pas par un "." est référencé par rapport à une origine. Par exemple, `www` sera traduit en `www.origine`. Dans notre fichier de zone fictif, notre origine est `example.org.`, donc `www` sera traduit en `www.example.org.` Le format d'un fichier de zone est le suivant: [.programlisting] .... nom-enregistrement IN type-enregistrement valeur .... Les enregistrements DNS les plus couramment utilisés: SOA:: début des données de zone NS:: serveur de noms faisant autorité A:: adresse d'une machine CNAME:: alias d'un nom de machine MX:: serveur de messagerie recevant le courrier pour le domaine PTR:: un pointeur sur un nom de domaine (utilisé dans le DNS inverse) [.programlisting] .... example.org. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh after 3 hours 3600 ; Retry after 1 hour 604800 ; Expire after 1 week 86400 ) ; Minimum TTL of 1 day .... `example.org.`:: le nom de domaine, également l'origine pour ce fichier de zone. `ns1.example.org.`:: le serveur de noms primaire/faisant autorité pour cette zone. `admin.example.org.`:: la personne responsable de cette zone, le caractère "@" étant remplacé par un point. (mailto:admin@example.org[admin@example.org] devient `admin.example.org`) `2006051501`:: le numéro de série de ce fichier. Celui-ci doit être incrémenté à chaque modification du fichier de zone. De nos jours, de nombreux administrateurs préfèrent un format du type `aaaammjjrr` pour le numéro de série. `2006051501` signifierait dernière modification le 15/05/2006, le `01` indiquant que c'est la seconde fois que ce fichier a été révisé ce jour.
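Le format `aaaammjjrr` évoqué ci-dessus peut être généré automatiquement; l'esquisse suivante (le numéro de révision `rr` restant à incrémenter à la main pour chaque modification supplémentaire dans la même journée) produit un numéro de série de dix chiffres:

```shell
# rr: révision du jour, à incrémenter à chaque modification supplémentaire
rev=00

# aaaammjj issu de la date courante, suivi du numéro de révision
serial="$(date +%Y%m%d)${rev}"

# Vérification du format: exactement dix chiffres
echo "$serial" | grep -Eq '^[0-9]{10}$' && echo "Numéro de série: $serial"
```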
Le numéro de série est important puisqu'il indique aux serveurs de noms esclaves pour la zone une modification de celle-ci. [.programlisting] .... IN NS ns1.example.org. .... C'est une entrée de type NS. Tous les serveurs de noms qui doivent faire autorité pour la zone devront inclure une de ces entrées. [.programlisting] .... localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 .... Un enregistrement de type A indique des noms de machine. Comme présenté ci-dessus `ns1.example.org` sera résolu en `192.168.1.2`. [.programlisting] .... IN A 192.168.1.1 .... Cette ligne assigne l'adresse IP `192.168.1.1` à l'origine, dans cet exemple `example.org`. [.programlisting] .... www IN CNAME @ .... L'enregistrement de type CNAME est généralement utilisé pour créer des alias à une machine. Dans l'exemple, `www` est un alias de la machine connue sous le nom `localhost.example.org` (`127.0.0.1`). Les enregistrements CNAME peuvent être utilisés pour fournir des alias à des noms de machines, ou permettre la rotation ("round robin") d'un nom de machine entre plusieurs machines. [.programlisting] .... IN MX 10 mail.example.org. .... L'enregistrement MX indique quels serveurs de messagerie sont responsables de la gestion du courrier entrant pour la zone. `mail.example.org` est le nom de machine du serveur de messagerie, 10 étant la priorité du serveur de messagerie. On peut avoir plusieurs serveurs de messagerie, avec des priorités de 10, 20, etc. Un serveur de messagerie tentant de transmettre du courrier au domaine `example.org` essaiera en premier le MX avec la plus haute priorité (l'enregistrement avec le numéro de priorité le plus bas), puis celui venant en second, etc, jusqu'à ce que le courrier puisse être correctement délivré. Pour les fichiers de zone in-addr.arpa (DNS inverse), le même format est utilisé, à l'exception du fait que des entrées PTR seront utilisées à la place de A ou CNAME. [.programlisting] ....
$TTL 3600 1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 3600 ) ; Minimum IN NS ns1.example.org. IN NS ns2.example.org. 1 IN PTR example.org. 2 IN PTR ns1.example.org. 3 IN PTR ns2.example.org. 4 IN PTR mx.example.org. 5 IN PTR mail.example.org. .... Ce fichier donne la correspondance entre adresses IP et noms de machines de notre domaine fictif. === Serveur de noms cache Un serveur de noms cache est un serveur de noms qui ne fait autorité pour aucune zone. Il émet simplement des requêtes, et se souvient du résultat pour une utilisation ultérieure. Pour mettre en place un tel serveur, configurez le serveur de noms comme à l'accoutumée, en prenant bien soin de n'inclure aucune zone. === Sécurité Bien que BIND soit l'implémentation la plus courante du DNS, le problème de la sécurité subsiste toujours. De possibles problèmes de sécurité exploitables sont parfois découverts. Bien que FreeBSD enferme automatiquement named dans un environnement man:chroot[8], il existe plusieurs autres mécanismes de sécurité qui pourraient aider à se prémunir contre de possibles attaques DNS. C'est une bonne idée de lire les avis de sécurité du http://www.cert.org/[CERT] et de s'inscrire à la {freebsd-security-notifications} pour se maintenir au courant des problèmes de sécurité actuels de l'Internet et de FreeBSD. [TIP] ==== Si un problème surgit, conserver les sources à jour et disposer d'une version compilée de named récente ne seront pas de trop. ==== === Lectures supplémentaires Les pages de manuel de BIND/named: man:rndc[8] man:named[8] man:named.conf[5].
* http://www.isc.org/products/BIND/[Official ISC BIND Page]
* http://www.isc.org/sw/guild/bf/[Official ISC BIND Forum]
* http://www.nominum.com/getOpenSourceResource.php?id=6[BIND FAQ]
* http://www.oreilly.com/catalog/dns5/[O'Reilly DNS and BIND 5th Edition]
* link:ftp://ftp.isi.edu/in-notes/rfc1034.txt[RFC1034 - Domain Names - Concepts and Facilities]
* link:ftp://ftp.isi.edu/in-notes/rfc1035.txt[RFC1035 - Domain Names - Implementation and Specification]

[[network-apache]]
== Apache HTTP Server

=== Overview

FreeBSD is used to run some of the busiest web sites in the world. The majority of web servers on the Internet use the Apache HTTP Server. Precompiled Apache packages should be included on the FreeBSD installation media you used. If you did not install Apache when you first installed FreeBSD, then you can install it from the package:www/apache13[] or package:www/apache20[] port.

Once Apache has been installed successfully, it must be configured.

[NOTE]
====
This section covers version 1.3.X of the Apache HTTP Server as that is the most widely used version for FreeBSD. Apache 2.X introduces many new technologies, but they are not discussed here. For more information about Apache 2.X, please see http://httpd.apache.org/[http://httpd.apache.org/].
====

=== Configuration

The main Apache HTTP Server configuration file is installed as [.filename]#/usr/local/etc/apache/httpd.conf# on FreeBSD. This file is a typical UNIX(R) text configuration file with comment lines beginning with the `#` character. A complete description of all possible configuration options is outside the scope of this book, so only the most frequently modified directives will be described here.
`ServerRoot "/usr/local"`::
This specifies the default directory hierarchy for the Apache installation. Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root, and configuration files in [.filename]#etc/apache#.

`ServerAdmin you@your.address`::
The email address to which problems with the server should be reported. This address appears on some server-generated pages, such as error documents.

`ServerName www.example.com`::
`ServerName` allows you to set a hostname which is sent back to clients for your server if it is different from the one that the host is configured with (i.e., use `www` instead of the host's real name).

`DocumentRoot "/usr/local/www/data"`::
`DocumentRoot` is the directory out of which your server will serve documents. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.

It is always a good idea to make backup copies of your Apache configuration file before making changes. Once you are satisfied with your initial configuration, you are ready to start running Apache.

=== Running Apache

Apache is not run from the inetd "super-server" as many other network servers are. It is configured to run standalone for better performance for incoming HTTP requests from client web browsers. A shell script wrapper is included to make starting, stopping, and restarting the server as simple as possible. To start up Apache for the first time, run:

[source,shell]
....
# /usr/local/sbin/apachectl start
....

You may stop the server at any time by typing:

[source,shell]
....
# /usr/local/sbin/apachectl stop
....
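The advice above about backing up the configuration file before editing is easy to script. The following is a minimal sketch, demonstrated on a scratch copy created with man:mktemp[1] so it can be run safely; on a real server you would point `conf` at [.filename]#/usr/local/etc/apache/httpd.conf#:

[source,shell]
....
# Make a dated backup of the Apache configuration before editing it.
# Assumption: a scratch file stands in for the real httpd.conf here.
conf=$(mktemp /tmp/httpd.conf.XXXXXX)
echo 'ServerRoot "/usr/local"' > "$conf"
cp "$conf" "$conf.bak.$(date +%Y%m%d)"
ls "$conf".bak.*
....

Keeping a dated copy makes it simple to roll back with a single `cp` if a change breaks the server.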
After making changes to the configuration file, you need to restart the server:

[source,shell]
....
# /usr/local/sbin/apachectl restart
....

To restart Apache without aborting current connections, run:

[source,shell]
....
# /usr/local/sbin/apachectl graceful
....

Additional information is available in the man:apachectl[8] manual page.

To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache_enable="YES"
....

If you would like to supply additional command line options to the Apache `httpd` program started at system boot, you may specify them with an additional line in [.filename]#rc.conf#:

[.programlisting]
....
apache_flags=""
....

Now that the web server is running, you can view your web site by pointing a web browser at `http://localhost/`. The default web page that is displayed is [.filename]#/usr/local/www/data/index.html#.

=== Virtual Hosting

Apache supports two different types of virtual hosting. The first method is name-based virtual hosting. Name-based virtual hosting uses the clients' HTTP/1.1 headers to determine the hostname. This allows many different domains to share the same IP address.

To set up Apache to use name-based virtual hosting, add an entry like the following to your [.filename]#httpd.conf#:

[.programlisting]
....
NameVirtualHost *
....

If your web server was named `www.domain.tld` and you wanted to set up a virtual domain for `www.someotherdomain.tld`, then you would add the following entries to [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
ServerName www.domain.tld
DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
ServerName www.someotherdomain.tld
DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....
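Name-based virtual hosting works because every HTTP/1.1 request carries a `Host` header, which Apache matches against the `ServerName` of each virtual host on the shared IP address. The sketch below just prints the raw request a client would send for the second virtual host above (hostnames taken from the example):

[source,shell]
....
# The raw HTTP/1.1 request a client sends; the Host header is what
# lets Apache select the matching <VirtualHost> block on a shared IP.
printf 'GET / HTTP/1.1\r\nHost: www.someotherdomain.tld\r\nConnection: close\r\n\r\n'
....

If a netcat-style utility is available, piping this output to the server (for example `... | nc localhost 80`) is a quick way to check which virtual host answers.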
Replace the addresses with the addresses you want to use and the document paths with the ones you are using.

For more information about setting up virtual hosts, please consult the official Apache documentation at http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache Modules

There are many different Apache modules available to add functionality to the basic server. The Ports Collection provides an easy way to install Apache together with some of the more popular add-on modules.

==== mod_ssl

The mod_ssl module uses the OpenSSL library to provide strong cryptography via the "Secure Sockets Layer" (SSL v2/v3) and "Transport Layer Security" (TLS) protocols. This module provides everything necessary to request a signed certificate from a trusted certificate authority so that you can run a secure web server on FreeBSD.

If you have not yet installed Apache, then a version of Apache 1.3.X that includes mod_ssl may be installed with the package:www/apache13-modssl[] port. SSL support is also available for Apache 2.X in the package:www/apache20[] port, where it is enabled by default.

==== Dynamic Websites with Perl & PHP

In the past few years, more businesses have turned to the Internet in order to enhance their revenue and increase their exposure. This has also increased the need for interactive web content. While some companies, such as Microsoft(R), have introduced solutions into their proprietary products, the open source community answered the call. Two options for dynamic web content include mod_perl and mod_php.

===== mod_perl

The Apache/Perl integration project brings together the full power of the Perl programming language and the Apache HTTP Server.
With the mod_perl module it is possible to write Apache modules entirely in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time.

mod_perl is available in a few different ways. To use mod_perl, remember that mod_perl 1.0 only works with Apache 1.3 and mod_perl 2.0 only works with Apache 2. mod_perl 1.0 is available in package:www/mod_perl[] and a statically compiled version is available in package:www/apache13-modperl[]. mod_perl 2.0 is available in package:www/mod_perl2[].

===== mod_php

PHP, also known as "PHP: Hypertext Preprocessor", is a scripting language that is especially suited for Web development. Capable of being embedded into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of allowing web developers to write dynamically generated web pages quickly.

To gain support for PHP5 for the Apache web server, begin by installing the package:lang/php5[] port.

If the package:lang/php5[] port is being installed for the first time, the available `OPTIONS` will be displayed automatically. If a menu is not displayed, because the package:lang/php5[] port has been installed some time in the past, it is always possible to bring the options dialog up again by running:

[source,shell]
....
# make config
....

in the port directory.

In the options dialog, check the `APACHE` option to build mod_php5 as a loadable module for the Apache web server.

[NOTE]
====
A lot of sites are still using PHP4 for various reasons (compatibility issues or already deployed web applications). If mod_php4 is needed instead of mod_php5, then please use the package:lang/php4[] port.
The package:lang/php4[] port supports many of the configuration and build-time options of the package:lang/php5[] port.
====

This will install and configure the modules required to support dynamic PHP applications. Check to ensure that the following sections have been added to [.filename]#/usr/local/etc/apache/httpd.conf#:

[.programlisting]
....
LoadModule php5_module        libexec/apache/libphp5.so
....

[.programlisting]
....
AddModule mod_php5.c
    <IfModule mod_php5.c>
        DirectoryIndex index.php index.html
    </IfModule>
    <IfModule mod_php5.c>
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps
    </IfModule>
....

Once completed, a simple call to `apachectl` for a graceful restart is needed to load the PHP module:

[source,shell]
....
# apachectl graceful
....

For future upgrades of PHP, the `make config` command will not be required; the previously selected `OPTIONS` are automatically saved by the FreeBSD Ports framework.

PHP support in FreeBSD is extremely modular, so the base install is very limited. It is very easy to add support using the package:lang/php5-extensions[] port. This port provides a menu-driven interface for PHP extension installation. Alternatively, individual extensions can be installed using the appropriate port.

For instance, to add support for the MySQL database server to PHP5, simply install the package:databases/php5-mysql[] port.

After installing an extension, the Apache server must be reloaded to pick up the new configuration changes:

[source,shell]
....
# apachectl graceful
....
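A common way to verify that mod_php5 was loaded after the graceful restart is to place a `phpinfo()` test page in the `DocumentRoot` and request it with a browser. The sketch below writes the page to a temporary file so it can be run safely; on a real server you would write it to the default DocumentRoot mentioned earlier, [.filename]#/usr/local/www/data#, and browse to `http://localhost/info.php`:

[source,shell]
....
# Sketch: create the classic phpinfo() test page.  A temporary file
# stands in for /usr/local/www/data/info.php on a real server.
page=$(mktemp /tmp/info.XXXXXX.php)
echo '<?php phpinfo(); ?>' > "$page"
cat "$page"
....

Remember to delete the page once you have confirmed PHP works, since `phpinfo()` output reveals details about the server configuration.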
[[network-ftp]]
== File Transfer Protocol (FTP)

=== Overview

The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. FreeBSD includes FTP server software, ftpd, in the base system. This makes setting up and administering an FTP server on FreeBSD very straightforward.

=== Configuration

The most important configuration step is deciding which accounts will be allowed access to the FTP server. A normal FreeBSD system has a number of system accounts used by various daemons, but unknown users should not be allowed to log in with these accounts. The [.filename]#/etc/ftpusers# file is a list of users disallowed any FTP access. By default, it includes the aforementioned system accounts, but it is possible to add specific users who should not be allowed access to FTP.

You may want to restrict the access of some users without preventing them completely from using FTP. This can be accomplished with the [.filename]#/etc/ftpchroot# file. This file lists users and groups subject to FTP access restrictions. The man:ftpchroot[5] manual page has all of the details, so it will not be described here.

If you would like to enable anonymous FTP access to your server, then you must create a user named `ftp` on your FreeBSD system. Users will then be able to log on to your FTP server with a username of `ftp` or `anonymous` and without any password (by convention the user's email address should be used as the password). The FTP server will call man:chroot[2] when an anonymous user logs in, to restrict access to only the home directory of the `ftp` user.
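Since [.filename]#/etc/ftpusers# is a plain list of one account name per line, a quick exact-match man:grep[1] tells you whether a given account is denied FTP access. The sketch below uses a small sample file mirroring a few of the default entries, so it can be run anywhere; on a real system you would query [.filename]#/etc/ftpusers# itself:

[source,shell]
....
# Sketch: check whether an account appears in ftpusers (denied FTP).
# A sample file stands in for /etc/ftpusers here.
printf 'root\ntoor\ndaemon\noperator\n' > ftpusers.sample
if grep -qx 'root' ftpusers.sample; then
    echo "root is denied FTP access"
fi
....

The `-x` flag matches whole lines only, so an account named `roots` would not be mistaken for `root`.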
There are two text files that specify welcome messages to be displayed to FTP clients. The contents of [.filename]#/etc/ftpwelcome# will be displayed to users before they reach the login prompt. After a successful login, the contents of [.filename]#/etc/ftpmotd# will be displayed. Note that the path to this file is relative to the login environment, so [.filename]#~ftp/etc/ftpmotd# would be displayed for anonymous users.

Once the FTP server has been configured correctly, it must be enabled in [.filename]#/etc/inetd.conf#. All that is required here is to remove the comment symbol "#" from in front of the existing ftpd line:

[.programlisting]
....
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l
....

As explained in the section on man:inetd[8], the inetd configuration must be reloaded after this configuration file is changed.

You can now log on to your FTP server by typing:

[source,shell]
....
% ftp localhost
....

=== Maintenance

The ftpd daemon uses man:syslog[3] to log messages. By default, the system log daemon will put messages related to FTP in the [.filename]#/var/log/xferlog# file. The location of the FTP log can be modified by changing the following line in [.filename]#/etc/syslog.conf#:

[.programlisting]
....
ftp.info      /var/log/xferlog
....

Be aware of the potential problems involved with running an anonymous FTP server. In particular, you should think twice about allowing anonymous users to upload files. Your FTP site could become a forum for the trade of unlicensed commercial software, or worse.
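One common precaution for anonymous upload areas is a write-only "incoming" directory: clients can create files, but cannot list or read back what others have uploaded until an administrator has reviewed it. The sketch below demonstrates the permission bits on a scratch directory; on a real server the directory would live under the `ftp` user's home directory:

[source,shell]
....
# Sketch: a write-only "incoming" directory for anonymous uploads.
# Mode 333 (-wx for all) allows creating files but not listing or
# reading them.  A scratch directory stands in for ~ftp/incoming.
mkdir -p incoming.sample
chmod 333 incoming.sample
ls -ld incoming.sample
....

Note that this controls only what anonymous clients can do through the permission bits; reviewed files can then be moved out by the administrator.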
If anonymous uploads to the FTP server must be allowed, then you should set the permissions on those files so that they cannot be read by other anonymous users until they have been reviewed.

[[network-samba]]
== File and Print Services for Microsoft(R) Windows(R) Clients (Samba)

=== Overview

Samba is a popular open source software package that provides file and print services for Microsoft(R) Windows(R) clients. Such clients can connect to and use FreeBSD filespace as if it was a local disk drive, or FreeBSD printers as if they were local printers.

Samba should be included on your installation media. If you did not install Samba when you first installed FreeBSD, then you can install it from the package:net/samba3[] package or port.

=== Configuration

A default Samba configuration file is installed as [.filename]#/usr/local/etc/smb.conf.default#. This file must be copied to [.filename]#/usr/local/etc/smb.conf# and customized before Samba can be used.

The [.filename]#smb.conf# file contains runtime configuration information for Samba, such as definitions of the printers and "file system shares" that you would like to share with Windows(R) clients. The Samba package includes a web based tool called swat which provides a simple way of configuring the [.filename]#smb.conf# file.

==== Using the Samba Web Administration Tool (SWAT)

The Samba Web Administration Tool (SWAT) runs as a daemon from inetd. Therefore, the following line in [.filename]#/etc/inetd.conf# should be uncommented before swat can be used to configure Samba:

[.programlisting]
....
swat   stream  tcp     nowait/400      root    /usr/local/sbin/swat    swat
....

As explained in the section on man:inetd[8], the inetd configuration must be reloaded after this configuration file is changed.

Once swat has been enabled in [.filename]#inetd.conf#, you can use a browser to connect to http://localhost:901[http://localhost:901]. You will first have to log on with the system `root` account.

Once you have successfully logged on to the main Samba configuration page, you can browse the system documentation, or begin by clicking on the menu:Globals[] tab. The menu:Globals[] section corresponds to the variables that are set in the `[global]` section of [.filename]#/usr/local/etc/smb.conf#.

==== Global Settings

Whether you are using swat or editing [.filename]#/usr/local/etc/smb.conf# directly, the first directives you are likely to encounter when configuring Samba are:

`workgroup`::
The NT domain name or workgroup name for the computers that will be accessing this server.

`netbios name`::
This sets the NetBIOS name by which the Samba server is known. By default it is the same as the first component of the host's DNS name.

`server string`::
This sets the string that will be displayed by the `net view` command and by other networking tools that seek to display descriptive text about the server.

==== Security Settings

Two of the most important settings in [.filename]#/usr/local/etc/smb.conf# are the security model chosen and the backend password format for client users. The following directives control these options:

`security`::
The two most common options are `security = share` and `security = user`. If your clients use usernames that are the same as their usernames on your FreeBSD machine, then you will want to use user level security.
This is the default security policy, and it requires clients to first log on before they can access shared resources.
+
In share level security, clients do not need to log on to the server before connecting to a shared resource. This was the default security model for older versions of Samba.

`passdb backend`::
Samba has several different backend authentication models. You can authenticate clients with LDAP, NIS+, an SQL database, or a modified password file. The default authentication method is `smbpasswd`, and that is all that will be covered here.

Assuming that the default `smbpasswd` backend is used, the [.filename]#/usr/local/private/smbpasswd# file must be created to allow Samba to authenticate clients. If you would like to give your UNIX(R) user accounts access from Windows(R) clients, use the following command:

[source,shell]
....
# smbpasswd -a username
....

Please see the http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/[Official Samba HOWTO] for additional information about configuration options. With the basics outlined here, you should have everything you need to start running Samba.

=== Starting Samba

The package:net/samba3[] port adds a new startup script which can be used to control Samba. To enable this script, so that it can be used, for example, to start, stop, or restart Samba, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
samba_enable="YES"
....

Or, for fine grain control:

[.programlisting]
....
nmbd_enable="YES"
smbd_enable="YES"
....

[NOTE]
====
This also configures Samba to start automatically at system boot time.
====

It is then possible to start Samba at any time by typing:

[source,shell]
....
# /usr/local/etc/rc.d/samba start
Starting SAMBA: removing stale tdbs :
Starting nmbd.
Starting smbd.
....

Please refer to crossref:config[configtuning-rcd,Using rc under FreeBSD] for more information about using rc scripts.

Samba actually consists of three separate daemons. You should see that both the nmbd and smbd daemons are started by the [.filename]#samba# script. If you enabled winbind name resolution in [.filename]#smb.conf#, then the winbindd daemon will also be started.

You can stop Samba at any time by typing:

[source,shell]
....
# /usr/local/etc/rc.d/samba stop
....

Samba is a complex software suite with functionality that allows broad integration with Microsoft(R) Windows(R) networks. For more information about functionality not covered here, please see http://www.samba.org[http://www.samba.org].

[[network-ntp]]
== Clock Synchronization with NTP

=== Overview

Over time, a computer's clock is prone to drift. The Network Time Protocol (NTP) is one way to ensure your clock stays accurate.

Many Internet services rely on, or greatly benefit from, computers' clocks being accurate. For example, a web server may receive requests to send a file only if it has been modified since a certain time. On a local area network, it is essential that computers sharing files from the same file server have synchronized clocks so that file creation and modification timestamps stay consistent. Services such as man:cron[8] also rely on an accurate system clock to run commands at the specified times.
FreeBSD ships with the man:ntpd[8] NTP server, which can be used to query other NTP servers to set the clock on your machine, or to provide time services to others.

=== Choosing Appropriate NTP Servers

In order to synchronize your clock, you will need to find one or more NTP servers to use. Your network administrator or ISP may have set up an NTP server for this purpose-check their documentation to see if this is the case. There is an http://ntp.isc.org/bin/view/Servers/WebHome[online list of publicly accessible NTP servers] which you can use to find an NTP server near you. Make sure you are aware of the usage policy for any servers you choose, and ask for permission if required.

Choosing several unconnected NTP servers is a good idea in case one of the servers you are using becomes unreachable, or its clock becomes unreliable. man:ntpd[8] uses the responses it receives from other servers intelligently-it will favor reliable servers over unreliable ones.

=== Configuring Your Machine

==== Basic Configuration

If you only wish to synchronize your clock when the machine boots up, you can use man:ntpdate[8]. This may be appropriate for some desktop machines which are frequently rebooted and only require infrequent synchronization, but most machines should run man:ntpd[8].

Using man:ntpdate[8] at boot time is also a good idea for machines that run man:ntpd[8]. The man:ntpd[8] program changes the clock gradually, whereas man:ntpdate[8] sets the clock directly, no matter how great the difference between the machine's current clock setting and the correct time.

To enable man:ntpdate[8] at boot time, add `ntpdate_enable="YES"` to [.filename]#/etc/rc.conf#.
You will also need to specify all servers you wish to synchronize with, and any flags to be passed to man:ntpdate[8], in `ntpdate_flags`.

==== General Configuration

NTP is configured via the [.filename]#/etc/ntp.conf# file in the format described in man:ntp.conf[5]. Here is a simple example:

[.programlisting]
....
server ntplocal.example.com prefer
server timeserver.example.org
server ntp2a.example.net

driftfile /var/db/ntp.drift
....

The `server` option specifies which servers are to be used, with one server listed on each line. If a server is specified with the `prefer` argument, as with `ntplocal.example.com`, that server is preferred over other servers. A response from a _preferred_ server will be discarded if it differs significantly from the other servers' responses; otherwise it will be used without any consideration of the other responses. The `prefer` argument is normally used for NTP servers that are known to be highly accurate, such as those with special time monitoring hardware.

The `driftfile` option specifies which file is used to store the clock's frequency offset. The man:ntpd[8] program uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time.

The `driftfile` option also specifies which file is used to store information about previous responses from the NTP servers you use. This file should not be modified by any other process.

==== Controlling Access to Your Server

By default, your NTP server will be accessible to all hosts on the Internet. The `restrict` option in [.filename]#/etc/ntp.conf# allows you to control which machines can access your server.
If you want to deny all machines access to your NTP server, add the following line to [.filename]#/etc/ntp.conf#:

[.programlisting]
....
restrict default ignore
....

[NOTE]
====
This will also prevent your server from accessing any servers listed in your local configuration. If you need to synchronize your NTP server with an external NTP server, you must explicitly allow that specific server. See the man:ntp.conf[5] manual page for more information.
====

If you only want to allow machines within your own network to synchronize their clocks with your server, while ensuring they cannot configure the server or be used as peers to synchronize against, add:

[.programlisting]
....
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
....

instead, where `192.168.1.0` is an IP address on your network and `255.255.255.0` is your network's netmask.

The [.filename]#/etc/ntp.conf# file can contain multiple `restrict` options. For more details, see the `Access Control Support` subsection of man:ntp.conf[5].

=== Running the NTP Server

To ensure the NTP server is started at boot time, add the line `ntpd_enable="YES"` to [.filename]#/etc/rc.conf#. If you wish to pass additional flags to man:ntpd[8], edit the `ntpd_flags` parameter in [.filename]#/etc/rc.conf#.

To start the server without rebooting your machine, run `ntpd`, being certain to specify any additional parameters from `ntpd_flags` in [.filename]#/etc/rc.conf#. For example:

[source,shell]
....
# ntpd -p /var/run/ntpd.pid
....

=== Using ntpd with a Temporary Internet Connection

The man:ntpd[8] program does not need a permanent connection to the Internet to function properly.
However, if you have a temporary connection that is configured to dial out on demand, it is a good idea to prevent NTP traffic from triggering a dial-out or keeping the connection alive permanently.

If you are using user PPP, you can use `filter` directives in [.filename]#/etc/ppp/ppp.conf#. For example:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

For more details see the `PACKET FILTERING` section in man:ppp[8] and the examples in [.filename]#/usr/share/examples/ppp/#.

[NOTE]
====
Some Internet access providers block low-numbered ports, preventing NTP from working since replies never reach your machine.
====

=== Further Information

Documentation for the NTP server can be found in [.filename]#/usr/share/doc/ntp/# in HTML format.

diff --git a/documentation/content/hu/books/handbook/mac/_index.adoc b/documentation/content/hu/books/handbook/mac/_index.adoc
index eee1e1562b..ed95a275c1 100644
--- a/documentation/content/hu/books/handbook/mac/_index.adoc
+++ b/documentation/content/hu/books/handbook/mac/_index.adoc
@@ -1,947 +1,945 @@
---
title: 16. Fejezet - Kötelező hozzáférés-vezérlés (MAC)
part: III.
Rész Rendszeradminisztráció
prev: books/handbook/jails
next: books/handbook/audit
showBookMenu: true
weight: 20
params:
  path: "/books/handbook/mac/"
---

[[mac]]
= Mandatory Access Control (MAC)
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 16
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/mac/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]
ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[mac-synopsis]]
== Synopsis

FreeBSD 5.X introduced new security extensions adopted from the TrustedBSD project, based on the POSIX(R).1e draft. The two most significant new security mechanisms are file system Access Control Lists (ACLs) and Mandatory Access Control (MAC). Mandatory Access Control allows new access control modules to be loaded, implementing new security policies. Some of these provide protection for a narrow subset of the system, hardening a particular service; others provide comprehensive labeled security across all subjects and objects.
The "mandatory" part of the definition comes from the fact that enforcement of the rules is performed by administrators and the system, and is not left up to users as it is with Discretionary Access Control (DAC), the standard file and System V IPC permissions.

This chapter focuses on the MAC Framework and the set of pluggable security policy modules that implement various security policies.

After reading this chapter, you will know:

* Which MAC security policy modules are currently included in FreeBSD and the mechanisms associated with them.
* What the MAC security policy modules implement, and the difference between labeled and non-labeled policies.
* How to efficiently configure and use the MAC Framework on a system.
* How to configure the different security policy modules provided by the MAC Framework.
* How to build a more secure environment using the MAC Framework, with examples.
* How to test the MAC configuration and verify that it works correctly.

Before reading this chapter, you should:

* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,UNIX Basics]).
* Be familiar with configuring and compiling the kernel (crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]).
* Have some familiarity with basic security concerns and how they apply to FreeBSD (crossref:security[security,Security]).

[WARNING]
====
Improper use of the information presented here may cause a complete loss of system access, aggravate users, or lock out functionality provided by X11. More importantly, MAC should not be regarded as something that makes a system perfectly secure.
MAC only augments existing security policies. Without sound security practices and regular security checks, the system will never be truly secure.

Note also that the examples in this chapter are just that: examples. Nobody is advised to roll these particular settings out on a production system. Building out the various security policy modules takes a good deal of thought and testing. Those who do not fully understand how everything works may find themselves going back through the entire system, reconfiguring every directory and file one by one.
====

=== What Will Not Be Covered

This chapter covers a broad range of security topics related to the MAC Framework; the development of new MAC security policy modules, however, is not covered. A number of modules included with the MAC Framework have specific characteristics intended for developing and testing new modules, among them man:mac_test[4], man:mac_stub[4] and man:mac_none[4]. For more information on these modules and the mechanisms they provide, consult their manual pages.

[[mac-inline-glossary]]
== Key Terms in This Chapter

Before reading this chapter, a few key terms must be clarified. This should clear up any confusion that may arise while working through the material and avoid the abrupt introduction of new terms and information.

* _subject_: A subject is any active entity that causes information to flow between _objects_; e.g. a user, a processor, a running program, etc. In FreeBSD this is almost always a thread acting in a process on behalf of a user.
* _label_: A label is a security attribute which can be applied to files, directories, or other items in the system.
A label could be considered a confidentiality stamp: when a label is placed on a file, it describes the security properties of that file and will only permit access by files, users, resources, etc. with a similar security setting. The meaning and interpretation of label values depends on the policy configuration: while some policies treat a label as representing the integrity or secrecy of an object, others store rules for access in it.

* _single label_: Single label means that the entire file system uses one label to enforce access control over the flow of data. Whenever a file system is mounted without the `multilabel` option, all files share the same label.
* _high-watermark_: A high-watermark policy permits raising the security level in order to access higher-level information. In most cases, the original level is restored once the process has finished. The FreeBSD MAC Framework currently includes no policy of this kind, but the definition is given for completeness.
* _low-watermark_: A low-watermark policy permits lowering the security level in order to access less secure information. In most cases, the original level is restored once the process has finished. The only FreeBSD policy that uses this is man:mac_lomac[4].
* _policy_: A collection of rules defining how objectives are to be achieved. A _policy_ usually documents how certain items are to be handled. In this chapter, _policy_ means a _security policy_: a collection of rules that controls the flow of data and information and defines who has access to which data and information.
* _sensitivity_: Usually used when discussing MLS. The sensitivity level denotes how important or secret the data is.
As the sensitivity level increases, so does the secrecy or confidentiality of the data.

* _object_: An object or system object is any entity through which information flows under the direction of a _subject_. This includes, among other things, directories, files, fields, screens, keyboards, memory, magnetic storage, printers, and any other data storage or carrying device. Basically, an object is a data container or a system resource; access to an _object_ effectively means access to the data.
* _compartment_: A compartment represents a grouping of programs and data to be partitioned or separated, where users are given explicit access to specific components of the system. A compartment may also refer to an arbitrary grouping, such as a work group, department, project, or topic. Compartments are essential for implementing security policies.
* _integrity_: Integrity, as a key concept, is the level of trust which can be placed on data. The higher the integrity of the data, the more it can be trusted.
* _level_: The increased or decreased setting of a security attribute. As the level increases, so does the degree of security.
* _multilabel_: The `multilabel` property is a file system option which can be set in single-user mode using the man:tunefs[8] utility, during boot via the man:fstab[5] file, or when creating a new file system. This option permits an administrator to apply different MAC labels to different objects. Naturally, it applies only to security policy modules which support labeling.

[[mac-initial]]
== An Introduction to MAC

With the new terms defined above in mind, let us consider how the MAC Framework can be used to improve the overall security of a system.
The various security policy modules provided by the MAC Framework can protect the network and file systems, and can block users from accessing certain ports and sockets, among other things. Perhaps the best use of the policy modules is to combine them, loading several at once to build a multi-layered security environment. This is not the same as system hardening, where components are typically toughened only toward specific goals. The only drawbacks are the administrative overhead of multiple file system labels, setting network access per user, and so on. These drawbacks are minor compared to the lasting robustness of the resulting system. For instance, being able to determine which policies are required for a particular configuration keeps the administrative overhead down. Removing unneeded policies can even increase overall performance, as well as the flexibility offered.

A good implementation takes all security requirements into account and implements them effectively with the various security policy modules offered by the framework. A system utilizing MAC features should therefore at least guarantee that users are not permitted to change security attributes at will. All user utilities, programs, and scripts must work within the constraints of the access rules laid down by the selected security policy modules, and total control of MAC rests with the system administrator. It is then the sole duty of the system administrator to carefully select the right security policy modules.

Some environments may need to limit access control over the network. In such cases, man:mac_portacl[4], man:mac_ifoff[4] or man:mac_biba[4] are good modules to start with.
In other cases, only the confidentiality of file system objects needs to be preserved. For that purpose, man:mac_bsdextended[4] and man:mac_mls[4] are best suited.

Policy decisions can also be made based on network configuration. Perhaps only certain users should be permitted access to the network or the Internet via the facilities of man:ssh[1]. man:mac_portacl[4] comes to the rescue in precisely such situations. But what should be done about file systems? Should certain users or groups be cut off from certain directories? Or should access of users or utilities to specific files be limited by marking certain objects as classified?

In the file system case, access to objects might be considered confidential to some users, but not to others. As an example, a large development team might be broken off into smaller groups. Developers in project A might not be permitted to access objects written by developers in project B, yet they might need to access objects created by developers in project C. That is quite a scenario indeed. Using the various security policy modules provided by the MAC Framework, however, users can easily be sorted into such groups and given access to the appropriate areas without fear of information leakage.

Thus, each security policy module has its own way of dealing with the overall security of a system. Module selection should be based on a well thought out security policy, which in many cases requires revising and reapplying the policy across the whole system. Understanding the different security policy modules offered by the MAC Framework helps administrators choose the best policies for their situations.

The default FreeBSD kernel does not include the MAC Framework.
Thus the following kernel option must be added before trying any of the examples or information in this chapter:

[.programlisting]
....
options MAC
....

Then the kernel must be rebuilt and reinstalled.

[CAUTION]
====
While the manual pages of the various MAC modules state that they may be built into the kernel, handle them with care: it is possible to lock the system out of the network and worse. Deploying MAC is much like deploying a firewall: care must be taken to avoid being completely locked out of the system by a careless command. Always have some means of restoring the system to a previous state, and perform remote configuration of MAC with extreme caution.
====

[[mac-understandlabel]]
== Understanding MAC Labels

A MAC label is a security attribute which may be applied to subjects and objects throughout the system.

When setting a label, the user must understand exactly what is being done. The attributes available on an object depend on the policy modules loaded, and policy modules interpret their attributes in different ways. If these are configured incorrectly due to lack of comprehension, or if the side effects cannot be determined, the result will be unpredictable and probably undesirable behavior of the system.

The security labels on an object are used by the policies as part of a security access control decision. With some policies, the label by itself contains all information necessary to make a decision; in other policies, the labels may be processed as part of a larger rule set, and so on.

For instance, setting the label `biba/low` on a file will indicate that the label is maintained by the Biba security policy module, with a value of "low".
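A label already applied to a file can be inspected with man:getfmac[8]; as a small sketch (the file name here is purely illustrative):

[source,shell]
....
# getfmac /var/tmp/somefile
....

If the Biba module maintains a label on that file, the label value is printed after the file name.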
The few policy modules supporting the labeling feature in FreeBSD all predefine three special labels: "low", "high" and "equal". Although they enforce access control differently from module to module, you can be sure that "low" is the lowest setting, the "equal" label sets the subject or object to be unaffected, and "high" is the highest setting available in the Biba and MLS policy modules.

Within single-label file system environments, only one label may be applied to each object. This enforces one set of permissions across the entire system, which in many cases is exactly what is needed. There are, however, a few special cases where multiple labels must be set on subjects or objects in the file system. For those cases, the `multilabel` option must be passed to man:tunefs[8].

In the case of Biba and MLS, a numeric label may be set to indicate the precise level of hierarchical control. The numeric level is used to partition or sort information into different groups, for instance permitting access only to a group at that level of clearance or above.

In most configurations, the administrator will set up only a single label to use throughout the file system.

_Hey, wait a minute, this is just like DAC! I thought MAC gave control strictly to the administrator._ That statement still holds true: in some sense it is `root` who sets the policies and decides which categories or clearance levels users are sorted into. Unfortunately, many policy modules restrict even the `root` user. Control over objects is then passed on to the group, but `root` may revoke or modify the settings at any time.
This hierarchical/clearance-based model is what the Biba and MLS policies implement.

=== Label Configuration

Virtually all aspects of label configuration are performed using the base system utilities. These commands provide a simple interface for manipulating objects and subjects and for modifying and verifying the configuration.

All configuration is done with the man:setfmac[8] and man:setpmac[8] utilities. `setfmac` is used to set MAC labels on system objects, while `setpmac` sets labels on system subjects. Observe:

[source,shell]
....
# setfmac biba/high test
....

If the command above completed without errors, the prompt is simply returned. These commands only speak up when an error has occurred, much like the man:chmod[1] and man:chown[8] commands. In some cases a `Permission denied` error appears, usually when the label of a restricted object is being set or modified. The system administrator may use the following commands to overcome this:

[source,shell]
....
# setfmac biba/high test
Permission denied
# setpmac biba/low setfmac biba/high test
# getfmac test
test: biba/high
....

As seen above, `setpmac` can be used to override the policy module's settings by assigning a different label to the invoked process. The `getpmac` utility is usually used with currently running processes such as sendmail: in place of a command, it takes a process ID, but the logic is the same. If users attempt to manipulate a file outside their access range, the `mac_set_link` function will return an `Operation not permitted` error, subject to the rules of the loaded policy modules.
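As a small illustration of the process-ID form mentioned above, the label of a running daemon could be queried as follows (the PID `1138` is hypothetical; substitute the PID of an actual process, e.g. as reported by `pgrep sendmail`):

[source,shell]
....
# getpmac -p 1138
....

The labels of the listed process are printed, subject to the policy modules currently loaded.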
==== Common Label Types

The man:mac_biba[4], man:mac_mls[4] and man:mac_lomac[4] policy modules support simple labels, taking the values "high", "equal" and "low", briefly described as follows:

* The `low` label is the lowest setting an object or subject may have. Objects or subjects carrying it may not access objects or subjects marked "high".
* The `equal` label should be placed on objects or subjects considered to be exempt from the policy.
* The `high` label grants an object or subject the highest possible setting.

With respect to each policy module, these settings describe different information flow directives. Reading the relevant manual pages will further explain the traits of these generic label configurations.

===== Advanced Label Configuration

Numeric grade labels are used in the form `grade:compartment+compartment`; thus the label

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....

may be interpreted as: "Biba policy label"/"grade 10":"compartments 2, 3 and 6": ("grade 5 ...").

In this example, the first grade would be considered the "effective grade" with the "effective compartments", the second grade is the low grade, and the last one is the high grade. In most configurations such fine-grained settings are not needed, although they are available.

When applied to system objects, only the current grade/compartments matter with respect to the subjects in the system, since they reflect the access control available within the system and over network interfaces.

The grade and compartments in a subject and object pair are used to construct a relationship known as "dominance", in which a subject dominates an object, the object dominates the subject, neither dominates the other, or both dominate each other.
The "both dominate" case occurs when the two labels are equal. Due to the information flow nature of Biba, there may be rights to a set of compartments — "need to know" sets, which could, for instance, correspond to projects — while objects also carry compartments. Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted.

==== Users and Label Settings

Users themselves need labels so that their files and processes properly interact with the security policy defined on the system. This is configured through login classes in [.filename]#login.conf#. Every policy module that uses labels will implement the user class setting. A sample entry containing settings for every policy module is shown below:

[.programlisting]
....
default:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/share/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:
....

The `label` option is used here to set the user class default label, which will be enforced by MAC. Users are never permitted to modify this value, so from the user's point of view it is not an optional setting. In a real configuration, however, the administrator will most likely not want to enable every policy module at once.
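After editing an entry such as the one above, the login class capability database must be rebuilt before the change takes effect — the standard step for any [.filename]#login.conf# change:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

Users logging in after this point will pick up the label configured for their class.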
It is recommended to read this entire chapter before implementing any configuration of this kind.

[NOTE]
====
Users may change their label after the initial login, though only within the constraints of the policy. The example above tells the Biba policy that a running process's minimum integrity is 5, its maximum is 15, and the default effective label is 10. The process will run at 10 until it changes label, presumably due to the user invoking the `setpmac` command, which is constrained at login time to the range configured for Biba.
====

Every time [.filename]#login.conf# is modified, the login class capability database must be regenerated with `cap_mkdb`, something the later examples and sections will point out as well.

It is worth noting that many sites have a particularly large number of users requiring several different login classes. Since managing the users only becomes more complicated over time, careful planning in advance is essential.

Future versions of FreeBSD will include other ways to associate users with labels; however, this is not expected before FreeBSD 5.3.

==== Network Interfaces and Label Settings

Labels may also be set on network interfaces to control the flow of data across the network. In all cases they operate in the same way the policies operate on objects. For example, with `biba`, users with high settings cannot access network interfaces carrying a lower label.

The `maclabel` parameter may be passed to `ifconfig` when setting a MAC label on a network interface. For example:

[source,shell]
....
# ifconfig bge0 maclabel biba/equal
....

will set the MAC label `biba/equal` on the man:bge[4] interface.
A label of the form `biba/high(low-high)` must be quoted when passed on the command line; otherwise an error will be returned.

Each policy module which supports labeling has run-time tunables which can even be used to disable MAC labels on network interfaces. Setting the label to `equal` has a similar effect. Review the output of `sysctl`, the manual page of the relevant policy module, or the information found later in this chapter for more on these tunables.

=== Single Label or Multilabel?

By default, the system uses the `singlelabel` option. What does this mean to the administrator? There are several differences, each with its own pros and cons regarding the flexibility of the system's security model.

`singlelabel` permits only one label, for instance `biba/high`, to be used for each subject or object. It lowers the administrative overhead, but reduces the flexibility of policy modules that support labeling. Many administrators therefore choose the `multilabel` option when building their security policy.

The `multilabel` option permits each subject and object to carry its own independent label, in place of the standard `singlelabel` option, which allows only one label per partition. The `multilabel` and `singlelabel` options are relevant only for policy modules that implement labeling, such as the Biba, Lomac, MLS and SEBSD policies.

In many cases, `multilabel` is not needed at all. Consider the following situation and security model:

* A FreeBSD web server using the MAC Framework and a mix of security policies.
* The machine requires only one label, `biba/high`, for everything in the system. Here the file system would simply not be given the `multilabel` option, since the single-label configuration is always available.
* Since this machine is also to run a web server, however, the server should be constrained with the `biba/low` label to limit its capabilities. The Biba policy and how it works will be discussed later, so if the preceding comment is unclear for now, simply read on and return here afterward. The server could run on a separate partition set to `biba/low` for most, if not all, of its runtime. Much is missing from this example — e.g., the restrictions on data, configuration, and user settings — it is only a quick illustration of the preceding thought.

If any of the non-labeling policy modules are used, the `multilabel` option is practically never required. These include the `seeotheruids`, `portacl` and `partition` policies.

Note that using `multilabel` on a partition, and thereby establishing a dedicated multilabel security model, can increase administrative overhead, since everything in the file system then has to carry a label: directories, files, and even device nodes.

The following command will set `multilabel` on a file system. This may only be done in single-user mode:

[source,shell]
....
# tunefs -l enable /
....

This is not a requirement for the swap partition.

[NOTE]
====
Some users have experienced problems setting the `multilabel` flag on the root partition. If this happens, please review the <> section of this chapter.
====

[[mac-planning]]
== Planning the Security Configuration

It is always good practice to spend time planning before implementing any new technology.
During planning, an administrator should look at "the big picture", keeping at least the following in mind:

* The implementation requirements
* The implementation goals

For MAC installations, these additionally include:

* How to classify the information and resources available on the target systems
* Which information or resources to restrict access to, and what kind of restrictions to apply
* Which MAC policy modules will be required to achieve these goals

It is always possible to reconfigure and change the system resources and security settings, but it is quite inconvenient to search through the system and fix existing files and user accounts. A trial run of the configuration should _always_ be performed before a MAC implementation is deployed on production systems; skipping this step almost certainly invites failure.

Different environments have different needs and requirements. Establishing an in-depth and complete security profile decreases the need for changes once the system goes live. As such, the upcoming sections cover the different modules available to administrators, describe their use and configuration, and in some cases provide insight into what situations each module is best suited to. For instance, a web server might benefit from the man:mac_biba[4] and man:mac_bsdextended[4] policies, while in other cases — e.g., a machine with only a few local users — man:mac_partition[4] might be a good choice.

[[mac-modules]]
== Module Configuration

Every module included with the MAC Framework may either be compiled into the kernel, as described earlier, or loaded as a run-time kernel module. The recommended method is to add the module name to [.filename]#/boot/loader.conf#, so that it loads during system boot.
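As a sketch, a [.filename]#/boot/loader.conf# fragment loading two of the modules discussed below might look like this; which modules to load depends entirely on the security policy chosen for the system:

[.programlisting]
....
mac_seeotheruids_load="YES"
mac_portacl_load="YES"
....

Each module's boot option is listed at the start of its section below.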
The following sections discuss the various MAC modules and summarize their features. Deploying them in specific environments is also a consideration of this chapter. Some modules support labeling, i.e., controlling access with labels by enforcing directives such as "allow this and disallow that". A label configuration file may control how files are accessed, how the network communicates, and more. The previous section showed how the `multilabel` option enables per-file or per-partition access control. In a single-label configuration, only one label is enforced across the entire system — which is why the `tunefs` option is called `multilabel`.

[[mac-seeotheruids]]
== The MAC seeotheruids Module

Module name: [.filename]#mac_seeotheruids.ko#

Kernel configuration line: `options MAC_SEEOTHERUIDS`

Boot option: `mac_seeotheruids_load="YES"`

The man:mac_seeotheruids[4] module mimics and extends the `security.bsd.see_other_uids` and `security.bsd.see_other_gids` `sysctl` tunables. It does not require any labels to be set, and it can operate transparently alongside the other modules. After loading the module, the following `sysctl` tunables may be used to control it:

* `security.mac.seeotheruids.enabled` enables the module with the default settings. By default, users are prevented from seeing processes and sockets belonging to other users.
* `security.mac.seeotheruids.specificgid_enabled` allows a specific group to be exempt from the policy. To exempt a group, set the `security.mac.seeotheruids.specificgid=XXX` `sysctl` tunable, where _XXX_ is the numeric ID of the group to be exempted.
* `security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific primary groups from the policy.
This tunable may not be used together with `security.mac.seeotheruids.specificgid_enabled`.

[[mac-bsdextended]]
== The MAC bsdextended Module

Module name: [.filename]#mac_bsdextended.ko#

Kernel configuration line: `options MAC_BSDEXTENDED`

Boot option: `mac_bsdextended_load="YES"`

The man:mac_bsdextended[4] module enforces a file system firewall. It provides an extension to the standard file system permissions model, permitting an administrator to create a firewall-like rule set to protect files, utilities and directories in the file system hierarchy. When access to a file system object is attempted, the list of rules is iterated until either a matching rule is located or the end is reached. This behavior may be changed with the `security.mac.bsdextended.firstmatch_enabled` man:sysctl[8] tunable. Similar to other firewall modules in FreeBSD, a file containing the access control rules can be read by the system at boot time via an man:rc.conf[5] variable.

The rules may be entered using the man:ugidfw[8] utility, whose syntax is similar to that of man:ipfw[8]. Further tools can be written using the functions of the man:libugidfw[3] library.

Extreme caution should be taken when working with this module, since incorrect use could cut off access to certain parts of the file system.

=== Examples

After the man:mac_bsdextended[4] module has been loaded, the following command may be used to list the current rule configuration:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

As expected, there are no rules defined yet. This means that everything is still fully accessible. To create a rule which blocks all access by users but leaves `root` unaffected, run the following command:

[source,shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

Ez egyébként egy nagyon buta ötlet, mivel így a felhasználók még a legegyszerûbb parancsokat, mint például az `ls`-t, sem tudják kiadni. Ennél sokkal humánusabb lesz, ha:

[source,shell]
....
# ugidfw set 2 subject uid felhasználó1 object uid felhasználó2 mode n
# ugidfw set 3 subject uid felhasználó1 object gid felhasználó2 mode n
....

Ilyenkor a `felhasználó1` nevû felhasználótól megvonjuk a `_felhasználó2_` felhasználói könyvtárának összes hozzáférését, beleértve a listázhatóságot is.

A `felhasználó1` helyett megadhatjuk a `not uid _felhasználó2_` opciót is. Ebben az esetben egy felhasználó helyett az összes felhasználóra ugyanaz a korlátozás fog érvényesülni.

[NOTE]
====
A `root` felhasználóra ezek a beállítások nem vonatkoznak.
====

Ezzel felvázoltuk, miként lehet a man:mac_bsdextended[4] modult felhasználni az állományrendszerek megerõsítésére. Részletesebb információkért forduljunk a man:mac_bsdextended[4] és man:ugidfw[8] man oldalakhoz.

[[mac-ifoff]]
== Az ifoff MAC-modul

A modul neve: [.filename]#mac_ifoff.ko#

A rendszermag konfigurációs beállítása: `options MAC_IFOFF`

Rendszerindítási beállítás: `mac_ifoff_load="YES"`

A man:mac_ifoff[4] modul kizárólag abból a célból készült, hogy segítségével menet közben le tudjuk tiltani a hálózati csatolófelületeket, illetve meg tudjuk akadályozni az elindulásukat a rendszerindítás során. Sem címkékre, sem pedig a többi MAC-modulra nincs szükségünk a használatához. A vezérlést nagyrészt az alábbi `sysctl`-változókkal tudjuk megoldani.

* A `security.mac.ifoff.lo_enabled` engedélyezi vagy letiltja a helyi loopback (man:lo[4]) felületen az összes forgalmat.
* A `security.mac.ifoff.bpfrecv_enabled` engedélyezi vagy letiltja a Berkeley csomagszûrõ (BPF, Berkeley Packet Filter) felületén az összes forgalmat.
* A `security.mac.ifoff.other_enabled` engedélyezi vagy letiltja az összes többi csatolófelületen az összes forgalmat.
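A fenti változókat a man:sysctl[8] paranccsal menet közben, vagy akár a man:sysctl.conf[5] állományból is beállíthatjuk. Az alábbi részlet csupán egy feltételezett elrendezést vázol fel, az értékek csak illusztrációk, élesben a saját igényeinkhez kell igazítanunk ezeket:

[.programlisting]
....
# Feltételezett /etc/sysctl.conf részlet a mac_ifoff modulhoz:
# a loopback forgalmat engedélyezzük, minden más felületet letiltunk
security.mac.ifoff.lo_enabled=1
security.mac.ifoff.other_enabled=0
....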
A man:mac_ifoff[4] modult általában olyan környezetek monitorozásakor szokták használni, ahol a rendszer indítása során még nem szabad hálózati forgalomnak keletkeznie. Vagy például a package:security/aide[] porttal együtt használva automatikusan el tudjuk zárni a rendszerünket, ha a védett könyvtárakban új állományok keletkeznek vagy megváltoznak a régiek.

[[mac-portacl]]
== A portacl MAC-modul

A modul neve: [.filename]#mac_portacl.ko#

A rendszermag konfigurációs beállítása: `options MAC_PORTACL`

Rendszerindítási beállítás: `mac_portacl_load="YES"`

A man:mac_portacl[4] modul a helyi TCP és UDP portok kiosztásának korlátozását teszi lehetõvé különféle `sysctl`-változókon keresztül. A man:mac_portacl[4] segítségével lényegében a nem-`root` felhasználók is használhatnak privilegizált, tehát 1024 alatti portokat.

Miután betöltöttük, a modul az összes csatlakozásra alkalmazza a MAC-házirendet. Ezután az alábbi változókkal hangolhatjuk a viselkedését:

* A `security.mac.portacl.enabled` teljesen engedélyezi vagy letiltja a házirend használatát.
* A `security.mac.portacl.port_high` megadja azt a legmagasabb portot, amelyre még kiterjed a man:mac_portacl[4] védelme.
* Ha a `security.mac.portacl.suser_exempt` változónak nem nulla értéket adunk meg, akkor azzal a `root` felhasználót kivonjuk a szabályozások alól.
* A `security.mac.portacl.rules` az érvényes mac_portacl házirendet adja meg, lásd lentebb.

A `security.mac.portacl.rules` változó által megadott aktuális `mac_portacl` házirend formátuma a következõ: `szabály[,szabály,...]`, ahol ezen a módon tetszõleges számú szabályt adhatunk meg. Az egyes szabályok pedig így írhatóak fel: `azonosítótípus:azonosító:protokoll:port`. Az [parameter]#azonosítótípus# értéke `uid` vagy `gid` lehet, amivel megadjuk, hogy az [parameter]#azonosító# paraméter felhasználóra vagy csoportra hivatkozik.
A [parameter]#protokoll# paraméter adja meg, hogy a szabályt TCP vagy UDP típusú kapcsolatra értjük, és ennek megfelelõen az értéke is `tcp` vagy `udp` lehet. A sort végül a [parameter]#port# paraméter zárja, ahol annak a portnak a számát adjuk meg, amelyhez az adott felhasználót vagy csoportot akarjuk kötni.

[NOTE]
====
Mivel a szabályokat közvetlenül maga a rendszermag dolgozza fel, ezért a felhasználók illetve csoportok azonosítója, valamint a port értéke kizárólag numerikus érték lehet. Tehát a szabályokban név szerint nem hivatkozhatunk felhasználókra, csoportokra vagy szolgáltatásokra.
====

A UNIX(R)-szerû rendszereken alapértelmezés szerint az 1024 alatti portokat csak privilegizált programok kaphatják meg és használhatják, tehát a `root` felhasználó neve alatt kell futniuk. A man:mac_portacl[4] azonban a nem privilegizált programok számára is lehetõvé teszi, hogy elfoglalhassanak 1024 alatti portokat, amihez viszont elõször le kell tiltani ezt a szabványos UNIX(R)-os korlátozást. Ezt úgy érhetjük el, ha a `net.inet.ip.portrange.reservedlow` és `net.inet.ip.portrange.reservedhigh` változókat egyaránt nullára állítjuk.

A man:mac_portacl[4] mûködésének részleteirõl a példákon keresztül vagy a megfelelõ man oldalakból tudhatunk meg többet.

=== Példák

A következõ példák az iméntieket igyekeznek jobban megvilágítani:

[source,shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0 net.inet.ip.portrange.reservedhigh=0
....

Elsõként beállítjuk, hogy a man:mac_portacl[4] vegye át a szabványos privilegizált portok vezérlését, és letiltjuk a normál UNIX(R)-os korlátozásokat.

[source,shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....

A `root` felhasználót azonban nem akarjuk kitenni a házirendnek, ezért a `security.mac.portacl.suser_exempt` változónak egy nem nulla értéket adunk meg. A man:mac_portacl[4] modul most pontosan úgy mûködik, ahogy a UNIX(R)-szerû rendszerek alapértelmezés szerint.

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

A 80-as azonosítóval rendelkezõ felhasználó (aki általában a `www`) számára engedélyezzük a 80-as port használatát. Így a `www` felhasználó anélkül képes webszervert futtatni, hogy szüksége lenne a `root` jogosultságaira.

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

Az 1001-es azonosítóval rendelkezõ felhasználónak megengedjük, hogy elfoglalhassa a 110-es ("pop3") és 995-ös ("pop3s") portokat. Ennek köszönhetõen az adott felhasználó el tud indítani egy szervert, amelyhez a 110-es és 995-ös portokon lehet kapcsolódni.

[[mac-partition]]
== A partition MAC-modul

A modul neve: [.filename]#mac_partition.ko#

A rendszermag konfigurációs beállítása: `options MAC_PARTITION`

Rendszerindítási beállítás: `mac_partition_load="YES"`

A man:mac_partition[4] házirend a futó programokat címkéjük szerint adott "partíciókra" osztja szét. Ezt leginkább egy speciális man:jail[8] megoldásként tudjuk elképzelni, noha a kettõt nem szabad azonosnak tekinteni. Ez egy olyan modul, amelyet a man:loader.conf[5] állományba kell felvenni, hogy a rendszer indítása közben be tudjon töltõdni.

Ezt a házirendet többségében a man:setpmac[8] segédprogrammal tudjuk állítani, ahogy az majd lentebb látható lesz. A következõ `sysctl`-változó tartozik még a modulhoz:

* A `security.mac.partition.enabled` engedélyezi a futó programok MAC rendszeren keresztüli felosztását.

A házirend engedélyezésével a felhasználók csak a saját programjaikat láthatják, illetve mindazokat, amelyek az övékével egy partícióba tartoznak, de az azon kívül levõ programokkal már nem dolgozhatnak. Például, ha egy felhasználó az `insecure` ("nem biztonságos") osztály tagja, akkor ne engedjük, hogy hozzáférhessen a `top` vagy bármilyen más olyan parancshoz, amely további futó programokat hoz létre.

A `setpmac` használatával tudunk címkéket készíteni a partíciókhoz és programokat rendelni hozzájuk:

[source,shell]
....
# setpmac partition/13 top
....

Így a `top` parancsot hozzáadjuk az `insecure` osztályban levõ felhasználókhoz rendelt címkéhez. Vegyük észre, hogy az `insecure` osztályba tartozó felhasználók által elindított összes program a `partition/13` címkét fogja használni.

=== Példák

A következõ parancs megmutatja a partíciók címkéit és a futó programok listáját:

[source,shell]
....
# ps Zax
....

Ezzel a paranccsal pedig megnézhetjük egy másik felhasználó programjainak címkéit és a felhasználó által futtatott programokat:

[source,shell]
....
# ps -ZU trhodes
....

[NOTE]
====
A felhasználók látják a `root` címkéjével futó programokat is, hacsak be nem töltjük a man:mac_seeotheruids[4] házirendet.
====

Ezt a megoldást úgy tudnánk igazán ravaszul felhasználni, ha például az [.filename]#/etc/rc.conf# állományban letiltanánk az összes szolgáltatást, és egy olyan szkripttel indítanánk el ezeket, amely futtatásuk elõtt beállítja hozzájuk a megfelelõ címkét.

[NOTE]
====
A most következõ házirendek a három alapértelmezett címkeérték helyett egész számokat használnak. Ezekrõl, valamint a rájuk vonatkozó korlátozásokról a megfelelõ modulok man oldalain ismerhetünk meg többet.
====

[[mac-mls]]
== A többszintû biztonsági MAC-modul

A modul neve: [.filename]#mac_mls.ko#

A rendszermag konfigurációs beállítása: `options MAC_MLS`

Rendszerindítási beállítás: `mac_mls_load="YES"`

A man:mac_mls[4] (MLS, Multi-Level Security) házirend az információáramlás szigorú szabályozásával vezérli a rendszerben található alanyok és objektumok közti elérést.

Az MLS megoldását alkalmazó környezetekben a rekeszek mellett minden alanyra és objektumra be kell még állítanunk egy adott szintû "engedélyt" is. Mivel az engedélyek avagy az érzékenység szintje akár a hatezret is meghaladhatja, egy rendszergazda számára valódi rémálommá válhat az egyes alanyok és objektumok precíz beállítása. Szerencsére a házirend erre a célra tartalmaz három elõre definiált "instant" címkét. Ezek az `mls/low`, `mls/equal` és `mls/high`.
Mivel a man oldal elég részletesen kifejti ezeket, ezért itt csak érintõlegesen foglalkozunk velük:

* Az `mls/low` címke egy olyan alacsony szintû beállítást képvisel, amely lehetõvé teszi, hogy az összes többi objektum uralja. Tehát bárminek is adjuk az `mls/low` címkét, alacsony szintû engedéllyel fog rendelkezni és nem lesz képes elérni a magasabb szinten levõ információt. Ráadásul a címke a magasabb szintû objektumok számára sem fogja engedni, hogy információt közöljenek vagy adjanak át az alacsonyabb szintek felé.
* Az `mls/equal` címke olyan objektumok esetében ajánlott, amelyeket ki akarunk hagyni a házirend szabályozásaiból.
* Az `mls/high` címke az elérhetõ legmagasabb szintû engedélyt képviseli. Az ilyen címkével ellátott objektumok a rendszer összes többi objektuma felett uralommal rendelkeznek, habár az alacsonyabb szintû objektumok felé nem képesek információt közvetíteni.

Az MLS:

* Hierarchikus védelmi szinteket épít fel nem hierarchikus kategóriákkal kiegészítve.
* Szabályai rögzítettek: a felsõbb szintek olvasása és az alsóbb szintek írása egyaránt tiltott (az alanyok csak a saját vagy az alattuk levõ szinteken található objektumokat képesek olvasni, a felettük állókat már nem; ehhez hasonlóan az alanyok a velük egyezõ vagy a felsõbb szinteket tudják írni, az alattuk levõket már nem).
* Megõrzi a titkokat (megakadályozza az adatok alkalmatlan közzétételét).
* Megadja mindazt az alapot, ami szükséges ahhoz, hogy az adatokat több kényességi szinten, párhuzamosan is kezelni tudjuk (anélkül, hogy titkos és bizalmas információkat szivárogtatnánk ki).

A speciális szolgáltatások és felületek beállításához az alábbi `sysctl`-változók használhatóak:

* A `security.mac.mls.enabled` engedélyezi vagy tiltja le az MLS házirend alkalmazását.
* A `security.mac.mls.ptys_equal` hatására a rendszer az összes man:pty[4] eszközt a létrehozásakor `mls/equal` címkével látja el.
* A `security.mac.mls.revocation_enabled` használható az alacsonyabb szintre minõsített objektumok hozzáférésének megvonására.
* A `security.mac.mls.max_compartments` segítségével adható meg az objektumok által használt rekeszek szintjének maximális száma, lényegében a rekeszek rendszerben engedélyezett maximuma.

Az MLS címkéit a man:setfmac[8] paranccsal tudjuk módosítani. Egy ehhez hasonló paranccsal tudunk egy objektumhoz címkét rendelni:

[source,shell]
....
# setfmac mls/5 próba
....

A [.filename]#próba# állomány MLS-címkéjét az alábbi paranccsal kérhetjük le:

[source,shell]
....
# getfmac próba
....

Ezzel össze is foglaltuk az MLS házirend lehetõségeit. Az eddigieket úgy is megoldhatjuk, hogy létrehozunk egy központi házirendet az [.filename]#/etc# könyvtárban, amelyben megadjuk az MLS házirendhez tartozó információkat, majd átadjuk a `setfmac` parancsnak. Erre a módszerre majd a házirendek bemutatása után kerül sor.

=== A kényesség megállapítása

A többszintû biztonsági házirend használatával a rendszergazda a kényes információk áramlásának irányát tudja befolyásolni. A megoldás "felfele nem lehet olvasni, lefele nem lehet írni" jellege folytán alapból minden a legalacsonyabb szintre kerül. Így tehát kezdetben minden elérhetõ, és a rendszergazdának ebbõl az állapotból kiindulva, lépésenként kell behangolnia az információ bizalmasságára épülõ védelmet.

A fentebb említett három alapvetõ címke mellett a rendszergazdának valószínûleg szüksége lesz a felhasználók csoportosítására és a csoportok közti információáramlás szabályozására. Az információ bizalmasságának szintjeit minden bizonnyal könnyebb szavakkal beazonosítani, például `Confidential` (bizalmas), `Secret` (titkos) vagy `Top Secret` (szigorúan titkos) formában. Bizonyos helyzetekben elég csak a futó projekteknek megfelelõen kialakítani csoportokat.
Az osztályozás konkrét módszerétõl függetlenül azonban mindig elmondható, hogy elõzetes tervezés nélkül sose állítsunk össze ilyen fajsúlyú házirendet.

Ezt a biztonsági modult például webes üzletek esetén érdemes használnunk, ahol egy állományszerver tárolja a cég fontos adatait és pénzügyi információit. Viszont egy két vagy három felhasználóval üzemelõ munkaállomás esetében szinte teljesen felesleges gondolkodni rajta.

[[mac-biba]]
== A Biba MAC-modul

A modul neve: [.filename]#mac_biba.ko#

A rendszermag konfigurációs beállítása: `options MAC_BIBA`

Rendszerindítási beállítás: `mac_biba_load="YES"`

A man:mac_biba[4] modul a MAC Biba elnevezésû házirendjét tölti be. Ez leginkább az MLS házirendhez hasonlít, azzal a kivétellel, hogy az információ áramoltatására vonatkozó szabályok némileg visszafelé mûködnek. Tehát míg az MLS házirend a kényes információ felfelé áramlását nem engedi, addig ez a lefelé irányuló áramlást állítja meg. Emiatt ez a szakasz tulajdonképpen mind a két házirendre érvényes.

A Biba alkalmazása során minden alany és objektum egy "sértetlenséget" jelképezõ címkét visel. Ezek a címkék hierarchikus osztályokból és nem hierarchikus összetevõkbõl épülnek fel. Egy objektum vagy alany sértetlensége a besorolásával együtt növekszik.

A modul a `biba/low`, `biba/equal` és `biba/high` címkéket ismeri, vagyis bõvebben:

* A `biba/low` címke tekinthetõ az alanyok és objektumok legkisebb sértetlenségének. Ha beállítjuk egy objektumra vagy alanyra, akkor ezzel megakadályozzuk, hogy nagyobb sértetlenségû objektumokat vagy alanyokat tudjanak írni. Ettõl függetlenül azonban még képesek olvasni ezeket.
* A `biba/equal` címke használata kizárólag olyan objektumok esetében javasolt, amelyeket ki akarunk vonni a házirend alól.
* A `biba/high` címke megengedi az alacsonyabb szinteken levõ objektumok írását, az olvasásukat viszont már nem. Ezt a címkét olyan objektumokra érdemes rakni, amelyek hatással vannak az egész rendszer sértetlenségére.
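A címkék mûködését egy rövid, kizárólag szemléltetésre szolgáló parancsértelmezõ-vázlattal is érzékeltethetjük. A `subject` és `object` változók itt feltételezett sértetlenségi szintek, a vázlat nem használ valódi MAC-eszközöket:

[source,shell]
....
#!/bin/sh
# Szemléltetés: a Biba szerint egy alany csak a saját szintjén vagy
# az alatta levõ szinteken található objektumokat írhatja.
subject=10	# az alany (hipotetikus) sértetlenségi szintje
object=15	# az objektum (hipotetikus) sértetlenségi szintje

if [ "$subject" -ge "$object" ]; then
	echo "iras engedelyezve"
else
	echo "iras tiltva"
fi
....

Mivel itt az objektum szintje magasabb az alanyénál, a vázlat az "iras tiltva" üzenetet írja ki.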
A Biba:

* Hierarchikus sértetlenségi szinteket épít fel nem hierarchikus sértetlenségi kategóriákkal kiegészítve.
* Szabályai rögzítettek: a felsõbb szintek írása és az alsóbb szintek olvasása egyaránt tilos (pontosan az MLS ellentéte). Egy alany csak a saját vagy az alatta álló szinteken szereplõ objektumokat tudja írni. Ehhez hasonló módon egy alany csak a saját vagy az afeletti szinten található objektumokat képes olvasni.
* Az adatok sértetlenségét biztosítja (megakadályozza az alkalmatlan módosításukat).
* Sértetlenségi szinteket határoz meg (szemben az MLS kényességi szintjeivel).

Az alábbi `sysctl`-változókkal vezérelhetjük a Biba házirend mûködését:

* A `security.mac.biba.enabled` használható a célrendszeren a Biba házirend engedélyezésére vagy letiltására.
* A `security.mac.biba.ptys_equal` segítségével kapcsolhatjuk ki a Biba házirend alkalmazását a man:pty[4] eszközökön.
* A `security.mac.biba.revocation_enabled` hatására visszavonódik az objektumok hozzáférése, ha a rájuk vonatkozó címke megváltozik.

A rendszer objektumain a Biba házirendet a `setfmac` és `getfmac` parancsokkal állíthatjuk be, illetve kérdezhetjük le:

[source,shell]
....
# setfmac biba/low próba
# getfmac próba
próba: biba/low
....

=== A sértetlenség megállapítása

A sértetlenség a kényességtõl eltérõen azt igyekszik szavatolni, hogy az információt illetéktelenek nem módosítják. Ez egyaránt vonatkozik az alanyokra, az objektumokra és a kettõ között átadott adatokra. Gondoskodik róla, hogy a felhasználók csak olyan információkat változtathassanak meg, sõt csak olyanokat érhessenek el, amelyekre ténylegesen szükségük van.

A man:mac_biba[4] biztonsági modul megengedi a rendszergazda számára, hogy megadja, milyen állományokat és programokat láthat vagy hívhat meg egy felhasználó vagy a felhasználók egy csoportja, miközben biztosítja, hogy ezek az állományok és programok nincsenek kitéve semmilyen fenyegetésnek, és a rendszer az adott felhasználóban vagy felhasználói csoportban megbízhat.
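A Biba korábban felsorolt `sysctl`-változóit a rendszerindításkor is rögzíthetjük. Az alábbi vázlat egy feltételezett man:sysctl.conf[5] részlet, az értékek csak illusztrációk:

[.programlisting]
....
# Feltételezett példa: a Biba házirend engedélyezése és a pty eszközök
# kivonása a házirend alól
security.mac.biba.enabled=1
security.mac.biba.ptys_equal=1
....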
A kezdeti tervezési fázis során a rendszergazdának fel kell készülnie arra, hogy a felhasználókat osztályokra, szintekre és területekre kell osztania. A felhasználók nemcsak adatokhoz, hanem programokhoz és segédprogramokhoz sem férhetnek majd hozzá, sem az indításuk elõtt, sem utána. A modul aktiválása után a rendszer alapból rögtön a legmagasabb címkét kapja meg, és teljesen a rendszergazdára hárul, hogy a felhasználókhoz beállítsa a különféle osztályokat és szinteket.

A fentebb leírt engedélyszintek helyett akár témák alapján is tervezhetünk. Például kizárólag csak a fejlesztõk számára engedjük meg a forráskód módosítását, a forráskód lefordítását és a többi fejlesztõeszköz használatát. Eközben a többi felhasználót felosztjuk további csoportokba, például tesztelõkre és tervezõkre, vagy meghagyjuk ezeket átlagos felhasználóknak, akik csak olvasási joggal rendelkeznek.

A megvalósított biztonsági modell természetébõl fakadóan egy kevésbé sértetlen alany nem írhatja a nála sértetlenebb alanyokat, a sértetlenebb alanyok pedig nem figyelhetik meg és nem olvashatják a kevésbé sértetlen objektumokat. A lehetõ legalacsonyabb osztályú címke beállításával az objektumokat gyakorlatilag elérhetetlenné tesszük az alanyok számára. A modult valószínûleg egy korlátozott webszerver, fejlesztõi- és tesztgépek vagy forráskód tárolására szánt környezetben érdemes bevetni. Kevésbé hasznos viszont munkaállomások, útválasztók vagy hálózati tûzfalak esetében.

[[mac-lomac]]
== A LOMAC MAC-modul

A modul neve: [.filename]#mac_lomac.ko#

A rendszermag konfigurációs beállítása: `options MAC_LOMAC`

Rendszerindítási beállítás: `mac_lomac_load="YES"`

Eltérõen a MAC Biba házirendjétõl, a man:mac_lomac[4] csak azután engedi elérni a kevésbé sértetlen objektumokat, miután lecsökkenti az alany sértetlenségi szintjét, és így nem sérülnek a sértetlenségre vonatkozó szabályok.
A sértetlenségi házirend gyenge vízjeles, MAC alapú változatát nem szabad összetéveszteni a korábbi man:lomac[4] implementációval. Ez a házirend majdnem ugyanúgy mûködik, mint a Biba, azzal a kivétellel, hogy a lebegõ címkék segítségével, egy kisegítõ osztályrekeszen keresztül támogatja az alanyok lefokozását. Ez a másodlagos rekesz `[kisegítõ_osztály]` alakú. Tehát amikor egy kisegítõ osztállyal adjuk meg a lomac házirendet, az valahogy így néz ki: `lomac/10[2]`, ahol a kettes (2) szám a kisegítõ osztály.

A MAC LOMAC házirendje az összes rendszerszintû objektum sértetlenségi címkézésén alapszik. Megengedi az alanyok számára, hogy a kevésbé sértetlen objektumokat olvassák, majd az alany címkéjének leminõsítésével megakadályozza, hogy az a késõbbiekben sértetlenebb objektumokat írjon. Ez a fentebb tárgyalt `[kisegítõ_osztály]` beállítás, ezért ez a modul a Bibáénál nagyobb kompatibilitást nyújt és kevesebb kezdeti beállítást igényel.

=== Példák

Hasonlóan a Biba és az MLS házirendeknél megszokottakhoz, a `setfmac` és `setpmac` segédprogramok használhatóak a címkék hozzárendeléséhez:

[source,shell]
....
# setfmac /usr/home/trhodes lomac/high[low]
# getfmac /usr/home/trhodes lomac/high[low]
....

Itt a kisegítõ osztály a `low`. Ezt csak a LOMAC MAC-házirendnél adhatjuk meg.

[[mac-implementing]]
== A Nagios elzárása a MAC rendszerrel

A most következõ bemutatóban a MAC moduljainak és a megfelelõen beállított házirendek használatával fogunk kialakítani egy biztonságos környezetet. Ne feledjük azonban, hogy ez csupán egy ártatlan próba, és nem a mindenki biztonsági aggályait kielégítõ végsõ megoldás. Ha egy házirendet vakon építünk fel és nem értjük meg a mûködését, az soha nem válik hasznunkra, éles helyzetben pedig katasztrofális következményekkel járhat.

A folyamat megkezdése elõtt be kell állítanunk a `multilabel` opciót mindegyik állományrendszerre, a fejezet elején leírtaknak megfelelõen. Ha ezt a lépést kihagyjuk, akkor hibákat kapunk.
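Azt, hogy a `multilabel` opció ténylegesen érvényben van-e, a `mount` kimenetébõl ellenõrizhetjük. Az alábbi vázlat csak a mintaillesztést szemlélteti egy feltételezett kimenetbeli soron; éles rendszeren természetesen a tényleges `mount` kimenetét vizsgálnánk:

[source,shell]
....
#!/bin/sh
# Vázlat: a multilabel opció keresése egy (itt csak feltételezett) mount-sorban.
mount_sor="/dev/ad0s1a on / (ufs, local, multilabel)"

case "$mount_sor" in
	*multilabel*) echo "multilabel bekapcsolva" ;;
	*) echo "multilabel hianyzik" ;;
esac
....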
Továbbá még az elõkészület részeként ne felejtsünk el gondoskodni a package:net-mngt/nagios-plugins[], package:net-mngt/nagios[] és package:www/apache13[] portok telepítésérõl, beállításáról és megfelelõ mûködésérõl sem.

=== A nem megbízható felhasználók osztályának létrehozása

Az eljárást kezdjük az alábbi (insecure) felhasználói osztály hozzáadásával az [.filename]#/etc/login.conf# állományban:

[.programlisting]
....
insecure:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/shared/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=biba/10(10-10):
....

Valamint egészítsük ki az alapértelmezett (default) felhasználói osztályt a következõ sorral:

[.programlisting]
....
:label=biba/high:
....

Ahogy ezzel elkészültünk, a hozzá tartozó adatbázis újbóli legyártásához a következõ parancsot kell kiadnunk:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

=== A rendszerindítással kapcsolatos beállítások

Még ne indítsuk újra a számítógépet, csupán a szükséges modulok betöltéséhez bõvítsük ki a [.filename]#/boot/loader.conf# állományt az alábbi sorokkal:

[.programlisting]
....
mac_biba_load="YES"
mac_seeotheruids_load="YES"
....

=== A felhasználók beállítása

Soroljuk be a `root` felhasználót a `default` osztályba:

[source,shell]
....
# pw usermod root -L default
....

A `root` kivételével mostantól az összes felhasználói hozzáférésnek és rendszerfelhasználónak szüksége lesz egy bejelentkezési osztályra. A bejelentkezési osztályra egyébként is szükség lesz, mert ennek hiányában a felhasználók még az olyan alapvetõ parancsokat sem tudják kiadni, mint például a man:vi[1].
A következõ `sh` szkript nekünk erre pontosan megfelel:

[source,shell]
....
# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \
	/etc/passwd`; do pw usermod $x -L default; done;
....

Helyezzük át a `nagios` és `www` felhasználókat az insecure osztályba:

[source,shell]
....
# pw usermod nagios -L insecure
....

[source,shell]
....
# pw usermod www -L insecure
....

=== A [.filename]#contexts# állomány létrehozása

Most létre kell hoznunk egy [.filename]#contexts# állományt. Ebben a példában az [.filename]#/etc/policy.contexts# állományt használjuk.

[.programlisting]
....
# Ez a rendszer alapértelmezett BIBA házirendje.

# Rendszer:
/var/run			biba/equal
/var/run/*			biba/equal

/dev				biba/equal
/dev/*				biba/equal

/var				biba/equal
/var/spool			biba/equal
/var/spool/*			biba/equal

/var/log			biba/equal
/var/log/*			biba/equal

/tmp				biba/equal
/tmp/*				biba/equal
/var/tmp			biba/equal
/var/tmp/*			biba/equal

/var/spool/mqueue		biba/equal
/var/spool/clientmqueue		biba/equal

# Nagios:
/usr/local/etc/nagios		biba/10
/usr/local/etc/nagios/*		biba/10

/var/spool/nagios		biba/10
/var/spool/nagios/*		biba/10

# Apache:
/usr/local/etc/apache		biba/10
/usr/local/etc/apache/*		biba/10
....

Ezzel a házirenddel az információ áramlását szabályozzuk. Ebben a konkrét konfigurációban a felhasználók, a `root` és társai, nem férhetnek hozzá a Nagioshoz. A Nagios beállításait tároló állományok és a neve alatt futó programok így teljesen különválnak, vagyis elzáródnak a rendszer többi részétõl.

Ez az iménti állomány a következõ parancs hatására kerül be a rendszerünkbe:

[source,shell]
....
# setfsmac -ef /etc/policy.contexts /
# setfsmac -ef /etc/policy.contexts /
....

[NOTE]
====
A fenti állományrendszer-felépítés a környezettõl függõen eltérhet, a parancsot azonban minden egyes állományrendszeren le kell futtatni.
====

Az [.filename]#/etc/mac.conf# állomány törzsét a következõképpen kell még átírnunk:

[.programlisting]
....
default_labels file ?biba
default_labels ifnet ?biba
default_labels process ?biba
default_labels socket ?biba
....

=== A hálózat engedélyezése

Tegyük hozzá a következõ sort az [.filename]#/boot/loader.conf# állományhoz:

[.programlisting]
....
security.mac.biba.trust_all_interfaces=1
....

Ezt a beállítást pedig szúrjuk be az [.filename]#rc.conf# állományba a hálózati kártya konfigurációjához. Amennyiben az internetet DHCP segítségével érjük el, ezt a beállítást manuálisan kell megtenni minden rendszerindítás alkalmával:

[.programlisting]
....
maclabel biba/equal
....

=== A konfiguráció kipróbálása

Gondoskodjunk róla, hogy a webszerver és a Nagios ne induljon el a rendszer indításakor, majd indítsuk újra a gépet. Ezenkívül még ellenõrizzük, hogy a `root` ne tudjon hozzáférni a Nagios beállításait tartalmazó könyvtárhoz. Ha a `root` képes kiadni egy man:ls[1] parancsot a [.filename]#/var/spool/nagios# könyvtárra, akkor valamit elronthattunk. Normális esetben egy `permission denied` üzenetet kell kapnunk.

Ha minden jónak tûnik, akkor a Nagios, az Apache és a Sendmail most már elindítható a biztonsági házirend szabályozásai szerint. Ezt a következõ parancsokkal tehetjük meg:

[source,shell]
....
# cd /etc/mail && make stop && \
setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \
setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart
....

Kétszer is ellenõrizzük, hogy minden a megfelelõ módon viselkedik-e. Ha valamilyen furcsaságot tapasztalunk, akkor nézzük át a naplókat vagy a hibaüzeneteket. A man:sysctl[8] használatával tiltsuk le a man:mac_biba[4] biztonsági modult, és próbáljunk meg mindent a szokott módon újraindítani.

[NOTE]
====
A `root` felhasználó különösebb aggodalom nélkül képes megváltoztatni a biztonsági rend betartatását és átírni a konfigurációs állományokat. Egy frissen indított parancsértelmezõ számára ezzel a paranccsal tudjuk csökkenteni a biztonsági besorolást:

[source,shell]
....
# setpmac biba/10 csh
....
Ennek kivédésére a felhasználókat a man:login.conf[5] beállításaival kell korlátozni. Ha a man:setpmac[8] a rekesz határain kívül próbál meg futtatni egy parancsot, akkor hibát ad vissza és a parancs nem fut le. Ebben az esetben a `root` felhasználót soroljuk a `biba/high(high-high)` tartományba.
====

[[mac-userlocked]]
== A felhasználók korlátozása

Ebben a példában egy viszonylag kicsi, nagyjából mindössze ötven felhasználós, adattárolásra használt rendszert veszünk alapul. A felhasználók rendelkezhetnek bizonyos bejelentkezési tulajdonságokkal, és nemcsak adatokat tudnak tárolni, hanem az erõforrásokhoz is hozzá tudnak férni.

Itt most a man:mac_bsdextended[4] és a man:mac_seeotheruids[4] modulokat vetjük be együttesen, így nemcsak a rendszer objektumainak elérését tudjuk megakadályozni, hanem az egyes felhasználók futó programjait is elrejtjük.

A mûveletet kezdjük azzal, hogy a [.filename]#/boot/loader.conf# állományt kibõvítjük a következõ módon:

[.programlisting]
....
mac_seeotheruids_load="YES"
....

A man:mac_bsdextended[4] biztonsági modul az alábbi [.filename]#rc.conf#-változóval hozható mûködésbe:

[.programlisting]
....
ugidfw_enable="YES"
....

A hozzá tartozó alapértelmezett szabálykészlet az [.filename]#/etc/rc.bsdextended# állományban tárolódik, amely a rendszer indítása során töltõdik be. Ezeket a szabályokat némileg módosítanunk kell majd. Mivel a példában szereplõ számítógép csak a felhasználók kiszolgálását hivatott ellátni, az utolsó kettõ kivételével mindent hagyhatunk megjegyzésben. Így a felhasználók által birtokolt rendszerobjektumokra vonatkozó alapértelmezett szabályok fognak betöltõdni.

Vegyük fel a szükséges felhasználókat a számítógépre és indítsuk újra. Tesztelési célból próbáljunk meg különbözõ felhasználókként bejelentkezni két konzolon. Futtassuk le a `ps aux` parancsot, és figyeljük meg, hogy látjuk-e a többi felhasználó programjait.
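A szabályok felvételét szkripttel is elõkészíthetjük. Az alábbi vázlat csak kiírja a (feltételezett azonosítójú) felhasználókhoz tartozó man:ugidfw[8] parancsokat, nem futtatja le ezeket:

[source,shell]
....
#!/bin/sh
# Vázlat: ugidfw-szabályok elõállítása néhány hipotetikus felhasználói
# azonosítóhoz; a parancsokat itt csak kiírjuk, nem hajtjuk végre.
slot=2
for uid in 1001 1002 1003; do
	echo "ugidfw set $slot subject uid $uid object not uid $uid mode n"
	slot=$((slot + 1))
done
....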
Amikor megpróbáljuk kiadni a man:ls[1] parancsot a többiek felhasználói könyvtáraira, hibát kell kapnunk. Ne kísérletezzünk a `root` felhasználóval, hacsak a megfelelõ `sysctl`-változókban be nem állítottuk az õ hozzáférésének blokkolását is.

[NOTE]
====
Amikor felveszünk egy új felhasználót a rendszerbe, a hozzá tartozó man:mac_bsdextended[4] szabály nem fog szerepelni a szabályrendszerben. A szabályrendszer gyors frissítését úgy tudjuk megoldani, ha a man:kldunload[8] használatával egyszerûen eltávolítjuk a biztonsági modult a memóriából, majd újratöltjük a man:kldload[8] paranccsal.
====

[[mac-troubleshoot]]
== A hibák elhárítása a MAC rendszerben

A fejlesztés fázisában néhány teljesen szokványos konfigurációval rendelkezõ felhasználó is gondokat jelzett. Ezeket foglaljuk most itt össze:

=== A `multilabel` beállítás nem adható meg a [.filename]#/# állományrendszerre

A `multilabel` beállítás nem marad meg a rendszerindító ([.filename]#/#) partíciómon!

A tapasztalatok szerint körülbelül minden ötvenedik felhasználó szembesül ezzel a problémával, és mi is találkoztunk vele a kezdeti konfigurációk kialakítása során. Ennek az úgynevezett "hibának" a behatóbb tanulmányozása során arra jutottunk, hogy ez többnyire vagy a hibás dokumentálásból, vagy a dokumentáció félreértelmezésébõl ered. Függetlenül attól, hogy mitõl következett be, a következõ lépések megtételével orvosolhatjuk:

[.procedure]
====
. Nyissuk meg az [.filename]#/etc/fstab# állományt és adjuk meg a rendszerindító partíciónak az `ro`, vagyis az írásvédett (read-only) beállítást.
. Indítsuk újra a gépet egyfelhasználós módban.
. A `tunefs -l enable` parancsot futtassuk le a [.filename]#/# állományrendszeren.
. Indítsuk újra a rendszert normál módban.
. Adjuk ki a `mount -urw` [.filename]#/# parancsot, majd az [.filename]#/etc/fstab# állományban írjuk át az `ro` beállítást az `rw` értékre, és megint indítsuk újra a rendszert.
.
Alaposan nézzük át a `mount` parancs kimenetét és gyõzõdjünk meg róla, hogy a `multilabel` opció valóban beállítódott a rendszerindító állományrendszerre.
====

=== A MAC után nem lehet elindítani az X11 szervert

Nem indul az X, miután a MAC használatával kialakítottunk egy biztonságos környezetet!

Ezt vagy a MAC `partition` házirendje okozza, vagy az egyik címkéket használó házirend helytelen beállítása. A következõ módon deríthetjük ki az okát:

[.procedure]
====
. Figyelmesen olvassuk el a hibaüzenetet: ha a felhasználó az `insecure` osztály tagja, akkor a `partition` házirend lesz a bûnös. Próbáljuk meg a felhasználót visszatenni a `default` osztályba és a `cap_mkdb` paranccsal újragenerálni az adatbázist. Ha ez nem segít a problémán, akkor haladjunk tovább.
. Alaposan ellenõrizzük a címkékhez tartozó házirendeket. Vizsgáljuk meg, hogy a kérdéses felhasználó esetében a házirendet, az X11 alkalmazást, valamint a [.filename]#/dev# eszközöket tényleg jól állítottuk-e be.
. Ha az iméntiek egyike sem oldja meg a gondunkat, küldjük el a hibaüzenetet és a környezetünk rövid leírását a http://www.TrustedBSD.org[TrustedBSD] honlapjáról elérhetõ TrustedBSD levelezési lista vagy a {freebsd-questions} címére.
====

=== Hiba: man:_secure_path[3] cannot stat [.filename]#.login_conf#

Amikor a rendszerben megpróbálok a `root` felhasználóról átváltani egy másik felhasználóra, a `_secure_path: unable to stat .login_conf` hibaüzenet jelenik meg.

Ez az üzenet általában akkor látható, amikor a felhasználó nagyobb értékû címkével rendelkezik, mint az a felhasználó, akivé válni akar. Például vegyük a `joska` nevû felhasználót a rendszerben, aki az alap `biba/low` címkével rendelkezik. A `root` felhasználó, akinek `biba/high` címkéje van, nem láthatja `joska` felhasználói könyvtárát. Ez attól függetlenül megtörténik, hogy a `root` a `su` paranccsal átváltott-e a `joska` nevû felhasználóra vagy sem.
Egy ilyen helyzetben a Biba sértetlenségi modellje nem fogja engedni a `root` felhasználó számára, hogy láthassa a kevésbé sértetlen objektumokat. === A `root` felhasználó nem mûködik! A rendszer normál vagy egyfelhasználós módban sem ismeri fel a `root` felhasználót. A `whoami` parancs 0 (nullát) ad vissza és a `su` parancs pedig annyit mond: `who are you?` (`ki vagy?`). Mi történhetett? Ez csak olyankor történhet meg, ha a címkézési házirendet letiltottuk, vagy a man:sysctl[8] használatával, vagy pedig a modul eltávolításával. Ha a házirendet letiltjuk vagy ideiglenesen letiltódik, akkor a bejelentkezési tulajdonságokat tároló adatbázist a `label` beállítás eltávolításával kell újrakonfigurálni. A [.filename]#login.conf# állományból ne felejtsük el kivenni az összes `label` beállítást és a `cap_mkdb` paranccsal újragenerálni az adatbázist. Ilyen akkor is elõfordulhat, amikor a házirend valamilyen módon korlátozza a [.filename]#master.passwd# állomány vagy adatbázis elérhetõségét. Ezt általában az okozza, hogy a rendszergazda az állományt olyan címke alatt módosítja, amely ütközik a rendszerben alkalmazott általános házirenddel. Ezekben az esetekben a rendszer megpróbálja beolvasni a felhasználók adatait, azonban mivel közben az állomány új címkét örökölt, nem fér hozzá. Ha a man:sysctl[8] paranccsal letiltjuk a házirendet, minden vissza fog térni a rendes kerékvágásba. diff --git a/documentation/content/hu/books/handbook/network-servers/_index.adoc b/documentation/content/hu/books/handbook/network-servers/_index.adoc index cff4e8ec71..1cf1daea1b 100644 --- a/documentation/content/hu/books/handbook/network-servers/_index.adoc +++ b/documentation/content/hu/books/handbook/network-servers/_index.adoc @@ -1,2835 +1,2834 @@ --- title: 29. Fejezet - Hálózati szerverek part: IV.
Rész Hálózati kommunikáció prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 33 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Hálózati szerverek :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 29 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Áttekintés Ebben a fejezetben a UNIX(R) típusú rendszerekben leggyakrabban alkalmazott hálózati szolgáltatások közül fogunk néhányat bemutatni. Ennek során megismerjük a hálózati szolgáltatások különbözõ típusainak telepítését, beállítását, tesztelését és karbantartását. A fejezet tartalmát folyamatosan példákkal igyekszünk illusztrálni. 
A fejezet elolvasása során megismerjük: * hogyan dolgozzunk az inetd démonnal; * hogyan állítsuk be a hálózati állományrendszereket; * hogyan állítsunk be egy hálózati információs szervert a felhasználói hozzáférések megosztására; * hogyan állítsuk be automatikusan a hálózati hozzáférésünket a DHCP használatával; * hogyan állítsunk be névfeloldó szervereket; * hogyan állítsuk be az Apache webszervert; * hogyan állítsuk be az állományok átviteléért felelõs (FTP) szervert; * a Samba használatával hogyan állítsunk be Windows(R)-os kliensek számára állomány- és nyomtatószervert; * az NTP protokoll segítségével hogyan egyeztessük az idõt és dátumot, hogyan állítsunk be egy idõszervert; * hogyan állítsuk be a szabványos naplózó démont, a `syslogd`-t hálózaton keresztüli naplózásra. A fejezet elolvasásához ajánlott: * az [.filename]#/etc/rc# szkriptek alapjainak ismerete; * az alapvetõ hálózati fogalmak ismerete; * a külsõ szoftverek telepítésének ismerete (crossref:ports[ports,Alkalmazások telepítése: csomagok és portok]). [[network-inetd]] == Az inetd "szuperszerver" [[network-inetd-overview]] === Áttekintés Az man:inetd[8] démont gyakran csak "internet szuperszerverként" emlegetik, mivel a helyi szolgáltatások kapcsolatainak kezeléséért felelõs. Amikor az inetd fogad egy csatlakozási kérelmet, akkor eldönti róla, hogy ez melyik programhoz tartozik és elindít egy példányt belõle, majd átadja neki a socketet (az így meghívott program a szabványos bemenetéhez, kimenetéhez és hibajelzési csatornájához kapja meg a socket leíróit). Az inetd használatával úgy tudjuk csökkenteni a rendszerünk terhelését, hogy a csak alkalmanként meghívott szolgáltatásokat nem futtatjuk teljesen független önálló módban. Az inetd démont elsõsorban más démonok elindítására használjuk, de néhány triviális protokollt közvetlenül is képes kezelni, mint például a chargen, auth és a daytime.
Ebben a fejezetben az inetd beállításának alapjait foglaljuk össze mind parancssoros módban, mind pedig az [.filename]#/etc/inetd.conf# konfigurációs állományon keresztül. [[network-inetd-settings]] === Beállítások Az inetd mûködése az man:rc[8] rendszeren keresztül inicializálható. Az `inetd_enable` ugyan alapból a `NO` értéket veszi fel, vagyis tiltott, de a sysinstall használatával már akár a telepítés során bekapcsolható attól függõen, hogy a felhasználó milyen konfigurációt választott. Ha tehát a: [.programlisting] .... inetd_enable="YES" .... vagy [.programlisting] .... inetd_enable="NO" .... sort tesszük az [.filename]#/etc/rc.conf# állományba, akkor azzal az inetd démont indíthatjuk el vagy tilthatjuk le a rendszer indítása során. Az [source,shell] .... # /etc/rc.d/inetd rcvar .... paranccsal lekérdezhetjük a pillanatnyilag érvényes beállítást. Emellett még az inetd démonnak az `inetd_flags` változón keresztül különbözõ parancssori paramétereket is át tudunk adni. [[network-inetd-cmdline]] === Parancssori paraméterek Hasonlóan a legtöbb szerverhez, az inetd viselkedését is befolyásolni tudjuk a parancssorban átadható különbözõ paraméterekkel. Ezek teljes listája a következõ: `inetd [-d] [-l] [-w] [-W] [-c maximum] [-C arány] [-a cím | név] [-p állomány] [-R arány] [-s maximum] [konfigurációs állomány]` Ezek a paraméterek az [.filename]#/etc/rc.conf# állományban az `inetd_flags` segítségével adhatóak meg az inetd részére. Alapértelmezés szerint az `inetd_flags` értéke `-wW -C 60`, ami az inetd által biztosított szolgáltatások TCP protokollon keresztüli wrappelését kapcsolja be, illetve egy IP-címrõl nem engedi a felkínált szolgáltatások elérését percenként hatvannál többször. A kezdõ felhasználók örömmel nyugtázhatják, hogy ezeket az alapbeállításokat nem szükséges módosítaniuk. A késõbbiekben majd fény derül arra, hogy a kiszolgálás gyakoriságának szabályozása remek védekezést nyújthat túlzottan nagy mennyiségû kapcsolódási kérelem ellen. 
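Ha mégis módosítani szeretnénk az alapértelmezéseket, azt az [.filename]#/etc/rc.conf# állományban tehetjük meg. Az alábbi vázlatban szereplõ konkrét értékek csupán illusztrációk, nem ajánlott beállítások:

[.programlisting]
....
inetd_enable="YES"
# a wrappelés bekapcsolva, IP-címenként percenként legfeljebb 120 kapcsolat:
inetd_flags="-wW -C 120"
....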
A megadható paraméterek teljes listája az man:inetd[8] man oldalán olvasható. -c _maximum_:: Az egyes szolgáltatásokhoz egyszerre felépíthetõ kapcsolatok alapértelmezett maximális számát adja meg. Alapból ez korlátlan. A `max-child` beállítással ez akár szolgáltatásonként külön is megadható. -C _arány_:: Korlátozza, hogy egyetlen IP-címrõl alapból hányszor hívhatóak meg az egyes szolgáltatások egy percen belül. Ez az érték alapból korlátlan. A `max-connections-per-ip-per-minute` beállítással ez szolgáltatásonként is definiálható. -R _arány_:: Megadja, hogy egy szolgáltatást egy perc alatt mennyiszer lehet meghívni. Ez az érték alapértelmezés szerint 256. A 0 megadásával eltöröljük ezt a típusú korlátozást. -s _maximum_:: Annak maximumát adja meg, hogy egyetlen IP-címrõl egyszerre az egyes szolgáltatásokat mennyiszer tudjuk elérni. Alapból ez korlátlan. Szolgáltatásonként ezt a `max-child-per-ip` paraméterrel tudjuk felülbírálni. [[network-inetd-conf]] === Az [.filename]#inetd.conf# állomány Az inetd beállítását az [.filename]#/etc/inetd.conf# konfigurációs állományon keresztül végezhetjük el. Amikor az [.filename]#/etc/inetd.conf# állományban módosítunk valamit, az inetd démont a következõ paranccsal meg kell kérnünk, hogy olvassa újra: [[network-inetd-reread]] .Az inetd konfigurációs állományának újraolvasása [example] ==== [source,shell] .... # /etc/rc.d/inetd reload .... ==== A konfigurációs állomány minden egyes sora egy-egy démont ír le. A megjegyzéseket egy "#" jel vezeti be. Az [.filename]##/etc/inetd.conf## állomány bejegyzéseinek formátuma az alábbi: [.programlisting] .... szolgáltatás-neve socket-típusa protokoll {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] felhasználó[:csoport][/bejelentkezési-osztály] szerver-program szerver-program-paraméterei .... Az IPv4 protokollt használó man:ftpd[8] démon bejegyzése például így néz ki: [.programlisting] ....
ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... szolgáltatás-neve:: Ez az adott démon által képviselt szolgáltatást nevezi meg, amelynek szerepelnie kell az [.filename]#/etc/services# állományban. Ez határozza meg, hogy az inetd milyen porton figyelje a beérkezõ kapcsolatokat. Ha egy új szolgáltatást hozunk létre, akkor azt elõször az [.filename]#/etc/services# állományba kell felvennünk. socket-típusa:: Ennek az értéke `stream`, `dgram`, `raw`, vagy `seqpacket` lehet. A `stream` típust használja a legtöbb kapcsolat-orientált TCP démon, miközben a `dgram` típus az UDP szállítási protokollt alkalmazó démonok esetében használatos. protokoll:: Valamelyik a következõk közül: + [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Protokoll | Magyarázat |tcp, tcp4 |TCP IPv4 |udp, udp4 |UDP IPv4 |tcp6 |TCP IPv6 |udp6 |UDP IPv6 |tcp46 |TCP IPv4 és v6 |udp46 |UDP IPv4 és v6 |=== {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]:: A `wait|nowait` beállítás mondja meg, hogy az inetd démonból meghívott démon saját maga képes-e kezelni kapcsolatokat. A `dgram` típusú kapcsolatok esetében egyértelmûen a `wait` beállítást kell használni, miközben a `stream` esetén, ahol általában több szálon dolgozunk, a `nowait` megadása javasolt. A `wait` hatására általában egyetlen démonnak adunk át több socketet, míg a `nowait` minden sockethez egy újabb példányt indít el. + Az inetd által indítható példányokat a `max-child` megadásával korlátozhatjuk. Ha tehát például az adott démon számára legfeljebb tíz példány létrehozását engedélyezzük, akkor a `nowait` után a `/10` beállítást kell megadnunk. A `/0` használatával korlátlan mennyiségû példányt engedélyezhetünk. + A `max-child` mellett még további két beállítás jöhet számításba az egyes démonok által kezelhetõ kapcsolatok maximális számának korlátozásában.
A `max-connections-per-ip-per-minute` az egyes IP-címekrõl befutó lekezelhetõ kapcsolatok percenkénti számát szabályozza, így például ha itt a tizes értéket adjuk meg, akkor az adott szolgáltatáshoz egy IP-címrõl percenként csak tízszer férhetünk hozzá. A `max-child-per-ip` az egyes IP-címekhez egyszerre elindítható példányok számára ír elõ egy korlátot. Ezek a paraméterek segítenek megóvni rendszerünket az erõforrások akaratos vagy akaratlan kimerítésétõl és a DoS (Denial of Service) típusú támadásoktól. + Ebben a mezõben a `wait` vagy `nowait` valamelyikét kötelezõ megadni. A `max-child`, `max-connections-per-ip-per-minute` és `max-child-per-ip` paraméterek ellenben elhagyhatóak. + A `stream` típusú több szálon futó démonok a `max-child`, `max-connections-per-ip-per-minute` vagy `max-child-per-ip` korlátozása nélkül egyszerûen csak így adhatóak meg: `nowait`. + Ha ugyanezt a démont tíz kapcsolatra lekorlátozzuk, akkor a következõt kell megadnunk: `nowait/10`. + Amikor pedig IP-címenként 20 kapcsolatot engedélyezünk percenként és mindössze 10 példányt, akkor: `nowait/10/20`. + Az iménti beállítások a man:fingerd[8] démon alapértelmezett paramétereinél is megtalálhatóak: + [.programlisting] .... finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -s .... + Végezetül engedélyezzük 100 példányt, melyek közül IP-címenként 5 használható: `nowait/100/0/5`. felhasználó:: Ezzel azt a felhasználót adjuk meg, akinek a nevében az adott démon futni fog. Az esetek túlnyomó részében a démonokat a `root` felhasználó futtatja. Láthatjuk azonban, hogy biztonsági okokból bizonyos démonok a `daemon` vagy a legkevesebb joggal rendelkezõ `nobody` felhasználóval futnak. szerver-program:: A kapcsolat felépülésekor az itt teljes elérési úttal megadott démon indul el. Ha ezt a szolgáltatást maga az inetd belsõleg valósítja meg, akkor ebben a mezõben az `internal` értéket adjuk meg. 
szerver-program-paraméterei:: Ez a `szerver-program` beállítással együtt mûködik, és ebben a mezõben a démon meghívásakor alkalmazandó paramétereket tudjuk rögzíteni, amelyeket a démon nevével kezdünk. Ha a démont a parancssorból a `sajátdémon -d` paranccsal hívnánk meg, akkor a `sajátdémon -d` lesz a `szerver-program-paraméterei` beállítás helyes értéke is. Természetesen, ha a démon egy belsõleg megvalósított szolgáltatás, akkor ebben a mezõben is az `internal` fog megjelenni. [[network-inetd-security]] === Védelem Attól függõen, hogy a telepítés során mit választottunk, az inetd által támogatott szolgáltatások egy része talán alapból engedélyezett is. Amennyiben egy adott démont konkrétan nem használunk, akkor érdemes megfontolni a letiltását. A kérdéses démon sorába tegyünk egy "#" jelet az [.filename]##/etc/inetd.conf## állományban, majd a <<network-inetd-reread,fentebb bemutatott módon>> olvastassuk újra az inetd konfigurációs állományát. Egyes démonok, például a fingerd használata egyáltalán nem ajánlott, mivel a támadók számára hasznos információkat tudnak kiszivárogtatni. Más démonok nem ügyelnek a védelemre, és a kapcsolatokhoz rendelt lejárati idejük túlságosan hosszú vagy éppen nincs is. Ezzel a támadónak lehetõsége van lassú kapcsolatokkal leterhelni az adott démont, ezáltal kimeríteni a rendszer erõforrásait. Ha úgy találjuk, hogy túlságosan sok az ilyen kapcsolat, akkor jó ötletnek bizonyulhat a démonok számára a `max-connections-per-ip-per-minute`, `max-child` vagy `max-child-per-ip` korlátozások elrendelése. Alapértelmezés szerint a TCP kapcsolatok wrappelése engedélyezett. A man:hosts_access[5] man oldalon találhatjuk meg az inetd által meghívható különféle démonok TCP-alapú korlátozásainak lehetõségeit. [[network-inetd-misc]] === Egyéb lehetõségek A daytime, time, echo, discard, chargen és auth szolgáltatások feladatainak mindegyikét maga az inetd is képes ellátni. Az auth szolgáltatás a hálózaton keresztüli azonosítást teszi lehetõvé és bizonyos mértékig beállítható. A többit egyszerûen csak kapcsoljuk ki vagy be.
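Egy belsõleg megvalósított szolgáltatás ki- és bekapcsolása tehát csupán egy "#" megjegyzésjelen múlik az [.filename]#/etc/inetd.conf# állományban. Az alábbi vázlatos példában a daytime engedélyezett, az echo pedig letiltott (a sorok pontos formája és sorrendje rendszerenként eltérhet):

[.programlisting]
....
daytime stream  tcp     nowait  root    internal
#echo   stream  tcp     nowait  root    internal
....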
A témában az man:inetd[8] man oldalán tudunk még jobban elmerülni. [[network-nfs]] == A hálózati állományrendszer (NFS) A FreeBSD több állományrendszert ismer, köztük a hálózati állományrendszert (Network File System, NFS) is. Az NFS állományok és könyvtárak megosztását teszi lehetõvé a hálózaton keresztül. Az NFS használatával a felhasználók és a programok képesek majdnem úgy elérni a távoli rendszereken található állományokat, mintha helyben léteznének. Íme az NFS néhány legjelentõsebb elõnye: * A helyi munkaállomások kevesebb tárterületet használnak, mivel a közös adatokat csak egyetlen számítógépen tároljuk és megosztjuk mindenki között. * A felhasználóknak nem kell a hálózat minden egyes gépén külön felhasználói könyvtárral rendelkezniük. Ezek ugyanis az NFS segítségével akár egy szerveren is beállíthatóak és elérhetõvé tehetõek a hálózaton keresztül. * A különbözõ háttértárak, mint például a floppy lemezek, CD-meghajtók és Zip(R) meghajtók a hálózaton több számítógép között megoszthatóak. Ezzel csökkenteni tudjuk a hálózatunkban szükséges cserélhetõ lemezes eszközök számát. === Ahogy az NFS mûködik Az NFS legalább két fõ részbõl rakható össze: egy szerverbõl és egy vagy több kliensbõl. A kliensek a szerver által megosztott adatokhoz képesek távolról hozzáférni. A megfelelõ mûködéshez mindössze csak néhány programot kell beállítani és futtatni. A szervernek a következõ démonokat kell mûködtetnie: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Démon | Leírás |nfsd |Az NFS démon, amely kiszolgálja az NFS kliensektõl érkezõ kéréseket. |mountd |Az NFS csatlakoztató démonja, amely végrehajtja az man:nfsd[8] által átküldött kéréseket. |rpcbind |Ez a démon lehetõvé teszi az NFS kliensek számára, hogy fel tudják deríteni az NFS szerver által használt portot. |=== A kliensen is futnia kell egy démonnak, amelynek a neve nfsiod. Az nfsiod démon az NFS szerver felõl érkezõ kéréseket szolgálja ki. 
A használata teljesen opcionális, csupán a teljesítményt hivatott javítani, de a normális és helyes mûködéshez nincs rá szükségünk. Az man:nfsiod[8] man oldalán errõl többet is megtudhatunk. [[network-configuring-nfs]] === Az NFS beállítása Az NFS beállítása viszonylag egyértelmûen adja magát. A mûködéséhez szükséges programok automatikus elindítása csupán néhány apró módosítást igényel az [.filename]#/etc/rc.conf# állományban. Az NFS szerveren gondoskodjunk róla, hogy az alábbi beállítások szerepeljenek az [.filename]#/etc/rc.conf# állományban: [.programlisting] .... rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" .... A mountd magától el fog indulni, ha az NFS szervert engedélyezzük. A kliensen a következõ beállítást kell felvennünk az [.filename]#/etc/rc.conf# állományba: [.programlisting] .... nfs_client_enable="YES" .... Az [.filename]#/etc/exports# állomány adja meg, hogy az NFS milyen állományrendszereket exportáljon (vagy másképpen szólva "osszon meg"). Az [.filename]#/etc/exports# állományban tehát a megosztani kívánt állományrendszereket kell szerepeltetnünk, és azt, hogy melyik számítógépek érhetik el ezeket. A gépek megnevezése mellett a hozzáférésre további megszorításokat írhatunk fel. Ezek részletes leírását az man:exports[5] man oldalon találjuk meg. Lássunk néhány példát az [.filename]#/etc/exports# állományban megjelenõ bejegyzésekre: A most következõ példákban az állományrendszerek exportálásának finomságait igyekszünk érzékeltetni, noha a konkrét beállítások gyakran a rendszerünktõl és a hálózati konfigurációtól függenek. Például a [.filename]#/cdrom# könyvtárat így tudjuk megosztani három olyan gép számára, amelyek a szerverrel megegyezõ tartományban találhatóak (ezért nem is kell megadnunk a tartományukat), vagy amelyek egyszerûen megtalálhatóak az [.filename]#/etc/hosts# állományunkban. Az `-ro` beállítás az exportált állományrendszereket írásvédetté teszi.
Ezzel a beállítással a távoli rendszerek nem lesznek képesek módosítani az exportált állományrendszer tartalmát. [.programlisting] .... /cdrom -ro gép1 gép2 gép3 .... A következõ sorban a [.filename]#/home# könyvtárat három gép számára osztjuk meg, melyeket IP-címekkel adtunk meg. Ez olyan helyi hálózat esetén hasznos, ahol nem állítottunk be névfeloldást. Esetleg a belsõ hálózati neveket az [.filename]#/etc/hosts# állományban is tárolhatjuk. Ez utóbbival kapcsolatban a man:hosts[5] man oldalt érdemes fellapoznunk. Az `-alldirs` beállítás lehetõvé teszi, hogy az alkönyvtárak is csatlakozási pontok lehessenek. Más szóval, a kliensek nem kötelesek a teljes megosztást csatlakoztatni, hanem csak azokat az alkönyvtárakat csatlakoztathatják, amelyekre ténylegesen szükségük van. [.programlisting] .... /home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... A következõ sorban az [.filename]#/a# könyvtárat úgy exportáljuk, hogy az állományrendszerhez két különbözõ tartományból is hozzá lehessen férni. A `-maproot=root` beállítás hatására a távoli rendszer `root` felhasználója az exportált állományrendszeren szintén `root` felhasználóként fogja írni az adatokat. Amennyiben a `-maproot=root` beállítást nem adjuk meg, akkor a távoli rendszeren hiába `root` az adott felhasználó, az exportált állományrendszeren nem lesz képes egyetlen állományt sem módosítani. [.programlisting] .... /a -maproot=root gep.minta.com doboz.haz.org .... A kliensek is csak a megfelelõ engedélyek birtokában képesek elérni a megosztott állományrendszereket. Ezért a klienst ne felejtsük el felvenni a szerver [.filename]#/etc/exports# állományába. Az [.filename]#/etc/exports# állományban az egyes sorok az egyes állományrendszerekre és az egyes gépekre vonatkoznak. A távoli gépek állományrendszerenként csak egyszer adhatóak meg, és csak egy alapértelmezett bejegyzésük lehet. Például tegyük fel, hogy a [.filename]#/usr# egy önálló állományrendszer.
Ennek megfelelõen az alábbi bejegyzések az [.filename]#/etc/exports# állományban érvénytelenek: [.programlisting] .... # Nem használható, ha a /usr egy állományrendszer: /usr/src kliens /usr/ports kliens .... Egy állományrendszerhez, vagyis itt a [.filename]#/usr# partícióhoz, két export sort is megadtunk ugyanahhoz a `kliens` nevû géphez. Helyesen így kell megoldani az ilyen helyzeteket: [.programlisting] .... /usr/src /usr/ports kliens .... Az adott géphez tartozó egy állományrendszerre vonatkozó exportoknak mindig egy sorban kell szerepelniük. A kliens nélkül felírt sorok egyetlen géphez tartozónak fognak számítani. Ezzel az állományrendszerek megosztását tudjuk szabályozni, de legtöbbek számára nem jelent gondot. Most egy érvényes exportlista következik, ahol a [.filename]#/usr# és az [.filename]#/exports# mind helyi állományrendszerek: [.programlisting] .... # Osszuk meg az src és ports könyvtárakat a kliens01 és kliens02 részére, de csak a # kliens01 férhessen hozzá rendszeradminisztrátori jogokkal: /usr/src /usr/ports -maproot=root kliens01 /usr/src /usr/ports kliens02 # A kliensek az /exports könyvtárban teljes joggal rendelkeznek és azon belül # bármit tudnak csatlakoztatni. Rajtuk kívül mindenki csak írásvédetten képes # elérni az /exports/obj könyvtárat: /exports -alldirs -maproot=root kliens01 kliens02 /exports/obj -ro .... A mountd démonnal az [.filename]#/etc/exports# állományt minden egyes módosítása után újra be kell olvastatni, mivel a változtatásaink csak így fognak érvényesülni. Ezt megcsinálhatjuk úgy is, hogy küldünk egy HUP (hangup, avagy felfüggesztés) jelzést a már futó démonnak: [source,shell] .... # kill -HUP `cat /var/run/mountd.pid` .... vagy meghívjuk a `mountd` man:rc[8] szkriptet a megfelelõ paraméterrel: [source,shell] .... # /etc/rc.d/mountd onereload .... Az crossref:config[configtuning-rcd,Az rc használata FreeBSD alatt]ban tudhatunk meg részleteket az rc szkriptek használatáról. 
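Az újraolvasás után a szerveren például a `showmount` paranccsal gyõzõdhetünk meg róla, hogy az exportlista tényleg érvénybe lépett (a parancs kimenete természetesen a saját beállításainktól függ):

[source,shell]
....
# showmount -e
....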
Ezek után akár a FreeBSD újraindításával is aktiválhatjuk a megosztásokat, habár ez nem feltétlenül szükséges. Ha `root` felhasználóként kiadjuk a következõ parancsokat, akkor azzal minden szükséges programot elindítunk. Az NFS szerveren tehát: [source,shell] .... # rpcbind # nfsd -u -t -n 4 # mountd -r .... Az NFS kliensen pedig: [source,shell] .... # nfsiod -n 4 .... Ezzel most már minden készen áll a távoli állományrendszer csatlakoztatására. A példákban a szerver neve `szerver` lesz, valamint a kliens neve `kliens`. Ha csak ideiglenesen akarunk csatlakoztatni egy állományrendszert vagy egyszerûen csak ki akarjuk próbálni a beállításainkat, a kliensen `root` felhasználóként az alábbi parancsot hajtsuk végre: [source,shell] .... # mount szerver:/home /mnt .... Ezzel a szerveren található [.filename]#/home# könyvtárat fogjuk a kliens [.filename]#/mnt# könyvtárába csatlakoztatni. Ha mindent jól beállítottunk, akkor a kliensen most már be tudunk lépni az [.filename]#/mnt# könyvtárba és láthatjuk a szerveren található állományokat. Ha a számítógép indításakor automatikusan akarunk hálózati állományrendszereket csatlakoztatni, akkor vegyük fel ezeket az [.filename]#/etc/fstab# állományba. Erre íme egy példa: [.programlisting] .... szerver:/home /mnt nfs rw 0 0 .... A man:fstab[5] man oldalon megtalálhatjuk az összes többi beállítást. === Zárolások Bizonyos alkalmazások (például a mutt) csak akkor mûködnek megfelelõen, ha az állományokat a megfelelõ módon zárolják. Az NFS esetében az rpc.lockd használható az ilyen zárolások megvalósítására. Az engedélyezéséhez mind a szerveren, mind a kliensen vegyük fel a következõ sort az [.filename]#/etc/rc.conf# állományba (itt már feltételezzük, hogy az NFS szervert és klienst korábban beállítottuk): [.programlisting] .... rpc_lockd_enable="YES" rpc_statd_enable="YES" .... A következõ módon indíthatjuk el: [source,shell] .... # /etc/rc.d/lockd start # /etc/rc.d/statd start ....
Ha nincs szükségünk valódi zárolásra az NFS kliensek és az NFS szerver között, akkor megcsinálhatjuk azt is, hogy az NFS kliensen a man:mount_nfs[8] programnak az `-L` paraméter átadásával csak helyileg végzünk zárolást. Ennek további részleteirõl a man:mount_nfs[8] man oldalon kaphatunk felvilágosítást. === Gyakori felhasználási módok Az NFS megoldását a gyakorlatban rengeteg esetben alkalmazzák. Ezek közül most felsoroljuk a legelterjedtebbeket: * Több gép között megosztunk egy telepítõlemezt vagy más telepítõeszközt. Ez így sokkal olcsóbb és gyakorta kényelmes megoldás abban az esetben, ha egyszerre több gépre akarjuk ugyanazt a szoftvert telepíteni. * Nagyobb hálózatokon sokkal kényelmesebb lehet egy központi NFS szerver használata, ahol a felhasználók könyvtárait tároljuk. Ezek a felhasználói könyvtárak aztán megoszthatóak a hálózaton keresztül, így a felhasználók mindig ugyanazt a könyvtárat kapják függetlenül attól, hogy milyen munkaállomásról is jelentkeztek be. * Több gép is képes így osztozni az [.filename]#/usr/ports/distfiles# könyvtáron. Ezen a módon sokkal gyorsabban tudunk portokat telepíteni a gépekre, mivel nem kell külön mindegyikre letölteni az ehhez szükséges forrásokat. [[network-amd]] === Automatikus csatlakoztatás az amd használatával Az man:amd[8] (automatikus csatlakoztató démon, az automatic mounter daemon) önmûködõen csatlakoztatja a távoli állományrendszereket, amikor azokon belül valamelyik állományhoz vagy könyvtárhoz próbálunk hozzáférni. Emellett az amd az egy ideje már inaktív állományrendszereket is automatikusan leválasztja. Az amd használata egy remek alternatívát kínál az általában az [.filename]#/etc/fstab# állományban megjelenõ állandóan csatlakoztatott állományrendszerekkel szemben. Az amd úgy mûködik, hogy NFS szerverként csatlakozik a [.filename]#/host# és [.filename]#/net# könyvtárakhoz.
Amikor egy állományt akarunk elérni ezeken a könyvtárakon belül, az amd kikeresi a megfelelõ távoli csatlakoztatást és magától csatlakoztatja. A [.filename]#/net# segítségével egy IP-címrõl tudunk exportált állományrendszereket csatlakoztatni, miközben a [.filename]#/host# a távoli gép hálózati neve esetében használatos. Ha tehát a [.filename]#/host/izemize/usr# könyvtárban akarunk elérni egy állományt, akkor az amd démonnak ahhoz elõször az `izemize` nevû géprõl exportált [.filename]#/usr# könyvtárat kell csatlakoztatnia. .Egy exportált állományrendszer csatlakoztatása az amd használatával [example] ==== Egy távoli számítógép által rendelkezésre bocsátott megosztásokat a `showmount` paranccsal tudjuk lekérdezni. Például az `izemize` gépen elérhetõ exportált állományrendszereket így láthatjuk: [source,shell] .... % showmount -e izemize Exports list on izemize: /usr 10.10.10.0 /a 10.10.10.0 % cd /host/izemize/usr .... ==== Ahogy a példában látjuk is, a `showmount` parancs a [.filename]#/usr# könyvtárat mutatja megosztásként. Amikor tehát belépünk a [.filename]#/host/izemize/usr# könyvtárba, akkor az amd magától megpróbálja feloldani az `izemize` hálózati nevet és csatlakoztatni az elérni kívánt exportált állományrendszert. Az amd az indító szkripteken keresztül az [.filename]#/etc/rc.conf# alábbi beállításával engedélyezhetõ: [.programlisting] .... amd_enable="YES" .... Emellett még az `amd_flags` használatával további paraméterek is átadhatóak az amd részére. Alapértelmezés szerint az `amd_flags` tartalma az alábbi: [.programlisting] .... amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" .... Az [.filename]#/etc/amd.map# állomány adja meg az exportált állományrendszerek alapértelmezett beállításait. Az [.filename]#/etc/amd.conf# állományban az amd további lehetõségeit konfigurálhatjuk. Ha többet is szeretnénk tudni a témáról, akkor az man:amd[8] és az man:amd.conf[8] man oldalakat javasolt elolvasnunk.
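A [.filename]#/net# könyvtár használata teljesen hasonló, csak hálózati név helyett IP-címet adunk meg. Az alábbi vázlatban szereplõ cím csupán illusztráció:

[source,shell]
....
% showmount -e 10.10.10.1
% cd /net/10.10.10.1/usr
....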
[[network-nfs-integration]] === Problémák más rendszerek használatakor Némely PC-s ISA buszos Ethernet kártyákra olyan korlátozások érvényesek, melyek komoly hálózati problémák keletkezéséhez vezethetnek, különösen az NFS esetében. Ez a nehézség nem FreeBSD-függõ, de a FreeBSD rendszereket is érinti. Ez a gond majdnem mindig akkor merül fel, amikor egy (FreeBSD-s) PC egy hálózatba kerül többek közt a Silicon Graphics és a Sun Microsystems által gyártott nagyteljesítményû munkaállomásokkal. Az NFS csatlakoztatása és bizonyos mûveletek még hibátlanul végrehajtódnak, azonban a szerver hirtelen látszólag nem válaszol többé a kliensnek, miközben a többi rendszertõl érkezõ kéréseket továbbra is kiszolgálja. Ez attól függetlenül bekövetkezik, hogy a kliens a FreeBSD-s rendszer vagy a munkaállomás. Sok rendszeren egyszerûen rendesen le sem lehet állítani a klienst, ha a probléma egyszer már felütötte a fejét. Egyedüli megoldás gyakran csak a kliens újraindítása marad, mivel az NFS-ben kialakult helyzetet máshogy nem lehet megoldani. Noha a "helyes" megoldás az lenne, ha beszereznénk egy nagyobb teljesítményû és kapacitású kártyát a FreeBSD rendszer számára, egy jóval egyszerûbb kerülõút is található a kielégítõ mûködés eléréséhez. Ha a FreeBSD rendszer képviseli a _szervert_, akkor a kliensnél adjuk meg a `-w=1024` beállítást is a csatlakoztatásnál. Ha a FreeBSD rendszer a _kliens_ szerepét tölti be, akkor az NFS állományrendszert az `-r=1024` beállítással csatlakoztassuk. Ezek a beállítások az [.filename]#fstab# állomány negyedik mezõjében is megadhatóak az automatikus csatlakoztatáshoz, vagy manuális esetben a man:mount[8] parancsnak a `-o` paraméterével adhatóak át. Hozzá kell azonban tennünk, hogy létezik egy másik probléma, amit gyakran ezzel tévesztenek össze, amikor az NFS szerverek és kliensek nem ugyanabban a hálózatban találhatóak.
Ilyen esetekben mindenképpen _gyõzõdjünk meg róla_, hogy az útválasztók rendesen továbbküldik a mûködéshez szükséges UDP információkat, különben nem sokat tudunk tenni a megoldás érdekében. A most következõ példákban a `gyorsvonat` lesz a nagyteljesítményû munkaállomás (felület) neve, illetve a `freebsd` pedig a gyengébb teljesítményû Ethernet kártyával rendelkezõ FreeBSD rendszer (felület) neve. A szerveren az [.filename]#/osztott# nevû könyvtárat fogjuk NFS állományrendszerként exportálni (lásd man:exports[5]), amelyet majd a [.filename]#/projekt# könyvtárba fogunk csatlakoztatni a kliensen. Minden esetben érdemes lehet még megadnunk a `hard` vagy `soft`, illetve `bg` opciókat is. Ebben a példában a FreeBSD rendszer (`freebsd`) lesz a kliens, és az [.filename]#/etc/fstab# állományában így szerepel az exportált állományrendszer: [.programlisting] .... gyorsvonat:/osztott /projekt nfs rw,-r=1024 0 0 .... És így tudjuk manuálisan csatlakoztatni: [source,shell] .... # mount -t nfs -o -r=1024 gyorsvonat:/osztott /projekt .... Itt a FreeBSD rendszer lesz a szerver, és a `gyorsvonat` [.filename]#/etc/fstab# állománya így fog kinézni: [.programlisting] .... freebsd:/osztott /projekt nfs rw,-w=1024 0 0 .... Manuálisan így csatlakoztathatjuk az állományrendszert: [source,shell] .... # mount -t nfs -o -w=1024 freebsd:/osztott /projekt .... Szinte az összes 16 bites Ethernet kártya képes mûködni a fenti írási vagy olvasási korlátozások nélkül is. A kíváncsibb olvasók számára eláruljuk, hogy pontosan miért is következik be ez a hiba, ami egyben arra is magyarázatot ad, hogy miért nem tudjuk helyrehozni. Az NFS általában 8 kilobyte-os "blokkokkal" dolgozik (habár kisebb méretû darabkákat is tud készíteni).
Mivel az Ethernet által kezelt legnagyobb méret nagyjából 1500 byte, ezért az NFS "blokkokat" több Ethernet csomagra kell osztani - még olyankor is, ha ez a program felsõbb rétegeiben osztatlan egységként látszik - ezt aztán fogadni kell, összerakni és _nyugtázni_ mint egységet. A nagyteljesítményû munkaállomások a szabvány által még éppen megengedett szorossággal képesek ontani magukból az egy egységhez tartozó csomagokat, közvetlenül egymás után. A kisebb, gyengébb teljesítményû kártyák esetében azonban az egymáshoz tartozó, késõbb érkezõ csomagok ráfutnak a korábban megkapott csomagokra még pontosan azelõtt, hogy elérnék a gépet, így az egységek nem állíthatóak össze vagy nem nyugtázhatóak. Ennek eredményeképpen a munkaállomás egy adott idõ múlva megint próbálkozik, de ismét az egész 8 kilobyte-os blokkot küldi el, ezért ez a folyamat a végtelenségig ismétlõdik. Ha a küldendõ egységek méretét az Ethernet által kezelt csomagok maximális mérete alá csökkentjük, akkor biztosak lehetünk benne, hogy a teljes Ethernet csomag egyben megérkezik és nyugtázódik, így elkerüljük a holtpontot. A nagyteljesítményû munkaállomások természetesen továbbra is küldhetnek a PC-s rendszerek felé túlfutó csomagokat, de egy jobb kártyával az ilyen túlfutások nem érintik az NFS által használt "egységeket". Amikor egy ilyen túlfutás bekövetkezik, az érintett egységet egyszerûen újra elküldik, amelyet a rákövetkezõ alkalommal nagy valószínûséggel már tudunk rendesen fogadni, összerakni és nyugtázni. [[network-nis]] == Hálózati információs rendszer (NIS/YP) === Mi ez? A hálózati információs szolgáltatást (Network Information Service, avagy NIS) a Sun Microsystems fejlesztette ki a UNIX(R) (eredetileg SunOS(TM)) rendszerek központosított karbantartásához. Mostanra már lényegében ipari szabvánnyá nõtte ki magát, hiszen az összes nagyobb UNIX(R)-szerû rendszer (a Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, FreeBSD stb.) támogatja a NIS használatát. 
NIS was formerly known as Yellow Pages, but because of various legal issues Sun later had to change the name. The old term (and the abbreviation yp) can still be seen here and there.

It is an RPC-based client/server system that allows a group of machines within a single NIS domain to share a common set of configuration files. With it, the system administrator can set up NIS clients from a single location, adding, removing or modifying only the minimum amount of configuration data. It is similar to the domain system of Windows NT(R): although their internal implementations differ considerably, the basic functions are comparable.

=== Terms and Processes You Should Know

Setting up NIS on FreeBSD involves several terms and important user programs, whether you want to configure a NIS server or just a NIS client:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term
| Description

|NIS domain name
|The master NIS servers and all of their clients (including the slave servers) share a NIS domain name. It is similar to the domain names used by Windows NT(R), but the NIS domain name has nothing whatsoever to do with DNS name resolution.

|rpcbind
|Needed to enable RPC (Remote Procedure Call, one of the network protocols used by NIS). If rpcbind is not running, it is impossible to run either a NIS server or a NIS client.

|ypbind
|"Binds" a NIS client to its NIS server. It takes the NIS domain name from the system and, using RPC, connects to the server. ypbind is the core of the client-server communication in a NIS environment. If ypbind dies on a client machine, it will not be able to access the NIS server.

|ypserv
|Should only run on NIS servers, since this is the NIS server process itself.
If man:ypserv[8] dies, the server will no longer be able to respond to NIS requests (fortunately, a slave server can take over). Some versions of NIS (but not the one shipped with FreeBSD) do not try to connect to another server if the server currently in use dies. Often, the only remedy in this case is to restart the server process (or even the whole server), or to restart ypbind on the client.

|rpc.yppasswdd
|Another process that should only run on the master NIS servers. This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the master NIS server and change their passwords there.
|===

=== How Does It Work?

There are three types of hosts in a NIS environment: master servers, slave servers and clients. The servers act as the central repository for the hosts' configuration information. The master servers hold the authoritative copy of this information, while the slave servers mirror it for redundancy. The clients rely on the servers to supply this information to them.

Information in many files can be shared in this manner. For example, the [.filename]#master.passwd#, [.filename]#group# and [.filename]#hosts# files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found locally in one of these files, it queries the NIS server instead.

==== Machine Types

* A _NIS master server_. This server, analogous to a Windows NT(R) primary domain controller, maintains all the files used by the NIS clients. The [.filename]#passwd#, [.filename]#group# and the other similar files live on this master server.
+
[NOTE]
====
A machine can be a master server for more than one NIS domain at once.
However, we will not cover that possibility here, since we assume a relatively small-scale NIS environment.
====

* _NIS slave servers_. Similar to the Windows NT(R) backup domain controllers, the NIS slave servers maintain copies of the data stored on the NIS master server. NIS slave servers provide the redundancy that is needed mostly in important environments. They also help to balance the load of the master server: NIS clients always attach to the NIS server whose reply they receive first, and that may well be a slave server.

* _NIS clients_. NIS clients, like most Windows NT(R) workstations, authenticate against the NIS server (the Windows NT(R) domain controller in the Windows NT(R) workstation case) to log on.

=== Using NIS/YP

This section sets up a sample NIS environment.

==== Planning

Let us assume that we are the administrators of a small university lab. The lab, which consists of 15 FreeBSD machines, currently has no centralized point of administration whatsoever. Each machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These files are kept in sync with each other only through manual intervention: if we add a user to the lab, `adduser` must be run on all 15 machines. Clearly, this cannot go on, so we have decided to convert the lab to NIS, using two of the machines as servers.
Accordingly, the lab now looks something like this:

[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Machine name
| IP address
| Machine role

|`ellington`
|`10.0.0.2`
|NIS master

|`coltrane`
|`10.0.0.3`
|NIS slave

|`basie`
|`10.0.0.4`
|faculty workstation

|`bird`
|`10.0.0.5`
|client machine

|`cli[1-11]`
|`10.0.0.[6-17]`
|the other client machines
|===

If you have no prior experience in setting up a NIS scheme, it is a good idea to think through how you want to do it first. Regardless of the size of your network, there are a few decisions that need to be made.

===== Choosing a NIS Domain Name

This is not the "domain name" you may be used to; its exact name is "NIS domain name". When a client requests some information, it also includes the name of the NIS domain it is part of. This is how multiple servers on one network can tell which one should answer which request. Think of the NIS domain name as the common name of a group of hosts that are related in some way.

Some organizations choose their Internet domain name as their NIS domain name. This is basically not recommended, as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within your network, and it helps if it describes the group of machines it represents. For example, the business department of Kis Kft. could be put in the "kis-uzlet" NIS domain. In this example, we have chosen the name `proba-tartomany`.

However, most operating systems (SunOS(TM) among them) use their NIS domain name as their Internet domain name as well. If one or more machines on your network have this restriction, you _must_ use the Internet domain name as your NIS domain name.

===== Physical Server Requirements

There are several things to keep in mind when choosing a machine to use as a NIS server.
One of the unfortunate things about NIS is the level of dependency the clients have on the servers. If a client cannot contact a server for its NIS domain, very often the machine becomes unusable. Without user and group information, most systems will simply hang for a while. With this in mind, make sure to choose a machine that will not be rebooted frequently, and one that is not used for any heavy-duty work. The NIS server that best suits this purpose is in fact a machine whose sole task is to serve NIS requests. If your network is not that heavily used, it is acceptable to run other services on the NIS server machine as well; just keep in mind that if the NIS service becomes unavailable, it will affect _all_ of your NIS clients adversely.

==== NIS Servers

The canonical copy of all the information stored in NIS is kept on a single machine called the master NIS server. The databases used to store this information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/tartománynév#, where [.filename]#tartománynév# is the name of the NIS domain being served. A single NIS server can support several domains at once, so it is possible to have several such directories, one for each supported domain. Each domain has its own independent set of maps.

NIS master and slave servers handle all NIS requests with the `ypserv` daemon. `ypserv` is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting the data from the database back to the client.

===== Setting Up a NIS Master Server

Setting up a master NIS server is relatively straightforward; how difficult it gets depends on your needs. FreeBSD supports NIS out of the box.
All you have to do is add the following lines to [.filename]#/etc/rc.conf#, and FreeBSD will take care of the rest.

[.procedure]
====
. This line sets the NIS domain name to `proba-tartomany`, as chosen above, when the network is set up (for example, after rebooting):
+
[.programlisting]
....
nisdomainname="proba-tartomany"
....
. This tells FreeBSD to start up the NIS server the next time the networking is started:
+
[.programlisting]
....
nis_server_enable="YES"
....
. This enables the `rpc.yppasswdd` daemon which, as mentioned above, allows users to change their NIS password directly from the client machines:
+
[.programlisting]
....
nis_yppasswdd_enable="YES"
....
====

[NOTE]
====
Depending on your particular NIS configuration, additional entries may be needed. We will return to this <> later.
====

After setting this up, run `/etc/netstart` as the superuser; it sets everything up based on the contents of [.filename]#/etc/rc.conf#. Before initializing the NIS maps, start the ypserv daemon manually:

[source,shell]
....
# /etc/rc.d/ypserv start
....

===== Initializing the NIS Maps

The _NIS maps_ are essentially databases stored in the [.filename]#/var/yp# directory. They are generated from the configuration files found in [.filename]#/etc# on the NIS master, with one exception: [.filename]#/etc/master.passwd#. There is a good reason for this: we do not want to propagate the passwords of `root` and the other important accounts to the entire NIS domain. Therefore, before initializing the NIS maps, do this:

[source,shell]
....
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
....
Remove all the system accounts (`bin`, `tty`, `kmem`, `games`, etc.), as well as any other accounts that you do not want to propagate to the NIS clients (for example `root` and any other account with UID 0, the superuser).

[NOTE]
====
Make sure that [.filename]#/var/yp/master.passwd# is neither group- nor world-readable (mode 600)! Use `chmod` to adjust this, if appropriate.
====

When you have finished, it is time to initialize the NIS maps. FreeBSD includes a script named `ypinit` to do this for you (see its manual page for more information). Note that this script is available on most UNIX(R) operating systems, but not all. On Digital UNIX/Compaq Tru64 UNIX it is called `ypsetup`. Because we are generating maps for a NIS master, pass the `-m` option to `ypinit`. To generate the NIS maps, assuming you have already performed the steps above, run:

[source,shell]
....
ellington# ypinit -m proba-tartomany
Server Type: MASTER Domain: proba-tartomany
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n]  n
Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line.
When you are done with the list, type a <control D>.
master server   :  ellington
next host to add:  coltrane
next host to add:  ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct?  [y/n: y]  y

[ .. generating the maps .. ]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

`ypinit` creates [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. When created, this file assumes that you are operating a single-server NIS environment with only FreeBSD machines. Since `proba-tartomany` has a slave server as well, you must edit [.filename]#/var/yp/Makefile#:

[source,shell]
....
ellington# vi /var/yp/Makefile
....

and comment out the line

[.programlisting]
....
NOPUSH = "True"
....

(if it is not commented out already).

===== Setting Up a NIS Slave Server

Setting up a NIS slave server is even simpler than setting up the master. Log on to the slave server and edit its [.filename]#/etc/rc.conf# exactly as before.
The only difference is that we now must use the `-s` option when running `ypinit` (as in slave). The `-s` option also requires the name of the NIS master to be passed, so the exact command line looks something like this:

[source,shell]
....
coltrane# ypinit -s ellington proba-tartomany

Server Type: SLAVE Domain: proba-tartomany Master: ellington

Creating an YP server will require that you answer a few
questions. Questions will all be asked at the beginning of
the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n]  n

Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred

coltrane has been setup as an YP slave server without any errors.
Don't forget to update map ypservers on ellington.
....

You should now have a directory called [.filename]#/var/yp/proba-tartomany#. Copies of the NIS master server's maps are stored in it. You must make sure that these stay up to date at all times. The following [.filename]#/etc/crontab# entries on your slave servers take care of exactly that:

[.programlisting]
....
20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuid
....

These two lines force the slave servers to keep their maps in sync with the maps on the master server. Although these entries are not mandatory for proper operation, since the master server automatically pushes any changes to its slaves, password information is so vital to systems depending on the server that forcing the updates explicitly is recommended. This matters more on busy networks, where map updates might not always complete.

Now run `/etc/netstart` on the slave servers as well, which starts the NIS server there.

==== NIS Clients

A NIS client establishes what is called a binding to a particular NIS server using the `ypbind` daemon. `ypbind` checks the system's default domain (as set by the `domainname` command) and begins broadcasting RPC requests on the local network. These requests specify the name of the domain `ypbind` is attempting to bind to.
If a server that has been configured to serve the requested domain receives one of these broadcasts, it responds to `ypbind`, which records the server's address. If there are several servers available (a master and several slaves, for example), `ypbind` records the address of the first one to respond. From that point on, the client directs all of its NIS requests to that server. `ypbind` occasionally "pings" the server to make sure it is up and running. If it fails to receive a reply to one of its pings within a reasonable amount of time, `ypbind` unbinds from the domain and begins looking for another server.

===== Setting Up a NIS Client

Setting up a FreeBSD machine to be a NIS client is fairly straightforward.

[.procedure]
====
. Edit [.filename]#/etc/rc.conf# and add the following lines to set the NIS domain name and start `ypbind`:
+
[.programlisting]
....
nisdomainname="proba-tartomany"
nis_client_enable="YES"
....
+
. To import all possible password entries from the NIS server, remove all user accounts from your [.filename]#/etc/master.passwd# and use `vipw` to add the following line to the end of the file:
+
[.programlisting]
....
+:::::::::
....
+
[NOTE]
======
This line grants anyone with a valid account in the NIS server's password maps access to this machine. There are many ways to configure your NIS client by changing this line. See the <> for more information. For more detailed reading, refer to O'Reilly's book `Managing NFS and NIS`.
======
+
[NOTE]
======
You should keep at least one local account (i.e. one not imported via NIS) in your [.filename]#/etc/master.passwd#, and this account should also be a member of the group `wheel`. If there is something wrong with NIS, this account can be used to log in to the machine remotely, become `root` and fix the problems.
======
+
. To import all possible group entries from the NIS server into your [.filename]#/etc/group#, add this line:
+
[.programlisting]
....
+:*::
....
====

After completing these steps, you should be able to run `ypcat passwd` and see the NIS server's password map.

=== NIS Security

In general, any remote user can issue an RPC to man:ypserv[8] and retrieve the contents of your NIS maps, provided that remote user knows your domain name. To prevent such unauthorized transactions, man:ypserv[8] supports a feature called "securenets", which can be used to restrict access to a given set of hosts. At startup, man:ypserv[8] attempts to load the securenets information from the file [.filename]#/var/yp/securenets#.

[NOTE]
====
This path can be changed with the `-p` option.
====

This file contains entries that consist of a network address and a network mask separated by white space. Lines starting with "#" are considered comments. A sample securenets file might look like this:

[.programlisting]
....
# allow connections from local host -- mandatory!
127.0.0.1     255.255.255.255
# allow connections from the 192.168.128.0 network:
192.168.128.0 255.255.255.0
# allow connections from the lab machines with addresses
# between 10.0.0.0 and 10.0.15.255:
10.0.0.0      255.255.240.0
....

If man:ypserv[8] receives a request from an address that matches one of these rules, it processes the request normally. If the address fails to match, the request is ignored and a warning message is logged. If the [.filename]#/var/yp/securenets# file does not exist, `ypserv` allows connections from any host.

`ypserv` also has support for Wietse Venema's TCP Wrapper package.
This allows the administrator to use the TCP Wrapper configuration files for access control instead of [.filename]#/var/yp/securenets#.

[NOTE]
====
While both of these access control mechanisms provide some security, they, like the privileged port test, are vulnerable to "IP spoofing" attacks. All NIS-related traffic should therefore be blocked at your firewall.

Servers using [.filename]#/var/yp/securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts, and/or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, others may force the retirement of the client systems in question or the abandonment of [.filename]#/var/yp/securenets#. Using [.filename]#/var/yp/securenets# on a server with such an archaic implementation of TCP/IP is a really bad idea and will lead to the loss of NIS functionality for large parts of your network.

The use of the TCP Wrapper package also increases the latency of your NIS server. The additional delay may be long enough to cause timeouts in client programs, especially on busy networks or with slow NIS servers. If one or more of your client systems suffers from these symptoms, you should convert the client systems in question into NIS slave servers and force them to bind to themselves.
====

=== Barring Some Users from Logging On

In our lab, there is a machine called `basie` that is supposed to be the department's only workstation. We do not want to take this machine out of the NIS domain, yet the [.filename]#passwd# file on the master NIS server contains accounts for both students and faculty. What can we do?
There is a way to bar specific users from logging on to a machine, even if they are present in the NIS database. To do this, all you need to do is add a `-felhasználónév` line to the end of [.filename]#/etc/master.passwd# on the client machine, where _felhasználónév_ is the username of the user you wish to bar. This is best done with `vipw`, since `vipw` sanity-checks your changes to [.filename]#/etc/master.passwd# and automatically rebuilds the password database when you finish editing. For example, to bar the user `bill` from logging on to `basie`, we would do this:

[source,shell]
....
basie# vipw
[add -bill to the end, exit]
vipw: rebuilding the database...
vipw: done

basie# cat /etc/master.passwd

root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin
operator:*:2:5::0:0:System &:/:/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin
+:::::::::
-bill

basie#
....

[[network-netgroups]]
=== Using Netgroups

The method shown in the previous section works reasonably well if you need special rules for a very small number of users and/or machines.
On larger networks, you almost _certainly_ will forget to bar some users from logging on to sensitive machines, or you may even have to modify each machine separately, thereby losing the most important benefit of NIS: _centralized_ administration.

The NIS developers' solution to this problem is called _netgroups_. Their purpose and semantics can be compared to the normal groups used by UNIX(R) file systems. The main differences are the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups.

Netgroups were developed to handle large, complex networks with hundreds of users and machines. On one hand, this is a Good Thing if you are stuck with such a situation. On the other hand, this complexity makes it almost impossible to explain netgroups with really simple examples. The example used in the remainder of this section illustrates this problem.

Let us assume that the successful introduction of NIS in your lab has caught your superiors' interest. Your next job is to extend your NIS domain to cover some of the other machines on campus. The two tables below contain the names of the new users and the new machines, along with brief descriptions of them.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| User names
| Description

|`alpha`, `beta`
|ordinary employees of the IT department

|`charlie`, `delta`
|the new apprentices of the IT department

|`echo`, `foxtrott`, `golf`, ...
|ordinary employees

|`able`, `baker`, ...
|the current interns
|===

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Machine names
| Description

|`haboru`, `halal`, `ehseg`, `szennyezes`
|Our most important servers. Only the IT department employees may log on to these machines.

|`buszkeseg`, `kapzsisag`, `irigyseg`, `harag`, `bujasag`, `lustasag`
|Less important servers.
All members of the IT department are allowed to log on to these machines.

|`egy`, `ketto`, `harom`, `negy`, ...
|Ordinary workstations. Only the _real_ employees may log on to these machines.

|`szemetes`
|A very old machine without any critical data on it. Even the interns are allowed to hammer on it.
|===

If you tried to implement these restrictions by blocking each user separately, you would have to add one `-felhasználó` line to each system's [.filename]#passwd# file for every user barred from that system. If you forget just one entry, you could be in trouble. It may be feasible to do this correctly during the initial setup, but you _will_ eventually forget to add the lines for new users. After all, Murphy was an optimist.

Handling this situation with netgroups offers several advantages. Each user need not be handled separately: you assign a user to one or more netgroups, and allow or forbid logins for all members of the netgroup at once. If you add a new machine to the network, you only have to define login restrictions for the netgroups. If a new user is added, you only have to add the user to one or more netgroups. These changes are independent of each other; there is no more need for "every combination of every user and every machine". If your NIS setup is planned carefully beforehand, you only have to modify exactly one central configuration file to grant or deny access to machines.

The first step is the initialization of the NIS map containing the netgroups. FreeBSD's man:ypinit[8] does not create this map by default, but its NIS implementation will support it once it has been created. To create an empty map, simply type:

[source,shell]
....
ellington# vi /var/yp/netgroup
....
Then start adding content. In our example, we need at least four netgroups: IT employees, IT apprentices, normal employees and interns.

[.programlisting]
....
IT_DOLG     (,alpha,proba-tartomany)    (,beta,proba-tartomany)
IT_UJDOLG   (,charlie,proba-tartomany)  (,delta,proba-tartomany)
FELHASZNALO (,echo,proba-tartomany)     (,foxtrott,proba-tartomany) \
            (,golf,proba-tartomany)
OSZTONDIJAS (,able,proba-tartomany)     (,baker,proba-tartomany)
....

`IT_DOLG`, `IT_UJDOLG`, etc. are the names of the netgroups. Each bracketed group adds one or more user accounts to it. The three fields inside a group are:

. The name of the host(s) where the following items are valid. If no hostname is given here, the entry is valid on all hosts. If you do specify a hostname, your reward will be utter darkness, horror and total confusion.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup, in case you are one of those unlucky people who have to administer more than one NIS domain.

Each of these fields can contain wildcards. See man:netgroup[5] for details.

[NOTE]
====
Netgroup names longer than 8 characters should not be used, especially if machines running other operating systems are members of your NIS domain. The names are case-sensitive; writing netgroup names in capital letters is an easy way to distinguish between the names of users, machines and netgroups.

Some (non-FreeBSD) NIS clients cannot handle netgroups containing a large number of entries. For example, some older versions of SunOS(TM) start to cause trouble if a netgroup contains more than 15 _entries_.
Such limits can be circumvented by creating several sub-netgroups with 15 users or fewer each, and a real netgroup consisting of the sub-netgroups:

[.programlisting]
....
NAGYCSP1    (,joe1,tartomany)  (,joe2,tartomany)  (,joe3,tartomany)  [...]
NAGYCSP2    (,joe16,tartomany) (,joe17,tartomany) [...]
NAGYCSP3    (,joe31,tartomany) (,joe32,tartomany)
NAGYCSOPORT NAGYCSP1 NAGYCSP2 NAGYCSP3
....

The same process is recommended if more than 225 users are needed within a single netgroup.
====

Activating and distributing the new NIS map is easy:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

This generates the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost#, and [.filename]#netgroup.byuser#. Use man:ypcat[1] to check if the new NIS maps are available:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command produces no output unless host-specific netgroups were created. The third command lists the netgroups for each user.

The client setup is quite simple. To configure the server `haboru`, start man:vipw[8] and replace the line

[.programlisting]
....
+:::::::::
....

with

[.programlisting]
....
+@IT_DOLG:::::::::
....

From now on, only the users defined in the netgroup `IT_DOLG` will be imported into ``haboru``'s password database, and only these users are allowed to log in. Unfortunately, this limitation also applies to the `~` function of the shell and to all routines that map between user names and numerical user IDs.
In other words, `cd ~user` will not work, `ls -l` will show numerical IDs instead of user names in its output, and `find . -user joe -print` will fail with the message `No such user`. The fix is to import all user entries into the server _without allowing them to log in_. This can be done by adding another line to [.filename]#/etc/master.passwd#. The line should look like this: `+:::::::::/sbin/nologin`, meaning "import all entries, but replace the shell in the imported entries with [.filename]#/sbin/nologin#". Any field of a `passwd` entry can be replaced by specifying a default value for it in [.filename]#/etc/master.passwd#.

[WARNING]
====
Make sure that the line `+:::::::::/sbin/nologin` is placed after `+@IT_DOLG:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/sbin/nologin# as their login shell.
====

After this change, only one NIS map needs to be modified whenever a new employee joins the department. A similar approach can be used for the less important servers, by replacing the old `+:::::::::` entry in their local [.filename]#/etc/master.passwd# with something like this:

[.programlisting]
....
+@IT_DOLG:::::::::
+@IT_UJDOLG:::::::::
+:::::::::/sbin/nologin
....

The corresponding lines for the ordinary workstations are:

[.programlisting]
....
+@IT_DOLG:::::::::
+@FELHASZNALOK:::::::::
+:::::::::/sbin/nologin
....

Everything runs smoothly until a few weeks later, when the policy changes again: the IT department starts hiring interns. The IT interns are allowed to use the workstations and the less important servers, while the IT apprentices may now also log in to the main servers.
So we create a new netgroup called `IT_OSZTONDIJAS`, add the new IT interns to it, and start changing the configuration on each and every machine... As the old saying goes: "Errors in centralized planning lead to global mess."

NIS tries to prevent situations like this by allowing new netgroups to be created from other netgroups. One possibility is the creation of role-based netgroups. For example, a netgroup called `NAGYSRV` could be created for the login restrictions of the important servers, another netgroup called `KISSRV` for the less important servers, and a third netgroup called `MUNKA` for the workstations. Each of these netgroups contains the netgroups that are allowed to access the machines. The NIS netgroup map would now look like this:

[.programlisting]
....
NAGYSRV  IT_DOLG  IT_UJDOLG
KISSRV   IT_DOLG  IT_UJDOLG  IT_OSZTONDIJAS
MUNKA    IT_DOLG  IT_OSZTONDIJAS  FELHASZNALOK
....

This method of defining login restrictions works reasonably well when groups of machines with identical restrictions can be identified. Unfortunately, this is the exception, not the rule. Most of the time, the login restrictions have to be defined on a per-machine basis.

Machine-specific netgroup definitions are thus one way to satisfy the requirements of the policy outlined above. In this scenario, the [.filename]#/etc/master.passwd# file on each machine starts with two "+" lines. The first of them refers to a netgroup with the accounts allowed to log in to this machine, and the second assigns the shell [.filename]#/sbin/nologin# to all other accounts. It is a good idea here to use the "ALL-CAPS" version of the machine name as the name of the corresponding netgroup:

[.programlisting]
....
+@GÉPNÉV:::::::::
+:::::::::/sbin/nologin
....

Once this task has been completed for every machine, the local versions of [.filename]#/etc/master.passwd# never have to be modified again. All further changes can be handled through the NIS map. Here is one possible version of the netgroup map for this scenario, spiced up with a couple of refinements:

[.programlisting]
....
# Define the groups of users first:
IT_DOLG        (,alpha,proba-tartomany)   (,beta,proba-tartomany)
IT_UJDOLG      (,charlie,proba-tartomany) (,delta,proba-tartomany)
TANSZ1         (,echo,proba-tartomany)    (,foxtrott,proba-tartomany)
TANSZ2         (,golf,proba-tartomany)    (,hotel,proba-tartomany)
TANSZ3         (,india,proba-tartomany)   (,juliet,proba-tartomany)
IT_OSZTONDIJAS (,kilo,proba-tartomany)    (,lima,proba-tartomany)
D_OSZTONDIJAS  (,able,proba-tartomany)    (,baker,proba-tartomany)
#
# Now define groups based on roles:
FELHASZNALOK   TANSZ1   TANSZ2   TANSZ3
NAGYSRV        IT_DOLG  IT_UJDOLG
KISSRV         IT_DOLG  IT_UJDOLG  IT_OSZTONDIJAS
MUNKA          IT_DOLG  IT_OSZTONDIJAS  FELHASZNALOK
#
# Now the groups for special tasks:
# echo and golf may access the anti-virus machine:
VEDELEM        IT_DOLG  (,echo,proba-tartomany)  (,golf,proba-tartomany)
#
# Machine-based netgroups
# Our main servers:
HABORU         NAGYSRV
EHSEG          NAGYSRV
# The user india needs access to this one:
SZENNYEZES     NAGYSRV  (,india,proba-tartomany)
#
# This one is really important and needs strict restrictions:
HALAL          IT_DOLG
#
# The anti-virus machine mentioned above:
EGY            VEDELEM
#
# Restrict this machine to a single user:
KETTO          (,hotel,proba-tartomany)
# [...and so on for the other groups]
....

If some kind of database is used to manage the user accounts, the first part of the map can even be generated from the database's queries. This way, new users automatically get access to the machines.
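Generating the user part of the map from a database export can be sketched with a short script. This is only an illustrative sketch under assumptions: the input file [.filename]#users.txt# and its `group user` line format are hypothetical stand-ins for whatever the account database actually exports, and the domain name `proba-tartomany` is taken from the example above.

[source,shell]
....
# Illustrative only: build one netgroup line per group from "group user"
# pairs, as they might be exported from an account database.
cat > users.txt <<'EOF'
IT_DOLG alpha
IT_DOLG beta
TANSZ1 echo
TANSZ1 foxtrott
EOF

# Collect the members of each group, then print one netgroup entry per group.
awk '{ members[$1] = members[$1] " (," $2 ",proba-tartomany)" }
     END { for (g in members) print g members[g] }' users.txt
....

The resulting lines (for example `IT_DOLG (,alpha,proba-tartomany) (,beta,proba-tartomany)`) can then be pasted into [.filename]#/var/yp/netgroup# before running `make`.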
One final word of caution: it is not always advisable to use machine-based netgroups. When deploying a couple of dozen or even hundreds of identical machines for student labs, role-based netgroups should be used instead of machine-based ones to keep the size of the NIS maps within reasonable limits.

=== Things to keep in mind

There are still a couple of things that have to be done differently now that we are in an NIS environment.

* Every time a new user is to be added to the lab, it must be added _only_ on the master NIS server, and _the NIS maps must be rebuilt_. If this is forgotten, the new user will not be able to log in anywhere except on the NIS master. For example, to add the new user `jsmith` to the lab:
+
[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make proba-tartomany
....
+
`adduser jsmith` can also be run instead of `pw useradd jsmith`.
* _Do not put administrative accounts into the NIS maps_. Administrative accounts should not be propagated to machines whose users are not supposed to have access to those accounts.
* _Keep the NIS master and slave servers secure, and minimize their downtime_. If somebody either hacks or simply turns off these machines, they effectively prevent everyone from logging in to the lab.
+
This is the chief weakness of any centralized administration system. If the NIS servers are not protected adequately, there will be a lot of angry users!

=== NIS v1 compatibility

FreeBSD's ypserv can, to a certain extent, also serve NIS v1 clients.
FreeBSD's NIS implementation only uses the NIS v2 protocol, but other implementations also support the v1 protocol for backwards compatibility with older systems. The ypbind daemons supplied with such systems will try to establish a binding to an NIS v1 server even though they may never actually need it (and they may keep broadcasting in search of one even after they have received a response from a v2 server). Note that while ypserv handles normal client calls, it cannot transfer maps to v1 clients. Consequently, it cannot act as a master or slave server together with older NIS servers that only support the v1 protocol. Fortunately, such servers are hardly in use anymore.

[[network-nis-server-is-client]]
=== NIS servers that are also NIS clients

Care must be taken when running ypserv in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down while the others depend on it. Eventually all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable, and the failure mode is still present, since the servers might bind to each other all over again.

A client can be forced to bind to a particular server by running `ypbind` with the `-S` flag. To avoid doing this manually every time the NIS server is rebooted, add the following lines to [.filename]#/etc/rc.conf#:

[.programlisting]
....
nis_client_enable="YES" # run the client as well
nis_client_flags="-S NIS-domain,server"
....
See man:ypbind[8] for further details.

=== Password formats

One of the most common issues that people run into when setting up NIS is password format compatibility. If the NIS server uses DES-encrypted passwords, it will only support clients that also use DES. For example, if there are Solaris(TM) NIS clients on the network, the passwords almost certainly have to be DES-encrypted.

To check which format the servers and clients use, look at [.filename]#/etc/login.conf#. If most hosts use DES encryption, the `default` class will contain an entry like this:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

Other possible values for the `passwd_format` capability include `blf` and `md5` (for Blowfish- and MD5-encrypted passwords, respectively).

If changes were made to [.filename]#/etc/login.conf#, the login capability database must also be rebuilt, which is done by running the following command as `root`:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of the passwords already in [.filename]#/etc/master.passwd# will not be updated until each user changes their password _after_ the login capability database is rebuilt.
====

Next, to make sure that passwords are encrypted in the chosen format, check the `crypt_default` line in [.filename]#/etc/auth.conf#, which lists the selectable password formats in order of use. All that needs to be done here is to put the chosen format first in the list. For example, when DES-encrypted passwords are to be used, the entry would be:

[.programlisting]
....
crypt_default	=	des blf md5
....
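The format of an existing hash can be recognized from its prefix: MD5 hashes start with `$1$`, Blowfish hashes with `$2`, while traditional DES hashes have no `$` prefix at all. The following sketch illustrates the idea; the sample strings are fabricated placeholders, not real password hashes.

[source,shell]
....
# Illustrative only: classify a password hash by its prefix.
# The sample strings below are made up, not real hashes.
classify_hash() {
	case "$1" in
		'$1$'*) echo md5 ;;
		'$2'*)  echo blf ;;
		*)      echo des ;;
	esac
}

classify_hash '$1$abcdefgh$XXXXXXXXXXXXXXXXXXXXXX'   # md5
classify_hash '$2a$04$XXXXXXXXXXXXXXXXXXXXXX'        # blf
classify_hash 'ab0123456789.'                        # des
....

Running this against the second field of the entries in [.filename]#/etc/master.passwd# quickly shows whether the formats on the hosts agree.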
By following the steps above on all the FreeBSD-based NIS servers and clients, we can be sure that they all agree on which password format is used within the network. If there is trouble authenticating NIS clients, this is a pretty good place to start looking for the problem. Remember: to deploy an NIS server on a heterogeneous network, DES will probably have to be used on all systems, because it is the lowest common denominator.

[[network-dhcp]]
== Automatic network configuration (DHCP)

=== What is DHCP?

DHCP, the Dynamic Host Configuration Protocol, describes the means by which a system can connect to a network and obtain the information necessary for communication on that network. FreeBSD versions prior to 6.0 include the ISC (Internet Systems Consortium) implementation of the DHCP client (man:dhclient[8]), while later versions use the `dhclient` taken from OpenBSD 3.7. All information in this section regarding `dhclient` applies equally to both the ISC and the OpenBSD DHCP clients. The DHCP server is the one from the ISC distribution.

=== What this section covers

This section describes the client-side components of the ISC and OpenBSD DHCP clients and the server-side components of the ISC DHCP system. The client-side program, `dhclient`, comes integrated within FreeBSD, while the server-side portion is available from the package:net/isc-dhcp31-server[] port. Beyond the references below, the man:dhclient[8], man:dhcp-options[5], and man:dhclient.conf[5] manual pages provide further information on the topic.

=== How it works

When `dhclient`, the DHCP client, is started on a client machine, it begins broadcasting requests for the information it needs to configure itself.
By default, these requests are sent on UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as the netmask, the default gateway, and the addresses of the DNS servers. All of this information comes in the form of a DHCP "lease", which is only valid for a certain amount of time (configured by the maintainer of the DHCP server). This way, IP addresses no longer claimed by any client are automatically reclaimed after a while.

DHCP clients can obtain a great deal of information from the server. An exhaustive list can be found in man:dhcp-options[5].

=== FreeBSD integration

FreeBSD fully integrates the ISC or OpenBSD DHCP client, `dhclient` (depending on the FreeBSD version in use). DHCP client support is provided within both the installer and the base system, obviating the need for detailed network configuration on any network that runs a DHCP server. `dhclient` has been included in FreeBSD since version 3.2.

DHCP is also supported by sysinstall. When configuring a network interface within sysinstall, the second question is always: "Do you want to try DHCP configuration of the interface?" Answering affirmatively will execute `dhclient`, and if it succeeds, almost all of the network configuration is filled in automatically.

Two things must be done to have the system use DHCP:

* Make sure that the [.filename]#bpf# device is part of the kernel. If it is not, add the `device bpf` line to the kernel configuration file and rebuild the kernel.
For more information about building kernels, see crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].
+
The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel, so if that kernel is in use, a custom kernel does not have to be built in order to use DHCP.
+
[NOTE]
====
For the security conscious, it should be noted that [.filename]#bpf# is also the device that allows packet sniffers to work (although such programs can only be run as `root`). [.filename]#bpf# _is_ required to use DHCP, but if security is a major concern, [.filename]#bpf# should probably be removed from the kernel as long as it is not actually in use.
====
* Edit [.filename]#/etc/rc.conf# to include the following:
+
[.programlisting]
....
ifconfig_fxp0="DHCP"
....
+
[NOTE]
====
Be sure to replace `fxp0` with the name of the interface to be configured automatically, as described in crossref:config[config-network-setup,Setting Up Network Interface Cards].
====
+
If `dhclient` is installed in a different location on the system, or if additional flags need to be passed to `dhclient`, also include the following (editing as necessary):
+
[.programlisting]
....
dhclient_program="/sbin/dhclient"
dhclient_flags=""
....

The DHCP server, dhcpd, is available as part of the package:net/isc-dhcp31-server[] port. This port contains the ISC DHCP server and its documentation.

=== Files

* [.filename]#/etc/dhclient.conf#
+
`dhclient` requires a configuration file, [.filename]#/etc/dhclient.conf#. Typically the file contains only comments, as the defaults are reasonably sane. This configuration file is described by the man:dhclient.conf[5] manual page.
* [.filename]#/sbin/dhclient#
+
`dhclient` is statically linked and resides in [.filename]#/sbin#. The man:dhclient[8] manual page gives more information about it.
* [.filename]#/sbin/dhclient-script#
+
`dhclient-script` is the FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any modification by the user to function properly.
* [.filename]#/var/db/dhclient.leases#
+
The DHCP client keeps a database of valid leases in this file, which is written as a log. The man:dhclient.leases[5] manual page gives a slightly longer description.

=== Further reading

The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Additional resources on the topic are available at http://www.dhcp.org/[dhcp.org].

[[network-dhcp-server]]
=== Installing and configuring a DHCP server

==== What this section covers

This section provides information on how to configure a FreeBSD system to act as a DHCP server using the ISC (Internet Systems Consortium) implementation of the DHCP server. This server is not provided as part of FreeBSD, so the package:net/isc-dhcp31-server[] port must be installed first to provide this service. See crossref:ports[ports,Installing Applications: Packages and Ports] for more information on using the Ports Collection.

==== DHCP server installation

In order to configure a FreeBSD system as a DHCP server, the man:bpf[4] device must be present in the kernel. To do this, add the `device bpf` line to the kernel configuration file and rebuild the kernel. For more information about building kernels, see crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].

The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel supplied with FreeBSD, so a custom kernel does not necessarily have to be created in order to get DHCP working.
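Although the defaults in [.filename]#/etc/dhclient.conf# are usually fine, they can be overridden there when needed. The fragment below is only an illustrative sketch (the timeout value and the option list are arbitrary examples, not recommended settings); the full syntax is described in man:dhclient.conf[5]:

[.programlisting]
....
# Give up waiting for a DHCP server after 30 seconds
timeout 30;
# Only ask the server for the options actually used here
request subnet-mask, broadcast-address, routers, domain-name-servers;
....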
[NOTE]
====
For security-conscious users, note that [.filename]#bpf# is also the device that allows packet sniffers to work (although such programs require privileged access). [.filename]#bpf# _is_ required to use DHCP, but the very security conscious should remove [.filename]#bpf# from their kernel whenever this capability is not actually needed.
====

The next thing to do is edit the sample [.filename]#dhcpd.conf# installed by the package:net/isc-dhcp31-server[] port. By default, this is [.filename]#/usr/local/etc/dhcpd.conf.sample#; copy it to [.filename]#/usr/local/etc/dhcpd.conf# before making any changes.

==== Configuring the DHCP server

[.filename]#dhcpd.conf# is composed of declarations regarding subnets and hosts, and is perhaps most easily explained using an example:

[.programlisting]
....
option domain-name "minta.com";<.>
option domain-name-servers 192.168.4.100;<.>
option subnet-mask 255.255.255.0;<.>

default-lease-time 3600;<.>
max-lease-time 86400;<.>
ddns-update-style none;<.>

subnet 192.168.4.0 netmask 255.255.255.0 {
  range 192.168.4.129 192.168.4.254;<.>
  option routers 192.168.4.1;<.>
}

host mailhost {
  hardware ethernet 02:03:04:05:06:07;<.>
  fixed-address levelezes.minta.com;<.>
}
....

<.> This option specifies the default search domain that will be provided to clients. See man:resolv.conf[5] for more information.
<.> This option specifies a comma-separated list of DNS servers that the clients should use.
<.> The netmask that will be provided to clients.
<.> A client may request a lease of a specific duration; otherwise the server decides when the lease expires, using this value (in seconds).
<.> This is the maximum length of time that the server is willing to lease an address for.
The client may request and be granted a longer lease, but it will only be valid for at most `max-lease-time` seconds.
<.> This option specifies whether the DHCP server should attempt to update DNS when a lease is accepted or released. In the ISC implementation, this option is _required_.
<.> This defines the range from which IP addresses can be handed out to clients. Addresses between and including the endpoints are allocated to clients.
<.> The address of the default gateway sent to the clients.
<.> The hardware MAC address of the host (so that the DHCP server can recognize the sender of a request).
<.> Specifying this makes the host always receive the same IP address. A hostname may be used here, since the DHCP server will resolve the hostname itself before returning the lease information.

Once the editing of [.filename]#dhcpd.conf# is finished, the DHCP server can be enabled in [.filename]#/etc/rc.conf# by adding the following:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Replace the `dc0` interface name with the name of the interface (or interfaces, separated by whitespace) on which the DHCP server should listen for client requests. Then start the server by issuing the following command:

[source,shell]
....
# /usr/local/etc/rc.d/isc-dhcpd start
....

Should any changes be made to the configuration file in the future, it is important to note that, unlike most daemons, sending a `SIGHUP` signal to dhcpd does _not_ cause it to reload its configuration. Instead, the process has to be stopped with a `SIGTERM` signal and restarted using the command above.

==== Files

* [.filename]#/usr/local/sbin/dhcpd#
+
dhcpd is statically linked and resides in [.filename]#/usr/local/sbin#. The man:dhcpd[8] manual page installed with the port gives more detailed guidance on using dhcpd.
* [.filename]#/usr/local/etc/dhcpd.conf#
+
Before dhcpd can start operating, a configuration file, [.filename]#/usr/local/etc/dhcpd.conf#, is also needed. This file contains all the information needed to serve the clients properly, along with information regarding the operation of the server. This configuration file is described by the man:dhcpd.conf[5] manual page installed with the port.
* [.filename]#/var/db/dhcpd.leases#
+
The DHCP server keeps a record of the leases it has issued in this file, written in the form of a log. The man:dhcpd.leases[5] manual page installed with the port gives more detail.
* [.filename]#/usr/local/sbin/dhcrelay#
+
dhcrelay plays a role in more advanced environments where one DHCP server forwards requests from clients to another DHCP server on a separate network. If this functionality is required, install the package:net/isc-dhcp31-relay[] port. The man:dhcrelay[8] manual page installed with the port covers the details.

[[network-dns]]
== Domain Name System (DNS)

=== Overview

By default, FreeBSD ships with a version of BIND (Berkeley Internet Name Domain), a widespread implementation of the Domain Name System (DNS) protocol. The DNS protocol is what maps names to IP addresses and vice versa. For example, a query for `www.FreeBSD.org` returns the IP address of The FreeBSD Project's web server, while `ftp.FreeBSD.org` returns the IP address of the corresponding FTP server. Likewise, the opposite can happen: the hostname belonging to an IP address can be resolved as well. It is not necessary to run a name server on a system in order to perform DNS lookups.

FreeBSD currently ships with the BIND9 name server by default. The included version provides enhanced security features, a new file system layout, and automated man:chroot[8] configuration.
DNS on the Internet is coordinated through a somewhat complex system of authoritative roots known as Top Level Domains (TLD), as well as other name servers that host and cache individual domain information.

BIND is currently maintained by the Internet Systems Consortium (http://www.isc.org/[http://www.isc.org/]).

=== Terminology

To understand this document, some DNS-related terms have to be introduced.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term
| Definition

|Forward DNS
|Mapping of hostnames to IP addresses.

|Origin
|Refers to the domain covered by a particular zone file.

|named, BIND
|Common names for the BIND name server within FreeBSD.

|Resolver
|The process on a system through which machines on the network query a name server for zone information.

|Reverse DNS
|Mapping of IP addresses to hostnames.

|Root zone
|The root of the Internet zone hierarchy. All zones fall under the root zone, similarly to how all files in a file system fall under the root directory.

|Zone
|An individual domain, subdomain, or portion of the DNS administered by the same authority.
|===

Examples of zones:

* The root zone is usually referred to as `.` in documentation.
* `org.` is a Top Level Domain (TLD) under the root zone.
* `minta.org.` is a zone under the `org.` TLD.
* `1.168.192.in-addr.arpa` is a zone referencing all IP addresses in the `192.168.1.*` IP address space.

As one can see, hostnames become more specific towards the left. For example, `minta.org.` is more specific than `org.`, just as `org.` is more specific than the root zone.
The layout of hostnames is much like that of a file system: for example, the [.filename]#/dev# directory falls within the root, and so on.

=== Reasons to run a name server

Name servers generally come in two forms: authoritative name servers and caching name servers.

An authoritative name server is needed when:

* one wants to serve authoritative DNS information to the rest of the world;
* a domain, such as `minta.org`, is registered, and IP addresses need to be assigned to hostnames under it;
* an IP address block requires reverse DNS entries (which map IP addresses to hostnames);
* a backup, or second, slave name server is needed to reply to queries.

A caching name server is needed when:

* a local DNS server is to be used to speed up the resolution of queries that would otherwise go to an outside name server.

When somebody queries the address of `www.FreeBSD.org`, the resolver usually queries the uplink ISP's name server and receives the reply from there. With a local, caching DNS server, however, such a query only has to be sent to the outside name server once. Every further identical query does not even leave the local network, since the answer is already in the cache.

=== How it works

In FreeBSD, the BIND daemon is called named for obvious reasons.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File
| Description

|man:named[8]
|The BIND daemon.

|man:rndc[8]
|The name server control utility.

|[.filename]#/etc/namedb#
|Directory where the zone data managed by BIND resides.

|[.filename]#/etc/namedb/named.conf#
|The daemon's configuration file.
|===

Depending on how a given zone is configured on the server, the files related to that zone can be found in the [.filename]#master#, [.filename]#slave#, or [.filename]#dynamic# subdirectories of [.filename]#/etc/namedb#. The name server answers queries based on the DNS information stored in these files.

=== Starting BIND

Since BIND is available in the base system by default, configuring it is relatively simple. The default named configuration is that of a basic resolving name server, run in a man:chroot[8] environment, listening on the local IPv4 interface (127.0.0.1). To start the server with this configuration once, use the following command:

[source,shell]
....
# /etc/rc.d/named onestart
....

To have the named daemon start at every boot, put the following line into [.filename]#/etc/rc.conf#:

[.programlisting]
....
named_enable="YES"
....

There are, of course, many configuration options in [.filename]#/etc/namedb/named.conf# that are beyond the scope of this document. If you are interested in the options FreeBSD uses to start named, look at the `named_*` flags in [.filename]#/etc/defaults/rc.conf# and read the man:rc.conf[5] manual page. The crossref:config[configtuning-rcd,Using rc under FreeBSD] section is also a useful read.

=== Configuration files

The configuration files for named currently reside in the [.filename]#/etc/namedb# directory, and they will need modification before use, unless all that is needed is a simple resolver. This is where most of the configuration takes place.

==== [.filename]#/etc/namedb/named.conf#

[.programlisting]
....
// $FreeBSD$
//
// Refer to the named.conf(5) and named(8) man pages, and the
// documentation in /usr/shared/doc/bind9 for more details.
//
// If you are going to set up an authoritative server, make sure you
// understand the hairy details of how DNS works.  Even with simple
// mistakes, you can break connectivity for affected parties, or cause
// huge amounts of useless Internet traffic.
//
options {
	// Relative to the chroot directory, if any
	directory	"/etc/namedb";
	pid-file	"/var/run/named/pid";
	dump-file	"/var/dump/named_dump.db";
	statistics-file	"/var/stats/named.stats";

// If named is being used only as a local resolver, this is a safe default.
// For named to be accessible to the network, comment this option, specify
// the proper IP address, or delete this option.
	listen-on	{ 127.0.0.1; };

// If you have IPv6 enabled on this system, uncomment this option for
// use as a local resolver.  To give access to the network, specify
// an IPv6 address, or the keyword "any".
//	listen-on-v6	{ ::1; };

// These zones are already covered by the empty zones listed below.
// If you remove the related empty zones below, comment these lines out.
	disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
	disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
	disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";

// If you have got a DNS server around at your upstream provider, enter
// its IP address here, and enable the line below.  This will make you
// benefit from its cache, thus reducing the amount of DNS traffic
// going out to the Internet.
/*
	forwarders {
		127.0.0.1;
	};
*/

// If the 'forwarders' clause is not empty, the default is to
// 'forward first'.  The query then falls back to the local server
// when the servers listed in 'forwarders' cannot answer it.
// Alternatively, you can force the name server to never initiate
// queries of its own by adding the following line:
//	forward only;

// If you wish to have forwarding configured automatically based on
// the entries in /etc/resolv.conf, uncomment the following line and
// add name_auto_forward=yes to /etc/rc.conf.  The
// named_auto_forward_only option (which implements the behavior
// described above) is also available.
// include "/etc/namedb/auto_forward.conf";
....

As the comments say, the cache can be put to use by enabling the `forwarders` option. Under normal circumstances, a name server will recursively query a series of name servers on the Internet until it finds the answer it is looking for. With this option enabled, however, it will first query the provider's name server (or the one designated here), taking advantage of that server's cache. If the provider's name server in question is a heavily used, fast name server, enabling this option is worthwhile.

[WARNING]
====
`127.0.0.1` will _not_ work here. Make sure to change this IP address to one of your provider's name servers.
====

[.programlisting]
....
	/*
	   Modern versions of BIND use a random UDP port for each outgoing
	   query by default in order to dramatically reduce the possibility
	   of cache poisoning.  All users are strongly encouraged to utilize
	   this feature, and to configure their firewalls accordingly.

	   AS A LAST RESORT, if you cannot adjust your firewall to match
	   this behavior, AND ONLY THEN, enable the option below.  Use of
	   this option will significantly reduce your ability to withstand
	   cache poisoning attacks, and should be avoided if at all
	   possible.
	   Replace NNNNN in the line below with a number between 49160 and
	   65530.
	*/
	// query-source address * port NNNNN;
};

// If you enable a local name server, do not forget to enter 127.0.0.1
// first in your /etc/resolv.conf so this server will be queried.
// Also, make sure to enable it in /etc/rc.conf.

// The traditional root hints mechanism.  Use this, OR the slave zones
// below.
zone "." { type hint; file "named.root"; };

/*	Slaving the following zones from the root name servers has some
	significant advantages:
	1. Faster local resolution for your users
	2. No spurious traffic will be sent from your network to the roots
	3. Greater resilience to any potential root server failure or
	   distributed DoS attack

	On the other hand, this method requires more monitoring than the
	hints file, since you have to watch out that an unexpected failure
	mode does not incapacitate your server.  Name servers that are
	serving a lot of clients will benefit more from this approach.
	Use with caution!

	To use this mechanism, uncomment the entries below, and comment
	the hint zone above.
*/
/*
zone "." {
	type slave;
	file "slave/root.slave";
	masters {
		192.5.5.241;	// F.ROOT-SERVERS.NET.
	};
	notify no;
};
zone "arpa" {
	type slave;
	file "slave/arpa.slave";
	masters {
		192.5.5.241;	// F.ROOT-SERVERS.NET.
	};
	notify no;
};
zone "in-addr.arpa" {
	type slave;
	file "slave/in-addr.arpa.slave";
	masters {
		192.5.5.241;	// F.ROOT-SERVERS.NET.
	};
	notify no;
};
*/

/*	Serving the following zones locally will prevent any queries
	for these zones leaving your network and going to the root
	name servers.  This has two significant advantages:
	1. Faster local resolution for your users
	2. No spurious traffic will be sent from your network to the roots
*/

// RFC 1912
zone "localhost"	{ type master; file "master/localhost-forward.db"; };
zone "127.in-addr.arpa"	{ type master; file "master/localhost-reverse.db"; };
zone "255.in-addr.arpa"	{ type master; file "master/empty.db"; };

// RFC 1912-style zone for IPv6 localhost address
zone "0.ip6.arpa"	{ type master; file "master/localhost-reverse.db"; };

// "This" Network (RFCs 1912 and 3330)
zone "0.in-addr.arpa"	{ type master; file "master/empty.db"; };

// Private Use Networks (RFC 1918)
zone "10.in-addr.arpa"	   { type master; file "master/empty.db"; };
zone "16.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "17.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "18.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "19.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "20.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "21.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "22.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "23.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "24.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "25.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "26.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "27.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "28.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "29.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "30.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "31.172.in-addr.arpa" { type master; file "master/empty.db"; };
zone "168.192.in-addr.arpa" { type master; file "master/empty.db"; };

// Link-local/APIPA (RFCs 3330 and 3927)
zone "254.169.in-addr.arpa" { type master; file "master/empty.db"; };

// TEST-NET for Documentation (RFC 3330)
zone "2.0.192.in-addr.arpa" { type master; file "master/empty.db"; };

// Router Benchmark Testing (RFC 3330)
zone "18.198.in-addr.arpa" { type master; file "master/empty.db"; };
zone "19.198.in-addr.arpa" { type master; file "master/empty.db"; };

// IANA Reserved - Old Class E Space
zone "240.in-addr.arpa" { type master; file "master/empty.db"; };
zone "241.in-addr.arpa" { type master; file "master/empty.db"; };
zone "242.in-addr.arpa" { type master; file "master/empty.db"; };
zone "243.in-addr.arpa" { type master; file "master/empty.db"; };
zone "244.in-addr.arpa" { type master; file "master/empty.db"; };
zone "245.in-addr.arpa" { type master; file "master/empty.db"; };
zone "246.in-addr.arpa" { type master; file "master/empty.db"; };
zone "247.in-addr.arpa" { type master; file "master/empty.db"; };
zone "248.in-addr.arpa" { type master; file "master/empty.db"; };
zone "249.in-addr.arpa" { type master; file "master/empty.db"; };
zone "250.in-addr.arpa" { type master; file "master/empty.db"; };
zone "251.in-addr.arpa" { type master; file "master/empty.db"; };
zone "252.in-addr.arpa" { type master; file "master/empty.db"; };
zone "253.in-addr.arpa" { type master; file "master/empty.db"; };
zone "254.in-addr.arpa" { type master; file "master/empty.db"; };

// IPv6 Unassigned Addresses (RFC 4291)
zone "1.ip6.arpa" { type master; file "master/empty.db"; };
zone "3.ip6.arpa" { type master; file "master/empty.db"; };
zone "4.ip6.arpa" { type master; file "master/empty.db"; };
zone "5.ip6.arpa" { type master; file "master/empty.db"; };
zone "6.ip6.arpa" { type master; file "master/empty.db"; };
zone "7.ip6.arpa" { type master; file "master/empty.db"; };
zone "8.ip6.arpa" { type master; file "master/empty.db"; };
zone "9.ip6.arpa" { type master; file "master/empty.db"; };
zone "a.ip6.arpa" { type master; file "master/empty.db"; };
zone "b.ip6.arpa" { type master; file "master/empty.db"; };
zone "c.ip6.arpa" { type master; file "master/empty.db"; };
zone "d.ip6.arpa" { type master; file "master/empty.db"; };
zone "e.ip6.arpa" { type master; file "master/empty.db"; };
zone "0.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "1.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "2.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "3.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "4.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "5.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "6.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "7.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "8.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "9.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "a.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "b.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "0.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "1.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "2.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "3.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "4.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "5.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "6.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "7.e.f.ip6.arpa" { type master; file "master/empty.db"; };

// IPv6 ULA (RFC 4193)
zone "c.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "d.f.ip6.arpa" { type master; file "master/empty.db"; };

// IPv6 Link Local (RFC 4291)
zone "8.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "9.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "a.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "b.e.f.ip6.arpa" { type master; file "master/empty.db"; };

// IPv6 Deprecated Site-Local Addresses (RFC 3879)
zone "c.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "d.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "e.e.f.ip6.arpa" { type master; file "master/empty.db"; };
zone "f.e.f.ip6.arpa" { type master; file "master/empty.db"; };

// IP6.INT is Deprecated (RFC 4159)
zone "ip6.int" { type master; file "master/empty.db"; };

// NB: Do not use the IP addresses below, they are faked, and only
// serve demonstration/documentation purposes!
//
// Example slave zone config entries.  It can be convenient to become
// a slave at least for the zone your own domain is in.  Ask your
// network administrator for the IP address of the responsible
// master name server.
//
// Never forget to include the reverse lookup zone!
// Its name is formed from the parts of the IP address in reverse
// order, with ".IN-ADDR.ARPA" (or ".IP6.ARPA" for IPv6) appended.
//
// Before starting to set up a master zone, make sure you fully
// understand how DNS and BIND work.  There are sometimes
// non-obvious pitfalls.  Setting up a slave zone is usually a much
// simpler task.
//
// NB: Do not blindly enable the examples below. :-)  Use actual names
// and addresses instead.

/* An example dynamic zone
key "mintaorgkulcs" {
	algorithm hmac-md5;
	secret "sf87HJqjkqh8ac87a02lla==";
};
zone "minta.org" {
	type master;
	allow-update {
		key "mintaorgkulcs";
	};
	file "dynamic/minta.org";
};
*/

/* Example of a slave reverse zone
zone "1.168.192.in-addr.arpa" {
	type slave;
	file "slave/1.168.192.in-addr.arpa";
	masters {
		192.168.1.1;
	};
};
*/
....

This is how forward and reverse slave zone entries can be specified in [.filename]#named.conf#. For each new zone served, a new zone entry must be added to [.filename]#named.conf#. For example, the simplest zone entry for `minta.org` looks like this:

[.programlisting]
....
zone "minta.org" {
	type master;
	file "master/minta.org";
};
....

The zone is a master, as indicated by the `type` statement.
Furthermore, the `file` statement shows that its zone data is held in the [.filename]#/etc/namedb/master/minta.org# file.

[.programlisting]
....
zone "minta.org" {
	type slave;
	file "slave/minta.org";
};
....

In the slave case, the zone data is transferred from the master name server for the particular zone and saved in the file specified. If and when the master server dies or is unreachable, the slave name server will have the transferred zone data and can serve the queries in its place.

==== Zone Files

An example master zone file for `minta.org` (residing in [.filename]#/etc/namedb/master/minta.org#) follows:

[.programlisting]
....
$TTL 3600        ; 1 hour default
minta.org.	IN	SOA	ns1.minta.org. admin.minta.org. (
				2006051501	; Serial
				10800		; Refresh
				3600		; Retry
				604800		; Expire
				300 )		; Negative Response TTL

; DNS Servers
		IN	NS	ns1.minta.org.
		IN	NS	ns2.minta.org.

; MX Records
		IN	MX	10	mx.minta.org.
		IN	MX	20	levelezes.minta.org.

		IN	A	192.168.1.1

; Machine Names
localhost	IN	A	127.0.0.1
ns1		IN	A	192.168.1.2
ns2		IN	A	192.168.1.3
mx		IN	A	192.168.1.4
levelezes	IN	A	192.168.1.5

; Aliases
www		IN	CNAME	minta.org.
....

Note that hostnames ending in a "." are exact (absolute) names, while everything without a trailing "." is relative to the origin. For example, `ns1` is expanded to `ns1.minta.org.`

The format of a zone file follows:

[.programlisting]
....
recordname      IN recordtype   value
....

The most commonly used DNS record types:

SOA:: start of zone authority

NS:: an authoritative name server

A:: a host address

CNAME:: the canonical name for an alias

MX:: mail exchanger

PTR:: a domain name pointer (used in reverse DNS)

[.programlisting]
....
minta.org. IN SOA ns1.minta.org. admin.minta.org. (
			2006051501	; Serial
			10800		; Refresh after 3 hours
			3600		; Retry after 1 hour
			604800		; Expire after 1 week
			300 )		; Negative Response TTL
....
`minta.org.`:: the domain name, which is also the origin for this zone file

`ns1.minta.org.`:: the primary/authoritative name server for this zone

`admin.minta.org.`:: the person responsible for this zone, whose e-mail address is obtained by substituting a "@" back in (so mailto:admin@minta.org[admin@minta.org] becomes `admin.minta.org`)

`2006051501`:: the serial number of the file. This must be incremented each time the zone file is modified. Nowadays, many administrators prefer a `yyyymmddrr` format for the serial number. `2006051501` would thus mean the file was last modified on 2006-05-15, and the trailing `01` that it was the first revision that day. The serial number is important, as it is what tells slave name servers that the zone has been updated.

[.programlisting]
....
		IN	NS	ns1.minta.org.
....

This is an NS entry. Every name server that is going to reply authoritatively for the zone must have at least one such entry.

[.programlisting]
....
localhost	IN	A	127.0.0.1
ns1		IN	A	192.168.1.2
ns2		IN	A	192.168.1.3
mx		IN	A	192.168.1.4
levelezes	IN	A	192.168.1.5
....

The A record names a machine. As seen above, the name `ns1.minta.org` resolves to `192.168.1.2`.

[.programlisting]
....
		IN	A	192.168.1.1
....

This line assigns the address `192.168.1.1` to the current origin, which in our case is `minta.org`.

[.programlisting]
....
www		IN	CNAME	@
....

Canonical name records are generally used for giving aliases to a machine. In this example, `www` is an alias for the "master" machine, whose name here matches the domain name `minta.org` (`192.168.1.1`). CNAMEs can never be used together with another kind of record for the same hostname.

[.programlisting]
....
		IN	MX	10	mx.minta.org.
....

The MX record indicates which mail servers are responsible for handling incoming mail for the zone. `mx.minta.org` is the hostname of a mail server, and 10 is the priority of that mail server. Several mail servers can be listed, with priorities of 10, 20, and so on. A mail server attempting to deliver mail to `minta.org` will first try the host with the highest MX priority (the record with the lowest priority number), then the second highest, and so on, until the mail can be delivered.

For in-addr.arpa zone files (reverse DNS), the same format is used, except that PTR entries appear in place of A and CNAME records.

[.programlisting]
....
$TTL 3600

1.168.192.in-addr.arpa. IN SOA ns1.minta.org. admin.minta.org. (
			2006051501	; Serial
			10800		; Refresh
			3600		; Retry
			604800		; Expire
			300 )		; Negative Response TTL

	IN	NS	ns1.minta.org.
	IN	NS	ns2.minta.org.

1	IN	PTR	minta.org.
2	IN	PTR	ns1.minta.org.
3	IN	PTR	ns2.minta.org.
4	IN	PTR	mx.minta.org.
5	IN	PTR	levelezes.minta.org.
....

This file maps the IP addresses and hostnames of our fictitious domain to each other. It is worth noting that all names on the right side of a PTR record must be fully qualified (that is, end in a ".").

=== The Caching Name Server

A caching name server is a name server whose primary task is to resolve recursive queries. It simply forwards the incoming queries, then remembers the answers, so that later it can reply to the same queries directly from its cache.

=== Security

Although BIND is the most widespread implementation of DNS, there are some concerns about its security. Possible and exploitable security holes are found in it from time to time. FreeBSD, however, automatically places named into a man:chroot[8] environment. There are also some other defense mechanisms available that can help fend off attacks against the DNS service.

It never hurts to read the security advisories published by http://www.cert.org/[CERT] and to subscribe to the {freebsd-security-notifications} to stay current with the various security issues found on the Internet and in FreeBSD.
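As one illustration of such a defense mechanism, recursion can be restricted to trusted clients in [.filename]#/etc/namedb/named.conf#. The `allow-recursion` option is a standard BIND option, but the ACL name and the address ranges below are only examples; substitute ranges matching the local network:

[.programlisting]
....
// Hypothetical example: answer recursive queries only for local clients.
acl "internal" {
	127.0.0.0/8;		// loopback
	192.168.1.0/24;		// example LAN used in this chapter
};

options {
	// ... the existing options from above ...
	allow-recursion { "internal"; };
};
....

Queries from any other address will then only be answered for zones the server is authoritative for, which limits its usefulness in cache poisoning and amplification attacks.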
[TIP]
====
If a problem arises, keeping the sources up to date and rebuilding named may help.
====

=== Further Reading

The BIND/named manual pages: man:rndc[8], man:named[8], man:named.conf[5].

* http://www.isc.org/software/bind[Official ISC BIND Page]
* http://www.isc.org/software/guild[Official ISC BIND Forum]
* http://www.oreilly.com/catalog/dns5/[O'Reilly DNS and BIND 5th Edition]
* http://www.rfc-editor.org/rfc/rfc1034.txt[RFC1034 - Domain Names - Concepts and Facilities]
* http://www.rfc-editor.org/rfc/rfc1035.txt[RFC1035 - Domain Names - Implementation and Specification]

[[network-apache]]
== The Apache Web Server

=== Overview

FreeBSD is used to run some of the busiest web sites in the world. The web servers behind them typically use the Apache web server. The packages required to use Apache can also be found on the FreeBSD installation media. If Apache was not installed when FreeBSD was first installed, it can be installed from the package:www/apache13[] or package:www/apache22[] port.

Once Apache has been installed successfully, it must be configured.

[NOTE]
====
This section covers version 1.3.X of the Apache web server, as that is the most widely used version under FreeBSD. Apache 2.X introduces many new technologies, but these are not discussed here. For more information about Apache 2.X, please see http://httpd.apache.org/[http://httpd.apache.org/].
====

=== Configuration

Under FreeBSD, the configuration file of the Apache web server is [.filename]#/usr/local/etc/apache/httpd.conf#. This is a typical UNIX(R) text configuration file, with comment lines introduced by the `#` character. A comprehensive description of all possible configuration options would exceed the scope of this book, so only the most frequently modified directives are described here.
`ServerRoot "/usr/local"`:: This specifies the default directory prefix for Apache. Its binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories below it, and its configuration files in the [.filename]#etc/apache# directory.

`ServerAdmin saját@címünk.az.interneten`:: The address to which problems with the server can be reported. This address appears on some server-generated pages, such as error documents.

`ServerName www.minta.com`:: `ServerName` makes it possible to set a hostname which is sent back to clients when it differs from the one the host is actually configured with (for example, using `www` instead of the machine's real name).

`DocumentRoot "/usr/local/www/data"`:: `DocumentRoot` is the directory out of which documents are served. By default, all requests are taken to be relative to this directory, but symbolic links and aliases may be used to point to other locations as well.

It is always a good idea to make backup copies of the Apache configuration files before making any changes. Once a satisfactory configuration has been put together, you are ready to run Apache.

=== Running Apache

Unlike many other network servers, Apache does not run from the inetd super-server. It is configured to run standalone so that incoming HTTP requests from clients can be served as quickly as possible. A script is included to make starting, stopping, and restarting the server as simple as possible. To start Apache for the first time, run:

[source,shell]
....
# /usr/local/sbin/apachectl start
....

The server can be stopped at any time with:

[source,shell]
....
# /usr/local/sbin/apachectl stop
....

If the server's configuration has been changed for some reason, it can be restarted this way:

[source,shell]
....
# /usr/local/sbin/apachectl restart
....
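Before any restart, it is worth checking the configuration for syntax errors; the `apachectl` script provides a `configtest` subcommand for this, which reports any malformed directives without touching the running server:

[source,shell]
....
# /usr/local/sbin/apachectl configtest
....

If the check fails, the output names the offending line of [.filename]#httpd.conf#, and the running server keeps serving with its old configuration until a successful restart.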
To restart Apache without dropping the currently open connections, run:

[source,shell]
....
# /usr/local/sbin/apachectl graceful
....

More information is available in the man:apachectl[8] manual page.

To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache_enable="YES"
....

Or, for Apache 2.2:

[.programlisting]
....
apache22_enable="YES"
....

If you would like to supply additional command line options for the Apache `httpd` program started at system boot, they may be specified in [.filename]#rc.conf# like this:

[.programlisting]
....
apache_flags=""
....

Now that the web server is running, it can be checked with a browser by entering the address `http://localhost/`. The page displayed is the contents of the default [.filename]#/usr/local/www/data/index.html# file.

=== Virtual Hosting

Apache supports two different types of virtual hosting. The first method is name-based virtual hosting. Here, the server uses the HTTP/1.1 headers sent by the client to figure out the hostname. This allows many different domains to share a single IP address.

To set up Apache to use name-based virtual hosting, add an entry like the following to [.filename]#httpd.conf#:

[.programlisting]
....
NameVirtualHost *
....

If the web server is named `www.tartomany.hu` and a virtual domain `www.valamilyenmasiktartomany.hu` is to be added to it, this can be done with entries like the following in [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
    ServerName www.tartomany.hu
    DocumentRoot /www/tartomany.hu
</VirtualHost>

<VirtualHost *>
    ServerName www.valamilyenmasiktartomany.hu
    DocumentRoot /www/valamilyenmasiktartomany.hu
</VirtualHost>
....

Replace the addresses and document paths with the addresses and paths to be used.
For more information about setting up virtual hosts, please consult the official Apache documentation at http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache Modules

There are many different Apache modules available to extend the capabilities of the basic server. The FreeBSD Ports Collection, besides installing Apache itself, also offers a convenient way to install its more popular add-on modules.

==== mod_ssl

The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols. This module provides everything necessary to request signed certificates from a trusted certificate authority, and thereby run a protected web server on FreeBSD.

If Apache has not been installed yet, a version of Apache 1.3.X that includes mod_ssl may be installed with the package:www/apache13-modssl[] port. SSL support is already available, and enabled by default, in the Apache 2.X versions available via the package:www/apache22[] port.

==== Language Bindings

There is a separate Apache module for each major scripting language, making it possible to write complete Apache modules in the given language. Dynamic websites also often use these modules so that the server's built-in interpreter spares the cost of starting an external interpreter and running scripts in it, as described in the following sections.

=== Dynamic Websites

In the last decade, more and more businesses have turned to the Internet in the hope of increasing their revenue and market share, which has also greatly increased the demand for dynamic websites.
While some companies, such as Microsoft(R), built support for this into their proprietary products, the open source community did not stand idle either and rose to the challenge. Options for creating dynamic content include, among others, Django, Ruby on Rails, mod_perl, and mod_php.

==== Django

Django is a BSD-licensed framework which makes it possible to develop high-performance, elegant web applications quickly. It provides an object-relational mapper, so data types can be described as Python objects, along with a rich, dynamic database-access API for those objects, so developers do not have to write a single SQL statement. It also provides an extensible template system, thanks to which the internal logic of the application can be kept separate from its HTML appearance.

Django depends on the mod_python module, the Apache server, and an SQL database engine of your choice. The corresponding FreeBSD port installs all of these automatically according to the options given.

[[network-www-django-install]]
.Installing Django with Apache, mod_python3, and PostgreSQL
[example]
====
[source,shell]
....
# cd /usr/ports/www/py-django; make all install clean -DWITH_MOD_PYTHON3 -DWITH_POSTGRESQL
....
====

Once Django and the required components are installed, create a directory for the future Django project, then configure Apache so that, for specific links within the site, it invokes our application through the embedded Python interpreter.

[[network-www-django-apache-config]]
.Apache Configuration for Django with mod_python
[example]
====
Add the following lines to [.filename]#httpd.conf# so that Apache directs certain links to the web application:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/a/django/csomagok/helye/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE azoldalam.beallitasai
    PythonAutoReload On
    PythonDebug On
</Location>
....
====

==== Ruby on Rails

Ruby on Rails is another open source framework providing essentially a full development stack; it is optimized to make web developers much more productive and capable of writing powerful applications quickly. It can be installed from the Ports Collection in no time:

[source,shell]
....
# cd /usr/ports/www/rubygem-rails; make all install clean
....

==== mod_perl

The Apache/Perl integration project brings together the full power of the Perl programming language and the Apache web server. The mod_perl module makes it possible to write Apache modules in Perl. In addition, the server has a persistent embedded interpreter, which saves the overhead of starting an external interpreter and of Perl start-up time.

mod_perl can be deployed in several different ways. When using mod_perl, keep in mind that mod_perl 1.0 only works with Apache 1.3, and mod_perl 2.0 only works with Apache 2.X. mod_perl 1.0 can be installed from the package:www/mod_perl[] port, and a statically compiled version is available in the package:www/apache13-modperl[] port. mod_perl 2.0 can be installed from the package:www/mod_perl2[] port.

==== mod_php

PHP, also known as "PHP: Hypertext Preprocessor", is a general-purpose scripting language that was created specifically for web development. A language that can be embedded into standard HTML, its syntax combines elements of C, Java(TM), and Perl with the aim of helping developers write dynamically generated pages as quickly as possible.
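To illustrate what "embedded into standard HTML" means in practice, here is a minimal, hypothetical page in which one line is produced by PHP code placed between `<?php` and `?>` markers inside otherwise ordinary markup (`date()` is a standard PHP function):

[.programlisting]
....
<html>
  <body>
    <!-- Ordinary HTML, with one dynamically generated line: -->
    <p>The current server time is: <?php echo date("H:i"); ?></p>
  </body>
</html>
....

When the page is requested, the server runs the embedded code and the client only ever sees the resulting HTML.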
Support for PHP5 can be added to the Apache web server by installing the package:lang/php5[] port.

If the package:lang/php5[] port is being installed for the first time, the `OPTIONS` menu with the related settings will be displayed automatically. If it does not appear, for example because the package:lang/php5[] port was already installed at some point in the past, it can always be brought back by running the following command in the port's directory:

[source,shell]
....
# make config
....

In the options menu, check the `APACHE` option, which results in a mod_php5 loadable module being built for the Apache web server.

[NOTE]
====
A lot of servers still use the PHP4 module for various reasons (e.g. compatibility issues or already deployed content). If mod_php4 is needed instead of mod_php5, please use the package:lang/php4[] port. The package:lang/php4[] port supports most of the build-time options of the package:lang/php5[] port.
====

The steps above install and configure the modules required to support dynamic PHP applications. Check that the following sections have been added to the [.filename]#/usr/local/etc/apache/httpd.conf# file:

[.programlisting]
....
LoadModule php5_module        libexec/apache/libphp5.so
....

[.programlisting]
....
AddModule mod_php5.c
    <IfModule mod_php5.c>
        DirectoryIndex index.php index.html
    </IfModule>
    <IfModule mod_php5.c>
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps
    </IfModule>
....

Once this is done, all that is needed to load the PHP module is a careful (graceful) restart of the web server with the `apachectl` command:

[source,shell]
....
# apachectl graceful
....

For future upgrades of PHP, the `make config` command will no longer be required; the settings selected in the `OPTIONS` menu are saved automatically by the FreeBSD ports framework.
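A common way to confirm that the module is actually executing PHP code is to place a small script under the `DocumentRoot`; the file name here is arbitrary, and `phpinfo()` is a standard PHP function:

[.programlisting]
....
<?php phpinfo(); ?>
....

Save this as, for example, [.filename]#/usr/local/www/data/info.php# and browse to `http://localhost/info.php`: a page describing the PHP configuration should appear. Remove the file afterwards, since it reveals details about the server setup.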
A PHP FreeBSD-ben megtalálható támogatása kifejezetten moduláris, ezért az alap telepítése igencsak korlátozott. A további elemek hozzáadásához a package:lang/php5-extensions[] portot tudjuk használni. A port egy menüvezérelt felületet nyújt a PHP különbözõ bõvítményeinek telepítéséhez. Az egyes bõvítményeket azonban a megfelelõ portok használatával is fel tudjuk rakni. Például a PHP5 modulhoz úgy tudunk támogatást adni a MySQL adatbázis szerverhez, ha telepítjük a [.filename]#databases/php5-mysql# portot. Miután telepítettünk egy bõvítményt, az Apache szerverrel újra be kell töltetnünk a megváltozott beállításokat: [source,shell] .... # apachectl graceful .... [[network-ftp]] == Állományok átvitele (FTP) === Áttekintés Az adatállomány átviteli protokoll (File Transfer Protocol, FTP) a felhasználók számára lehetõséget ad az ún. FTP szerverekre állományokat feltölteni, illetve onnan állományokat letölteni. A FreeBSD alaprendszere is tartalmaz egy ilyen FTP szerverprogramot, ftpd néven. Ezért FreeBSD alatt egy FTP szerver beállítása meglehetõsen egyszerû. === Beállítás A beállítás legfontosabb lépése, hogy eldöntsük, milyen hozzáféréseken át lehet elérni az FTP szervert. Egy hétköznapi FreeBSD rendszerben rengeteg hozzáférés a különbözõ démonokhoz tartozik, de az ismeretlen felhasználók számára nem kellene megengednünk ezek használatát. Az [.filename]#/etc/ftpusers# állományban szerepelnek azok a felhasználók, akik semmilyen módon nem érhetik el az FTP szolgáltatást. Alapértelmezés szerint itt találhatjuk az elõbb említett rendszerszintû hozzáféréseket is, de ide minden további nélkül felvehetjük azokat a felhasználókat, akiknél nem akarjuk engedni az FTP elérését. Más esetekben elõfordulhat, hogy csak korlátozni akarjuk egyes felhasználók FTP elérését. Ezt az [.filename]#/etc/ftpchroot# állományon keresztül tehetjük meg. Ebbe az állományba a lekorlátozni kívánt felhasználókat és csoportokat írhatjuk bele.
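Az alábbi rövid vázlat azt mutatja, hogyan nézhet ki egy ilyen [.filename]#/etc/ftpchroot# állomány (a benne szereplõ felhasználó- és csoportnevek csupán kitalált példák):

[.programlisting]
....
# A "jozsi" felhasználó a saját könyvtárába lesz bezárva:
jozsi
# A "vendegek" csoport minden tagja szintén:
@vendegek
....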
Az man:ftpchroot[5] man oldalán olvashatjuk el ennek pontos részleteit, ezért ezeket itt most nem tárgyaljuk. Ha az FTP szerverünkhöz névtelen (anonim) hozzáférést is engedélyezni akarunk, akkor ahhoz elõször készítenünk kell egy `ftp` nevû felhasználót a FreeBSD rendszerünkben. A felhasználók ezután az `ftp` vagy `anonymous` nevek, valamint egy tetszõleges jelszó (ez a hagyományok szerint a felhasználó e-mail címe) használatával is képesek lesznek bejelentkezni. Az FTP szerver ezután a névtelen felhasználók esetében meghívja a man:chroot[2] rendszerhívást, és ezzel lekorlátozza hozzáférésüket az `ftp` felhasználó könyvtárára. Két szöveges állományban adhatunk meg a becsatlakozó FTP kliensek számára üdvözlõ üzeneteket. Az [.filename]#/etc/ftpwelcome# állomány tartalmát még a bejelentkezés elõtt látni fogják a felhasználók, a sikeres bejelentkezést követõen pedig az [.filename]#/etc/ftpmotd# állomány tartalmát látják. Vigyázzunk, mert ennek az állománynak már a bejelentkezési környezethez képest relatív az elérése, ezért a névtelen felhasználók esetében ez konkrétan az [.filename]#~ftp/etc/ftpmotd# állomány lesz. Ahogy beállítottuk az FTP szervert, az [.filename]#/etc/inetd.conf# állományban is engedélyeznünk kell. Itt mindössze annyit kell tennünk, hogy eltávolítjuk a megjegyzést jelzõ "#" karaktert a már meglevõ ftpd sor elõl: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... Ahogy arról már a <> is szót ejtett, az inetd beállításait újra be kell olvastatnunk a konfigurációs állomány megváltoztatása után. A <> írja le az inetd engedélyezésének részleteit. Az ftpd önálló szerverként is elindítható. Ehhez mindössze elegendõ a megfelelõ változót beállítani az [.filename]#/etc/rc.conf# állományban: [.programlisting] .... ftpd_enable="YES" .... Miután megadtuk az iménti változót, a szerver el fog indulni a rendszer következõ indítása során.
Szükség esetén természetesen `root` felhasználóként a következõ paranccsal is közvetlenül elindítható: [source,shell] .... # /etc/rc.d/ftpd start .... Most már be is tudunk jelentkezni az FTP szerverre: [source,shell] .... % ftp localhost .... === Karbantartás Az ftpd démon a man:syslog[3] használatával naplózza az üzeneteket. Alapértelmezés szerint a rendszernaplózó démon az FTP mûködésére vonatkozó üzeneteket az [.filename]#/var/log/xferlog# állományba írja. Az FTP naplóinak helyét az [.filename]#/etc/syslog.conf# állományban tudjuk módosítani: [.programlisting] .... ftp.info /var/log/xferlog .... Legyünk körültekintõek a névtelen FTP szerverek üzemeltetésekor. Azt pedig kétszer is gondoljuk meg, hogy engedélyezzük-e a névtelen felhasználók számára állományok feltöltését, hiszen könnyen azon kaphatjuk magunkat, hogy az FTP oldalunk illegális állománycserék színterévé válik vagy esetleg valami sokkal rosszabb történik. Ha mindenképpen szükségünk lenne erre a lehetõségre, akkor állítsunk be olyan engedélyeket a feltöltött állományokra, hogy a többi névtelen felhasználó ezeket a tartalmuk tüzetes ellenõrzéséig ne is olvashassa. [[network-samba]] == Állomány- és nyomtatási szolgáltatások Microsoft(R) Windows(R) kliensek számára (Samba) === Áttekintés A Samba egy olyan elterjedt nyílt forráskódú szoftver, ami Microsoft(R) Windows(R) kliensek számára tesz lehetõvé állomány- és nyomtatási szolgáltatásokat. Az ilyen kliensek általa helyi meghajtóként képesek elérni a FreeBSD állományrendszerét, vagy helyi nyomtatóként a FreeBSD által kezelt nyomtatókat. A Samba csomagja általában megtalálható a FreeBSD telepítõeszközén. Ha a FreeBSD-vel együtt nem raktuk fel a Samba csomagját, akkor ezt késõbb a package:net/samba3[] port vagy csomag telepítésével pótolhatjuk. === Beállítás A Samba konfigurációs állománya a telepítés után [.filename]#/usr/local/shared/examples/samba/smb.conf.default# néven található meg.
Ezt kell lemásolnunk [.filename]#/usr/local/etc/smb.conf# néven, amelyet aztán a Samba tényleges használata elõtt módosítanunk kell. Az [.filename]#smb.conf# állomány a Samba futásához használt beállításokat tartalmazza, mint például a Windows(R) kliensek számára felkínált nyomtatók és "megosztások" adatait. A Samba csomagban ezen kívül találhatunk még egy swat nevû webes eszközt, amellyel egyszerû módon tudjuk az [.filename]#smb.conf# állományt állítgatni. ==== A Samba webes adminisztrációs eszköze (SWAT) A Samba webes adminisztrációs segédeszköze (Samba Web Administration Tool, SWAT) az inetd démonon keresztül fut. Ennek megfelelõen az [.filename]#/etc/inetd.conf# állományban a következõ sort kell kivennünk megjegyzésbõl, mielõtt a swat segítségével megkezdenénk a Samba beállítását: [.programlisting] .... swat stream tcp nowait/400 root /usr/local/sbin/swat swat .... Ahogy azt a <> is mutatja, az inetd démont újra kell indítanunk a megváltozott konfigurációs állományának újbóli beolvasásához. Miután az [.filename]#inetd.conf# állományban a swat engedélyezésre került, a böngészõnk segítségével próbáljunk meg a http://localhost:901[http://localhost:901] címre csatlakozni. Elõször a rendszer `root` hozzáférésével kell bejelentkeznünk. Miután sikeresen bejelentkeztünk a Samba beállításait tárgyaló lapra, el tudjuk olvasni a rendszer dokumentációját, vagy a menu:Globals[] fülre kattintva nekiláthatunk a beállítások elvégzésének. A menu:Globals[] részben található opciók az [.filename]#/usr/local/etc/smb.conf# állomány `[global]` szekciójában található változókat tükrözik. ==== Általános beállítások Akár a swat eszközzel, akár a [.filename]#/usr/local/etc/smb.conf# közvetlen módosításával dolgozunk, a Samba beállítása során a következõkkel mindenképpen össze fogunk futni: `workgroup`:: A szervert elérni kívánó számítógépek által használt NT tartomány vagy munkacsoport neve. `netbios name`:: A Samba szerver NetBIOS neve.
Alapértelmezés szerint ez a név a gép hálózati nevének elsõ tagja. `server string`:: Ez a szöveg jelenik meg akkor, ha például a `net view` paranccsal vagy valamilyen más hálózati segédprogrammal kérdezzük le a szerver beszédesebb leírását. ==== Biztonsági beállítások A [.filename]#/usr/local/etc/smb.conf# állományban a két legfontosabb beállítás a választott biztonsági modell és a kliensek felhasználói jelszavainak tárolásához használt formátum. Az alábbi direktívák vezérlik ezeket: `security`:: Itt a két leggyakoribb beállítás a `security = share` és a `security = user`. Ha a kliensek a FreeBSD gépen található felhasználói neveiket használják, akkor felhasználói szintû védelemre van szükségünk (tehát a user beállításra). Ez az alapértelmezett biztonsági házirend és ilyenkor a klienseknek elõször be kell jelentkezniük a megosztott erõforrások eléréséhez. + A megosztás (share) szintû védelem esetében, a klienseknek nem kell a szerveren érvényes felhasználói névvel és jelszóval rendelkezniük a megosztott erõforrások eléréséhez. Ez volt az alapbeállítás a Samba korábbi változataiban. `passdb backend`:: A Samba számos különbözõ hitelesítési modellt ismer. A klienseket LDAP, NIS+, SQL adatbázis vagy esetleg egy módosított jelszó állománnyal is tudjuk hitelesíteni. Az alapértelmezett hitelesítési módszer a `smbpasswd`, így itt most ezzel foglalkozunk. Ha feltesszük, hogy az alapértelmezett `smbpasswd` formátumot választottuk, akkor a Samba úgy fogja tudni hitelesíteni a klienseket, ha elõtte létrehozzuk a [.filename]#/usr/local/private/smbpasswd# állományt. Ha a Windows(R)-os kliensekkel is el akarjuk érni a UNIX(R)-os felhasználói hozzáféréseinket, akkor használjuk a következõ parancsot: [source,shell] .... # smbpasswd -a felhasználónév .... [NOTE] ==== A Samba a 3.0.23c verziójától kezdõdõen a hitelesítéshez szükséges állományokat a [.filename]#/usr/local/etc/samba# könyvtárban tárolja. 
A felhasználói hozzáférések hozzáadására innentõl már a `tdbsam` formátumra építõ `pdbedit` parancs használata javasolt: [source,shell] .... # pdbedit -a -u felhasználónév .... ==== A http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/[hivatalos Samba HOGYAN] ezekrõl a beállításokról szolgál további információkkal (angolul). Az itt vázolt alapok viszont már elegendõek a Samba elindításához. === A Samba elindítása A package:net/samba3[] port a Samba irányítására egy új indító szkriptet tartalmaz. A szkript engedélyezéséhez, tehát általa a Samba elindításának, leállításának és újraindításának lehetõvé tételéhez vegyük fel a következõ sort az [.filename]#/etc/rc.conf# állományba: [.programlisting] .... samba_enable="YES" .... Ha még finomabb irányításra vágyunk: [.programlisting] .... nmbd_enable="YES" .... [.programlisting] .... smbd_enable="YES" .... [NOTE] ==== Ezzel egyben a rendszer indításakor automatikusan be is indítjuk a Samba szolgáltatást. ==== A Samba a következõkkel bármikor elindítható: [source,shell] .... # /usr/local/etc/rc.d/samba start Starting SAMBA: removing stale tdbs : Starting nmbd. Starting smbd. .... Az rc szkriptekkel kapcsolatban a crossref:config[configtuning-rcd,Az rc használata FreeBSD alatt]t ajánljuk elolvasásra. A Samba jelen pillanatban három különálló démonból áll. Láthatjuk is, hogy az nmbd és smbd démonokat elindította a [.filename]#samba# szkript. Ha az [.filename]#smb.conf# állományban engedélyeztük a winbind névfeloldási szolgáltatást is, akkor láthatjuk, hogy ilyenkor a winbindd démon is elindul. A Samba így állítható le akármikor: [source,shell] .... # /usr/local/etc/rc.d/samba stop .... A Samba egy összetett szoftvercsomag, amely a Microsoft(R) Windows(R) hálózatokkal kapcsolatos széles körû együttmûködést tesz lehetõvé. Az általa felkínált alapvetõ lehetõségeken túl a többit a http://www.samba.org[http://www.samba.org] honlapon ismerhetjük meg (angolul).
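Az eddig tárgyalt legfontosabb beállításokat összefoglalva, egy minimális [.filename]#smb.conf# vázlat valahogy így nézhet ki (a munkacsoport, a gépnév, a megosztás neve és az elérési út itt mind csupán kitalált példák):

[.programlisting]
....
[global]
workgroup = MUNKACSOPORT
netbios name = freebsdszerver
server string = Samba szerver FreeBSD alatt
security = user
passdb backend = tdbsam

# Egy egyszerû, bejelentkezéshez kötött megosztás:
[kozos]
path = /home/kozos
valid users = pgj
writable = yes
....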
[[network-ntp]] == Az órák egyeztetése az NTP használatával === Áttekintés Idõvel a számítógép órája hajlamos elmászni. A hálózati idõ protokoll (Network Time Protocol, NTP) az egyik módja annak, hogy az óránkat pontosan tartsuk. Rengeteg internetes szolgáltatás elvárja vagy éppen elõnyben részesíti a számítógép órájának pontosságát. Például egy webszervertõl megkérdezhetik, hogy egy állományt adott ideje módosítottak-e. A helyi hálózatban az egyazon állományszerveren megosztott állományok ellentmondásmentes dátumozása érdekében szinte elengedhetetlen az órák szinkronizálása. Az olyan szolgáltatások, mint a man:cron[8] is komolyan építkeznek a pontosan járó rendszerórára, amikor egy adott pillanatban kell lefuttatniuk parancsokat. A FreeBSD alapból az man:ntpd[8] NTP szervert tartalmazza, amellyel más NTP szerverek segítségével tudjuk beállítani gépünk óráját, vagy éppen idõvel kapcsolatos információkat szolgáltatni másoknak. === A megfelelõ NTP szerverek kiválasztása Az óránk egyeztetéséhez egy vagy több NTP szerverre lesz szükségünk. Elõfordulhat, hogy a hálózati rendszergazdánk vagy az internet-szolgáltatónk már beállított egy ilyen szervert erre a célra. Ezzel kapcsolatban olvassuk el a megfelelõ leírásokat. A http://ntp.isc.org/bin/view/Servers/WebHome[nyilvánosan elérhetõ NTP szerverekrõl készült egy lista], ahonnan könnyedén ki tudjuk keresni a számunkra leginkább megfelelõ (hozzánk legközelebbi) szervert. Ne hagyjuk figyelmen kívül a szerverre vonatkozó házirendet és kérjünk engedélyt a használatához, amennyiben ez szükséges. Több, egymással közvetlen kapcsolatban nem álló NTP szerver választásával járunk jól, ha netalán az egyikük váratlanul elérhetetlenné vagy az órája pontatlanná válna. Az man:ntpd[8] a visszakapott válaszokat intelligensen használja fel, mivel esetükben a megbízható szervereket részesíti elõnyben.
=== A gépünk beállítása ==== Alapvetõ beállítások Ha a számítógépünk indításakor akarjuk egyeztetni az óránkat, akkor erre az man:ntpdate[8] nevû programot használhatjuk. Ez olyan asztali gépek számára megfelelõ választás, amelyeket gyakran indítanak újra és csak idõnként kell szinkronizálnunk. A legtöbb gépnek viszont az man:ntpd[8] használatára van szüksége. Az man:ntpdate[8] elindítása olyan esetekben is hasznos, ahol az man:ntpd[8] is fut. Az man:ntpd[8] az órát fokozatosan állítja, ellenben az man:ntpdate[8] az eltérés mértékétõl és irányától függetlenül egyszerûen átállítja a gép óráját a pontos idõre. Az man:ntpdate[8] elindítását úgy tudjuk engedélyezni a rendszer indításakor, ha az [.filename]#/etc/rc.conf# állományba berakjuk az `ntpdate_enable="YES"` sort. Emellett még az `ntpdate_flags` változóban meg kell adnunk az alkalmazott beállítások mellett azokat a szervereket, amelyekkel szinkronizálni akarunk. ==== Általános beállítások Az NTP az [.filename]#/etc/ntp.conf# állományon keresztül állítható, amelynek felépítését az man:ntp.conf[5] man oldal tárgyalja. Íme erre egy egyszerû példa: [.programlisting] .... server ntplocal.minta.com prefer server timeserver.minta.org server ntp2a.minta.net driftfile /var/db/ntp.drift .... A `server` beállítás adja meg az egyeztetéshez használt szervereket, soronként egyet. Ha egy szerver mellett szerepel még a `prefer` paraméter is, ahogy azt a példában a `ntplocal.minta.com` mellett láthattuk, akkor a többivel szemben azt a szervert fogjuk elõnyben részesíteni. Az így kiemelt szervertõl érkezõ választ abban az esetben viszont eldobjuk, ha a többi szervertõl kapott válasz jelentõs mértékben eltér tõle. Minden más esetben az õ válasza lesz a mérvadó. A `prefer` paramétert általában olyan NTP szerverekhez használják, amelyek közismerten nagy pontosságúak, tehát például külön erre a célra szánt felügyeleti eszközt is tartalmaznak.
A `driftfile` beállítással azt az állományt adjuk meg, amiben a rendszeróra frekvenciaeltolódásait tároljuk. Az man:ntpd[8] program ezzel ellensúlyozza automatikusan az óra természetes elmászását, ezáltal lehetõvé téve, hogy egy viszonylag pontos idõt kapjunk még abban az esetben is, amikor egy kis idõre külsõ idõforrások nélkül maradnánk. A `driftfile` beállítással egyben azt az állományt jelöljük ki, amely az NTP szervertõl kapott korábbi válaszokat tárolja. Ez az NTP mûködéséhez szükséges belsõ adatokat tartalmaz, ezért semmilyen más programnak nem szabad módosítania. ==== A szerverünk elérésének szabályozása Alapértelmezés szerint az NTP szerverünket bárki képes elérni az interneten. Az [.filename]#/etc/ntp.conf# állományban szereplõ `restrict` beállítás segítségével azonban meg tudjuk mondani, milyen gépek érhetik el a szerverünket. Ha az NTP szerverünk felé mindenféle próbálkozást el akarunk utasítani, akkor az [.filename]#/etc/ntp.conf# állományba a következõ sort kell felvennünk: [.programlisting] .... restrict default ignore .... [NOTE] ==== Ezzel egyben azonban a helyi beállításainkban szereplõ szerverek elérését is megakadályozzuk. Ha külsõ NTP szerverekkel is szeretnénk szinkronizálni, akkor itt is engedélyeznünk kell ezeket. Errõl bõvebben lásd az man:ntp.conf[5] man oldalon. ==== Ha csak a belsõ hálózatunkban levõ gépek számára szeretnénk elérhetõvé tenni az órák egyeztetését, de sem a szerver állapotának módosítását nem engedélyezzük, sem pedig azt, hogy a vele egyenrangú szerverekkel szinkronizáljon, akkor az iménti helyett a [.programlisting] .... restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap .... sort írjuk bele, ahol a `192.168.1.0` a belsõ hálózatunk IP-címe és a `255.255.255.0` a hozzá tartozó hálózati maszk. Az [.filename]#/etc/ntp.conf# több `restrict` típusú beállítást is tartalmazhat. Ennek részleteirõl az man:ntp.conf[5] man oldalon, az `Access Control Support` címû szakaszban olvashatunk.
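Az elõbbi beállításokat összerakva egy olyan [.filename]#/etc/ntp.conf# vázlatot kapunk, amely a belsõ hálózatot kiszolgálja, de minden más kérést elutasít (a szervernév és a hálózati cím itt is csak feltételezett példa):

[.programlisting]
....
server ntplocal.minta.com prefer
driftfile /var/db/ntp.drift

# Alapértelmezés szerint mindenkit elutasítunk:
restrict default ignore
# A beállított külsõ szervert viszont engedélyezzük:
restrict ntplocal.minta.com
# A belsõ hálózat gépei egyeztethetik az órájukat,
# de a szervert nem módosíthatják:
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
....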
=== Az NTP futtatása Úgy tudjuk az NTP szervert elindítani a rendszerünkkel együtt, ha az [.filename]#/etc/rc.conf# állományban szerepeltetjük az `ntpd_enable="YES"` sort. Ha az man:ntpd[8] számára további beállításokat is át akarunk adni, akkor az [.filename]#/etc/rc.conf# állományban adjuk meg az `ntpd_flags` paramétert. Ha a gépünk újraindítása nélkül akarjuk elindítani a szervert, akkor az `ntpd` parancsot adjuk ki az [.filename]#/etc/rc.conf# állományban az `ntpd_flags` változóhoz megadott paraméterekkel. Mint például: [source,shell] .... # ntpd -p /var/run/ntpd.pid .... === Az ntpd használata ideiglenes internetkapcsolattal Az man:ntpd[8] program megfelelõ mûködéséhez nem szükséges állandó internetkapcsolat. Ha azonban igény szerinti tárcsázással építünk fel ideiglenes kapcsolatot, akkor érdemes letiltani az NTP forgalmát, nehogy feleslegesen aktiválja vagy tartsa életben a vonalat. Ha PPP típusú kapcsolatunk van, akkor az [.filename]#/etc/ppp/ppp.conf# állományban a `filter` direktívával tudjuk ezt leszabályozni. Például: [.programlisting] .... set filter dial 0 deny udp src eq 123 # Nem engedjük az NTP által küldött adatoknak, hogy tárcsázást # kezdeményezzenek: set filter dial 1 permit 0 0 set filter alive 0 deny udp src eq 123 # Nem engedjük az NTP adatainak, hogy fenntartsák a kapcsolatot: set filter alive 1 deny udp dst eq 123 set filter alive 2 permit 0/0 0/0 .... Mindezekrõl részletesebb felvilágosítást a man:ppp[8] man oldal `PACKET FILTERING` címû szakaszában és a [.filename]#/usr/shared/examples/ppp/# könyvtárban található példákban kaphatunk. [NOTE] ==== Egyes internet-szolgáltatók blokkolják az alacsonyabb portokat, ezáltal az NTP nem használható, mivel a válaszok nem fogják elérni a gépünket. ==== === További olvasnivalók Az NTP szerver dokumentációja HTML formátumban a [.filename]#/usr/shared/doc/ntp/# könyvtárban található.
[[network-syslogd]] == Távoli gépek naplózása `syslogd` használatával A rendszernaplókkal kapcsolatos mûveletek egyaránt fontosak a biztonság és a karbantartás szempontjából. Ha közepes vagy nagyobb méretû, esetleg különbözõ típusú hálózatokban adminisztrálunk több gépet, akkor könnyen átláthatatlanná válhat a naplók rendszeres felügyelete. Ilyen helyzetekben a távoli naplózás beállításával az egész folyamatot sokkal kényelmesebbé tehetjük. Némileg képesek vagyunk enyhíteni a naplóállományok kezelésének terhét, ha egyetlen központi szerverre küldjük át az adatokat. Ekkor a FreeBSD alaprendszerében megtalálható alapeszközökkel, mint például a man:syslogd[8] vagy a man:newsyslog[8] felhasználásával egyetlen helyen be tudjuk állítani a naplók összegyûjtését, összefésülését és cseréjét. A most következõ példa konfigurációban az `A` gép, a `naploszerver.minta.com` fogja gyûjteni a helyi hálózatról érkezõ naplóinformációkat. A `B` gép, a `naplokliens.minta.com` pedig a szervernek küldi a naplózandó adatokat. Éles környezetben mind a két gépnek rendelkeznie kell megfelelõ DNS bejegyzésekkel, vagy legalább szerepelniük kell egymás [.filename]#/etc/hosts# állományaiban. Ha ezt elmulasztjuk, a szerver nem lesz hajlandó adatokat fogadni. === A naplószerver beállítása A naplószerverek olyan gépek, amelyeket úgy állítottunk be, hogy naplózási információkat tudjanak fogadni távoli számítógépekrõl. A legtöbb esetben így egyszerûsíteni tudunk a konfiguráción, vagy olykor egyszerûen csak hasznos, ha ezt a megoldást alkalmazzuk. Függetlenül attól, hogy miért használjuk, a továbblépés elõtt néhány elõkészületet meg kell tennünk. 
Egy rendesen beállított naplószervernek legalább a következõ követelményeknek kell eleget tennie: * az 514-es UDP portot engedélyezni kell mind a kliensen, mind pedig a szerveren futó tûzfal szabályrendszerében; * a man:syslogd[8] képes legyen a távoli kliens gépekrõl érkezõ üzeneteket fogadni; * a man:syslogd[8] szervernek és az összes kliensnek rendelkeznie kell érvényes DNS (közvetlen és inverz) bejegyzésekkel vagy szerepelnie kell az [.filename]#/etc/hosts# állományban. A naplószerver beállításához mindegyik klienst fel kell vennünk az [.filename]#/etc/syslog.conf# állományba, valamint meg kell adnunk a megfelelõ funkciót (facility): [.programlisting] .... +naplokliens.minta.com *.* /var/log/naplokliens.log .... [NOTE] ==== A man:syslog.conf[5] man oldalán megtalálhatjuk a különbözõ támogatott és elérhetõ _funkciókat_. ==== Miután beállítottuk, az összes adott funkcióhoz tartozó üzenet az elõbb megadott állományba ([.filename]#/var/log/naplokliens.log#) fog kerülni. A szerveren továbbá meg kell adnunk a következõ sort az [.filename]#/etc/rc.conf# állományban: [.programlisting] .... syslogd_enable="YES" syslogd_flags="-a naplokliens.minta.com -vv" .... Az elsõ sorral engedélyezzük a `syslogd` elindítását a rendszerindítás során, majd a második sorral engedélyezzük, hogy a kliens naplózni tudjon a szerverre. Itt még látható a `-vv` opció, amellyel a naplózott üzenetek részletességét tudjuk növelni. Ennek nagyon fontos a szerepe a naplózási funkciók behangolásakor, mivel így a rendszergazdák pontosan láthatják milyen típusú üzenetek milyen funkcióval kerültek rögzítésre a naplóban. Befejezésképpen hozzuk létre a naplóállományt. Teljesen mindegy, hogy erre milyen megoldást alkalmazunk, például a man:touch[1] remekül megfelel: [source,shell] .... # touch /var/log/naplokliens.log .... Ezután indítsuk újra és ellenõrizzük a `syslogd` démont: [source,shell] .... # /etc/rc.d/syslogd restart # pgrep syslog .... 
Ha válaszul megkapjuk a futó démon azonosítóját, akkor sikerült újraindítanunk, és elkezdhetjük a kliens beállítását. Ha valamiért nem indult volna újra a szerver, az [.filename]#/var/log/messages# állományból próbáljuk meg kideríteni az okát. === A naplókliens beállítása A naplókliens az a gép, amely egy helyi naplópéldány karbantartása mellett továbbküldi a naplózandó információkat egy naplószervernek. Hasonlóan a naplószerverekhez, a klienseknek is teljesíteniük kell bizonyos alapvetõ elvárásokat: * a man:syslogd[8] démon küldjön bizonyos típusú üzeneteket a naplószervernek, amely ezeket pedig képes legyen fogadni; * a hozzá tartozó tûzfal engedje át a forgalmat az 514-es UDP porton; * rendelkezzen mind közvetlen, mind pedig inverz DNS bejegyzéssel, vagy szerepeljen az [.filename]#/etc/hosts# állományban. A kliens beállítása sokkal egyszerûbb a szerverhez képest. A kliensen adjuk hozzá a következõ sorokat az [.filename]#/etc/rc.conf# állományhoz: [.programlisting] .... syslogd_enable="YES" syslogd_flags="-s -vv" .... A szerver beállításaihoz hasonlóan itt is engedélyezzük a `syslogd` démont és megnöveljük a naplózott üzenetek részletességét. A `-s` kapcsolóval pedig megakadályozzuk, hogy a kliens más gépekrõl is hajlandó legyen naplóüzeneteket elfogadni. A funkciók a rendszernek azon részét írják le, amelyhez létrejön az adott üzenet. Tehát például az `ftp` és `ipfw` egyaránt ilyen funkciók. Amikor keletkezik egy naplóüzenet valamelyikükhöz, általában megjelenik a nevük. A funkciókhoz tartozik még egy prioritás vagy szint is, amellyel az adott üzenet fontosságát jelzik. Ezek közül a leggyakoribb a `warning` (mint "figyelmeztetés") és az `info` (mint "információ"). A használható funkciók és a hozzájuk tartozó prioritások teljes listáját a man:syslog[3] man oldalán olvashatjuk. A naplószervert meg kell adnunk a kliens [.filename]#/etc/syslog.conf# állományában.
Itt a `@` szimbólummal jelezzük, hogy az adatokat egy távoli szerverre szeretnénk továbbküldeni, valahogy így: [.programlisting] .... *.* @naploszerver.minta.com .... Ezután a beállítás érvényesítéséhez újra kell indítanunk a `syslogd` démont: [source,shell] .... # /etc/rc.d/syslogd restart .... A man:logger[1] használatával próbáljuk ki a kliensrõl a naplóüzenetek hálózaton keresztüli küldését, és küldjünk valamit a `syslogd` démonnak: [source,shell] .... # logger "Udvozlet a naplokliensrol" .... A parancs kiadása után az üzenetnek mind a kliens, mind pedig a szerver [.filename]#/var/log/messages# állományában meg kell jelennie. === Hibakeresés Elõfordulhat, hogy a naplószerver valamiért nem kapja meg rendesen az üzeneteket, ezért valamilyen módon meg kell keresnünk a hiba okát. Ez több minden lehet, de általában a két leggyakoribb ok valamilyen hálózati kapcsolódási vagy DNS-beállítási hiba. Ezek teszteléséhez gondoskodjunk róla, hogy a gépek kölcsönösen elérhetõek egymásról az [.filename]#/etc/rc.conf# állományban megadott hálózati nevük szerint. Ha ezzel látszólag minden rendben van, akkor próbáljuk meg módosítani a `syslogd_flags` értékét az [.filename]#/etc/rc.conf# állományban. A most következõ példában a [.filename]#/var/log/naplokliens.log# teljesen üres, illetve a [.filename]#/var/log/messages# állomány semmilyen hibára utaló okot nem tartalmaz. A hibakereséshez még több információt a `syslogd_flags` átírásával tudunk kérni: [.programlisting] .... syslogd_flags="-d -a naploklien.minta.com -vv" .... Természetesen ne felejtsük el újraindítani a szervert: [source,shell] .... # /etc/rc.d/syslogd restart .... A démon újraindítása után közvetlenül az alábbiakhoz hasonló üzenetek árasztják el a képernyõt: [source,shell] ....
logmsg: pri 56, flags 4, from naploszerver.minta.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from naploszerver.minta.com, msg syslogd: kernel boot file is /boot/kernel/kernel Logging to FILE /var/log/messages syslogd: kernel boot file is /boot/kernel/kernel cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name naplokliens.minta.com; rejected in rule 0 due to name mismatch. .... A diagnosztikai üzeneteket végigolvasva nyilvánvalóvá válik, hogy azért dobja el az üzeneteket a szerver, mert nem megfelelõ a gép neve. Miután átnézzük a beállításainkat, felfedezhetünk az [.filename]#/etc/rc.conf# állományban egy apró hibát: [.programlisting] .... syslogd_flags="-d -a naploklien.minta.com -vv" .... Láthatjuk, hogy ebben a sorban a `naplokliens` névnek kellene szerepelnie, nem pedig a `naploklien` névnek. Miután elvégeztük a szükséges javításokat, indítsuk újra a szervert és vizsgáljuk meg az eredményt: [source,shell] .... # /etc/rc.d/syslogd restart logmsg: pri 56, flags 4, from naploszerver.minta.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from naploszerver.minta.com, msg syslogd: kernel boot file is /boot/kernel/kernel syslogd: kernel boot file is /boot/kernel/kernel logmsg: pri 166, flags 17, from naploszerver.minta.com, msg Dec 10 20:55:02 naploszerver.minta.com syslogd: exiting on signal 2 cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name naplokliens.minta.com; accepted in rule 0. logmsg: pri 15, flags 0, from naplokliens.minta.com, msg Dec 11 02:01:28 pgj: Masodik teszt uzenet Logging to FILE /var/log/naplokliens.log Logging to FILE /var/log/messages .... Itt már minden üzenet rendben megérkezett és a megfelelõ állományokba került (a [.filename]#/var/log/messages# a kliensen, és a [.filename]#/var/log/naplokliens.log# a szerveren).
=== Biztonsági megfontolások Mint minden hálózati szolgáltatás esetén, ilyenkor is figyelembe kell vennünk bizonyos biztonsági megfontolásokat a tényleges konfiguráció kiépítése elõtt. Olykor elõfordulhat, hogy a naplók különbözõ kényes információkat tartalmaznak, mint például a helyi rendszeren futó szolgáltatások nevei, felhasználói nevek vagy egyéb konfigurációs adatok. A kliens és a szerver között hálózaton utazó adatok viszont se nem titkosítottak, se nem jelszóval védettek. Ha titkosítást szeretnénk használni, akkor javasoljuk például a package:security/stunnel[] portot, amellyel egy titkosított tunnelen keresztül tudunk adatokat küldeni a hálózaton. A helyi rendszer biztonságának szavatolása is fontos lehet. A naplók sem a használat során, sem pedig a lecserélésük után nem kerülnek titkosításra. Emiatt a helyi rendszerhez hozzáférõ felhasználók kedvükre nyerhetnek ki belõlük a rendszerünket érintõ konfigurációs információkat. Ezért ilyenkor nagyon fontos, hogy mindig a megfelelõ engedélyeket állítsuk be a naplókra. A man:newsyslog[8] segédprogrammal be tudjuk állítani a frissen létrehozott és a lecserélt naplók engedélyeit. Tehát könnyen megakadályozhatjuk a helyi felhasználók kíváncsiskodását, ha itt a naplók engedélyeit például a `600` kóddal adjuk meg. diff --git a/documentation/content/it/books/handbook/network-servers/_index.adoc b/documentation/content/it/books/handbook/network-servers/_index.adoc index d3d0bdb522..73c2b1640a 100644 --- a/documentation/content/it/books/handbook/network-servers/_index.adoc +++ b/documentation/content/it/books/handbook/network-servers/_index.adoc @@ -1,2398 +1,2397 @@ --- title: Capitolo 27. Server di rete part: Parte IV. 
Comunicazione di Rete prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 32 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Server di rete :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 27 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Sinossi Questo capitolo coprirà alcuni dei servizi di rete usati più di frequente sui sistemi UNIX(R). Fra gli argomenti toccati, ci saranno l'installazione, la configurazione, il test e la manutenzione di molti tipi diversi di servizi di rete. Per vostro beneficio in tutto il capitolo saranno inclusi file di configurazione di esempio. Dopo aver letto questo capitolo, sarai in grado di: * Gestire il demone inetd. * Installare un file system di rete. * Installare un server NIS per condividere account utenti. * Installare impostazioni automatiche di rete usando DHCP. * Installare un server di risoluzione dei nomi. * Installare il server HTTP Apache. * Installare un server File Transfer Protocol (FTP). * Installare un file server e server di stampa per client Windows(R) usando Samba. * Sincronizzare la data e l'ora ed installare un time server, col protocollo NTP.
Prima di leggere questo capitolo, dovresti: * Comprendere le basi dell'organizzazione degli script [.filename]#/etc/rc#. * Avere familiarità con la terminologia di rete di base. * Sapere come installare software aggiuntivo di terze parti (crossref:ports[ports,Installazione delle Applicazioni. Port e Package]). [[network-inetd]] == Il "Super-Server" inetd [[network-inetd-overview]] === Uno sguardo d'insieme man:inetd[8] viene talvolta definito l'"Internet Super-Server" perché gestisce le connessioni verso molti servizi. Quando una connessione viene ricevuta da inetd, questo determina a quale programma la connessione sia destinata, esegue quel particolare processo e gli affida la socket (il programma è invocato con la socket del servizio come descrittore di standard input, output ed error). Eseguire inetd per server dal carico non troppo alto può ridurre il carico complessivo di sistema, rispetto all'esecuzione individuale di ogni demone in modalità stand-alone. Principalmente, inetd è usato per lanciare altri demoni, ma molti protocolli semplici sono gestiti direttamente, come ad esempio i protocolli chargen, auth, e daytime. Questa sezione coprirà le basi della configurazione di inetd attraverso le opzioni da linea di comando ed il suo file di configurazione, [.filename]#/etc/inetd.conf#. [[network-inetd-settings]] === Impostazioni inetd viene inizializzato attraverso il sistema man:rc[8]. L'opzione `inetd_enable` è impostata a `NO` di default, ma può essere attivata da sysinstall durante l'installazione, a seconda della configurazione scelta dall'utente. Inserendo: [.programlisting] .... inetd_enable="YES" .... o [.programlisting] .... inetd_enable="NO" .... in [.filename]#/etc/rc.conf# si abiliterà o meno la partenza di inetd al boot. Il comando: [source,shell] .... # /etc/rc.d/inetd rcvar .... può essere utilizzato per mostrare le impostazioni attive al momento.
Additionally, various command-line options can be passed to inetd via the `inetd_flags` option.

[[network-inetd-cmdline]]
=== Command-Line Options

Like most network servers, inetd has a number of options that can be passed to it to modify its behavior. The full list of options is:

inetd synopsis:

`inetd [-d] [-l] [-w] [-W] [-c maximum] [-C rate] [-a address | hostname] [-p filename] [-R rate] [configuration file]`

Options can be passed to inetd using the `inetd_flags` option in [.filename]#/etc/rc.conf#. By default, `inetd_flags` is set to `-wW -C 60`, which turns on TCP wrapping for inetd's services and prevents any single IP address from requesting any service more than 60 times per minute.

Novice users may be pleased to note that these parameters usually do not need to be modified, although it should be mentioned that the rate-limiting options are only useful if you find that you are receiving an excessive number of connections. The full list of man:inetd[8] options can be found in the man:inetd[8] manual page.

-c maximum::
Specifies the maximum number of simultaneous invocations of each service; the default is unlimited. May be overridden on a per-service basis with the `max-child` parameter.

-C rate::
Specifies the maximum number of times a service can be invoked from a single IP address in one minute; the default is unlimited. May be overridden on a per-service basis with the `max-connections-per-ip-per-minute` parameter.

-R rate::
Specifies the maximum number of times a service can be invoked in one minute; the default is 256. A rate of 0 allows an unlimited number of invocations.

-s maximum::
Specifies the maximum number of times a service can be invoked from a single IP address at any one time; the default is unlimited. May be overridden on a per-service basis with the `max-child-per-ip` parameter.
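Combining these options, a policy stricter than the default could be expressed in [.filename]#/etc/rc.conf# like this (the limits shown are arbitrary examples, not recommendations):

[.programlisting]
....
inetd_enable="YES"
# TCP wrapping on, at most 30 connections per IP address per minute,
# and at most 10 simultaneous instances of any one service
inetd_flags="-wW -C 30 -c 10"
....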
[[network-inetd-conf]]
=== [.filename]#inetd.conf#

Configuration of inetd is done via the file [.filename]#/etc/inetd.conf#.

When a modification is made to [.filename]#/etc/inetd.conf#, inetd can be forced to re-read its configuration file by running the command:

[[network-inetd-reread]]
.Reloading the inetd Configuration File
[example]
====
[source,shell]
....
# /etc/rc.d/inetd reload
....
====

Each line of the configuration file specifies an individual daemon. Comments in the file are preceded by a "#". The format of each entry in [.filename]##/etc/inetd.conf## is as follows:

[.programlisting]
....
service-name
socket-type
protocol
{wait|nowait}[/max-child[/max-connections-per-ip-per-minute]]
user[:group][/login-class]
server-program
server-program-arguments
....

An example entry for the man:ftpd[8] daemon using IPv4:

[.programlisting]
....
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l
....

service-name::
This is the service name of the daemon. It must correspond to a service listed in [.filename]#/etc/services#. This determines which port inetd must listen on. If a new service is being created, it must be placed in [.filename]#/etc/services# first.

socket-type::
Either `stream`, `dgram`, `raw`, or `seqpacket`. `stream` must be used for connection-based daemons, such as TCP, while `dgram` is used for daemons utilizing the UDP transport protocol.

protocol::
One of the following:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Protocol | Explanation

|tcp, tcp4
|TCP IPv4

|udp, udp4
|UDP IPv4

|tcp6
|TCP IPv6

|udp6
|UDP IPv6

|tcp46
|Both TCP IPv4 and v6

|udp46
|Both UDP IPv4 and v6
|===

{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]::
`wait|nowait` indicates whether or not the daemon invoked by inetd is able to handle its own socket.
The `dgram` socket type must use the `wait` option, while stream socket daemons, which are usually multi-threaded, should use `nowait`. `wait` usually hands off multiple sockets to a single daemon, while `nowait` spawns a child daemon for each new socket.
+
The maximum number of child daemons inetd may spawn is set via the `max-child` option. If a limit of ten instances of a particular daemon is needed, a `/10` would be placed after `nowait`. Specifying `/0` allows an unlimited number of children.
+
In addition to `max-child`, two other options limiting the maximum number of connections from a single IP address to a particular daemon may be enabled. `max-connections-per-ip-per-minute` limits the number of connections from any particular IP address per minute; for example, a value of ten would limit any particular IP address to ten connection attempts to a particular service per minute. `max-child-per-ip` limits the number of children that can be started on behalf of any single IP address at any moment. These options are useful to prevent intentional or unintentional excessive resource consumption and Denial of Service (DoS) attacks against a machine.
+
In this field, either `wait` or `nowait` is mandatory. `max-child`, `max-connections-per-ip-per-minute`, and `max-child-per-ip` are optional.
+
A stream-type multi-threaded daemon without any `max-child` or `max-connections-per-ip-per-minute` limits would simply be: `nowait`.
+
The same daemon with a limit of ten daemons would read: `nowait/10`.
+
Additionally, the same setup with a limit of twenty connections per IP address per minute and a maximum total of ten child daemons would read: `nowait/10/20`.
+
These options are all utilized by the default settings of the man:fingerd[8] daemon, as seen below:
+
[.programlisting]
....
finger  stream  tcp     nowait/3/10  nobody  /usr/libexec/fingerd  fingerd -s
....
+
Finally, an example of this field allowing 100 children in total, with a maximum of 5 per single IP address, would read: `nowait/100/0/5`.

user::
This is the username that the particular daemon should run as. Most commonly, daemons run as the `root` user. For security purposes, it is common to find some servers running as the `daemon` user, or the least privileged `nobody` user.

server-program::
The full path of the daemon to be executed when a connection is received. If the daemon is a service provided by inetd internally, `internal` should be used.

server-program-arguments::
This works in conjunction with `server-program` by specifying the arguments, starting with `argv[0]`, passed to the daemon on invocation. If `mydaemon -d` is the command line, `mydaemon -d` would be the value of `server-program-arguments`. Again, if the daemon is an internal service, use `internal` here.

[[network-inetd-security]]
=== Security

Depending on the choices made at install time, many of inetd's services may be enabled by default. If there is no apparent need for a particular daemon, consider disabling it. Place a "#" at the beginning of the daemon's line in [.filename]##/etc/inetd.conf##, and then <<network-inetd-reread,reload the inetd configuration>>. Some daemons, such as fingerd, may not be desired at all, because they provide an attacker with information that may prove useful.

Some daemons were not created with security in mind and have long, or nonexistent, timeouts. This allows an attacker to slowly send connections to a particular daemon, thus saturating available resources. It may be a good idea to place `max-connections-per-ip-per-minute` and `max-child` or `max-child-per-ip` limitations on certain daemons if you find that you have too many connections.

By default, TCP wrapping is turned on.
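With TCP wrapping on, per-daemon access restrictions can be written in [.filename]#/etc/hosts.allow#. The following is a minimal sketch (the daemon chosen and the network address are examples only, not a recommended policy):

[.programlisting]
....
# Allow finger requests only from one example network,
# and refuse them from everyone else.
fingerd : 192.168.1.0/255.255.255.0 : allow
fingerd : ALL : deny
....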
Consult the man:hosts_access[5] manual page for more information on placing TCP restrictions on various daemons invoked by inetd.

[[network-inetd-misc]]
=== Miscellaneous

daytime, time, echo, discard, chargen, and auth are all internal services of inetd.

The auth service provides identification network services and is configurable to a certain degree, while the others can only be turned on or off.

Consult the man:inetd[8] manual page for more in-depth information.

[[network-nfs]]
== Network File System (NFS)

Among the many different file systems that FreeBSD supports is the Network File System, also known as NFS. NFS allows a system to share directories and files with other systems over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files.

Some of the most notable benefits that NFS provides are:

* Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network.
* There is no need for users to have separate home directories on every machine on the network. Home directories can be set up on the NFS server and made available throughout the network.
* Storage devices such as floppy disks, CDROM drives, and Zip(R) drives can be used by other machines on the network. This may reduce the number of removable storage devices throughout the network.

=== How NFS Works

NFS consists of at least two parts: a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running.

The server has to be running the following daemons:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Daemon | Description

|nfsd
|The NFS daemon, which services requests from NFS clients.

|mountd
|The NFS mount daemon, which carries out the requests that man:nfsd[8] passes on to it.
|rpcbind
|This daemon allows NFS clients to discover which port the NFS server is using.
|===

The client can also run a daemon, known as nfsiod. The nfsiod daemon services the requests from the NFS server. This is optional; it helps improve performance, but is not required for normal and correct operation. See the man:nfsiod[8] manual page for more information.

[[network-configuring-nfs]]
=== Configuring NFS

NFS configuration is a relatively straightforward process. The processes that need to be running can all start at boot time with a few modifications to your [.filename]#/etc/rc.conf# file.

On the NFS server, make sure that the following options are configured in the [.filename]#/etc/rc.conf# file:

[.programlisting]
....
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_flags="-r"
....

mountd is automatically run whenever the NFS server is enabled.

On the client, make sure this option is enabled in [.filename]#/etc/rc.conf#:

[.programlisting]
....
nfs_client_enable="YES"
....

The [.filename]#/etc/exports# file specifies which file systems NFS should export (sometimes referred to as "share"). Each line in [.filename]#/etc/exports# specifies a file system to be exported and which machines have access to that file system. Along with the machines that have access, access options may also be specified. There are many such options that can be used in this file, but only a few will be mentioned here. You can easily discover the other options by reading over the man:exports[5] manual page.

Here are a few example [.filename]#/etc/exports# entries.

The following examples give an idea of how to export file systems, although the settings may be different depending on your environment and network configuration.
For example, the following line exports the [.filename]#/cdrom# directory to three example machines that have the same domain name as the server (hence the lack of a domain name for each) or have entries in your [.filename]#/etc/hosts# file. The `-ro` flag makes the exported file system read-only. With this flag, the remote system will not be able to write any changes to the exported file system.

[.programlisting]
....
/cdrom -ro host1 host2 host3
....

The following line exports [.filename]#/home# to three hosts identified by IP address. This is a useful setup if you have a private network without a DNS server configured. Optionally, the [.filename]#/etc/hosts# file could be configured for internal hostnames; please review man:hosts[5] for more information. The `-alldirs` flag allows subdirectories to be mount points. In other words, it will not mount the subdirectories, but will permit clients to mount only the directories that they require or need.

[.programlisting]
....
/home  -alldirs  10.0.0.2 10.0.0.3 10.0.0.4
....

The following line exports [.filename]#/a# so that two clients from different domains may access the file system. The `-maproot=root` flag allows the `root` user on the remote system to write data to the exported file system as `root`. If the `-maproot=root` flag is not specified, then even if a user has `root` access on the remote system, they will not be able to modify files on the exported file system.

[.programlisting]
....
/a  -maproot=root  host.example.com box.example.org
....

In order for a client to access an exported file system, the client must have permission to do so. Make sure the client is listed in your [.filename]#/etc/exports# file.

In [.filename]#/etc/exports#, each line represents the export information for one file system to one host. A remote host can only be specified once per file system, and may only have one default entry.
For example, assume that [.filename]#/usr# is a single file system. The following [.filename]#/etc/exports# would be invalid:

[.programlisting]
....
# Invalid when /usr is one file system
/usr/src   client
/usr/ports client
....

One file system, [.filename]#/usr#, has two lines specifying exports to the same host, `client`. The correct format for this situation is:

[.programlisting]
....
/usr/src /usr/ports  client
....

The properties of one file system exported to a given host must all occur on one line. Lines without a client specified are treated as a single host. This limits how you can export file systems, but for most people this is not an issue.

The following is an example of a valid export list, where [.filename]#/usr# and [.filename]#/exports# are local file systems:

[.programlisting]
....
# Export src and ports to client01 and client02, but only
# client01 has root privileges on it
/usr/src /usr/ports -maproot=root    client01
/usr/src /usr/ports               client02
# The client machines have root and can mount anywhere
# on /exports. Anyone in the world can mount /exports/obj read-only
/exports -alldirs -maproot=root    client01 client02
/exports/obj -ro
....

The mountd daemon must be forced to re-read the [.filename]#/etc/exports# file whenever it has been modified, so the changes can take effect. This can be accomplished either by sending a HUP signal to the running `mountd` process:

[source,shell]
....
# kill -HUP `cat /var/run/mountd.pid`
....

or by invoking the mountd man:rc[8] script with the appropriate parameter:

[source,shell]
....
# /etc/rc.d/mountd onereload
....

Please refer to crossref:config[configtuning-initial,Initial Configuration] for more information about using rc scripts.

Alternatively, a reboot will make FreeBSD set everything up properly. A reboot is not necessary, though.
Executing the following commands as `root` should start everything up.

On the NFS server:

[source,shell]
....
# rpcbind
# nfsd -u -t -n 4
# mountd -r
....

On the NFS client:

[source,shell]
....
# nfsiod -n 4
....

Now everything should be ready to actually mount a remote file system. In these examples the server's name will be `server` and the client's name will be `client`. If you only want to temporarily mount a remote file system, or just want to test the configuration, execute a command like this as `root` on the client:

[source,shell]
....
# mount server:/home /mnt
....

This will mount the [.filename]#/home# directory on the server at [.filename]#/mnt# on the client. If everything is set up correctly, you should be able to enter [.filename]#/mnt# on the client and see all the files that are on the server.

If you want to automatically mount a remote file system each time the computer boots, add the file system to the [.filename]#/etc/fstab# file. Here is an example:

[.programlisting]
....
server:/home	/mnt	nfs	rw	0	0
....

The man:fstab[5] manual page lists all the available options.

=== Locking

Some applications (e.g., mutt) require file locking to operate correctly. In the case of NFS, rpc.lockd can be used for file locking. To enable it, add the following to the [.filename]#/etc/rc.conf# file on both client and server (it is assumed that the NFS client and server are configured already):

[.programlisting]
....
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
....

Start it with:

[source,shell]
....
# /etc/rc.d/nfslocking start
....

If real locking between the NFS clients and the NFS server is not required, it is possible to let the NFS client do locking locally by passing the `-L` option to man:mount_nfs[8]. Further details can be found in the man:mount_nfs[8] manual page.

=== Practical Uses

NFS has many practical uses.
Some of the more common ones are listed below:

* Set several machines to share a CDROM or other media among them. This is a cheaper and often more convenient method of installing software on multiple machines.
* On large networks, it might be more convenient to configure a central NFS server in which to store all the user home directories. These home directories can then be exported over the network so that users always have the same home directory, regardless of which workstation they log in to.
* Several machines could have a common [.filename]#/usr/ports/distfiles# directory. That way, when you need to install a port on several machines, you can quickly access the source without downloading it on each machine.

[[network-amd]]
=== Automatic Mounts with amd

man:amd[8] (the automatic mounter daemon) automatically mounts a remote file system whenever a file or directory within that file system is accessed. File systems that are inactive for a period of time will also be automatically unmounted by amd. Using amd provides a simple alternative to permanent mounts, which are usually listed in [.filename]#/etc/fstab#.

amd operates by attaching itself as an NFS server to the [.filename]#/host# and [.filename]#/net# directories. When a file is accessed within one of these directories, amd looks up the corresponding remote mount and automatically mounts it. [.filename]#/net# is used to mount an exported file system from an IP address, while [.filename]#/host# is used to mount an export from a remote hostname.

An access to a file within [.filename]#/host/foobar/usr# would tell amd to attempt to mount the [.filename]#/usr# export on the host `foobar`.

.Mounting an Export with amd
[example]
====
You can view the available mounts of a remote host with the `showmount` command.
For example, to see the mounts of a host named `foobar`, you can use:

[source,shell]
....
% showmount -e foobar
Exports list on foobar:
/usr                               10.10.10.0
/a                                 10.10.10.0
% cd /host/foobar/usr
....
====

As seen in the example, `showmount` shows [.filename]#/usr# as an export. When changing directories to [.filename]#/host/foobar/usr#, amd attempts to resolve the hostname `foobar` and automatically mounts the desired export.

amd can be started by the startup scripts by placing the following line in [.filename]#/etc/rc.conf#:

[.programlisting]
....
amd_enable="YES"
....

Additionally, custom flags can be passed to amd via the `amd_flags` option. By default, `amd_flags` is set to:

[.programlisting]
....
amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map"
....

The [.filename]#/etc/amd.map# file defines the default options with which exports are mounted. The [.filename]#/etc/amd.conf# file defines some of the more advanced features of amd.

Consult the man:amd[8] and man:amd.conf[8] manual pages for more information.

[[network-nfs-integration]]
=== Problems Integrating with Other Systems

Certain Ethernet adapters for PC systems have limitations which can lead to serious network problems, particularly with NFS. This difficulty is not specific to FreeBSD, but FreeBSD systems are affected by it.

The problem nearly always occurs when (FreeBSD) PC systems are networked with high-performance workstations, such as those made by Silicon Graphics, Inc., and Sun Microsystems, Inc. The NFS mount will work, and some operations may succeed, but suddenly the server will seem to become unresponsive to the client, even though requests to and from other systems continue to be processed. This happens to the client system, whether the client is the FreeBSD system or the workstation.
On many systems, there is no way to shut down the client cleanly once this problem has manifested itself. The only solution is often to reset the client, because the NFS situation cannot be resolved.

Though the "correct" solution is to get a higher-performance and higher-capacity Ethernet adapter, there is a simple workaround that will allow satisfactory operation. If the FreeBSD system is the _server_, include the option `-w=1024` on the mount from the client. If the FreeBSD system is the _client_, then mount the NFS file system with the option `-r=1024`. These options may be specified using the fourth field of the [.filename]#fstab# entry on the client for automatic mounts, or by using the `-o` parameter of the man:mount[8] command for manual mounts.

It should be noted that there is a different problem, sometimes mistaken for this one, which can occur when the NFS server and client are on different networks. If that is the case, _make certain_ that your routers are routing the necessary UDP information, or you will not get anywhere, no matter what else you are doing.

In the following examples, `fastws` is the host (interface) name of a high-performance workstation, and `freebox` is the host (interface) name of a FreeBSD system with a lower-performance Ethernet adapter. Also, [.filename]#/sharedfs# will be the exported file system (see man:exports[5]), and [.filename]#/project# will be the mount point on the client for the exported file system. In all cases, note that the additional options `hard` or `soft` and `bg` may be desirable in your application.

Examples for the FreeBSD system (`freebox`) as the client, in [.filename]#/etc/fstab# on `freebox`:

[.programlisting]
....
fastws:/sharedfs /project nfs rw,-r=1024 0 0
....

As a manual mount command on `freebox`:

[source,shell]
....
# mount -t nfs -o -r=1024 fastws:/sharedfs /project
....

Examples for the FreeBSD system as the server, in [.filename]#/etc/fstab# on `fastws`:

[.programlisting]
....
freebox:/sharedfs /project nfs rw,-w=1024 0 0
....

As a manual mount command on `fastws`:

[source,shell]
....
# mount -t nfs -o -w=1024 freebox:/sharedfs /project
....

Nearly any 16-bit Ethernet adapter will allow operation without the above restrictions on the read or write size.

For anyone who cares, here is what happens when the failure occurs, which also explains why it is unrecoverable. NFS typically works with a "block" size of 8 K (though it may do fragments of smaller sizes). Since the maximum Ethernet packet size is around 1500 bytes, the NFS "block" gets split into multiple Ethernet packets, even though it is still a single unit to the upper-level code, and must be received, assembled, and _acknowledged_ as a unit. The high-performance workstation can pump out the packets that comprise the NFS unit one right after the other, packed as closely together as the standard allows. On the lower-capacity card, the later packets overrun the earlier packets of the same unit before they can be transferred to the host, so the unit as a whole cannot be reconstructed or acknowledged. As a result, the workstation will time out and try again, but it will try again with the same complete 8 K unit, and the process will be repeated, ad infinitum.

By keeping the unit size below the Ethernet packet size limitation, we ensure that any complete Ethernet packet received can be acknowledged individually, avoiding the deadlock situation.

Overruns may still occur when a high-performance workstation is slamming data out to a PC system, but with the better cards, such overruns are not guaranteed on NFS "units". When an overrun occurs, the units affected will be retransmitted, and there is a fair chance that they will be received, assembled, and acknowledged.
[[network-nis]]
== Network Information System (NIS/YP)

=== What Is It?

NIS, which stands for Network Information Services, was developed by Sun Microsystems to centralize administration of UNIX(R) (originally SunOS(TM)) systems. It has now essentially become an industry standard; all UNIX(R)-like systems (Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, FreeBSD, etc.) support NIS.

NIS was formerly known as Yellow Pages, but because of trademark issues, Sun changed the name. The old term (and yp) is still often seen.

It is an RPC-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location.

It is similar to the Windows NT(R) domain system; although the internal implementations of the two are not at all similar, the basic functionality can be compared.

=== Terms/Processes You Should Know

There are several terms and several important user processes that you will come across when attempting to implement NIS on FreeBSD, whether you are trying to create an NIS server or act as an NIS client:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term | Description

|NIS domain name
|An NIS master server and all of its clients (including its slave servers) have an NIS domain name. Similar to a Windows NT(R) domain name, the NIS domain name has nothing to do with DNS.

|rpcbind
|Must be running in order to enable RPC (Remote Procedure Call, a network protocol used by NIS). If rpcbind is not running, it will be impossible to run an NIS server, or to act as an NIS client.

|ypbind
|"Binds" an NIS client to its NIS server.
It will take the NIS domain name from the system, and, using RPC, connect to the server. ypbind is the core of client-server communication in an NIS environment; if ypbind dies on a client machine, it will not be able to access the NIS server.

|ypserv
|Should only be running on NIS servers; this is the NIS server process itself. If man:ypserv[8] dies, the server will no longer be able to respond to NIS requests (hopefully, there is a slave server to take over for it). There are some implementations of NIS (but not the FreeBSD one) that do not try to reconnect to another server if the server they were using dies. Often, the only thing that helps in this case is to restart the server process (or even the whole server) or the ypbind process on the client.

|rpc.yppasswdd
|Another process that should only be running on NIS master servers; this is a daemon that allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the NIS master server and change their passwords there.
|===

=== How Does It Work?

There are three types of hosts in an NIS environment: master servers, slave servers, and clients. Servers act as a central repository for host configuration information. Master servers hold the authoritative copy of this information, while slave servers mirror it for redundancy. Clients rely on the servers to provide this information to them.

Information in many files can be shared in this manner. The [.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found locally in these files, it makes a query to the NIS server it is bound to instead.

==== Machine Types

* An _NIS master server_. This server, analogous to a Windows NT(R) primary domain controller, maintains the files used by all of the NIS clients.
The [.filename]#passwd# file, the [.filename]#group# file, and various other files used by NIS clients live on the master server.
+
[NOTE]
====
It is possible for one machine to be an NIS master server for more than one NIS domain. However, that case will not be covered in this introduction, which assumes a relatively small-scale NIS environment.
====

* _NIS slave servers_. Similar to Windows NT(R) backup domain controllers, NIS slave servers maintain copies of the NIS master's data files. NIS slave servers provide the redundancy which is needed in important environments. They also help to balance the load of the master server: NIS clients always attach to the NIS server whose response they receive first, and this includes slave servers.

* _NIS clients_. NIS clients, like most Windows NT(R) workstations, authenticate against the NIS server (or the Windows NT(R) domain controller in the Windows NT(R) workstation case) to log on.

=== Using NIS/YP

This section will deal with setting up a sample NIS environment.

==== Planning

Let us assume that you are the administrator of a small university lab. This lab, which consists of 15 FreeBSD machines, currently has no centralized point of administration; each machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These files are kept in sync with each other only through manual intervention; currently, when you add a user to the lab, you must run `adduser` on all 15 machines. Clearly, this has to change, so you have decided to convert the lab to NIS, using two of the machines as servers.
Therefore, the configuration of the lab now looks something like this:

[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Machine name | IP address | Machine role

|`ellington`
|`10.0.0.2`
|NIS master

|`coltrane`
|`10.0.0.3`
|NIS slave

|`basie`
|`10.0.0.4`
|Faculty workstation

|`bird`
|`10.0.0.5`
|Client machine

|`cli[1-11]`
|`10.0.0.[6-17]`
|Other client machines
|===

If you are setting up an NIS scheme for the first time, it is a good idea to think through how you want to go about it. No matter what the size of your network, there are a few decisions that need to be made.

===== Choosing an NIS Domain Name

This may not be the "domain name" you are used to. It is more accurately called the "NIS domain name". When a client broadcasts its requests for information, it includes the name of the NIS domain it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts that are related in some way.

Some organizations choose to use their Internet domain name as their NIS domain name. This is not recommended, as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within your network, and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the "acme-art" NIS domain. For this example, assume you have chosen the name `test-domain`.

However, some operating systems (notably SunOS(TM)) use their NIS domain name as their Internet domain name. If one or more machines on your network have this restriction, you _must_ use the Internet domain name as your NIS domain name.

===== Physical Server Requirements

There are several things to keep in mind when choosing a machine to use as an NIS server.
One of the unfortunate things about NIS is the level of dependency the clients have on the server. If a client cannot contact the server for its NIS domain, very often the machine becomes unusable. The lack of user and group information causes most systems to hang. With this in mind, you should make sure to choose a machine that is not prone to frequent reboots, and one that is not used for development. The NIS server should ideally be a stand-alone machine whose sole purpose in life is to be an NIS server. If you have a network that is not very heavily used, it is acceptable to put the NIS server on a machine running other services; just keep in mind that if the NIS server becomes unavailable, _all_ of your NIS clients will be affected adversely.

==== NIS Servers

The canonical copies of all NIS information are stored on a single machine called the NIS master server. The databases used to store the information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/[domainname]#, where [.filename]#[domainname]# is the name of the NIS domain being served. A single NIS server can support several domains at once, so it is possible to have several such directories, one for each supported domain. Each domain will have its own independent set of maps.

NIS master and slave servers handle all NIS requests with the `ypserv` daemon. `ypserv` is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting the data back to the client.

===== Setting Up an NIS Master Server

Setting up a master NIS server can be relatively straightforward, depending on your needs. FreeBSD comes with support for NIS out of the box.
All you have to do is add the following lines to [.filename]#/etc/rc.conf#, and FreeBSD will do the rest.

[.procedure]
====
[.programlisting]
....
nisdomainname="test-domain"
....
. This line will set the NIS domain name to `test-domain` upon network setup (for example, after rebooting).
+
[.programlisting]
....
nis_server_enable="YES"
....
. This line will tell FreeBSD to start up the NIS server processes the next time the network is started.
+
[.programlisting]
....
nis_yppasswdd_enable="YES"
....
. This will start up the `rpc.yppasswdd` daemon which, as mentioned above, will allow users to change their NIS password from a client machine.
====

[NOTE]
====
Depending on your NIS setup, you may need to add further entries. See the <> below for details.
====

Now, all you have to do is run the command `/etc/netstart` as the superuser. It will set up the system, using the values you specified in [.filename]#/etc/rc.conf#.

===== Initializing the NIS Maps

The _NIS maps_ are database files, which are kept in the [.filename]#/var/yp# directory. They are generated from configuration files in the [.filename]#/etc# directory of the NIS master, with one exception: the [.filename]#/etc/master.passwd# file. This is for a good reason: you do not normally want to propagate the passwords for `root` and other administrative accounts to all the other servers in the NIS domain. Therefore, before you initialize the NIS maps, you should:

[source,shell]
....
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
....

You should remove all entries regarding system accounts (`bin`, `tty`, `kmem`, `games`, etc.), as well as any other accounts that you do not want to be propagated to the NIS clients (for example `root` and any other account with UID 0, the superuser).
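The manual pruning step above can also be scripted. The following is only a sketch: the account list is illustrative, not exhaustive, and a sample file is used in place of the real copy; always review the result by hand afterwards.

```shell
#!/bin/sh
# Sketch: filter well-known system accounts out of a copy of master.passwd
# before building the NIS maps. The account list is illustrative only.
cat > /tmp/master.passwd.sample <<'EOF'
root:*:0:0::0:0:The super-user:/root:/bin/csh
bin:*:3:7::0:0:Binaries Commands and Source:/:/sbin/nologin
jsmith:*:1001:1001::0:0:John Smith:/home/jsmith:/bin/sh
EOF
# Keep only the lines that do not start with one of the listed account names.
grep -vE '^(root|toor|daemon|operator|bin|tty|kmem|games|nobody):' \
    /tmp/master.passwd.sample
```

Only the `jsmith` line survives the filter in this sample; on a real system you would redirect the output to [.filename]#/var/yp/master.passwd# after checking it.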
[NOTE]
====
Make sure that [.filename]#/var/yp/master.passwd# is neither group nor world readable (mode 600)! Use the `chmod` command, if appropriate.
====

When you have finished, it is time to initialize the NIS maps! FreeBSD includes a script named `ypinit` to do this for you (see its manual page for more information). Note that this script is available on most UNIX(R) operating systems, but not on all. On Digital UNIX/Compaq Tru64 UNIX it is called `ypsetup`. Because we are generating maps for an NIS master, we pass the `-m` option to `ypinit`. To generate the NIS maps, assuming you already performed the steps above, run:

[source,shell]
....
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server   :  ellington
next host to add:  coltrane
next host to add:  ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct?  [y/n: y] y

[..output from map generation..]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

`ypinit` should have created [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. When created, this file assumes that you are operating in a single-server NIS environment with only FreeBSD machines. Since `test-domain` has a slave server as well, you must edit [.filename]#/var/yp/Makefile#:

[source,shell]
....
ellington# vi /var/yp/Makefile
....
You should comment out the line that says

[.programlisting]
....
NOPUSH = "True"
....

(if it is not commented out already).

===== Setting up an NIS Slave Server

Setting up an NIS slave server is even simpler than setting up the master. Log on to the slave server and edit the file [.filename]#/etc/rc.conf# exactly as you did for the master. The only difference is that we must now use the `-s` option when running `ypinit`. The `-s` option requires that the name of the NIS master be passed to it as well, so our command line looks like:

[source,shell]
....
coltrane# ypinit -s ellington test-domain

Server Type: SLAVE Domain: test-domain Master: ellington

Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n]  n

Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred

coltrane has been setup as an YP slave server without any errors.
Don't forget to update map ypservers on ellington.
....

You should now have a directory called [.filename]#/var/yp/test-domain#. Copies of the NIS master server's maps should reside in this directory. You will need to make sure that these stay updated. The following [.filename]#/etc/crontab# entries on your slave server should do the job:

[.programlisting]
....
20 *    *    *    *    root   /usr/libexec/ypxfr passwd.byname
21 *    *    *    *    root   /usr/libexec/ypxfr passwd.byuid
....

These two lines force the slave to sync its maps with the maps on the master server. Although these entries are not mandatory, since the master server attempts to ensure that any changes to its NIS maps are communicated to its slaves, and because password information is vital to systems depending on the server, it is a good idea to force the updates. This is more important on busy networks, where map updates might not always complete.

Now, run the command `/etc/netstart` on the slave server as well, to start the NIS server.

==== NIS Clients

An NIS client establishes what is called a binding to a particular NIS server using the `ypbind` daemon.
`ypbind` checks the system's default domain (as set by the `domainname` command) and begins broadcasting RPC requests on the local network. These requests specify the name of the domain for which `ypbind` is attempting to establish a binding. If a server has been configured to serve the requested domain, it will respond to `ypbind`, which will record the server's address. If there are several servers available (a master and several slaves, for example), `ypbind` will use the address of the first one to respond. From that point on, the client system will direct all of its NIS requests to that server. `ypbind` will occasionally "ping" the server to make sure it is still up and running. If it fails to receive a reply to one of its pings within a reasonable amount of time, `ypbind` will mark the domain as unbound and begin broadcasting again in the hopes of locating another server.

===== Setting Up an NIS Client

Setting up a FreeBSD machine to be an NIS client is fairly straightforward.

[.procedure]
====
. Edit the file [.filename]#/etc/rc.conf# and add the following lines in order to set the NIS domain name and start `ypbind` upon network startup:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....
+
. To import all possible password entries from the NIS server, remove all user accounts from your [.filename]#/etc/master.passwd# and use `vipw` to add the following line to the end of the file:
+
[.programlisting]
....
+:::::::::
....
+
[NOTE]
======
This line will afford anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure your NIS client by changing this line. See the <> below for more information. For more detailed reading, see O'Reilly's book `Managing NFS and NIS`.
======
+
[NOTE]
======
You should keep at least one local account (i.e. not imported via NIS) in your [.filename]#/etc/master.passwd#, and this account should also be a member of the group `wheel`. If there is something wrong with NIS, this account can be used to log in remotely, become `root`, and fix things.
======
+
. To import all possible group entries from the NIS server, add this line to your [.filename]#/etc/group# file:
+
[.programlisting]
....
+:*::
....
====

After completing these steps, you should be able to run `ypcat passwd` and see the NIS server's passwd map.

=== NIS Security

In general, any remote user can issue an RPC to man:ypserv[8] and retrieve the contents of your NIS maps, provided the remote user knows your domain name. To prevent such unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. At startup, man:ypserv[8] will attempt to load the securenets information from a file called [.filename]#/var/yp/securenets#.

[NOTE]
====
This path varies depending on the path specified with the `-p` option.
====

This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with "#" are considered to be comments. A sample securenets file might look like this:

[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1     255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0      255.255.240.0
....

If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally.
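The matching rule can be sketched in plain sh. This is an illustration of the logic only, not ypserv's actual implementation; the `10.0.0.0`/`255.255.240.0` pair is taken from the sample file above, and the client address is an invented example from the testlab range.

```shell
#!/bin/sh
# Sketch (not ypserv's code): a client address matches a securenets rule
# when (address AND mask) equals the network field of the rule.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

addr=$(ip_to_int 10.0.3.7)      # example client in the testlab range
net=$(ip_to_int 10.0.0.0)       # rule: 10.0.0.0 255.255.240.0
mask=$(ip_to_int 255.255.240.0)

if [ $(( addr & mask )) -eq "$net" ]; then
    echo "match: request is processed"
else
    echo "no match: request is ignored"
fi
```

Masking `10.0.3.7` with `255.255.240.0` yields `10.0.0.0`, so this client falls inside the testlab rule.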
If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the [.filename]#/var/yp/securenets# file does not exist, `ypserv` will allow connections from any host.

The `ypserv` program also has support for Wietse Venema's TCP Wrapper package. This allows the administrator to use the TCP Wrapper configuration files for access control instead of [.filename]#/var/yp/securenets#.

[NOTE]
====
While both of these access control mechanisms provide some security, they, like the privileged port test, are vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at your firewall.

Servers using [.filename]#/var/yp/securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts and/or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of the client systems in question or the abandonment of [.filename]#/var/yp/securenets#. Using [.filename]#/var/yp/securenets# on a server with such an archaic implementation of TCP/IP is a really bad idea and will lead to loss of NIS functionality for large parts of your network.

The use of the TCP Wrapper package increases the latency of your NIS server. The additional delay may be long enough to cause timeouts in client programs, especially on busy networks or with slow NIS servers. If one or more of your client systems suffers from these symptoms, you should convert the client systems in question into NIS slave servers and force them to bind to themselves.
====

=== Barring Some Users from Logging On

In our lab, there is a machine `basie` that is supposed to be a faculty-only workstation. We do not want to take this machine out of the NIS domain, yet the [.filename]#passwd# file on the master NIS server contains accounts for both faculty and students. What can we do?

There is a way to bar specific users from logging on to a machine, even if they are present in the NIS database. To do this, all you must do is add `-username` to the end of the [.filename]#/etc/master.passwd# file on the client machine, where _username_ is the username of the user you wish to bar from logging in. This is best done using `vipw`, since `vipw` will sanity-check your changes to [.filename]#/etc/master.passwd# and automatically rebuild the password database when you finish editing. For example, if we wanted to bar user `bill` from logging on to the host `basie`, we would:

[source,shell]
....
basie# vipw
[add -bill to the end of the file, then exit]
vipw: rebuilding the database...
vipw: done

basie# cat /etc/master.passwd

root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin
operator:*:2:5::0:0:System &:/:/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin
+:::::::::
-bill

basie#
....

[[network-netgroups]]
=== Using Netgroups

The method shown in the previous section works reasonably well if you need special rules for a very small number of users and/or machines. On larger networks, you _will_ forget to bar some users from logging on to sensitive machines, or you may even have to modify each machine separately, thereby losing the main benefit of NIS: _centralized_ administration.

The NIS developers' solution to this problem is called _netgroups_. Their purpose and semantics can be compared to the normal groups used by UNIX(R) file systems. The main differences are the lack of a numeric ID and the ability to define a netgroup that includes both user accounts and other netgroups.

Netgroups were developed to handle large, complex networks with hundreds of users and machines. On one hand, this is a Good Thing if you are forced to deal with such a situation.
On the other hand, this complexity makes it almost impossible to explain netgroups with really simple examples. The example used in the rest of this section demonstrates this problem.

Let us assume that the successful introduction of NIS in your lab caught your superiors' interest. Your next job is to extend your NIS domain to cover some of the other machines on campus. The two tables contain the names of the new users and new machines, with brief descriptions of them.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| User name(s)
| Description

|`alpha`, `beta`
|Normal employees of the IT department

|`charlie`, `delta`
|The new apprentices of the IT department

|`echo`, `foxtrott`, `golf`, ...
|Ordinary employees

|`able`, `baker`, ...
|The current interns
|===

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Machine name(s)
| Description

|`war`, `death`, `famine`, `pollution`
|Your most important servers. Only the IT employees are allowed to log on to these machines.

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|Less important servers. All members of the IT department are allowed to log on to these machines.

|`one`, `two`, `three`, `four`, ...
|Ordinary workstations. Only the _real_ employees are allowed to use these machines.

|`trashcan`
|A very old machine without any critical data. Even the interns are allowed to use this box.
|===

If you tried to implement these restrictions by blocking each user separately, you would have to add one `-user` line to each system's [.filename]#passwd# for each user who is not allowed to log on to that system. If you forget even one entry, you could be in trouble. It may be feasible to do this correctly during the initial setup; however, you _will_ eventually forget to add the lines for new users during day-to-day operations.
After all, Murphy was an optimist.

Handling this situation with netgroups offers several advantages. Each user need not be handled separately; you assign a user to one or more netgroups and allow or forbid logins for all members of the netgroup. If you add a new machine, you will only have to define login restrictions for netgroups. If a new user is added, you will only have to add the user to one or more netgroups. Those changes are independent of each other: no more "for each combination of user and machine do..." If your NIS setup is planned carefully, you will only have to modify exactly one central configuration file to grant or deny access to machines.

The first step is the initialization of the NIS netgroup map. FreeBSD's man:ypinit[8] does not create this map by default, but its NIS implementation will support it once it has been created. To add an entry to the map, simply use the command

[source,shell]
....
ellington# vi /var/yp/netgroup
....

and then start adding content. For our example, we need at least four netgroups: IT employees, IT apprentices, normal employees, and interns.

[.programlisting]
....
IT_EMP  (,alpha,test-domain)    (,beta,test-domain)
IT_APP  (,charlie,test-domain)  (,delta,test-domain)
USERS   (,echo,test-domain)     (,foxtrott,test-domain) \
        (,golf,test-domain)
INTERNS (,able,test-domain)     (,baker,test-domain)
....

`IT_EMP`, `IT_APP`, etc. are the names of the netgroups. Each group within parentheses adds one or more user accounts to it. The three fields inside a group are:

. The name of the host(s) where the following items are valid. If you do not specify a hostname, the entry is valid for all hosts. If you do specify a hostname, you will enter a realm of darkness, horror, and utter confusion.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account.
You can import accounts from other NIS domains into your netgroup if you are one of the unlucky fellows with more than one NIS domain.

Each of these fields can contain wildcards. See man:netgroup[5] for details.

[NOTE]
====
Netgroup names longer than 8 characters should not be used, especially if you have machines running other operating systems within your NIS domain. The names are case sensitive; using capital letters for your netgroup names is an easy way to distinguish between user, machine, and netgroup names.

Some NIS clients (other than FreeBSD) cannot handle netgroups with a large number of entries. For example, some older versions of SunOS(TM) start to cause trouble if a netgroup contains more than 15 _entries_. You can circumvent this limit by creating several sub-netgroups with 15 users or fewer and a real netgroup that consists of the sub-netgroups:

[.programlisting]
....
BIGGRP1  (,joe1,domain)  (,joe2,domain)  (,joe3,domain) [...]
BIGGRP2  (,joe16,domain)  (,joe17,domain) [...]
BIGGRP3  (,joe31,domain)  (,joe32,domain)
BIGGROUP  BIGGRP1 BIGGRP2 BIGGRP3
....

You can repeat this process if you need more than 225 users within a single netgroup.
====

Activating and distributing your new NIS map is easy:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost#, and [.filename]#netgroup.byuser#. Use man:ypcat[1] to check if your new NIS maps are available:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command will not produce output if you have not specified host-specific netgroups. The third command can be used to get the list of netgroups for a user.

The client setup is quite simple.
To configure the server `war`, you only have to run man:vipw[8] and replace the line

[.programlisting]
....
+:::::::::
....

with

[.programlisting]
....
+@IT_EMP:::::::::
....

Now, only the data for the users defined in the netgroup `IT_EMP` is imported into ``war``'s password database, and only these users are allowed to log in.

Unfortunately, this limitation also applies to the `~` function of the shell and to all routines that convert between user names and numerical user IDs. In other words, `cd ~user` will not work, `ls -l` will show the numerical IDs instead of usernames, and `find . -user joe -print` will fail with the error `No such user`. To fix this, you will have to import all user entries _without allowing them to log in to your servers_. This can be achieved by adding another line to [.filename]#/etc/master.passwd#. This line should contain `+:::::::::/sbin/nologin`, meaning "Import all entries, but set the login shell to [.filename]#/sbin/nologin# in the imported entries". You can replace any field in the `passwd` entry by placing a default value in your [.filename]#/etc/master.passwd#.

[WARNING]
====
Make sure that the line `+:::::::::/sbin/nologin` is placed after `+@IT_EMP:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/sbin/nologin# as their login shell.
====

After this change, you will only have to change one NIS map if a new employee joins the IT department. You can use a similar approach for the less important servers by replacing `+:::::::::` in their local version of [.filename]#/etc/master.passwd# with something like this:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/sbin/nologin
....

The corresponding lines for the normal workstations could be:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/sbin/nologin
....
And all would be fine until there is a policy change a few weeks later: the IT department starts hiring interns. The IT interns are allowed to use the normal workstations and the less important servers, and the IT apprentices are allowed to log on to the main servers. You add a new netgroup `IT_INTERN`, add the new IT interns to this netgroup, and start to change the configuration on each and every machine... As the old saying goes: "Errors in centralized planning lead to global mess".

NIS' ability to create netgroups from other netgroups can be used to prevent situations like these. One possibility is the creation of role-based netgroups. For example, you could create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers, and a third netgroup called `USERBOX` for the normal workstations. Each of these netgroups contains the netgroups that are allowed to log on to these machines. The new entries for your NIS map should look like this:

[.programlisting]
....
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP  ITINTERN
USERBOX   IT_EMP  ITINTERN USERS
....

This method of defining login restrictions works reasonably well if you can define groups of machines with identical restrictions. Unfortunately, this is the exception, not the rule. Most of the time, you will need the ability to define login restrictions on a per-machine basis.

Machine-specific netgroup definitions are the other possibility for dealing with the policy change outlined above. In this scenario, the [.filename]#/etc/master.passwd# of each machine contains two lines starting with "+". The first of them adds a netgroup with the accounts allowed to log in to that machine, the second adds all other accounts with [.filename]#/sbin/nologin# as shell.
It is a good idea to use the "ALL-CAPS" version of the machine name as the name of the netgroup. In other words, the lines should look like this:

[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/sbin/nologin
....

Once you have completed this task for all your machines, you will never have to modify the local versions of [.filename]#/etc/master.passwd# again. All further changes can be handled by modifying the NIS map. Here is an example of a possible netgroup map for this scenario, with some additional goodies:

[.programlisting]
....
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1   DEPT2     DEPT3
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP    ITINTERN
USERBOX   IT_EMP  ITINTERN  USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus-machine
SECURITY  IT_EMP  (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV  (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus-machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

If you are using some kind of database to manage your user accounts, you should be able to create the first part of the map with your database's report tools. This way, new users will automatically have access to the boxes.

One last word of caution: it may not always be advisable to use machine-based netgroups.
If you are deploying a couple of dozen or even hundreds of identical machines for student labs, you should use role-based netgroups instead of machine-based netgroups to keep the size of the NIS map within reasonable limits.

=== Important Things to Remember

There are still a couple of things that you will need to do differently now that you are in an NIS environment.

* Every time you wish to add a user to the lab, you must add it to the master NIS server _only_, and _you must remember to rebuild the NIS maps_. If you forget to do this, the new user will not be able to log in anywhere except on the NIS master. For example, if we needed to add a new user `jsmith` to the lab, we would:
+
[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....
+
You could also run `adduser jsmith` instead of `pw useradd jsmith`.
* _Keep the administrative accounts out of the NIS maps_. You normally do not want the administrative accounts and passwords propagating to machines that will have users who should not have access to those accounts.
* _Keep the NIS master and slaves secure, and minimize their downtime_. If somebody either hacks or simply turns off these machines, they have effectively rendered many people unable to log in to the lab.
+
This is the chief weakness of any centralized administration system. If you do not protect your NIS servers, you will have a lot of angry users!

=== NIS v1 Compatibility

FreeBSD's ypserv has some support for serving NIS v1 clients. FreeBSD's NIS implementation only uses the NIS v2 protocol; however, other implementations include support for the v1 protocol for backwards compatibility with older systems.
The ypbind daemons supplied with these systems will try to establish a binding to an NIS v1 server even though they may never actually need it (and they may persist in broadcasting in search of one even after they receive a response from a v2 server). Note that while support for normal client calls is provided, this version of ypserv does not handle v1 map transfer requests; consequently, it cannot be used as a master or slave in conjunction with older NIS servers that only support the v1 protocol. Fortunately, there probably are no such servers still in use today.

[[network-nis-server-is-client]]
=== NIS Servers That Are Also NIS Clients

Care must be taken when running ypserv in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable, and the failure mode is still present, since the servers might bind to each other all over again.

You can force a host to bind to a particular server by running `ypbind` with the `-S` flag. If you do not want to do this manually each time you reboot your NIS server, you can add the following lines to your [.filename]#/etc/rc.conf#:

[.programlisting]
....
nis_client_enable="YES" # run client stuff as well
nis_client_flags="-S NIS domain,server"
....

See man:ypbind[8] for further information.

=== Password Formats

One of the most common issues that people run into when trying to implement NIS is password format compatibility.
If your NIS server is using DES-encrypted passwords, it will only support clients that are also using DES. For example, if you have Solaris(TM) NIS clients in your network, then you will almost certainly need to use DES-encrypted passwords.

To check which format your servers and clients are using, look at [.filename]#/etc/login.conf#. If the host is configured to use DES-encrypted passwords, then the `default` class will contain an entry like this:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

Other possible values for the `passwd_format` capability include `blf` and `md5` (for Blowfish- and MD5-encrypted passwords, respectively).

If you have made changes to [.filename]#/etc/login.conf#, you will also need to rebuild the login capability database, which is accomplished by running the following command as `root`:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords already in [.filename]#/etc/master.passwd# will not be updated until a user changes his password for the first time _after_ the login capability database is rebuilt.
====

Next, in order to ensure that passwords are encrypted with the format that you have chosen, you should also check that `crypt_default` in [.filename]#/etc/auth.conf# gives precedence to your chosen password format. To do so, place the format that you have chosen first in the list. For example, when using DES-encrypted passwords, the entry would be:

[.programlisting]
....
crypt_default = des blf md5
....

Having followed the above steps on each of the NIS-based FreeBSD servers and clients, you can be sure that they all agree on which password format is used within your network. If you have trouble authenticating on an NIS client, this is a pretty good place to start looking for possible problems.
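As a quick illustration of how these formats can be told apart, the magic prefix of a stored hash reveals which algorithm produced it. The prefixes below are standard crypt(3) conventions; the sample hash itself is made up:

```shell
# Classify a crypt(3) hash by its magic prefix (the sample hash is fabricated)
hash='$1$saltsalt$o2P5W6Ow7qPC3N1ZFyhn0'
case "$hash" in
    '$2'*)  fmt=blf ;;   # Blowfish hashes start with $2
    '$1$'*) fmt=md5 ;;   # MD5 hashes start with $1$
    *)      fmt=des ;;   # traditional 13-character DES hashes have no prefix
esac
echo "$fmt"
```

Inspecting the second field of an entry in [.filename]#/etc/master.passwd# this way is a quick check that a user's password has actually been re-encrypted in the new format after they changed it.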
Remember: if you want to deploy an NIS server for a heterogeneous network, you will probably have to use DES on all systems because it is the lowest common standard.

[[network-dhcp]]
== Automatic Network Configuration (DHCP)

=== What Is DHCP?

DHCP, the Dynamic Host Configuration Protocol, describes the means by which a system can connect to a network and obtain the information necessary for communication upon that network. FreeBSD versions prior to 6.0 use the ISC (Internet Software Consortium) DHCP client (man:dhclient[8]) implementation. Later versions use the OpenBSD `dhclient` taken from OpenBSD 3.7. All information here regarding `dhclient` applies to use of either the ISC or the OpenBSD DHCP clients. The DHCP server is the one included in the ISC distribution.

=== What This Section Covers

This section describes both the client-side components of the ISC and OpenBSD DHCP systems and the server-side of the ISC DHCP system. The client-side program, `dhclient`, comes integrated within FreeBSD, and the server-side portion is available from the package:net/isc-dhcp3-server[] port. The man:dhclient[8], man:dhcp-options[5], and man:dhclient.conf[5] manual pages, in addition to the references listed below, are useful resources.

=== How It Works

When `dhclient`, the DHCP client, is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests are on UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as the netmask, router, and DNS servers. All of this information comes in the form of a DHCP "lease" and is only valid for a certain amount of time (configured by the DHCP server maintainer). In this manner, stale IP addresses for clients no longer connected to the network can be automatically reclaimed.
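The lease lifetime drives the client's renewal behavior. As a rough sketch of the default timing points from RFC 2131 (`dhclient` handles this automatically; the numbers here are only illustrative):

```shell
# RFC 2131 default renewal points for a DHCP lease (all values in seconds)
lease=3600                 # lease duration granted by the server
t1=$((lease / 2))          # T1: client unicasts a renewal request to its server
t2=$((lease * 7 / 8))      # T2: client broadcasts to any server (rebinding)
echo "renew at ${t1}s, rebind at ${t2}s, expire at ${lease}s"
```

If the lease expires without a successful renewal, the client must stop using the address and restart the discovery process from scratch.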
There are many different kinds of information that DHCP clients can obtain from the server. An exhaustive list may be found in man:dhcp-options[5].

=== FreeBSD Integration

FreeBSD fully integrates the ISC or OpenBSD DHCP client, `dhclient` (depending on the version of FreeBSD you are running). DHCP client support is provided both in the installer and in the base system, obviating the need for detailed knowledge of network configuration on any network that runs a DHCP server. `dhclient` has been included in all versions of FreeBSD since 3.2.

DHCP is supported by sysinstall. When configuring a network interface within sysinstall, the second question asked is: "Do you want to try DHCP configuration of the interface?". Answering affirmatively will execute `dhclient`, and if successful, will fill in the network configuration information automatically.

There are two things you must do to have your system use DHCP upon startup:

* Make sure that the [.filename]#bpf# device is compiled into your kernel. To do this, add `device bpf` to your kernel configuration file, and rebuild the kernel. For more information about building kernels, see crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].
+
The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel that is supplied with FreeBSD, so if you do not have a custom kernel, you should not need to create one in order to get DHCP working.
+
[NOTE]
====
Those of you who are particularly security conscious should be aware that [.filename]#bpf# is also the device that allows packet sniffers to work correctly (although they still have to be run as `root`). [.filename]#bpf# _is_ required to use DHCP, but if you are very security conscious, you probably should not add [.filename]#bpf# to your kernel merely in the expectation that you will use DHCP at some point in the future.
====

* Edit your [.filename]#/etc/rc.conf# to include the following line:
+
[.programlisting]
....
ifconfig_fxp0="DHCP"
....
+
[NOTE]
====
Be sure to replace `fxp0` with the name of the interface that you wish to configure dynamically, as described in .
====
+
If you are using a different location for `dhclient`, or if you wish to pass additional flags to `dhclient`, also include the following lines (editing as necessary):
+
[.programlisting]
....
dhcp_program="/sbin/dhclient"
dhcp_flags=""
....

The DHCP server, dhcpd, is included as part of the package:net/isc-dhcp3-server[] port in the Ports Collection. This port contains the ISC DHCP server and documentation.

=== Files

* [.filename]#/etc/dhclient.conf#
+
`dhclient` requires a configuration file, [.filename]#/etc/dhclient.conf#. Typically the file contains only comments, the defaults being reasonably sane. This configuration file is described by the man:dhclient.conf[5] manual page.
* [.filename]#/sbin/dhclient#
+
`dhclient` is statically linked and resides in [.filename]#/sbin#. The man:dhclient[8] manual page gives more information about `dhclient`.
* [.filename]#/sbin/dhclient-script#
+
`dhclient-script` is the FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any user modification to function properly.
* [.filename]#/var/db/dhclient.leases#
+
The DHCP client keeps a database of valid leases in this file, which is written as a log. man:dhclient.leases[5] gives a slightly longer description.

=== Further Reading

The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Additional information is also available at http://www.dhcp.org/[http://www.dhcp.org/].
[[network-dhcp-server]]
=== Installing and Configuring a DHCP Server

==== What This Section Covers

This section provides information on how to configure a FreeBSD system to act as a DHCP server using the ISC (Internet Software Consortium) implementation of the DHCP server. The server is not provided as part of FreeBSD, so you will need to install the package:net/isc-dhcp3-server[] port to provide this service. See crossref:ports[ports,Installing Applications: Packages and Ports] for more information on using the Ports Collection.

==== DHCP Server Installation

In order to configure your FreeBSD system as a DHCP server, you will need to ensure that the man:bpf[4] device is compiled into your kernel. To do this, add `device bpf` to your kernel configuration file, and rebuild the kernel. For more information about building kernels, see crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].

The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel that is supplied with FreeBSD, so you do not need to create a custom kernel in order to get DHCP working.

[NOTE]
====
Those of you who are particularly security conscious should note that [.filename]#bpf# is also the device that allows packet sniffers to work correctly (although such programs still need privileged access). [.filename]#bpf# _is_ required to use DHCP, but if you are very security conscious, you probably should not include [.filename]#bpf# in your kernel purely because you expect to use DHCP at some point in the future.
====

The next thing that you will need to do is edit the sample [.filename]#dhcpd.conf# which was installed by the package:net/isc-dhcp3-server[] port. By default, this will be [.filename]#/usr/local/etc/dhcpd.conf.sample#, and you should copy this file to [.filename]#/usr/local/etc/dhcpd.conf# before proceeding to make changes.
==== Configuring the DHCP Server

[.filename]#dhcpd.conf# is comprised of declarations regarding subnets and hosts, and is perhaps most easily explained using an example:

[.programlisting]
....
option domain-name "example.com";<.>
option domain-name-servers 192.168.4.100;<.>
option subnet-mask 255.255.255.0;<.>

default-lease-time 3600;<.>
max-lease-time 86400;<.>
ddns-update-style none;<.>

subnet 192.168.4.0 netmask 255.255.255.0 {
  range 192.168.4.129 192.168.4.254;<.>
  option routers 192.168.4.1;<.>
}

host mailhost {
  hardware ethernet 02:03:04:05:06:07;<.>
  fixed-address mailhost.example.com;<.>
}
....

<.> This option specifies the domain that will be provided to clients as the default search domain. See man:resolv.conf[5] for more information.
<.> This option specifies a comma separated list of DNS servers that the clients should use.
<.> The netmask that will be provided to clients.
<.> A client may request a specific length of time that a lease will be valid. Otherwise the server will assign a lease with this expiry value (in seconds).
<.> This is the maximum length of time that the server will lease for. Should a client request a longer lease, a lease will be issued, although it will only be valid for `max-lease-time` seconds.
<.> This option specifies which IP addresses should be used in the pool reserved for allocating to clients. IP addresses between, and including, the ones stated are handed out to clients.
<.> Declares the default gateway that will be provided to clients.
<.> This option specifies whether the DHCP server should attempt to update DNS when a lease is accepted or released. In the ISC implementation, this option is _required_.
<.> The hardware MAC address of a host (so that the DHCP server can recognize a host when it makes a request).
<.> Specifies that the host should always be given the same IP address.
Note that using a hostname is correct here, since the DHCP server will resolve the hostname itself before returning the lease information.

Once you have finished writing your [.filename]#dhcpd.conf#, you can enable the DHCP server in [.filename]#/etc/rc.conf# by adding:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Replace the `dc0` interface name with the interface (or interfaces, separated by whitespace) that your DHCP server should listen on for DHCP client requests.

Then, you can proceed to start the server by issuing the following command:

[source,shell]
....
# /usr/local/etc/rc.d/isc-dhcpd.sh start
....

Should you need to make changes to the configuration of your server in the future, it is important to note that sending a `SIGHUP` signal to dhcpd does _not_ result in the configuration being reloaded, as it does with most daemons. You will need to send a `SIGTERM` signal to stop the process, and then restart it using the command above.

==== Files

* [.filename]#/usr/local/sbin/dhcpd#
+
dhcpd is statically linked and resides in [.filename]#/usr/local/sbin#. The man:dhcpd[8] manual page installed with the port gives more information about dhcpd.
* [.filename]#/usr/local/etc/dhcpd.conf#
+
dhcpd requires a configuration file, [.filename]#/usr/local/etc/dhcpd.conf#, before it will start providing service to clients. This file needs to contain all the information that should be provided to the clients being served, along with information regarding the operation of the server. This configuration file is described by the man:dhcpd.conf[5] manual page installed by the port.
* [.filename]#/var/db/dhcpd.leases#
+
The DHCP server keeps a database of leases it has issued in this file, which is written as a log. The man:dhcpd.leases[5] manual page, installed by the port, gives a slightly longer description.
* [.filename]#/usr/local/sbin/dhcrelay#
+
dhcrelay is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. If you require this functionality, then install the package:net/isc-dhcp3-relay[] port. The man:dhcrelay[8] manual page provided with the port contains more detail.

[[network-dns]]
== Domain Name System (DNS)

=== Overview

FreeBSD utilizes, by default, a version of BIND (Berkeley Internet Name Domain), which is the most common implementation of the DNS protocol. DNS is the protocol through which names are mapped to IP addresses, and vice versa. For example, a query for `www.FreeBSD.org` will receive a reply with the IP address of The FreeBSD Project's web server, whereas a query for `ftp.FreeBSD.org` will return the IP address of the corresponding FTP machine. Likewise, the opposite can happen. A query for an IP address can resolve its hostname. It is not necessary to run a name server to perform DNS lookups on a system.

FreeBSD currently comes with the BIND9 DNS server software by default. Our installation provides enhanced security features, a new file system layout, and automated man:chroot[8] configuration.

DNS is coordinated across the Internet through a somewhat complex system of authoritative name servers, and other smaller-scale name servers that host and cache individual domain information.

Currently, BIND is maintained by the Internet Software Consortium http://www.isc.org/[http://www.isc.org/].

=== Terminology

To understand this document, some terms related to DNS must be understood.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term
| Definition

|Forward DNS
|Mapping of hostnames to IP addresses.

|Origin
|Refers to the domain covered by a particular zone file.
|named, BIND, name server
|Common names for the BIND name server package within FreeBSD.

|Resolver
|A system process through which a machine queries a name server for zone information.

|Reverse DNS
|The opposite of forward DNS; mapping of IP addresses to hostnames.

|Root zone
|The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory.

|Zone
|An individual domain, subdomain, or portion of the DNS administered by the same authority.
|===

Examples of zones:

* `.` is the root zone.
* `org.` is a Top Level Domain (TLD) zone under the root zone.
* `example.org.` is a zone under the `org.` TLD zone.
* `1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall under the `192.168.1.*` IP space.

As one can see, the more specific part of a hostname appears to its left. For example, `example.org.` is more specific than `org.`, as `org.` is more specific than the root zone. The layout of each part of a hostname is much like a file system: the [.filename]#/dev# directory falls within the root, and so on.

=== Reasons to Run a Name Server

Two types of name server are currently in use: an authoritative name server, and a caching name server.

An authoritative name server is needed when:

* one wants to serve DNS information to the world, replying authoritatively to queries.
* a domain, such as `example.org`, is registered and IP addresses need to be assigned to hostnames under it.
* an IP address block requires reverse DNS entries (IP to hostname).
* a backup name server, called a slave, must reply to queries.

A caching name server is needed when:

* a local DNS server may cache and respond more quickly than querying an outside name server.
* a reduction in overall network traffic is desired (DNS traffic has been measured to account for more than 5% of total Internet traffic).

When one queries for `www.FreeBSD.org`, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once by the caching DNS server to the outside world. Every additional query will not have to look outside the local network, since the information is cached locally.

=== How It Works

In FreeBSD, the BIND daemon is called named for obvious reasons.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File
| Description

|man:named[8]
|The BIND daemon.

|man:rndc[8]
|Name server control utility.

|[.filename]#/etc/namedb#
|Directory where BIND zone information resides.

|[.filename]#/etc/namedb/named.conf#
|Daemon configuration file.
|===

Depending on how a given zone is configured on the server, the files related to that zone can be found in the [.filename]#master#, [.filename]#slave#, or [.filename]#dynamic# subdirectories of the [.filename]#/etc/namedb# directory. These files contain the DNS information that will be given out by the name server in response to queries.

=== Starting BIND

Since BIND is installed by default, configuring it is relatively simple.

The default named configuration is that of a basic resolving name server, run in a man:chroot[8] environment. To start the server one time with this configuration, use the following command:

[source,shell]
....
# /etc/rc.d/named forcestart
....

To ensure the named daemon is started at boot each time, put the following line into [.filename]#/etc/rc.conf#:

[.programlisting]
....
named_enable="YES"
....

There are obviously many configuration options for [.filename]#/etc/namedb/named.conf# that are beyond the scope of this document.
However, if you are interested in the startup options for named on FreeBSD, take a look at the `named_` flags in [.filename]#/etc/defaults/rc.conf# and consult the man:rc.conf[5] manual page. The crossref:config[configtuning-initial,Initial Configuration] section is also a good read.

=== Configuration Files

Configuration files for named currently reside in the [.filename]#/etc/namedb# directory and will need modification before use, unless all that is needed is a simple resolver. This is where most of the configuration will be performed.

==== Using `make-localhost`

To configure a master zone for the localhost, visit the [.filename]#/etc/namedb# directory and run the following command:

[source,shell]
....
# sh make-localhost
....

If all went well, a new file should exist in the [.filename]#master# subdirectory. The filenames should be [.filename]#localhost.rev# for the local domain name and [.filename]#localhost-v6.rev# for IPv6 configurations. As with the default configuration file, the required information will be present in the [.filename]#named.conf# file.

==== [.filename]#/etc/namedb/named.conf#

[.programlisting]
....
// $FreeBSD$
//
// Refer to the named.conf(5) and named(8) man pages, and the documentation
// in /usr/shared/doc/bind9 for more details.
//
// If you are going to set up an authoritative server, make sure you
// understand the hairy details of how DNS works.  Even with
// simple mistakes, you can break connectivity for affected parties,
// or cause huge amounts of useless Internet traffic.

options {
	directory	"/etc/namedb";
	pid-file	"/var/run/named/pid";
	dump-file	"/var/dump/named_dump.db";
	statistics-file	"/var/stats/named.stats";

// If named is being used only as a local resolver, this is a safe default.
// For named to be accessible to the network, comment this option, specify
// the proper IP address, or delete this option.
	listen-on	{ 127.0.0.1; };

// If you have IPv6 enabled on this system, uncomment this option for
// use as a local resolver.  To give access to the network, specify
// an IPv6 address, or the keyword "any".
//	listen-on-v6	{ ::1; };

// In addition to the "forwarders" clause, you can force your name
// server to never initiate queries of its own, but always ask its
// forwarders only, by enabling the following line:
//
//	forward only;

// If you've got a DNS server around at your upstream provider, enter
// its IP address here, and enable the line below.  This will make you
// benefit from its cache, thus reduce overall DNS traffic in the Internet.
/*
	forwarders {
		127.0.0.1;
	};
*/
....

Just as the comment says, to benefit from an upstream cache, `forwarders` can be enabled here. Under normal circumstances, a name server will recursively query the Internet looking at certain name servers until it finds the answer it is looking for. Having this enabled will have it query the upstream name server (or the name server provided) first, taking advantage of its cache. If the upstream name server in question is a heavily trafficked, fast name server, enabling this may be worthwhile.

[WARNING]
====
`127.0.0.1` will _not_ work here. Change this IP address to a name server at your uplink.
====

[.programlisting]
....
	/*
	 * If there is a firewall between you and nameservers you want
	 * to talk to, you might need to uncomment the query-source
	 * directive below.  Previous versions of BIND always asked
	 * questions using port 53, but BIND versions 8 and later
	 * use a pseudo-random unprivileged UDP port by default.
	 */
	// query-source address * port 53;
};

// If you enable a local name server, don't forget to enter 127.0.0.1
// first in your /etc/resolv.conf so this server will be queried.
// Also, make sure to enable it in /etc/rc.conf.

zone "."
{
	type hint;
	file "named.root";
};

zone "0.0.127.IN-ADDR.ARPA" {
	type master;
	file "master/localhost.rev";
};

// RFC 3152
zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA" {
	type master;
	file "master/localhost-v6.rev";
};

// NB: Do not use the IP addresses below, they are faked, and only
// serve demonstration/documentation purposes!
//
// Example slave zone config entries.  It can be convenient to become
// a slave at least for the zone your own domain is in.  Ask
// your network administrator for the IP address of the responsible
// primary.
//
// Never forget to include the reverse lookup (IN-ADDR.ARPA) zone!
// (This is named after the first bytes of the IP address, in reverse
// order, with ".IN-ADDR.ARPA" appended.)
//
// Before starting to set up a primary zone, make sure you fully
// understand how DNS and BIND works.  There are sometimes
// non-obvious pitfalls.  Setting up a slave zone is simpler.
//
// NB: Don't blindly enable the examples below. :-)  Use actual names
// and addresses instead.

/* An example master zone
zone "example.net" {
	type master;
	file "master/example.net";
};
*/

/* An example dynamic zone
key "exampleorgkey" {
	algorithm hmac-md5;
	secret "sf87HJqjkqh8ac87a02lla==";
};
zone "example.org" {
	type master;
	allow-update {
		key "exampleorgkey";
	};
	file "dynamic/example.org";
};
*/

/* Examples of forward and reverse slave zones
zone "example.com" {
	type slave;
	file "slave/example.com";
	masters {
		192.168.1.1;
	};
};
zone "1.168.192.in-addr.arpa" {
	type slave;
	file "slave/1.168.192.in-addr.arpa";
	masters {
		192.168.1.1;
	};
};
*/
....

In [.filename]#named.conf#, these are examples of slave entries for a forward and a reverse zone.

For each new zone served, a new zone entry must be added to [.filename]#named.conf#.

For example, the simplest zone entry for `example.org` can look like:

[.programlisting]
....
zone "example.org" {
	type master;
	file "master/example.org";
};
....
The zone is a master, as indicated by the `type` statement, holding its zone information in [.filename]#/etc/namedb/master/example.org# as indicated by the `file` statement.

[.programlisting]
....
zone "example.org" {
	type slave;
	file "slave/example.org";
};
....

In the slave case, the zone information is transferred from the master name server for that particular zone, and saved in the file specified. If and when the master server dies or is unreachable, the slave name server will have the transferred zone information and will be able to serve it.

==== Zone Files

An example master zone file for `example.org` (existing within [.filename]#/etc/namedb/master/example.org#) is as follows:

[.programlisting]
....
$TTL 3600        ; 1 hour
example.org.    IN      SOA     ns1.example.org. admin.example.org. (
                                2006051501      ; Serial
                                10800           ; Refresh
                                3600            ; Retry
                                604800          ; Expire
                                86400           ; Minimum TTL
                        )

; DNS Servers
                IN      NS      ns1.example.org.
                IN      NS      ns2.example.org.

; MX Records
                IN      MX 10   mx.example.org.
                IN      MX 20   mail.example.org.

                IN      A       192.168.1.1

; Machine Names
localhost       IN      A       127.0.0.1
ns1             IN      A       192.168.1.2
ns2             IN      A       192.168.1.3
mx              IN      A       192.168.1.4
mail            IN      A       192.168.1.5

; Aliases
www             IN      CNAME   @
....

Note that every hostname ending in a "." is an exact hostname, whereas everything without a trailing "." is referenced to the origin. For example, `www` is translated into `www.origin`. In our fictitious zone file, our origin is `example.org`, so `www` would translate to `www.example.org`.

The format of a zone file is as follows:

[.programlisting]
....
recordname      IN recordtype   value
....

The most commonly used DNS records:

SOA:: start of zone authority

NS:: an authoritative name server

A:: a host address

CNAME:: the canonical name for an alias

MX:: mail exchanger

PTR:: a domain name pointer (used in reverse DNS)

[.programlisting]
....
example.org. IN SOA ns1.example.org. admin.example.org.
(
                        2006051501      ; Serial
                        10800           ; Refresh after 3 hours
                        3600            ; Retry after 1 hour
                        604800          ; Expire after 1 week
                        86400 )         ; Minimum TTL of 1 day
....

`example.org.`:: the domain name, also the origin for this zone file.

`ns1.example.org.`:: the primary/authoritative name server for this zone.

`admin.example.org.`:: the responsible person for this zone, an email address with the "@" replaced. (mailto:admin@example.org[admin@example.org] becomes `admin.example.org`)

`2006051501`:: the serial number of the file. This must be incremented each time the zone file is modified. Nowadays, many administrators prefer a `yyyymmddrr` format for the serial number. `2006051501` would mean last modified 05/15/2006, the latter `01` being the first time the zone file has been modified this day. The serial number is important as it alerts slave name servers for a zone when it is updated.

[.programlisting]
....
                IN      NS      ns1.example.org.
....

This is an NS entry. Every name server that is going to reply authoritatively for the zone must have one of these entries. The `@` as seen here could have been written out as `example.org.` The `@` translates to the origin.

[.programlisting]
....
localhost       IN      A       127.0.0.1
ns1             IN      A       192.168.1.2
ns2             IN      A       192.168.1.3
mx              IN      A       192.168.1.4
mail            IN      A       192.168.1.5
....

The A record denotes machine names. As seen above, `ns1.example.org` would resolve to `192.168.1.2`.

[.programlisting]
....
                IN      A       192.168.1.1
....

This line assigns IP address `192.168.1.1` to the current origin, in this case `example.org`.

[.programlisting]
....
www             IN      CNAME   @
....

The canonical name record is used for giving aliases to a machine. In the example, `www` is aliased to the "master" machine whose name happens to match the domain name `example.org` (`192.168.1.1`). CNAMEs can be used to provide alias hostnames, or to round robin one hostname among multiple machines.

[.programlisting]
....
                IN      MX      10      mail.example.org.
....

The MX record indicates which mail servers are responsible for handling incoming mail for the zone. `mail.example.org` is the hostname of the mail server, and 10 is the priority of that mail server.

One can have several mail servers, with priorities of 10, 20 and so on. A mail server attempting to deliver to `example.org` will first try the highest priority MX (the record with the lowest priority number), then the second highest, etc, until the mail can be properly delivered.

For in-addr.arpa zone files (reverse DNS), the same format is used, except with PTR entries in the place of A or CNAME.

[.programlisting]
....
$TTL 3600

1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. (
                        2006051501      ; Serial
                        10800           ; Refresh
                        3600            ; Retry
                        604800          ; Expire
                        3600 )          ; Minimum

                IN      NS      ns1.example.org.
                IN      NS      ns2.example.org.

1               IN      PTR     example.org.
2               IN      PTR     ns1.example.org.
3               IN      PTR     ns2.example.org.
4               IN      PTR     mx.example.org.
5               IN      PTR     mail.example.org.
....

This file gives the proper IP address to hostname mappings for our fictitious domain.

=== Caching Name Server

A caching name server is a name server that is not authoritative for any zones. It simply asks queries of its own, and remembers the answers for later use. To set one up, just configure the name server as usual, omitting any inclusions of zones.

=== Security

Although BIND is the most common implementation of DNS, there is always the issue of security. Possible and exploitable security holes are sometimes found.

While FreeBSD automatically drops named into a man:chroot[8] environment, there are several other security mechanisms in place that may help to ward off possible attacks on the DNS service.

It is always a good idea to read http://www.cert.org/[CERT]'s security advisories and to subscribe to the {freebsd-security-notifications} to stay up to date with current Internet and FreeBSD security issues.
[TIP]
====
If a problem arises, keeping your sources up to date and having a fresh build of named would not hurt.
====

=== Further Reading

BIND/named manual pages: man:rndc[8] man:named[8] man:named.conf[5]

* http://www.isc.org/products/BIND/[Official ISC BIND Page]
* http://www.isc.org/sw/guild/bf/[Official ISC BIND Forum]
* http://www.nominum.com/getOpenSourceResource.php?id=6[BIND FAQ]
* http://www.oreilly.com/catalog/dns4/[O'Reilly DNS and BIND 4th Edition]
* link:ftp://ftp.isi.edu/in-notes/rfc1034.txt[RFC1034 - Domain Names - Concepts and Facilities]
* link:ftp://ftp.isi.edu/in-notes/rfc1035.txt[RFC1035 - Domain Names - Implementation and Specification]

[[network-apache]]
== Apache HTTP Server

=== Overview

FreeBSD is used to run some of the busiest web sites in the world. The majority of web servers on the Internet are currently using the Apache HTTP Server. Apache software packages should be included on your FreeBSD installation media. If you did not install Apache when you first installed FreeBSD, then you can install it from the package:www/apache13[] or package:www/apache22[] port.

Once Apache has been installed successfully, it must be configured.

[NOTE]
====
This section covers version 1.3.X of the Apache HTTP Server as that is the most widely used version for FreeBSD. Apache 2.X introduces many new technologies, but these are not discussed here. For more information about Apache 2.X, please see http://httpd.apache.org/[http://httpd.apache.org/].
====

=== Configuration

The main Apache HTTP Server configuration file is installed as [.filename]#/usr/local/etc/apache/httpd.conf# on FreeBSD. This file is a typical UNIX(R) text configuration file with comment lines beginning with the `#` character.
A comprehensive description of all possible configuration options is outside the scope of this book, so only the most frequently used directives are described below.

`ServerRoot "/usr/local"`::
This specifies the default directory hierarchy for the Apache installation. Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root, and configuration files are stored under [.filename]#etc/apache#.

`ServerAdmin you@your.address`::
The email address to which problems with the server should be reported. This address appears on some server-generated pages, such as error documents.

`ServerName www.example.com`::
`ServerName` allows you to set a hostname which is sent back to clients for your server, if it is different from the one that the host is configured with (for example, using `www` instead of the host's real name).

`DocumentRoot "/usr/local/www/data"`::
`DocumentRoot`: The directory out of which you will serve your documents. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.

It is always a good idea to make backup copies of your Apache configuration file before making changes. Once you are satisfied with your initial configuration, you are ready to start running Apache.

=== Running Apache

Unlike many other network servers, Apache does not run from the inetd super-server. It is configured to run standalone for better performance when handling incoming HTTP requests from client web browsers. A shell script wrapper is included to make starting, stopping, and restarting the server as simple as possible. To start up Apache for the first time, run:

[source,shell]
....
# /usr/local/sbin/apachectl start
....

You may stop the server at any time by typing:

[source,shell]
....
# /usr/local/sbin/apachectl stop
....
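Because the directives above are plain `Directive value` lines, the active settings can be inspected with ordinary text tools. A minimal sketch using a hypothetical configuration fragment (the sample path and values are illustrative only, not the defaults):

[source,shell]
....
% cat > /tmp/httpd.conf.sample <<'EOF'
# ServerName commented.example.com
ServerName www.example.com
DocumentRoot "/usr/local/www/data"
EOF
% grep -E '^(ServerName|DocumentRoot)' /tmp/httpd.conf.sample
ServerName www.example.com
DocumentRoot "/usr/local/www/data"
....

Note that the commented-out `ServerName` line is not matched: only uncommented directives are in effect.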
After making changes to the configuration file for any reason, you will need to restart the server:

[source,shell]
....
# /usr/local/sbin/apachectl restart
....

To restart Apache without aborting current connections, run:

[source,shell]
....
# /usr/local/sbin/apachectl graceful
....

Additional information is available in the man:apachectl[8] manual page.

To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache_enable="YES"
....

or, for Apache 2.2:

[.programlisting]
....
apache22_enable="YES"
....

If you would like to supply additional command line options for the Apache `httpd` program started at system boot, you may specify them with an additional line in [.filename]#rc.conf#:

[.programlisting]
....
apache_flags=""
....

Now that the web server is running, you can view your web site by pointing a web browser at `http://localhost/`. The default web page that is displayed is [.filename]#/usr/local/www/data/index.html#.

=== Virtual Hosting

Apache supports two different types of Virtual Hosting. The first method is name-based Virtual Hosting. Name-based virtual hosting uses the clients' HTTP/1.1 headers to figure out the hostname. This allows many different domains to share the same IP address.

To set up Apache to use name-based virtual hosting, add an entry like the following to your [.filename]#httpd.conf#:

[.programlisting]
....
NameVirtualHost *
....

If your web server was named `www.domain.tld` and you wanted to set up a virtual domain for `www.someotherdomain.tld`, then you would add the following entries to [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
ServerName www.domain.tld
DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
ServerName www.someotherdomain.tld
DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

Replace the addresses with the addresses you want to use and the paths to the documents with those you are using.
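Name-based virtual hosting works because an HTTP/1.1 client names the desired site in the `Host` header of its request. A minimal sketch of the raw request a browser would send for the second virtual host (using the fictitious domain from the example above):

[source,shell]
....
% printf 'GET / HTTP/1.1\r\nHost: www.someotherdomain.tld\r\nConnection: close\r\n\r\n'
....

Piping this output into a tool such as `nc localhost 80` would exercise the virtual host selection, assuming the server configured above is running; Apache matches the `Host` value against each `ServerName` to pick the right `DocumentRoot`.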
For more information about setting up virtual hosts, please consult the official documentation at http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache Modules

There are many different Apache modules available to add functionality to the basic server. The FreeBSD Ports Collection provides an easy way to install Apache together with some of the more popular add-on modules.

==== mod_ssl

The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols. This module provides everything necessary to request a signed certificate from a trusted certificate issuing authority, so that you can run a secure web server on FreeBSD.

If you have not yet installed Apache, then a version of Apache 1.3.X that includes mod_ssl may be installed with the package:www/apache13-modssl[] port. SSL support is also available for Apache 2.X in the package:www/apache22[] port, where it is enabled by default.

==== Dynamic websites with Perl & PHP

In the last few years, more businesses have turned to the Internet in order to enhance their revenue and increase exposure. This has also increased the need for interactive web content. While some companies, such as Microsoft(R), have introduced solutions into their proprietary products, the open source community answered the call. Two options for dynamic web content include mod_perl & mod_php.

===== mod_perl

The Apache/Perl integration project brings together the full power of the Perl programming language and the Apache HTTP Server. With the mod_perl module it is possible to write Apache modules entirely in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time.

mod_perl is available in a few different ways.
To use mod_perl, remember that mod_perl 1.0 only works with Apache 1.3 and mod_perl 2.0 only works with Apache 2.X. mod_perl 1.0 is available in package:www/mod_perl[] and a statically compiled version is available in package:www/apache13-modperl[]. mod_perl 2.0 is available in package:www/mod_perl2[].

===== mod_php

PHP, also known as "Hypertext Preprocessor", is a general-purpose scripting language that is especially suited for Web development. Capable of being embedded into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of allowing web developers to write dynamically generated web pages quickly.

To gain support for PHP5 for the Apache web server, begin by installing the package:lang/php5[] port.

If the package:lang/php5[] port is being installed for the first time, the available `OPTIONS` will be displayed automatically. If a menu is not displayed, for example because the package:lang/php5[] port has been installed some time in the past, it is always possible to bring the options dialog up again by running:

[source,shell]
....
# make config
....

in the port directory. In the options dialog, check the `APACHE` option to build mod_php5 as a loadable module for the Apache web server.

[NOTE]
====
Many sites are still using PHP4 for various reasons (for example compatibility issues or already deployed web applications). If the mod_php4 module is needed instead of mod_php5, then please use the package:lang/php4[] port. The package:lang/php4[] port supports many of the configuration and build-time options of the package:lang/php5[] port.
====

This will install and configure the modules required to support dynamic PHP web applications. Check that the following lines have been added to [.filename]#/usr/local/etc/apache/httpd.conf#:

[.programlisting]
....
LoadModule php5_module        libexec/apache/libphp5.so
AddModule mod_php5.c
    DirectoryIndex index.php index.html
    AddType application/x-httpd-php .php
    AddType application/x-httpd-php-source .phps
....

Once completed, a simple call to the `apachectl` command for a graceful restart is needed to load the PHP module:

[source,shell]
....
# apachectl graceful
....

For future upgrades of PHP, the `make config` command will not be required; the selected `OPTIONS` are saved automatically by the FreeBSD Ports framework.

The PHP support in FreeBSD is extremely modular, so the base install is very limited. It is very easy to add support using the package:lang/php5-extensions[] port. This port provides a menu-driven interface for PHP extension installation. Alternatively, individual extensions can be installed using the appropriate port. For instance, to add MySQL database support to PHP5, simply install package:databases/php5-mysql[].

After installing an extension, the Apache server must be reloaded to pick up the new configuration changes:

[source,shell]
....
# apachectl graceful
....

[[network-ftp]]
== File Transfer Protocol (FTP)

=== Synopsis

The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. FreeBSD includes FTP server software in the base system. This makes setting up and administering an FTP server very simple.

=== Configuration

The most important configuration step is deciding which accounts will be allowed access to the FTP server. A normal FreeBSD system has a number of system accounts used by various daemons, but unknown users should not be allowed to log in with these accounts. The [.filename]#/etc/ftpusers# file is a list of users who are denied FTP access.
By default it includes the aforementioned system accounts, but it is possible to add specific users who should not be allowed access to FTP.

You may want to restrict the access of some users without preventing them completely from using FTP. This can be accomplished with the [.filename]#/etc/ftpchroot# file. This file lists users and groups subject to FTP access restrictions. The man:ftpchroot[5] manual page has all of the details, so it will not be described here.

If you would like to enable anonymous FTP access to your server, then you must create a user named `ftp` on your FreeBSD system. Users will then be able to log on to your FTP server with a username of `ftp` or `anonymous` and with any password (by convention, the user's email address should be used as the password). The FTP server will call man:chroot[2] when an anonymous user logs in, to restrict access to only the home directory of the `ftp` user.

There are two text files that specify welcome messages for FTP clients. The contents of the file [.filename]#/etc/ftpwelcome# will be displayed to users before they reach the login prompt. After a successful login, the contents of the file [.filename]#/etc/ftpmotd# will be displayed. Note that the path to this file is relative to the login environment, so the file [.filename]#~ftp/etc/ftpmotd# would be displayed.

Once the FTP server has been configured correctly, it must be enabled in [.filename]#/etc/inetd.conf#. All that is required is to remove the comment symbol "#" from the beginning of the ftpd line:

[.programlisting]
....
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l
....

As explained in <>, the inetd configuration must be reloaded after this configuration file is changed.

You can now log on to your FTP server by typing:

[source,shell]
....
% ftp localhost
....

=== Maintenance

The ftpd daemon uses man:syslog[3] to log messages.
By default, the system log daemon will route messages related to FTP into the [.filename]#/var/log/xferlog# file. The location of the FTP log can be modified by changing the following line in [.filename]#/etc/syslog.conf#:

[.programlisting]
....
ftp.info      /var/log/xferlog
....

Be aware of the potential problems involved with running an anonymous FTP server. In particular, you should think twice about allowing anonymous users to upload files. You may find that your FTP site becomes a forum for the trade of unlicensed commercial software, or worse. If you do need to allow anonymous FTP uploads, then you should set up the permissions so that these files cannot be read by other users until they have been reviewed.

[[network-samba]]
== File and Print Services for Microsoft(R) Windows(R) clients (Samba)

=== Synopsis

Samba is a popular open source software package that provides file and print services for Microsoft(R) Windows(R) clients. Such clients can connect to and use a FreeBSD filesystem as if it was a local disk drive, or FreeBSD printers as if they were local printers.

Samba software packages should be included on your FreeBSD installation media. If you did not install Samba when you first installed FreeBSD, then you can install it from the package:net/samba3[] port or package.

=== Configuration

A default Samba configuration file is installed as [.filename]#/usr/local/shared/examples/smb.conf.default#. This file must be copied to [.filename]#/usr/local/etc/smb.conf# and customized before Samba can be used.

The [.filename]#smb.conf# file contains runtime configuration information for Samba, such as definitions of the printers and "file system shares" that you would like to share with Windows(R) clients.
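Such a share is defined as a named section in [.filename]#smb.conf#. A minimal sketch of what one might look like (the share name and path are purely illustrative and not part of the default file):

[.programlisting]
....
[public]
   comment = Example share for Windows(R) clients
   path = /usr/local/shared/public
   read only = yes
   guest ok = yes
....

The section name (`[public]`) becomes the share name that Windows(R) clients see; `path`, `read only`, and `guest ok` are standard man:smb.conf[5] parameters.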
The Samba package includes a web based tool called swat that provides a simple way of configuring the [.filename]#smb.conf# file.

==== Using the Samba Web Administration Tool (SWAT)

The Samba Web Administration Tool (SWAT) runs as a daemon from inetd. Therefore, the following line in [.filename]#/etc/inetd.conf# should be uncommented before swat can be used to configure Samba:

[.programlisting]
....
swat   stream  tcp     nowait/400      root    /usr/local/sbin/swat    swat
....

As explained in <>, the inetd configuration must be reloaded after this configuration file is changed.

Once swat has been enabled in [.filename]#inetd.conf#, you may use a browser to connect to http://localhost:901[http://localhost:901]. You will first have to log on with the system `root` account.

Once you have successfully logged on to the main Samba configuration page, you can browse the system documentation, or begin by clicking on the menu:Globals[] tab. The menu:Globals[] section corresponds to the variables that are set in the `[global]` section of [.filename]#/usr/local/etc/smb.conf#.

==== Global Settings

Whether you are using swat or editing [.filename]#/usr/local/etc/smb.conf# directly, the first directives you are likely to encounter when configuring Samba are:

`workgroup`::
NT Domain-Name or Workgroup-Name for the computers that will be accessing this server.

`netbios name`::
This sets the NetBIOS name by which a Samba server is known. By default it is the same as the first component of the host's DNS name.

`server string`::
This sets the string that will be displayed with the `net view` command and some other networking tools that seek to display descriptive text about the server.

==== Security Settings

Two of the most important settings in [.filename]#/usr/local/etc/smb.conf# are the security model chosen, and the backend password format for client users.
The following directives control these options:

`security`::
The two most common options here are `security = share` and `security = user`. If your clients use usernames that are the same as the usernames on your FreeBSD machine, then you will want to use user level security. This is the default security policy and it requires clients to first log on before they can access shared resources.
+
In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource. This was the default security model for older versions of Samba.

`passdb backend`::
Samba has several different backend authentication models. You can authenticate clients with LDAP, NIS+, an SQL database, or a modified password file. The default authentication method is `smbpasswd`, and that is the only one covered here.

Assuming that the default `smbpasswd` backend is used, the [.filename]#/usr/local/private/smbpasswd# file must be created to allow Samba to authenticate clients. If you would like to give your UNIX(R) accounts access from Windows(R) clients, use the following command:

[source,shell]
....
# smbpasswd -a username
....

Please see the http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/[Official Samba HOWTO] for additional information about configuration options. With the basics outlined here, you should have everything you need to start running Samba.

=== Starting Samba

The package:net/samba3[] port adds a new startup script, which can be used to control Samba. To enable this script, so that it can be used for example to start, stop or restart Samba, add the following line to the [.filename]#/etc/rc.conf# file:

[.programlisting]
....
samba_enable="YES"
....
Or, for fine-grained control:

[.programlisting]
....
nmbd_enable="YES"
....

[.programlisting]
....
smbd_enable="YES"
....

[NOTE]
====
With this setting, Samba will also be started automatically at every system boot.
====

To start Samba, type:

[source,shell]
....
# /usr/local/etc/rc.d/samba start
Starting SAMBA: removing stale tdbs :
Starting nmbd.
Starting smbd.
....

Refer to crossref:config[configtuning-rcd,Using rc under FreeBSD] for more information about using rc scripts.

Samba actually consists of three separate daemons. You should see that both the nmbd and smbd daemons are started by the [.filename]#samba# script. If you enabled winbind name resolution services in [.filename]#smb.conf#, then you will also see the winbindd daemon being started.

You can also stop Samba at any time by typing:

[source,shell]
....
# /usr/local/etc/rc.d/samba stop
....

Samba is a complex software suite with functionality that allows broad integration with Microsoft(R) Windows(R) networks. For more information about functionality beyond the basic installation described here, please see http://www.samba.org[http://www.samba.org].

[[network-ntp]]
== Clock Synchronization with NTP

=== Synopsis

Over time, a computer's clock is prone to drift. The Network Time Protocol (NTP) provides a way to ensure your clock stays accurate.

Many Internet services rely on, or greatly benefit from, computers' clocks being accurate. For example, a web server may receive requests to send a file if it has been modified since a certain time. In a local area network environment, it is essential that computers sharing files from the same file server have synchronized clocks so that file timestamps stay consistent.
Services such as man:cron[8] also rely on an accurate system clock to run commands at the specified times.

FreeBSD ships with the man:ntpd[8] NTP server which can be used to query other NTP servers to set the clock on your machine, or to provide time services to others.

=== Choosing Appropriate NTP Servers

In order to synchronize your clock, you will need to choose one or more NTP servers to use. Your network administrator or ISP may have set up an NTP server for this purpose - check their documentation to see if this is the case. There is an online http://ntp.isc.org/bin/view/Servers/WebHome[list of publicly accessible NTP servers] which you can use to find an NTP server near you. Make sure you are aware of the policy for any servers you choose, and ask for permission if required.

Choosing several unconnected NTP servers is a good idea in case one of the servers you are using becomes unreachable, or its clock is unreliable. man:ntpd[8] uses the responses it receives from other servers intelligently; it will favor reliable servers over unreliable ones.

=== Configuring Your Machine

==== Basic Configuration

If you only wish to synchronize your clock when the machine boots up, you can use man:ntpdate[8]. This may be appropriate for some desktop machines which are frequently rebooted and only require infrequent synchronization, but most machines should run man:ntpd[8].

Using man:ntpdate[8] at boot time is also a good idea for machines that run man:ntpd[8]. The man:ntpd[8] program changes the clock gradually, whereas man:ntpdate[8] sets the clock, no matter how great the difference between a machine's current clock setting and the correct time.

To enable man:ntpdate[8] at boot time, add `ntpdate_enable="YES"` to [.filename]#/etc/rc.conf#.
You will also need to specify all servers you wish to synchronize with, and any flags to be passed to man:ntpdate[8], in `ntpdate_flags`.

==== General Configuration

NTP is configured by the [.filename]#/etc/ntp.conf# file in the format described in man:ntp.conf[5]. Here is a simple example:

[.programlisting]
....
server ntplocal.example.com prefer
server timeserver.example.org
server ntp2a.example.net

driftfile /var/db/ntp.drift
....

The `server` option specifies which servers are to be used, with one server listed on each line. If a server is specified with the `prefer` argument, as with `ntplocal.example.com`, that server is preferred over other servers. A response from a preferred server will be discarded if it differs significantly from other servers' responses; otherwise it will be used without any consideration of the other responses. The `prefer` argument is normally used for NTP servers that are known to be highly accurate, such as those with special time monitoring hardware.

The `driftfile` option specifies which file is used to store the system clock's frequency offset. The man:ntpd[8] program uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time.

The `driftfile` option also specifies which file is used to store information about previous responses from the NTP servers you use. This file contains internal information for NTP. It should not be modified by any other process.

==== Controlling Access to Your Server

By default, your NTP server will be accessible to all hosts on the Internet. The `restrict` option in [.filename]#/etc/ntp.conf# allows you to control which machines can access your server.
If you want to deny all machines access to your NTP server, add the following line to [.filename]#/etc/ntp.conf#:

[.programlisting]
....
restrict default ignore
....

[NOTE]
====
This setting will also deny access to your server from the servers listed in your local configuration. If you need to synchronize your NTP server with an external NTP server, you must allow the specific server you want to use. See the man:ntp.conf[5] manual page for more details.
====

If you only want to allow machines within your own network to synchronize their clocks with your server, but ensure they are not allowed to configure the server or be used as peers to synchronize against, add

[.programlisting]
....
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
....

instead, where `192.168.1.0` is an IP address on your network and `255.255.255.0` is your network's netmask.

[.filename]#/etc/ntp.conf# can contain multiple `restrict` options. For more details, see the `Access Control Support` subsection of man:ntp.conf[5].

=== Running the NTP Server

To ensure the NTP server is started at boot time, add the line `ntpd_enable="YES"` to [.filename]#/etc/rc.conf#. If you wish to pass additional flags to man:ntpd[8], edit the `ntpd_flags` parameter in [.filename]#/etc/rc.conf#.

To start the server without rebooting your machine, run `ntpd`, being certain to specify any additional parameters from `ntpd_flags` in [.filename]#/etc/rc.conf#. For example:

[source,shell]
....
# ntpd -p /var/run/ntpd.pid
....

=== Using ntpd with a Temporary Internet Connection

The man:ntpd[8] program does not need a permanent connection to the Internet to function properly. However, if you have a temporary connection that is configured to dial out on demand, it is a good idea to prevent NTP traffic from triggering a dial out or keeping the connection alive.
If you are using user PPP, you can use `filter` directives in [.filename]#/etc/ppp/ppp.conf#. For example:

[.programlisting]
....
 set filter dial 0 deny udp src eq 123
 # Prevent NTP traffic from initiating dial out
 set filter dial 1 permit 0 0
 set filter alive 0 deny udp src eq 123
 # Prevent incoming NTP traffic from keeping the connection open
 set filter alive 1 deny udp dst eq 123
 # Prevent outgoing NTP traffic from keeping the connection open
 set filter alive 2 permit 0/0 0/0
....

For more details, see the `PACKET FILTERING` section in man:ppp[8] and the examples in [.filename]#/usr/shared/examples/ppp/#.

[NOTE]
====
Some Internet access providers block low-numbered ports, preventing NTP from functioning since the replies never reach your machine.
====

=== Further Information

Documentation for the NTP server can be found in HTML format in [.filename]#/usr/shared/doc/ntp/#.

diff --git a/documentation/content/ja/books/handbook/advanced-networking/_index.adoc b/documentation/content/ja/books/handbook/advanced-networking/_index.adoc
index 9ba8682bdf..36e2b7ef40 100644
--- a/documentation/content/ja/books/handbook/advanced-networking/_index.adoc
+++ b/documentation/content/ja/books/handbook/advanced-networking/_index.adoc
@@ -1,4157 +1,4156 @@
---
title: Chapter 20. Advanced Networking
part: Part IV.
Network Communication
prev: books/handbook/mail
next: books/handbook/partv
showBookMenu: true
weight: 25
params:
  path: "/books/handbook/advanced-networking/"
---

[[advanced-networking]]
= Advanced Networking
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 20
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/advanced-networking/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]

ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[advanced-networking-synopsis]]
== Synopsis

This chapter covers a number of network services that are frequently used on UNIX(R) systems. It describes how to define, set up, test, and maintain all of the network services that FreeBSD utilizes. In addition, example configuration files are included throughout the chapter for you to benefit from.

After reading this chapter, you will know:

* The basics of gateways and routes.
* How to make FreeBSD act as a bridge.
* How to set up a network file system (NFS).
* How to set up network booting on a diskless machine.
* How to set up a network information server (NIS) for sharing user accounts.
* How to set up automatic network settings using DHCP.
* How to set up a domain name server (DNS).
* How to synchronize the time and date, and set up a time server, with the NTP protocol.
* How to set up network address translation (NAT).
* How to manage the `inetd` daemon.
* How to connect two computers via PLIP.
* How to set up IPv6 on a FreeBSD machine.

Before reading this chapter, you should:

* Understand the basics of the [.filename]#/etc/rc# scripts.
* Be familiar with basic network terminology.

[[network-routing]]
== Gateways and Routes

For one machine to be able to find another over a network, there must be a mechanism in place to describe how to get from one to the other. This is called __routing__. A "route" is a defined pair of addresses: a "destination" and a "gateway". The pair indicates that if you are trying to get to this _destination_, communicate through this _gateway_. There are three types of destinations: individual hosts, subnets, and "default". The "default route" is used if none of the other routes apply. We will talk a little bit more about default routes later on.
There are also three types of gateways: individual hosts, interfaces (also called "links"), and Ethernet hardware addresses (MAC addresses).

=== An Example

To illustrate different aspects of routing, we will use the following example from `netstat`:

[source,shell]
....
% netstat -r
Routing tables

Destination      Gateway            Flags     Refs     Use     Netif Expire

default          outside-gw         UGSc       37      418      ppp0
localhost        localhost          UH          0      181       lo0
test0            0:e0:b5:36:cf:4f   UHLW        5    63288       ed0     77
10.20.30.255     link#1             UHLW        1     2421
example.com      link#1             UC          0        0
host1            0:e0:a8:37:8:1e    UHLW        3     4601       lo0
host2            0:e0:a8:37:8:1e    UHLW        0        5       lo0 =>
host2.example.com link#1            UC          0        0
224              link#1             UC          0        0
....

The first two lines specify the default route (which we will cover in <>) and the `localhost` route.

The interface (`Netif` column) that this routing table specifies for `localhost` is [.filename]#lo0#, also known as the loopback device. This says to keep all traffic for this destination internal, rather than sending it out over the LAN, since it will only end up back where it started anyway.

The next thing that stands out are the addresses beginning with `0:e0:`. These are Ethernet hardware addresses, also known as MAC addresses. FreeBSD will automatically identify any hosts (`test0` in the example) on the local Ethernet and add a route for that host directly to it over the Ethernet interface, [.filename]#ed0#. There is also a timeout (`Expire` column) associated with this type of route, used if we fail to hear from the host in a specific amount of time. When this happens, the route to this host will be automatically deleted. These hosts are identified using a mechanism known as RIP (Routing Information Protocol), which figures out routes to local hosts based upon a shortest path determination.

FreeBSD will also add subnet routes for the local subnet (`10.20.30.255` is the broadcast address for the subnet `10.20.30`, and `example.com` is the domain name associated with that subnet). The designation `link#1` refers to the first Ethernet card in the machine. You will notice no additional interface is specified for those.

Both of these groups (local network hosts and local subnets) have their routes automatically configured by a daemon called routed. If this is not run, only routes which are statically defined (i.e. entered explicitly) will exist.

The `host1` line refers to our host, which it knows by Ethernet address. Since we are the sending host, FreeBSD knows to use the loopback interface ([.filename]#lo0#) rather than sending it out over the Ethernet interface.

The two `host2` lines are an example of what happens when we use a man:ifconfig[8] alias (see the section on Ethernet for reasons why we would do this). The `=>` symbol after the [.filename]#lo0# interface says that not only are we using the loopback (since this address also refers to the local host), but specifically it is an alias. Such routes only show up on the host that supports the alias; all other hosts on the local network will simply have a ``link#1`` line for such routes.

The final line (destination subnet `224`) deals with multicasting, which will be covered in another section.

Finally, the `Flags`
column displays various attributes of each route. Below is a short list of some of these flags and their meanings:

[.informaltable]
[cols="1,1", frame="none"]
|===

|U
|Up: The route is active.

|H
|Host: The route destination is a single host.

|G
|Gateway: Send anything for this destination on to this remote system, which will figure out from there where to send it.

|S
|Static: This route was configured manually, not automatically generated by the system.

|C
|Clone: Generates a new route based upon this route for machines we connect to. This type of route is normally used for local networks.

|W
|WasCloned: Indicates a route that was auto-configured based upon a local area network (Clone) route.

|L
|Link: Route involves references to Ethernet hardware.

|===

[[network-routing-default]]
=== Default Routes

When the local system needs to make a connection to a remote host, it checks the routing table to determine if a known path exists. If the remote host falls into a subnet that we know how to reach (Cloned routes), then the system checks to see if it can connect along that interface.

If all known paths fail, the system has one last option: the "default" route. This route is a special type of gateway route (usually the only one present in the system), and is always marked with a `c` in the flags field. For hosts on a local area network, this gateway is set to whatever machine has a direct connection to the outside world (whether via PPP link, DSL, cable modem, T1, or another network interface).

If you are configuring the default route for a machine which itself is functioning as the gateway to the outside world, then the default route will be the gateway machine at your Internet Service Provider's (ISP) site.

Let us look at an example of default routes. This is a common configuration:

image::net-routing.png[]

The hosts `Local1` and `Local2` are at your site. `Local1` is connected to an ISP via a dial up PPP connection. This PPP server computer is connected through a local area network to another gateway computer through an external interface to the ISP's Internet feed.

The default routes for each of your machines will be:

[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Host | Default Gateway | Interface

|Local2
|Local1
|Ethernet

|Local1
|T1-GW
|PPP
|===

A common question is "Why (or how) would we set the `T1-GW` to be the default gateway for `Local1`, rather than the ISP server it is connected to?".

Remember, since the PPP interface is using an address on the ISP's local network for your side of the connection, routes for any other machines on the ISP's local network will be automatically generated. Hence, you will already know how to reach the `T1-GW` machine, so there is no need for the intermediate step of sending traffic to the ISP server.

It is common to use the address `X.X.X.1` as the gateway address for your local network. So (using the same example), if your local class-C address space was `10.20.30` and your ISP was using `10.9.9`, then the default routes would be:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Host | Default Route

|Local2 (10.20.30.2)
|Local1 (10.20.30.1)

|Local1
(10.20.30.1, 10.9.9.30) |T1-GW (10.9.9.1) |=== デフォルトルートは [.filename]#/etc/rc.conf# ファイルで簡単に定義できます。この例では、 `Local2` マシンで [.filename]#/etc/rc.conf# に次の行を追加しています。 [.programlisting] .... defaultrouter="10.20.30.1" .... man:route[8] コマンドを使ってコマンドラインから直接実行することもできます。 [source,shell] .... # route add default 10.20.30.1 .... 経路情報を手動で操作する方法について詳しいことは man:route[8] のマニュアルページをご覧ください。 === デュアルホームホスト ここで扱うべき種類の設定がもう一つあります。 それは 2 つの異なるネットワークにまたがるホストです。 技術的にはゲートウェイとして機能するマシン (上の例では PPP コネクションを用いています) はすべてデュアルホームホストです。 しかし実際にはこの言葉は、2 つの LAN 上のサイトであるマシンを指す言葉としてのみ使われます。 2 枚のイーサネットカードを持つマシンが、 別のサブネット上にそれぞれアドレスを持っている場合があります。 あるいは、イーサネットカードが 1 枚しかないマシンで、 man:ifconfig[8] のエイリアスを使っているかもしれません。 物理的に分かれている 2 つのイーサネットのネットワークが使われているならば前者が用いられます。 後者は、物理的には 1 つのネットワークセグメントで、 論理的には 2 つのサブネットに分かれている場合に用いられます。 どちらにしても、 このマシンがお互いのサブネットへのゲートウェイ (inbound route) として定義されていることが分かるように、 おのおののサブネットでルーティングテーブルを設定します。このマシンが 2 つのサブネットの間のルータとして動作するという構成は、 パケットのフィルタリングを実装する必要がある場合や、 一方向または双方向のファイアウォールを利用したセキュリティを構築する場合によく用いられます。 このマシンが二つのインタフェース間で実際にパケットを受け渡すようにしたい場合は、 FreeBSD でこの機能を有効にしないといけません。 くわしい手順については次の節をご覧ください。 [[network-dedicated-router]] === ルータの構築 ネットワークルータは単にあるインタフェースから別のインタフェースへパケットを転送するシステムです。 インターネット標準およびすぐれた技術的な慣習から、 FreeBSD プロジェクトは FreeBSD においてこの機能をデフォルトでは有効にしていません。 man:rc.conf[5] 内で次の変数を `YES` に変更することでこの機能を有効にできます。 [.programlisting] .... gateway_enable=YES # Set to YES if this host will be a gateway .... このオプションは man:sysctl[8] 変数の `net.inet.ip.forwarding` を `1` に設定します。 一時的にルーティングを停止する必要があるときには、 この変数を一時的に `0` に設定しなおせます。 次に、トラフィックの宛先を決めるために、 そのルータには経路情報が必要になります。 ネットワークが十分簡素なら、静的経路が利用できます。 また、FreeBSD は BSD の標準ルーティングデーモンである man:routed[8] を備えています。これは RIP (バージョン 1 および 2) および IRDP を扱えます。 BGP バージョン 4、OSPF バージョン2、 その他洗練されたルーティングプロトコルは package:net/zebra[] package を用いれば対応できます。 また、より複雑なネットワークルーティングソリューションには、 GateD(R) のような商用製品も利用可能です。 このように FreeBSD を設定したとしても、 ルータに対するインターネット標準要求を完全に満たすわけではありません。 しかし、通常利用に関しては十分といえます。 === 静的な経路の設定 ==== 手動による経路の設定 以下のようなネットワークが存在すると仮定します。 .... 
    INTERNET
        | (10.0.0.1/24) Default Router to Internet
        |
        |Interface xl0
        |10.0.0.10/24
    +------+
    |      | RouterA
    |      | (FreeBSD gateway)
    +------+
        |
        | Interface xl1
        | 192.168.1.1/24
        |
   +--------------------------------+
     Internal Net 1       | 192.168.1.2/24
                          |
                      +------+
                      |      | RouterB
                      |      |
                      +------+
                          |
                          | 192.168.2.1/24
                          |
                       Internal Net 2
....

In this scenario, the FreeBSD machine `RouterA` acts as a router to the rest of the Internet. It has a default route set to `10.0.0.1` which allows it to connect with the outside world. We will assume that `RouterB` is already configured properly and knows how to get wherever it needs to go (in this picture that is simple: just add a default route on `RouterB` using `192.168.1.1` as the gateway).

If we look at the routing table for `RouterA`, we would see something like the following:

[source,shell]
....
% netstat -nr
Routing tables

Internet:
Destination        Gateway            Flags    Refs      Use  Netif  Expire
default            10.0.0.1           UGS         0    49378    xl0
127.0.0.1          127.0.0.1          UH          0        6    lo0
10.0.0/24          link#1             UC          0        0    xl0
192.168.1/24       link#2             UC          0        0    xl1
....

With the current routing table, `RouterA` would not be able to reach Internal Net 2, because it does not hold a route for `192.168.2.0/24`. One way to resolve this is to add the route manually. The following command adds the Internal Net 2 network to `RouterA`'s routing table using `192.168.1.2` as the next hop:

[source,shell]
....
# route add -net 192.168.2.0/24 192.168.1.2
....

Now `RouterA` can reach any host on the `192.168.2.0/24` network.

==== Persistent configuration

The above example is perfect for configuring a static route on a running system. However, there is one problem: the routing information will not survive a reboot of the FreeBSD machine. To add a static route persistently, add it to the [.filename]#/etc/rc.conf# file:

[.programlisting]
....
# Add Internal Net 2 as a static route
static_routes="internalnet2"
route_internalnet2="-net 192.168.2.0/24 192.168.1.2"
....

The `static_routes` configuration variable is a list of strings separated by spaces. Each string references a route name. In the example above, `static_routes` holds only one string, _internalnet2_. We then add a configuration variable called `route_internalnet2`, where we put all of the configuration parameters that would be given to the man:route[8] command. In the example of the previous section, we used the command

[source,shell]
....
# route add -net 192.168.2.0/24 192.168.1.2
....
so the value needed here is `"-net 192.168.2.0/24 192.168.1.2"`.

As mentioned above, `static_routes` can hold more than one string, which allows multiple static routes to be created. The following lines show an example of adding static routes for the `192.168.0.0/24` and `192.168.1.0/24` networks on an imaginary router:

[.programlisting]
....
static_routes="net1 net2"
route_net1="-net 192.168.0.0/24 192.168.0.1"
route_net2="-net 192.168.1.0/24 192.168.1.1"
....

=== Routing propagation

We have already talked about how to define our routes to the outside world, but not about how the outside world finds us.

We already know that routing tables can be set up so that all traffic for a particular address space (a class-C subnet in our example) is sent to a particular host on that network, which forwards the incoming packets inward.

When your site is assigned an address space, your service provider sets up its routing tables so that all traffic for your subnet is sent down your PPP link to your site. But how do sites on the other side of the country know to send to your ISP?

There is a system (much like the distributed DNS information) that keeps track of all assigned address spaces and defines their point of connection to the Internet backbone. The "backbone" are the main trusted trunk lines that carry Internet traffic across the country and around the world. Each backbone machine has a copy of a master set of tables, which direct traffic for a particular network to a specific backbone carrier, and from there down the chain of service providers until it reaches your network.

It is the task of your service provider to advertise to the backbone sites that your site has been connected (and therefore lies on the path inward from the provider). This is known as route propagation.

=== Troubleshooting

Sometimes there is a problem with route propagation, and some sites are unable to make a connection to you. Perhaps the most useful command for trying to figure out where routing is breaking down is man:traceroute[8]. It is equally useful when you cannot seem to make a connection to a remote machine (i.e., man:ping[8] fails).

The man:traceroute[8] command is run with the name of the remote host you are trying to connect to. It shows the gateway hosts along the path of the attempt, eventually either reaching the target host or terminating because of a lack of connection.

For more information, see the man:traceroute[8] manual page.

=== Multicast routing

FreeBSD supports both multicast applications and multicast routing natively. Multicast applications do not require any special configuration in FreeBSD; applications will usually run out of the box. Multicast routing requires support to be compiled into the kernel, by adding the following option:

[.programlisting]
....
options MROUTING
....
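The `-net 192.168.2.0/24` static routes shown earlier in this section match exactly those destinations whose first 24 bits equal the network address. The mask comparison the kernel performs when selecting such a route can be modelled with a small sh sketch (purely illustrative, not a FreeBSD tool):

```shell
# Convert a dotted-quad IPv4 address to a single integer.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_subnet ADDR NET PREFIXLEN: report whether ADDR falls inside NET/PREFIXLEN.
in_subnet() {
    mask=$(( (0xffffffff << (32 - $3)) & 0xffffffff ))
    if [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]; then
        echo match
    else
        echo no-match
    fi
}

in_subnet 192.168.2.40 192.168.2.0 24   # a host on Internal Net 2
in_subnet 10.0.0.5     192.168.2.0 24   # outside the route's network
```

A destination that prints `match` would be sent to the route's gateway; anything else falls through to less specific routes, and ultimately to the default route.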
In addition, the multicast routing daemon man:mrouted[8] must be configured via [.filename]#/etc/mrouted.conf# to set up tunnels and DVMRP. More details on multicast configuration can be found in the man:mrouted[8] manual page.

[[network-wireless]]
== Wireless networking

=== Introduction

It can be very useful to be able to use a computer without the annoyance of having a network cable attached at all times. FreeBSD can be used as a wireless client, and even as an "access point".

=== Wireless modes of operation

There are two different ways to configure 802.11 wireless devices: BSS and IBSS.

==== BSS mode

BSS mode is the mode that is typically used. BSS mode is also called infrastructure mode. In this mode, a number of wireless access points are connected to a wired network. Each wireless network has its own name, called the SSID of the network.

Wireless clients connect to these wireless access points. The IEEE 802.11 standard defines the protocol that wireless networks use to connect. A wireless client can be tied to a specific network when an SSID is set. A client can also attach to any network by not explicitly setting an SSID.

==== IBSS mode

IBSS mode, also called ad-hoc mode, is designed for point-to-point connections. There are actually two types of ad-hoc mode. One is IBSS mode, also called ad-hoc or IEEE ad-hoc mode. This mode is defined by the IEEE 802.11 standard. The other is called demo ad-hoc mode, or Lucent ad-hoc mode (and sometimes, confusingly, just ad-hoc mode). This is the old, pre-802.11 ad-hoc mode and should only be used for legacy installations. Neither of the ad-hoc modes will be covered further here.

=== Infrastructure mode

==== Access points

Access points are wireless networking devices that allow one or more wireless clients to use the device as a central hub. When using an access point, all clients communicate through the access point. Multiple access points are often used to completely cover an area such as a house, business, or park with a wireless network.

Access points typically have multiple network connections: the wireless card, and one or more wired Ethernet adapters for connection to the rest of the network.

Access points can either be purchased prebuilt, or you can build your own with FreeBSD and a supported wireless card. Several vendors make wireless access points and wireless cards with various features.

==== Building a FreeBSD access point

===== Requirements

In order to set up a wireless access point with FreeBSD, you need a compatible wireless card. Currently, only cards with the Prism chipset are supported. You will also need a wired network card that is supported by FreeBSD (this should not be difficult to find, since FreeBSD supports a lot of different devices). This guide assumes you want to man:bridge[4] all traffic between the wireless device and the network attached to the wired network card.

The hostap functionality that FreeBSD uses to implement the access point works best with certain versions of firmware. Prism 2 cards should use firmware version 1.3.4 or newer. Prism 2.5 and Prism 3 cards should use firmware version 1.4.9. Older versions of the firmware may or may not work correctly.
At this time, the only way to update a card's firmware is with the Windows(R) firmware update utilities available from the card's manufacturer.

===== Setting it up

First, make sure your system sees the wireless card:

[source,shell]
....
# ifconfig -a
wi0: flags=8843 mtu 1500
        inet6 fe80::202:2dff:fe2d:c938%wi0 prefixlen 64 scopeid 0x7
        inet 0.0.0.0 netmask 0xff000000 broadcast 255.255.255.255
        ether 00:09:2d:2d:c9:50
        media: IEEE 802.11 Wireless Ethernet autoselect (DS/2Mbps)
        status: no carrier
        ssid ""
        stationname "FreeBSD Wireless node"
        channel 10 authmode OPEN powersavemode OFF powersavesleep 100
        wepmode OFF weptxkey 1
....

Do not worry about the details now; just make sure it shows something indicating that a wireless card is installed. If you are using a PC Card and have trouble seeing the wireless interface, check the man:pccardc[8] and man:pccardd[8] manual pages for more information.

Next, you will need to load a module to get the bridging part of FreeBSD ready for the access point. To load the man:bridge[4] module, simply run the following command:

[source,shell]
....
# kldload bridge
....

Loading the module should not produce any errors. If it does, you may need to compile the man:bridge[4] code into your kernel. The crossref:kernelconfig[kernelconfig,kernel configuration] section of this handbook should help you accomplish that task.

With the bridging part ready, we need to tell the FreeBSD kernel which interfaces to bridge together. This is done using man:sysctl[8]:

[source,shell]
....
# sysctl net.link.ether.bridge=1
# sysctl net.link.ether.bridge_cfg="wi0,xl0"
# sysctl net.inet.ip.forwarding=1
....

On FreeBSD 5.2-RELEASE and later, the following options must be used instead:

[source,shell]
....
# sysctl net.link.ether.bridge.enable=1
# sysctl net.link.ether.bridge.config="wi0,xl0"
# sysctl net.inet.ip.forwarding=1
....

Now it is time for the wireless card setup. The following command sets the card to be an access point:

[source,shell]
....
# ifconfig wi0 ssid my_net channel 11 media DS/11Mbps mediaopt hostap up stationname "FreeBSD AP"
....
This man:ifconfig[8] line brings the [.filename]#wi0# interface up, sets its SSID to _my_net_, and sets the station name to _FreeBSD AP_. The `media DS/11Mbps` option sets the card to 11Mbps mode and is needed for any `mediaopt` to actually take effect. The `mediaopt hostap` option places the interface into access point mode. The `channel 11` option sets the 802.11b channel to use. The man:wicontrol[8] manual page lists the valid channel ranges for each regulatory domain.

Now you have a complete, functioning access point up and running. You are encouraged to read man:wicontrol[8], man:ifconfig[8], and man:wi[4] for further information. Reading the section on encryption below is also recommended.

===== Status information

Once the access point is configured and operational, operators will want to see the clients that are associated with it. At any time, the operator may type:

[source,shell]
....
# wicontrol -l
1 station:
00:09:b7:7b:9d:16  asid=04c0, flags=3, caps=1, rates=f<1M,2M,5.5M,11M>, sig=38/15
....

This shows that one station is associated, along with its parameters. The signal indicated should be treated only as a relative indication of strength. Its translation to dBm or other units varies between firmware revisions.

==== Clients

A wireless client is a system that accesses an access point, or another client directly.

Typically, a wireless client has only one network device: the wireless networking card.

There are a few different ways to configure a wireless client, based on the different wireless modes: generally either BSS (infrastructure mode, which requires an access point) or IBSS (ad-hoc, or peer-to-peer mode). Here we will use BSS mode, the more popular of the two, to talk to an access point.

===== Requirements

There is only one real requirement for setting up FreeBSD as a wireless client: a wireless card that is supported by FreeBSD.

===== Setting up a wireless FreeBSD client

Before you start, you will need to know a few things about the wireless network you are joining. In this example, we are joining a network named _my_net_ with encryption turned off.

[NOTE]
====
In this example we are not using encryption, which is a dangerous situation. In the next section, you will learn how to turn on encryption, why it is important to do so, and why some encryption technologies still do not completely protect you.
====

Make sure your card is recognized by FreeBSD:

[source,shell]
....
# ifconfig -a
wi0: flags=8843 mtu 1500
        inet6 fe80::202:2dff:fe2d:c938%wi0 prefixlen 64 scopeid 0x7
        inet 0.0.0.0 netmask 0xff000000 broadcast 255.255.255.255
        ether 00:09:2d:2d:c9:50
        media: IEEE 802.11 Wireless Ethernet autoselect (DS/2Mbps)
        status: no carrier
        ssid ""
        stationname "FreeBSD Wireless node"
        channel 10 authmode OPEN powersavemode OFF powersavesleep 100
        wepmode OFF weptxkey 1
....

Now, we can configure the card with the correct settings for our network:

[source,shell]
....
# ifconfig wi0 inet 192.168.0.20 netmask 255.255.255.0 ssid my_net
....

Replace `192.168.0.20` and `255.255.255.0` with a valid IP address and netmask on your wired network. Remember, the access point is bridging the data between the wireless and the wired network, so it will appear to the other devices on the network that this device is on the wired network just as they are.

Once this is done, you should be able to ping hosts on the wired network just as if you were connected with a standard wired connection.

If you are experiencing problems with your wireless connection, check that you are associated with the access point:

[source,shell]
....
# ifconfig wi0
....

should return some information, and you should see:

[source,shell]
....
status: associated
....

If `associated` is not shown, you may be out of range of the access point, encryption may be turned on, or there may be a configuration problem.

==== Encryption

Encryption on a wireless network is important because you no longer have the ability to keep the network contained in a well-protected area. Your wireless data is broadcast across the entire neighborhood, so anyone who cares to read it can. This is where encryption comes in: by encrypting the data that is sent over the airwaves, you make it much more difficult for any interested party to grab the data straight out of the air.

The two most common ways to encrypt the data between a client and the access point are WEP and man:ipsec[4].

===== WEP

WEP is an abbreviation for Wired Equivalency Protocol. WEP is an attempt to make wireless networks as safe and secure as a wired network. Unfortunately, it has been cracked, and it is not very difficult to break. This also means that it should not be relied upon when it comes to encrypting sensitive data.

It is better than nothing, so use the following command to turn on WEP on your new FreeBSD access point:

[source,shell]
....
# ifconfig wi0 inet up ssid my_net wepmode on wepkey 0x1234567890 media DS/11Mbps mediaopt hostap
....

You can turn on WEP on a client with this command:

[source,shell]
....
# ifconfig wi0 inet 192.168.0.20 netmask 255.255.255.0 ssid my_net wepmode on wepkey 0x1234567890
....
Note that you should replace _0x1234567890_ with a more unique key.

===== IPsec

man:ipsec[4] is a much more robust and powerful tool for encrypting data sent over a network, and is clearly the preferred way to encrypt data over a wireless network. You can read more about man:ipsec[4] security and how to implement it in the crossref:security[ipsec,IPsec] section of this handbook.

==== Tools

There are a small number of tools available for debugging and setting up a wireless network. Some of them, and what they do, are described here.

===== The bsd-airtools package

The bsd-airtools package is a complete toolset that includes wireless auditing tools for WEP key cracking, access point detection, and so on.

The bsd-airtools utilities can be installed from the package:net/bsd-airtools[] port. Information on installing ports can be found in crossref:ports[ports,Installing Applications: Packages and Ports] of this handbook.

The program `dstumbler` is the packaged tool that allows for access point discovery and signal-to-noise ratio graphing. If you are having a hard time getting your access point up and running, `dstumbler` may help you get started.

To test the security of your wireless network, you may choose to use the "dweputils" (`dwepcrack`, `dwepdump`, and `dwepkeygen`) to help determine whether WEP is the right solution for your wireless security needs.

===== The `wicontrol`, `ancontrol`, and `raycontrol` utilities

These are the tools that control how a wireless card behaves on the wireless network. In the examples above, man:wicontrol[8] was chosen because the wireless card is a [.filename]#wi0# interface. If you had a Cisco wireless device, it would come up as [.filename]#an0#, and you would therefore use man:ancontrol[8].

===== The `ifconfig` command

man:ifconfig[8] can handle many of the same options as man:wicontrol[8], but it lacks a few options. Check man:ifconfig[8] for command-line arguments and options.

==== Supported cards

===== Access points

The only cards currently supported for BSS mode (as an access point) are devices based on the Prism 2, 2.5, or 3 chipsets. A complete list can be found in man:wi[4].

===== Clients

Almost all 802.11b wireless cards are currently supported under FreeBSD. Most cards based on the Prism, Spectrum24, Hermes, Aironet, or Raylink chipsets will work as a wireless network card in IBSS (ad-hoc, peer-to-peer, and BSS) mode.

[[network-bluetooth]]
== Bluetooth

=== Introduction

Bluetooth is a wireless technology for creating personal networks with a range of about 10 meters, operating in the unlicensed 2.4 GHz band. Networks are usually formed ad-hoc from portable devices such as cellular phones, PDAs, and laptops. Unlike other popular wireless technologies such as Wi-Fi, Bluetooth offers higher-level services, for example FTP-like file servers, file pushing, voice transport, and serial line emulation.

The Bluetooth stack in FreeBSD is implemented using the Netgraph framework (see man:netgraph[4]). The man:ng_ubt[4] driver supports a broad variety of Bluetooth USB dongles. Bluetooth devices based on the Broadcom BCM2033 chip are supported via the man:ubtbcmfw[4] and man:ng_ubt[4] drivers. The 3Com Bluetooth PC Card 3CRWB60-A is supported by the man:ng_bt3c[4] driver. Serial and UART based Bluetooth devices are supported via the man:sio[4], man:ng_h4[4], and man:hcseriald[8] drivers.

This section describes the use of a USB Bluetooth dongle. Bluetooth support is available in FreeBSD 5.0 and newer systems.

[NOTE]
====
In the 5.0 and 5.1 releases, the kernel modules are available, but the various utilities and manual pages are not compiled by default.
====

=== Plugging in the device

By default, the Bluetooth device drivers are available as kernel modules. Before attaching a device, you will need to load the driver into the kernel:

[source,shell]
....
# kldload ng_ubt
....

If the Bluetooth device is present in the system during system startup, load the module from [.filename]#/boot/loader.conf#:

[.programlisting]
....
ng_ubt_load="YES"
....

Plug in your USB dongle. Output similar to the following will appear on the console (or in syslog):

[source,shell]
....
ubt0: vendor 0x0a12 product 0x0001, rev 1.10/5.25, addr 2
ubt0: Interface 0 endpoints: interrupt=0x81, bulk-in=0x82, bulk-out=0x2
ubt0: Interface 1 (alt.config 5) endpoints: isoc-in=0x83, isoc-out=0x3,
      wMaxPacketSize=49, nframes=6, buffer size=294
....

Copy [.filename]#/usr/shared/examples/netgraph/bluetooth/rc.bluetooth# into some convenient place, like [.filename]#/etc/rc.bluetooth#. This script is used to start and stop the Bluetooth stack. It is a good idea to stop the stack before unplugging the device, although doing so is not (usually) fatal. When starting the stack, you will receive output like the following:

[source,shell]
....
# /etc/rc.bluetooth start ubt0
BD_ADDR: 00:02:72:00:d4:1a
Features: 0xff 0xff 0xf 00 00 00 00 00
<3-Slot> <5-Slot>
Max. ACL packet size: 192 bytes
Number of ACL packets: 8
Max. SCO packet size: 64 bytes
Number of SCO packets: 8
....
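The `BD_ADDR` reported by the startup script above follows the same six-octet, colon-separated format as an Ethernet MAC address, although some of the Bluetooth tools print single-digit octets (for example `0:80:37:29:19:a4`). A small sh sketch (not a FreeBSD utility) to sanity-check such a string before passing it to the Bluetooth tools might look like this:

```shell
# Sketch: check that a string is a plausible BD_ADDR - six
# colon-separated hex octets, where single hex digits are allowed
# because some tools print "0:80:37:29:19:a4".
is_bd_addr() {
    if printf '%s\n' "$1" | grep -Eq '^([0-9a-fA-F]{1,2}:){5}[0-9a-fA-F]{1,2}$'; then
        echo valid
    else
        echo invalid
    fi
}

is_bd_addr 00:02:72:00:d4:1a   # as printed by rc.bluetooth
is_bd_addr not-a-bd-addr
```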
=== Host Controller Interface (HCI)

The Host Controller Interface (HCI) provides a command interface to the baseband controller and link manager, and access to hardware status and control registers. This interface provides a uniform method of accessing the Bluetooth baseband capabilities. The HCI layer on the host exchanges data and commands with the HCI firmware on the Bluetooth hardware. The Host Controller Transport Layer (i.e., physical bus) driver provides both HCI layers with the ability to exchange information with each other.

A single Netgraph node of type _hci_ is created for each Bluetooth device. The HCI node is normally connected to the Bluetooth device driver node (downstream) and the L2CAP node (upstream). All HCI operations must be performed on the HCI node, not on the device driver node. The default name for the HCI node is "devicehci". For more details refer to the man:ng_hci[4] manual page.

One of the most common tasks is discovering Bluetooth devices in RF proximity. This operation is called _inquiry_. Inquiry and other HCI related operations are done with the man:hccontrol[8] utility. The example below shows how to find out which Bluetooth devices are in range. The list of devices should be displayed in a few seconds. Note that a remote device will only answer the inquiry if it is in _discoverable_ mode.

[source,shell]
....
% hccontrol -n ubt0hci inquiry
Inquiry result, num_responses=1
Inquiry result #0
        BD_ADDR: 00:80:37:29:19:a4
        Page Scan Rep. Mode: 0x1
        Page Scan Period Mode: 00
        Page Scan Mode: 00
        Class: 52:02:04
        Clock offset: 0x78ef
Inquiry complete. Status: No error [00]
....

The `BD_ADDR` is the unique address of a Bluetooth device, similar to the MAC address of a network card. This address is needed for further communication with the device. It is possible to assign a human-readable name to a BD_ADDR. The [.filename]#/etc/bluetooth/hosts# file contains information about known Bluetooth hosts. The following example shows how to obtain the human-readable name assigned to a remote device:

[source,shell]
....
% hccontrol -n ubt0hci remote_name_request 00:80:37:29:19:a4
BD_ADDR: 00:80:37:29:19:a4
Name: Pav's T39
....

If you perform an inquiry on a remote Bluetooth device, it will find your computer as "your.host.name (ubt0)". The name assigned to the local device can be changed at any time.

The Bluetooth system provides a point-to-point connection (only two Bluetooth units involved) or a point-to-multipoint connection. In a point-to-multipoint connection, the connection is shared among several Bluetooth devices. The following example shows how to obtain the list of active baseband connections for the local device:

[source,shell]
....
% hccontrol -n ubt0hci read_connection_list
Remote BD_ADDR    Handle Type Mode Role Encrypt Pending Queue State
00:80:37:29:19:a4     41  ACL    0 MAST    NONE       0     0 OPEN
....
The _connection handle_ is useful when termination of the baseband connection is required, although it is normally not necessary to do this by hand. The Bluetooth stack automatically terminates inactive baseband connections.

[source,shell]
....
# hccontrol -n ubt0hci disconnect 41
Connection handle: 41
Reason: Connection terminated by local host [0x16]
....

Refer to `hccontrol help` for a complete listing of the available HCI commands. Most of the HCI commands do not require superuser privileges.

=== Logical Link Control and Adaptation Protocol (L2CAP)

The Logical Link Control and Adaptation Protocol (L2CAP) provides connection-oriented and connectionless data services to upper-layer protocols, with protocol multiplexing capability and segmentation and reassembly operation. L2CAP permits higher-level protocols and applications to transmit and receive L2CAP data packets up to 64 KB in length.

L2CAP is based on the concept of _channels_. A channel is a logical connection on top of a baseband connection. Each channel is bound to a single protocol in a many-to-one fashion. Multiple channels can be bound to the same protocol, but a channel cannot be bound to multiple protocols. Each L2CAP packet received on a channel is directed to the appropriate higher-level protocol. Multiple channels can share the same baseband connection.

A single Netgraph node of type _l2cap_ is created for each Bluetooth device. The L2CAP node is normally connected to the Bluetooth HCI node (downstream) and Bluetooth socket nodes (upstream). The default name for the L2CAP node is "devicel2cap". For more details refer to the man:ng_l2cap[4] manual page.

A useful command is man:l2ping[8], which can be used to ping other devices. Some Bluetooth implementations may not return all of the data sent to them, so _0 bytes_ in the following example is normal.

[source,shell]
....
# l2ping -a 00:80:37:29:19:a4
0 bytes from 0:80:37:29:19:a4 seq_no=0 time=48.633 ms result=0
0 bytes from 0:80:37:29:19:a4 seq_no=1 time=37.551 ms result=0
0 bytes from 0:80:37:29:19:a4 seq_no=2 time=28.324 ms result=0
0 bytes from 0:80:37:29:19:a4 seq_no=3 time=46.150 ms result=0
....

The man:l2control[8] utility is used to perform various operations on L2CAP nodes. This example shows how to obtain the list of logical connections (channels) and the list of baseband connections for the local device:

[source,shell]
....
% l2control -a 00:02:72:00:d4:1a read_channel_list
L2CAP channels:
Remote BD_ADDR     SCID/ DCID   PSM  IMTU/ OMTU State
00:07:e0:00:0b:ca    66/   64     3   132/  672 OPEN
% l2control -a 00:02:72:00:d4:1a read_connection_list
L2CAP connections:
Remote BD_ADDR    Handle Flags Pending State
00:07:e0:00:0b:ca     41 O           0 OPEN
....
Another diagnostic tool is man:btsockstat[1]. It does a job similar to man:netstat[1], but for the Bluetooth network-related data structures. The example below shows the same logical connection as man:l2control[8] above.

[source,shell]
....
% btsockstat
Active L2CAP sockets
PCB      Recv-Q Send-Q Local address/PSM       Foreign address   CID   State
c2afe900      0      0 00:02:72:00:d4:1a/3     00:07:e0:00:0b:ca 66    OPEN
Active RFCOMM sessions
L2PCB    PCB      Flag MTU   Out-Q DLCs State
c2afe900 c2b53380 1    127   0     Yes  OPEN
Active RFCOMM sockets
PCB      Recv-Q Send-Q Local address     Foreign address   Chan DLCI State
c2e8bc80      0    250 00:02:72:00:d4:1a 00:07:e0:00:0b:ca 3    6    OPEN
....

=== RFCOMM protocol

The RFCOMM protocol provides emulation of serial ports over the L2CAP protocol. The protocol is based on the ETSI standard TS 07.10. RFCOMM is a simple transport protocol with additional provisions for emulating the 9 circuits of RS-232 (EIATIA-232-E) serial ports. The RFCOMM protocol supports up to 60 simultaneous connections (RFCOMM channels) between two Bluetooth devices.

For the purposes of RFCOMM, a complete communication path involves two applications running on different devices (the communication endpoints), with a communication segment between them. RFCOMM is intended to cover applications that make use of the serial ports of the devices on which they reside. The communication segment is a Bluetooth link from one device to another (direct connect).

RFCOMM is only concerned with the connection between the devices in the direct connect case, or between the device and a modem in the network case. RFCOMM can also support other configurations, such as modules that communicate via Bluetooth wireless technology on one side and provide a wired interface on the other.

In FreeBSD, the RFCOMM protocol is implemented at the Bluetooth sockets layer.

=== Pairing of devices

By default, Bluetooth communication is not authenticated, and any device can talk to any other device. A Bluetooth device (for example a cellular phone) may choose to require authentication to provide a particular service (for example a dial-up service). Bluetooth authentication is normally done with _PIN codes_. A PIN code is an ASCII string of up to 16 characters in length. The user is required to enter the same PIN code on both devices. Once the user has entered the PIN code, both devices generate a _link key_. After that, the link key can be stored either in the devices themselves or in persistent storage. The next time, both devices will use the previously generated link key. This procedure is called _pairing_. Note that if the link key is lost by either device, the pairing must be repeated.

The man:hcsecd[8] daemon is responsible for handling all Bluetooth authentication requests. The default configuration file is [.filename]#/etc/bluetooth/hcsecd.conf#. An example section for a cellular phone with the PIN code set to "1234" is shown below:

[.programlisting]
....
device {
        bdaddr  00:80:37:29:19:a4;
        name    "Pav's T39";
        key     nokey;
        pin     "1234";
      }
....
There is no limitation on PIN codes (except their length). Some devices (for example Bluetooth headsets) may have a fixed PIN code built in. The `-d` switch forces the man:hcsecd[8] daemon to stay in the foreground, so it is easy to see what is happening. Set the remote device to receive pairing and initiate a Bluetooth connection to the remote device. The remote device should report that pairing was accepted and request the PIN code. Enter the same PIN code as in [.filename]#hcsecd.conf#. Now your PC and the remote device are paired. Alternatively, you can initiate pairing from the remote device. The following is a sample `hcsecd` output:

[.programlisting]
....
hcsecd[16484]: Got Link_Key_Request event from 'ubt0hci', remote bdaddr 0:80:37:29:19:a4
hcsecd[16484]: Found matching entry, remote bdaddr 0:80:37:29:19:a4, name 'Pav's T39', link key doesn't exist
hcsecd[16484]: Sending Link_Key_Negative_Reply to 'ubt0hci' for remote bdaddr 0:80:37:29:19:a4
hcsecd[16484]: Got PIN_Code_Request event from 'ubt0hci', remote bdaddr 0:80:37:29:19:a4
hcsecd[16484]: Found matching entry, remote bdaddr 0:80:37:29:19:a4, name 'Pav's T39', PIN code exists
hcsecd[16484]: Sending PIN_Code_Reply to 'ubt0hci' for remote bdaddr 0:80:37:29:19:a4
....

=== Service Discovery Protocol (SDP)

The Service Discovery Protocol (SDP) provides the means for client applications to discover the existence of services provided by server applications, as well as the attributes of those services. The attributes of a service include the type or class of service offered and the mechanism or protocol information needed to utilize the service.

SDP involves communication between an SDP server and an SDP client. The server maintains a list of service records, which describe the characteristics of the services associated with the server. Each service record contains information about a single service. A client may retrieve information from the service records maintained by the SDP server by issuing an SDP request. If the client, or an application associated with the client, decides to use a service, it must open a separate connection to the service provider in order to use the service. SDP provides a mechanism for discovering services and their attributes, but it does not provide a mechanism for using those services.

Normally, an SDP client searches for services based on some desired characteristics of the services. However, there are times when it is desirable to discover which types of services are described by an SDP server's service records without any prior information about the services. This process of looking through any offered services is called _browsing_.

The Bluetooth SDP server and client are currently implemented in the third-party package sdp-1.5, which can be downloaded from http://www.geocities.com/m_evmenkin/[here]. sdptool is a command-line SDP client. The following example shows how to perform an SDP browse query:

[source,shell]
....
# sdptool browse 00:80:37:29:19:a4
Browsing 00:80:37:29:19:A4 ...
Service Name: Dial-up Networking
Protocol Descriptor List:
  "L2CAP" (0x0100)
  "RFCOMM" (0x0003)
    Channel: 1

Service Name: Fax
Protocol Descriptor List:
  "L2CAP" (0x0100)
  "RFCOMM" (0x0003)
    Channel: 2

Service Name: Voice gateway
Service Class ID List:
  "Headset Audio Gateway" (0x1112)
  "Generic Audio" (0x1203)
Protocol Descriptor List:
  "L2CAP" (0x0100)
  "RFCOMM" (0x0003)
    Channel: 3
....

... and so on. Note that each service has a list of attributes (the RFCOMM channel, for example). Depending on the service, you might need to make a note of some of those attributes. Some Bluetooth implementations do not support service browsing and may return an empty list. In that case, it is still possible to search for a specific service. The example below shows how to search for the OBEX Object Push (OPUSH) service:

[source,shell]
....
# sdptool search --bdaddr 00:07:e0:00:0b:ca OPUSH
....

Offering services to Bluetooth clients on FreeBSD is done with the sdpd server:

[source,shell]
....
# sdpd
....

sdptool is also used to register a service with the local SDP server. The example below shows how to register the Network Access with PPP (LAN) service. Note that some services require attributes (the RFCOMM channel, for example):

[source,shell]
....
# sdptool add --channel=7 LAN
....

The list of services registered with the local SDP server can be obtained by issuing an SDP browse query to the "special" BD_ADDR:

[source,shell]
....
# sdptool browse ff:ff:ff:00:00:00
....
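Since the RFCOMM channel attribute from browse output like the above often has to be noted down for later use, here is a small awk-based sketch (not part of the sdp-1.5 package) that pulls the channel number for a named service out of captured `sdptool browse` output:

```shell
# extract_channel SERVICE: read sdptool browse output on stdin and
# print the first "Channel:" value that follows the named service.
extract_channel() {
    awk -v svc="$1" '
        $0 ~ ("Service Name: " svc) { found = 1 }
        found && /Channel:/         { print $2; exit }
    '
}

# Feed it a fragment of the sample output shown above:
printf '%s\n' \
    'Service Name: Fax' \
    'Protocol Descriptor List:' \
    '  "RFCOMM" (0x0003)' \
    '    Channel: 2' | extract_channel Fax
```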
=== Dial-Up Networking (DUN) and Network Access with PPP (LAN) profiles

The Dial-Up Networking (DUN) profile is mostly used with modems and cellular phones. The scenarios covered by this profile are the following:

* use of a cellular phone or modem by a computer as a wireless modem for connecting to a dial-up Internet access server, or for using other dial-up services;
* use of a cellular phone or modem by a computer to receive data calls.

The Network Access with PPP (LAN) profile can be used in the following situations:

* LAN access for a single Bluetooth device;
* LAN access for multiple Bluetooth devices;
* PC-to-PC connection (using PPP networking over serial cable emulation).

In FreeBSD, both profiles are implemented with man:ppp[8] and the man:rfcomm_pppd[8] wrapper, which converts the RFCOMM Bluetooth connection into something PPP can operate on. Before either profile can be used, a new PPP label must be created in [.filename]#/etc/ppp/ppp.conf#. Consult the man:rfcomm_pppd[8] manual page for examples.

In the following example, man:rfcomm_pppd[8] is used to open an RFCOMM connection to a remote device with BD_ADDR 00:80:37:29:19:a4 on the DUN RFCOMM channel. The actual RFCOMM channel number is obtained from the remote device via SDP. It is also possible to specify the RFCOMM channel by hand, in which case man:rfcomm_pppd[8] will not perform the SDP query. Use sdptool to find out the RFCOMM channel on the remote device.

[source,shell]
....
# rfcomm_pppd -a 00:80:37:29:19:a4 -c -C dun -l rfcomm-dialup
....

In order to provide the Network Access with PPP (LAN) service, the sdpd server must be running. It is also needed to register the LAN service with the local SDP server. Note that the LAN service requires the RFCOMM channel attribute. A new entry for LAN clients must be created in the [.filename]#/etc/ppp/ppp.conf# file; consult the man:rfcomm_pppd[8] manual page for examples. Finally, an RFCOMM PPP server must be running and listening on the same RFCOMM channel as registered with the local SDP server. The example below shows how to start the RFCOMM PPP server:

[source,shell]
....
# rfcomm_pppd -s -C 7 -l rfcomm-server
....
=== OBEX Object Push (OPUSH) profile

OBEX is a widely used protocol for simple file transfers between mobile devices. Its main use is in infrared communication, where it serves for generic file transfers between notebooks or PDAs, and for sending business cards or calendar entries between cellular phones and other devices with PIM applications.

The OBEX server and client are implemented as a third-party package, obexapp-1.0, which can be downloaded from http://www.geocities.com/m_evmenkin/[here]. The package requires the openobex library (included with obexapp above) and the package:devel/glib12[] port. Note that obexapp does not require root privileges.

The OBEX client is used to push objects to and/or pull objects from the OBEX server. An object can, for example, be a business card or an appointment. The OBEX client can obtain the RFCOMM channel number from the remote device via SDP. This can be done by specifying a service name instead of an RFCOMM channel number. The supported service names are IrMC, FTRN, and OPUSH. It is also possible to specify the RFCOMM channel as a number. Below is an example of an OBEX session, where the device information object is pulled from the cellular phone, and a new object (the business card) is pushed to the phone:

[source,shell]
....
% obexapp -a 00:80:37:29:19:a4 -C IrMC
obex> get
get: remote file> telecom/devinfo.txt
get: local file> devinfo-t39.txt
Success, response: OK, Success (0x20)
obex> put
put: local file> new.vcf
put: remote file> new.vcf
Success, response: OK, Success (0x20)
obex> di
Success, response: OK, Success (0x20)
....

In order to provide the OBEX Object Push service, the sdpd server must be running. It is also necessary to register the OPUSH service with the local SDP server. Note that the OPUSH service requires the RFCOMM channel attribute. A root folder, where all incoming objects will be stored, must be created. The default path to the root folder is [.filename]#/var/spool/obex#. Finally, an OBEX server must be running and listening on the same RFCOMM channel as registered with the local SDP server. The example below shows how to start the OBEX server:

[source,shell]
....
# obexapp -s -C 10
....

=== Serial Port (SP) profile

The Serial Port (SP) profile allows Bluetooth devices to perform RS232 (or similar) serial cable emulation. The scenario covered by this profile deals with legacy applications using Bluetooth as a cable replacement, through a virtual serial port abstraction.

The man:rfcomm_sppd[1] utility implements the Serial Port profile. A pseudo tty is used as the virtual serial port abstraction. The example below shows how to connect to a remote device's Serial Port service. Note that you do not have to specify an RFCOMM channel - man:rfcomm_sppd[1] can obtain it from the remote device via SDP. If you would like to override this, specify an RFCOMM channel on the command line.

[source,shell]
....
# rfcomm_sppd -a 00:07:E0:00:0B:CA -t /dev/ttyp6
rfcomm_sppd[94692]: Starting on /dev/ttyp6...
....
Once connected, the pseudo tty can be used as a serial port:

[source,shell]
....
# cu -l ttyp6
....

=== Troubleshooting

==== A remote device cannot connect

Some older Bluetooth devices do not support role switching. By default, when FreeBSD accepts a new connection, it tries to perform a role switch and become the master. Devices that do not support this will not be able to connect. Note that role switching is performed when a new connection is being established, so it is not possible to ask a remote device whether it supports role switching. There is an HCI option to disable role switching on the local side:

[source,shell]
....
# hccontrol -n ubt0hci write_node_role_switch 0
....

==== Something is going wrong; can I see what exactly is happening?

Yes, you can. Use the third-party package hcidump-1.5, which can be downloaded from http://www.geocities.com/m_evmenkin/[here]. The hcidump utility is similar to man:tcpdump[1]. It can be used to display the contents of Bluetooth packets on the terminal, and to dump Bluetooth packets to a file.

[[network-bridging]]
== Bridging

=== Introduction

It is sometimes useful to divide one physical network (such as an Ethernet segment) into two separate network segments, without having to create IP subnets and connect the segments using a router. A device that connects two networks together in this fashion is called a "bridge". A FreeBSD system with two network interface cards can act as a bridge.

The bridge works by learning the MAC layer addresses (Ethernet addresses) of the devices attached to each of its network interfaces. It forwards traffic between the two networks only when the source and the destination are on different networks.

In many respects, a bridge is like an Ethernet switch with very few ports.

=== Situations where bridging is appropriate

There are two common situations in which a bridge is used today.

==== High traffic on a segment

Situation one is where a physical network segment is overloaded with traffic, but for whatever reason the network cannot be divided into subnets and connected with a router.

Consider the example of a newspaper where the Editorial and Production departments are on the same subnetwork. The Editorial users all use server `A` for file service, and the Production users use server `B`. An Ethernet is used to connect all users together, and high loads on the network are slowing everything down.

If the Editorial users could be segregated onto one network segment, and the Production users onto another, the two network segments could be connected with a bridge. Only the network traffic destined for the "other" side of the bridge would be forwarded, reducing the congestion on each network segment.

==== Filtering/traffic shaping firewall

The second common situation is where firewall functionality is needed without network address translation (NAT).

As an example, take a small company that is connected to its ISP via DSL or ISDN. The company has 13 globally accessible IP addresses from the ISP, and has 10 PCs on its network. In this situation, using a router-based firewall is difficult because of subnetting issues.

A bridge-based firewall can be configured and dropped into place just downstream of the DSL/ISDN router, without any IP numbering issues.

=== Configuring a bridge

==== Network interface card selection

A bridge requires at least two network cards to function. Unfortunately, not all network interface cards support bridging in FreeBSD 4.0. Read man:bridge[4] for details on which cards are supported.

Install and test the two network cards before continuing.

==== Kernel configuration changes

To enable kernel support for bridging, add the line

[.programlisting]
....
options BRIDGE
....

to your kernel configuration file, and rebuild your kernel.

==== Firewall support

If you are planning to use the bridge as a firewall, you will need to add the `IPFIREWALL` option as well. Read the firewall chapter for general information on configuring the bridge as a firewall.

If you need to allow non-IP packets (such as ARP) to flow through the bridge, there is a firewall option that must be set: `IPFIREWALL_DEFAULT_TO_ACCEPT`. Note that this changes the default so that the firewall accepts every packet. Make sure you understand how this change affects your ruleset before you make it.

==== Traffic shaping support

If you want to use the bridge as a traffic shaper, you will need to add the `DUMMYNET` option to your kernel configuration. Read man:dummynet[4] for further information.

=== Enabling the bridge

To enable the bridge, add the following line to [.filename]#/etc/sysctl.conf#:

[.programlisting]
....
net.link.ether.bridge=1
....

To enable bridging on the specified interfaces, add:

[.programlisting]
....
net.link.ether.bridge_cfg=if1,if2
....

(replace _if1_ and _if2_ with the names of your two network interfaces). If you want bridged packets to be filtered by man:ipfw[8], you should also add:

[.programlisting]
....
net.link.ether.bridge_ipfw=1
....

For FreeBSD 5.2-RELEASE and later, use the following lines instead:

[.programlisting]
....
net.link.ether.bridge.enable=1
net.link.ether.bridge.config=if1,if2
net.link.ether.bridge.ipfw=1
....
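The forwarding rule a learning bridge applies - pass a frame between segments only when source and destination were learned on different interfaces, or when the destination is still unknown - can be sketched in a few lines of sh. The MAC-to-interface table here is invented for illustration; a real bridge builds it dynamically from observed traffic:

```shell
# Hypothetical stand-in for the bridge's learned MAC address table.
segment_of() {
    case $1 in
        00:09:2d:2d:c9:50) echo if1 ;;   # learned on interface 1
        00:09:2d:2d:c9:51) echo if1 ;;   # another host on interface 1
        00:e0:b5:36:cf:4f) echo if2 ;;   # learned on interface 2
        *)                 echo unknown ;;
    esac
}

# bridge_decision SRC_MAC DST_MAC: forward across the bridge, or drop?
bridge_decision() {
    src=$(segment_of "$1")
    dst=$(segment_of "$2")
    if [ "$dst" = unknown ] || [ "$src" != "$dst" ]; then
        echo forward   # unknown or cross-segment destination
    else
        echo drop      # both ends on the same segment; no need to forward
    fi
}

bridge_decision 00:09:2d:2d:c9:50 00:e0:b5:36:cf:4f   # cross-segment
bridge_decision 00:09:2d:2d:c9:50 00:09:2d:2d:c9:51   # same segment
```

This is why only traffic destined for the "other" side of the bridge crosses it, which is what relieves the congestion on each segment.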
=== Other information

If you want to be able to man:telnet[1] into the bridge from the network, it is correct to assign one of the network cards an IP address. The consensus is that assigning both cards an address is a bad idea.

If you have multiple bridges on your network, there cannot be more than one path between any two workstations. Technically, this means that spanning tree link management is not supported.

A bridge can add latency to your man:ping[8] times, especially for traffic from one segment to another.

[[network-nfs]]
== NFS

Among the many different file systems that FreeBSD supports is the Network File System, also known as NFS. NFS allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems as if they were local files.

Some of the most notable benefits that NFS can provide are:

* Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to everyone over the network.
* There is no need for users to have separate home directories on every machine on the network. Home directories can be set up on the NFS server and made available from anywhere on the network.
* Storage devices such as floppy disks, CDROM drives, and ZIP drives can be used by other machines on the network. This may reduce the number of removable media drives throughout the network.

=== How NFS works

NFS consists of at least two main parts: a server, and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running.

[NOTE]
====
In FreeBSD 5.X, the `portmap` utility has been replaced with the rpcbind utility. Thus, in FreeBSD 5.X the user is required to replace every instance of portmap with `rpcbind` in the examples below.
====

The server has to be running the following daemons:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Daemon
| Description

|nfsd
|The NFS daemon, which services requests from NFS clients.

|mountd
|The NFS mount daemon, which actually carries out the requests that man:nfsd[8] passes on to it.

|portmap
|The portmapper daemon, which allows NFS clients to discover which port the NFS server is using.
|===

The client can also run a daemon known as nfsiod. The nfsiod daemon services the requests from the NFS server. It is optional, and improves performance, but is not required for normal and correct operation. See the man:nfsiod[8] manual page for more information.

[[network-configuring-nfs]]
=== Configuring NFS

NFS configuration is a relatively straightforward process. The processes that need to be running can all start at boot time with a few modifications to your [.filename]#/etc/rc.conf# file.

On the NFS server, make sure that the following options are configured in the [.filename]#/etc/rc.conf# file:

[.programlisting]
....
portmap_enable="YES"
nfs_server_enable="YES"
mountd_flags="-r"
....

`mountd` is run automatically whenever the NFS server is enabled.

On the client, make sure this option is present in [.filename]#/etc/rc.conf#:

[.programlisting]
....
nfs_client_enable="YES" .... [.filename]#/etc/exports# ファイルは NFS サーバがどのファイルシステムをエクスポート (ときどき "共有" と呼ばれます) するのかを指定します。 [.filename]#/etc/exports# ファイル中の各行は、 エクスポートするファイルシステム、 およびそのファイルシステムにアクセスできるマシンを指定します。 ファイルシステムにアクセスできるマシンとともに、 アクセスオプションも指定できます。 このファイルで指定できるオプションはたくさんありますが、 ここではほんの少しだけ言及します。man:exports[5] マニュアルページを読めば、 他のオプションは簡単にみつけられるでしょう。 いくつか [.filename]#/etc/exports# の設定例を示します。 以下の例はファイルシステムのエクスポートの考え方を示しますが、 あなたの環境とネットワーク設定に応じて設定は少し変わるでしょう。 たとえば次の行は [.filename]#/cdrom# ディレクトリを、サーバと同じドメイン名か (そのため、いずれもドメイン名がありません)、 [.filename]#/etc/hosts# に記述されている三つの例となるマシンに対してエクスポートします。 `-ro` フラグは共有されるファイルシステムを読み込み専用にします。 このフラグにより、 リモートシステムは共有されたファイルシステムに対して何の変更も行えなくなります。 [.programlisting] .... /cdrom -ro host1 host2 host3 .... 以下の設定は IP アドレスで指定した 3 つのホストに対して [.filename]#/home# をエクスポートします。 この設定はプライベートネットワークで DNS が設定されていない場合に便利でしょう。 内部のホスト名に対して [.filename]#/etc/hosts# を設定するという手段もあります。 詳細については man:hosts[5] を参照してください。 `-alldirs` フラグはサブディレクトリがマウントポイントとなることを認めます。 言い替えると、これはサブディレクトリをマウントしませんが、 クライアントが要求するか、 または必要とするディレクトリだけをマウントできるようにします。 [.programlisting] .... /home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... 以下の設定は、サーバとは異なるドメイン名の 2 台のクライアントがアクセスできるように [.filename]#/a# をエクスポートします。 `-maproot=root` フラグは、リモートシステムの `root` ユーザが、 エクスポートされたファイルシステムに `root` として書き込むことを許可します。 `-maproot=root` フラグが無ければ、 リモートマシンの `root` 権限を持っていても、 共有されたファイルシステム上のファイルを変更することはできないでしょう。 [.programlisting] .... /a -maproot=root host.example.com box.example.org .... クライアントがエクスポートされたファイルシステムにアクセスするためには、 そうする権限が与えられていなければなりません。 [.filename]#/etc/exports# ファイルに クライアントが含まれているかどうか確認してください。 [.filename]#/etc/exports# ファイルでは、 それぞれの行が一つのファイルシステムを一つのホストにエクスポートすることを表します。 リモートホストはファイルシステム毎に一度だけ指定することができ、 それに加えて一つのデフォルトエントリを置けます。たとえば [.filename]#/usr# が単一のファイルシステムであると仮定します。 次の [.filename]#/etc/exports# は無効です。 [.programlisting] .... /usr/src client /usr/ports client .... 単一のファイルシステムである [.filename]#/usr# は、2 行に渡って、同じホスト `client` へエクスポートされています。 この場合、正しい書式は次のとおりです。 [.programlisting] .... /usr/src /usr/ports client .... 
あるホストにエクスポートされるある 1 つのファイルシステムのプロパティは、 1 行ですべて指定しなければなりません。 クライアントの指定のない行は、単一のホストとして扱われます。 これはファイルシステムをエクスポートできる方法を制限しますが、 多くの場合これは問題になりません。 下記は、 [.filename]#/usr# および [.filename]#/exports# がローカルファイルシステムである場合の、 有効なエクスポートリストの例です。 [.programlisting] .... # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... 変更が有効となるように、 [.filename]#/etc/exports# が変更されたら `mountd` を再起動しなければなりません。 これは `mountd` プロセスに HUP シグナルを送ることで実行できます。 [source,shell] .... # kill -HUP `cat /var/run/mountd.pid` .... 他には、再起動すれば、FreeBSD はすべてを適切に設定します。 しかしながら、再起動は必須ではありません。 `root` 権限で以下のコマンドを実行すれば、すべてが起動するでしょう。 NFS サーバでは [source,shell] .... # portmap # nfsd -u -t -n 4 # mountd -r .... NFS クライアントでは [source,shell] .... # nfsiod -n 4 .... これでリモートのファイルシステムを実際にマウントする準備がすべてできました。 この例では、サーバの名前は `server` で、 クライアントの名前は `client` とします。 リモートファイルシステムを一時的にマウントするだけ、 もしくは設定をテストするだけなら、クライアント上で `root` 権限で以下のコマンドを実行するだけです。 [source,shell] .... # mount server:/home /mnt .... これで、サーバの [.filename]#/home# ディレクトリが、クライアントの [.filename]#/mnt# にマウントされます。もしすべてが正しく設定されていれば、 クライアントの /mnt に入り、 サーバにあるファイルすべてを見れるはずです。 リモートファイルシステムを起動のたびに自動的にマウントしたいなら、 ファイルシステムを [.filename]#/etc/fstab# ファイルに追加してください。 例としてはこのようになります。 [.programlisting] .... server:/home /mnt nfs rw 0 0 .... 
man:fstab[5] マニュアルページに利用可能なオプションがすべて掲載されています。 === 実用的な使い方 NFS には実用的な使用法がいくつもあります。 ここで典型的な使用法をいくつか紹介しましょう。 * 何台ものマシンで CDROM などのメディアを共有するように設定します。 これは安上がりで、たいていは、 複数のマシンにソフトウェアをインストールするのにより便利な方法です。 * 大規模なネットワークでは、 すべてのユーザのホームディレクトリを格納するメイン NFS サーバを構築すると、ずっと便利でしょう。 どのワークステーションにログインしても、 ユーザがいつでも同じホームディレクトリを利用できるように、 これらのホームディレクトリはネットワークに向けてエクスポートされます。 * 何台ものマシンで [.filename]#/usr/ports/distfiles# ディレクトリを共有できます。こうすると、 何台ものマシン上に port をインストールする必要がある時に、 それぞれのマシンでソースコードをダウンロードすることなく、 直ちにソースにアクセスできます。 [[network-amd]] === amd による自動マウント man:amd[8] (自動マウントデーモン) は、 ファイルシステム内のファイルまたはディレクトリがアクセスされると、 自動的にリモートファイルシステムをマウントします。 また、一定の間アクセスされないファイルシステムは amd によって自動的にアンマウントされます。 amd を使用することは、通常 [.filename]#/etc/fstab# 内に記述する恒久的なマウントに対する、 単純な代替案となります。 amd はそれ自身を NFS サーバとして [.filename]#/host# および [.filename]#/net# ディレクトリに結びつけることによって動作します。 このディレクトリ内のどこかでファイルがアクセスされると、 amd は対応するリモートマウントを調べて、 自動的にそれをマウントします。 [.filename]#/net# が、エクスポートされたファイルシステムを IP アドレスで指定してマウントするのに利用される一方で、 [.filename]#/host# は、エクスポートされたファイルシステムをリモートホスト名で指定してマウントするのに利用されます。 [.filename]#/host/foobar/usr# 内のファイルにアクセスすると、 amd はホスト `foobar` からエクスポートされた [.filename]#/usr# をマウントします。 .amd によるエクスポートされたファイルシステムのマウント [example] ==== `showmount` コマンドを用いて、 リモートホストのマウントで利用できるものが見られます。 たとえば、`foobar` と名付けられたホストのマウントを見るために次のように利用できます。 [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /host/foobar/usr .... ==== 例のように `showmount` はエクスポートとして [.filename]#/usr# を表示します。 [.filename]#/host/foobar/usr# にディレクトリを変更すると、 amd はホスト名 `foobar` を解決し、お望みのエクスポートをマウントしようと試みます。 amd は [.filename]#/etc/rc.conf# 内に次の行を記述すれば、 起動スクリプトによって起動されます。 [.programlisting] .... amd_enable="YES" .... さらに `amd_flags` オプションによって amd にフラグをカスタマイズして渡せます。デフォルトでは `amd_flags` は次のように設定されています。 [.programlisting] .... amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" .... 
[.filename]#/etc/amd.map# ファイルは、 エクスポートがマウントされるデフォルトオプションを決定します。 [.filename]#/etc/amd.conf# ファイルは、 amd のより高度な機能の一部を設定します。 詳細については man:amd[8] および man:amd.conf[8] マニュアルページを参照してください。 [[network-nfs-integration]] === 他のシステムとの統合についての問題 ISA バス用のイーサネットアダプタの中には性能が悪いため、 ネットワーク、特に NFS で深刻な問題がおきるものがあります。 これは FreeBSD に限ったことではありませんが FreeBSD でも起こり得ます。 この問題は (FreeBSD を使用した) PC がシリコングラフィックス社やサン・マイクロシステムズ社などの高性能なワークステーションにネットワーク接続されている場合に頻繁に起こります。 NFS マウントはうまく動作するでしょう。 また、いくつかの操作もうまく動作するかもしれませんが、 他のシステムに対する要求や応答は続いていても、 突然サーバがクライアントの要求に対して応答しなくなります。これは、 クライアントが FreeBSD か上記のワークステーションであるときにクライアント側に起きる現象です。 多くのシステムでは、いったんこの問題が現われると、 行儀良くクライアントを終了する手段はありません。 NFS がこの状態に陥ってしまうと正常に戻すことはできないため、 多くの場合クライアントをリセットすることが唯一の解決法となります。 "正しい" 解決法は、より高性能のイーサネットアダプタを FreeBSD システムにインストールすることですが、 満足に動作させる簡単な方法があります。 FreeBSD システムが _サーバ_ になるのなら、 クライアントからのマウント時に `-w=1024` オプションをつけて下さい。FreeBSD システムが _クライアント_ になるのなら、 NFS ファイルシステムを `-r=1024` オプションつきでマウントして下さい。 これらのオプションは自動的にマウントをおこなう場合には クライアントの [.filename]#fstab# エントリの 4 番目のフィールドに指定してもよいですし、 手動マウントの場合は mount コマンドの `-o` パラメータで指定してもよいでしょう。 NFS サーバとクライアントが別々のネットワーク上にあるような場合、 これと間違えやすい他の問題が起きることに注意して下さい。 そのような場合は、ルータが必要な UDP 情報をきちんとルーティングしているかを確かめて下さい。 していなければ、たとえあなたが何をしようと解決できないでしょう。 次の例では `fastws` は高性能ワークステーションのホスト (インタフェース) 名で、 `freebox` は低性能のイーサネットアダプタを備えた FreeBSD システムのホスト (インタフェース) 名です。 また [.filename]#/sharedfs# はエクスポートされる NFS ファイルシステムであり (man:exports[5] を参照) 、 [.filename]#/project# はエクスポートされたファイルシステムの、 クライアント上のマウントポイントとなります。 すべての場合において、アプリケーションによっては `hard` や `soft`, `bg` といった追加オプションがふさわしいかもしれないことに注意して下さい。 クライアント側 FreeBSD システム (`freebox`) の [.filename]#/etc/fstab# の例は以下のとおりです。 [.programlisting] .... fastws:/sharedfs /project nfs rw,-r=1024 0 0 .... `freebox` 上で手動で mount コマンドを実行する場合は次のようにして下さい。 [source,shell] .... # mount -t nfs -o -r=1024 fastws:/sharedfs /project .... サーバ側 FreeBSD システム (`fastws`) の [.filename]#/etc/fstab# の例は以下のとおりです。 [.programlisting] .... freebox:/sharedfs /project nfs rw,-w=1024 0 0 .... 
`fastws` 上で手動で mount コマンドを実行する場合は次のようにして下さい。

[source,shell]
....
# mount -t nfs -o -w=1024 freebox:/sharedfs /project
....

ほとんどどのような 16 ビットのイーサネットアダプタでも、 上記の読み出し、書き込みサイズの制限なしで操作できます。

失敗が発生したとき何が起きているか関心のある人に、 なぜ回復不可能なのかも含めて説明します。NFS は通常 (より小さいサイズへ分割されるかもしれませんが) 8 K の "ブロック" サイズで動作します。 イーサネットのパケットサイズは最大 1500 バイト程度なので、 NFS "ブロック" は複数のイーサネットパケットに分割されるものの、 上位階層のコードにとっては 1 つのユニットであって、 ユニットとして受信され、組み立て直され、 _肯定応答_ (ACK) されなければなりません。 高性能のワークステーションは、NFS ユニットを構成するパケットを次々に、 標準の許す限り間隔を詰めて送り出すことができます。 小さく、容量の低いカードでは、 同じユニットの前のパケットがホストに転送される前に、 後のパケットがそれを踏みつぶしてしまいます。 このため全体としてのユニットは、再構成も肯定応答もできません。 その結果、 ワークステーションはタイムアウトして再送を試みますが、 8 K のユニット全体を再送しようとするので、 このプロセスは際限無く繰り返されてしまいます。

ユニットサイズをイーサネットのパケットサイズの制限以下に抑えることにより、 受信した完全なイーサネットパケットについて個々に肯定応答を返せることが保証されるので、 デッドロック状態を避けられるようになります。

それでも、高性能なワークステーションが力任せに次々と PC システムにデータを送ったときには踏みつぶしが起きるかもしれません。 しかし、高性能のカードを使っていれば、NFS "ユニット" で必ずそのような踏みつぶしが起きるとは限りません。 踏みつぶしが起きたら、影響を受けたユニットは再送されて、 受信され、組み立てられ、肯定応答される十分な見込みがあります。

[[network-diskless]]
== ディスクレス稼働

FreeBSD マシンはネットワークを通じて起動でき、 そして NFS サーバからマウントしたファイルシステムを使用して、 ローカルディスクなしで動作することができます。 標準の設定ファイルを変更する以上の、システムの修正は必要ありません。 必要な要素のすべてが用意されているので、 このようなシステムを設定するのは簡単です。

* ネットワークを通じてカーネルを読み込む方法は、 少なくとも二つあります。
** PXE: Intel(R) の Preboot Execution Environment システムは、 一部のネットワークカードまたはマザーボードに組み込まれた、 スマートなブート ROM の一形態です。 詳細については man:pxeboot[8] を参照してください。
** port の etherboot (package:net/etherboot[]) は、 ネットワークを通じてカーネルを起動する ROM 化可能なコードを提供します。 コードはネットワークカード上のブート PROM に焼き付けるか、 あるいはローカルフロッピー (ハード) ディスクドライブ、 または動作している MS-DOS(R) システムから読み込むことができます。 多くのネットワークカードに対応しています。
* サンプルスクリプト ([.filename]#/usr/shared/examples/diskless/clone_root#) はサーバ上で、 ワークステーションのルートファイルシステムの作成と維持をやり易くします。 このスクリプトは少し書き換えないといけないでしょうが、 早く取り掛かれるようにします。
* ディスクレスシステム起動を検知しサポートする標準のシステム起動ファイルが [.filename]#/etc# 内にあります。
* 必要なら、NFS ファイルまたはローカルディスクのどちらかにスワップできます。

ディスクレスワークステーションを設定する方法はいろいろあります。 多くの要素が関わっており、 その多くはローカルの状況に合わせてカスタマイズできます。下記は、 単純さと標準の FreeBSD 起動スクリプトとの互換性を強調した完全なシステムの設定を説明します。
記述されているシステムの特徴は次のとおりです。 * ディスクレスワークステーションは、 共有された読み取り専用の [.filename]##ルート##ファイルシステムと、 共有された読み取り専用の [.filename]##/usr## を使用します。 + [.filename]#ルート# ファイルシステムは、 標準的な FreeBSD (典型的にはサーバの) のルートのコピーで、 一部の設定ファイルが、ディスクレス稼働、 また場合によってはそのワークステーションに特有のもので上書きされています。 + 書き込み可能でなければならない [.filename]#ルート# の部分は man:mfs[8] ファイルシステムで覆われます。 システムが再起動するときにはすべての変更が失われるでしょう。 * カーネルは DHCP (または BOOTP) および TFTP を用いて etherboot によって読み込まれます。 [CAUTION] ==== 記述されているとおり、 このシステムは安全ではありません。 ネットワークの保護された範囲で使用されるべきであり、 他のホストから信頼されてはいけません。 ==== === セットアップの手順 ==== DHCP/BOOTP の設定 ネットワークを通じて設定を取得し、 ワークステーションを起動するために一般的に使用されるプロトコルには、 BOOTP と DHCP の 2 つがあります。 それらはワークステーションのブートストラップ時に何ヵ所かで使用されます。 * etherboot はカーネルを見つけるために DHCP (デフォルト) または BOOTP (設定オプションが必要) を使用します (PXE は DHCP を使用します) 。 * NFS ルートの場所を定めるためにカーネルは BOOTP を使用します。 BOOTP だけを使用するようにシステムを設定することもできます。 man:bootpd[8] サーバプログラムは FreeBSD のベースシステムに含まれています。 しかしながら、DHCP には BOOTP に勝る点が多々あります。 (よりよい設定ファイル、PXE が使えること、 そしてディスクレス稼働には直接関係しない多くの長所) ここでは BOOTP だけ利用する場合と、 BOOTP と DHCP を組み合わせた設定を扱います。特に ISC DHCP ソフトウェアパッケージを利用する後者の方法に重点をおきます。 ===== ISC DHCP を使用する設定 isc-dhcp サーバは、 BOOTP および DHCP リクエストの両方に答えることができます。 4.4-RELEASE の時点で isc-dhcp 3.0 はベースシステムの一部では無くなりました。 まずはじめに package:net/isc-dhcp3-server[] port または対応する package をインストールする必要があるでしょう。 ports および package に関する一般的な情報については crossref:ports[ports,アプリケーションのインストール - packages と ports] を参照してください。 isc-dhcp がインストールされると、 動作するために設定ファイルを必要とします (通常 [.filename]#/usr/local/etc/dhcpd.conf# が指定されます) 。 下記にコメントを含めた例を示します。 [.programlisting] .... 
default-lease-time 600; max-lease-time 7200; authoritative; option domain-name "example.com"; option domain-name-servers 192.168.4.1; option routers 192.168.4.1; subnet 192.168.4.0 netmask 255.255.255.0 { use-host-decl-names on; <.> option subnet-mask 255.255.255.0; option broadcast-address 192.168.4.255; host margaux { hardware ethernet 01:23:45:67:89:ab; fixed-address margaux.example.com; next-server 192.168.4.4;<.> filename "/tftpboot/kernel.diskless";<.> option root-path "192.168.4.4:/data/misc/diskless";<.> } } .... <.> このオプションは `host` 宣言の値を、 ディスクレスホストへのホスト名として送るように `dhcpd` に指示します。 別の方法として、ホスト宣言内に `option host-name margaux` を加えるものがあります。 <.> TFTP サーバを `next-server` ディレクティブに指定します (デフォルトは DHCP サーバと同じホストを使います)。 <.> カーネルとして etherboot が読み込むファイルを `filename` ディレクティブに指定します。 <.> ルートファイルシステムへのパスを、 通常の NFS 書式で `root-path` オプションに指定します。 ===== BOOTP を使用する設定 続けて、`bootpd` で同等のことをする設定です。 これは [.filename]#/etc/bootptab# におきます。 BOOTP を使用するために、デフォルトではない `NO_DHCP_SUPPORT` オプション付きで etherboot をコンパイルしなければならないことと、PXE は DHCP を _必要_ とすることに注意してください。 bootpd の唯一明白な利点は、 これがベースシステムに存在するということです。 [.programlisting] .... .def100:\ :hn:ht=1:sa=192.168.4.4:vm=rfc1048:\ :sm=255.255.255.0:\ :ds=192.168.4.1:\ :gw=192.168.4.1:\ :hd="/tftpboot":\ :bf="/kernel.diskless":\ :rp="192.168.4.4:/data/misc/diskless": margaux:ha=0123456789ab:tc=.def100 .... 
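bootpd を利用する場合は、man:inetd[8] 経由で起動するのが簡単です。 多くのシステムでは [.filename]#/etc/inetd.conf# に以下のようなエントリが (コメントアウトされた状態で) 含まれているので、 これを有効にして `inetd` に設定を再読み込みさせてください。

[.programlisting]
....
bootps	dgram	udp	wait	root	/usr/libexec/bootpd	bootpd /etc/bootptab
....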
==== Etherboot を用いるブートプログラムの準備 http://etherboot.sourceforge.net[Etherboot のウェブサイト] には主に Linux システムについて述べたlink:http://etherboot.sourceforge.net/doc/html/userman/t1.html[ 広範囲の文書] が含まれています。 しかし、それにもかかわらず有用な情報を含んでいます。 下記は FreeBSD システム上での etherboot の使用法についての概観を示します。 まずはじめに package:net/etherboot[] の package または port をインストールしなければなりません。 etherboot port は通常 [.filename]#/usr/ports/net/etherboot# にあります。 ports ツリーがシステムにインストールされている場合、 このディレクトリ内で `make` を実行すれば、よきに計らってくれます。 ports および packages に関する情報は crossref:ports[ports,アプリケーションのインストール - packages と ports] を参照してください。 ここで説明している方法では、ブートフロッピーを使用します。 他の方法 (PROM または DOS プログラム) については etherboot の文書を参照してください。 ブートフロッピーを作成するためには、 etherboot をインストールしたマシンのドライブにフロッピーディスクを挿入します。 それからカレントディレクトリを etherboot ツリー内の [.filename]#src# ディレクトリにして次のように入力します。 [source,shell] .... # gmake bin32/devicetype.fd0 .... _devicetype_ は ディスクレスワークステーションのイーサネットカードタイプに依存します。 正しい _devicetype_ を決定するために、 同じディレクトリ内の [.filename]#NIC# ファイルを参照してください。 ==== TFTP および NFS サーバの設定 TFTP サーバ上で `tftpd` を有効にする必要があります。 [.procedure] ==== . `tftpd` が提供するファイルを置くディレクトリ (たとえば [.filename]#/tftpboot#) を作成してください。 . [.filename]#/etc/inetd.conf# ファイルに以下の行を追加してください。 + [.programlisting] .... tftp dgram udp wait root /usr/libexec/tftpd tftpd -s /tftpboot .... + [NOTE] ====== 少なくとも PXE のいくつかのバージョンが TCP 版の TFTP を要求するようです。その場合 `dgram udp` を `stream tcp` に置き換えた 2 番目の行を追加してください。 ====== + . `inetd` に設定ファイルを再読み込みさせてください。 + [source,shell] .... # kill -HUP `cat /var/run/inetd.pid` .... ==== [.filename]#tftpboot# ディレクトリはサーバ上のどこにでも置けます。 その場所が [.filename]#inetd.conf# および [.filename]#dhcpd.conf# の両方に設定されていることを確かめてください。 さらに NFS を有効にして NFS サーバの適切なファイルシステムをエクスポートする必要があります。 [.procedure] ==== . この行を [.filename]#/etc/rc.conf# に追加してください。 + [.programlisting] .... nfs_server_enable="YES" .... + . 下記を [.filename]#/etc/exports# に加えることで、 ディスクレスマシンのルートディレクトリが位置するファイルシステムをエクスポートしてください (ボリュームのマウントポイントを適当に調節し、 _margaux_ をディスクレスワークステーションの名前に置き換えてください)。 + [.programlisting] .... /data/misc -alldirs -ro margaux .... + . 
`mountd` に設定ファイルを再読み込みさせてください。 [.filename]#/etc/rc.conf# 内で NFS をはじめて有効にする必要があったのなら、 代わりに再起動した方がよいかもしれません。 + [source,shell] .... # kill -HUP `cat /var/run/mountd.pid` .... ==== ==== ディスクレス用のカーネル構築 次のオプションを (通常のものに) 追加した、 ディスクレスクライアント用のカーネルコンフィグレーションファイルを作成してください。 [.programlisting] .... options BOOTP # Use BOOTP to obtain IP address/hostname options BOOTP_NFSROOT # NFS mount root filesystem using BOOTP info options BOOTP_COMPAT # Workaround for broken bootp daemons. .... `BOOTP_NFSV3` および `BOOTP_WIRED_TO` を利用してもよいかもしれません ([.filename]#LINT# を参照してください)。 カーネルを構築して (crossref:kernelconfig[kernelconfig,FreeBSD カーネルのコンフィグレーション] を参照)、 [.filename]#dhcpd.conf# に記述した名称で tftp ディレクトリにコピーしてください。 ==== ルートファイルシステムの準備 [.filename]#dhcpd.conf# に `root-path` として記載された ディスクレスワークステーションのためのルートファイルシステムを作成する必要があります。 これを行う最も簡単な方法は [.filename]#/usr/shared/examples/diskless/clone_root# シェルスクリプトを使用することです。 このスクリプトは、少なくともファイルシステムが作成される場所 (`DEST` 変数) を調節するために変更する必要があります。 説明についてはスクリプトの一番上にあるコメントを参照してください。 ベースシステムをどのように構築するか、 またファイルがどのようにディスクレス稼働、サブネット、 または個々のワークステーションに固有のバージョンによって、 選択的にオーバライドできるかを説明します。 また、ディスクレスな場合の [.filename]#/etc/fstab# ファイルおよび [.filename]#/etc/rc.conf# ファイルの例を示します。 [.filename]#/usr/shared/examples/diskless# 内の [.filename]#README# ファイルには、多くの興味深い背景情報が書かれています。 しかし [.filename]#diskless# ディレクトリ内の他の例と同じく、 [.filename]#clone_root# と [.filename]#/etc/rc.diskless[12]# で実際に使われているものとは異なる設定方法が説明されています。 ここに書かれている方法は [.filename]#rc# スクリプトの変更が必要になりますが、 こちらの方が気に入ったというのでなければ、 参照にとどめてください。 ==== スワップの設定 必要なら、サーバに置かれたスワップファイルに NFS 経由でアクセスできます。 [.filename]#bootptab# または [.filename]#dhcpd.conf# の正確なオプションは、 現時点では明確には文書化されていません。 下記の設定例は isc-dhcp 3.0rc11 を使用して動作したと報告されているものです。 [.procedure] ==== . [.filename]#dhcpd.conf# に下記の行を追加してください。 + [.programlisting] .... # Global section option swap-path code 128 = string; option swap-size code 129 = integer 32; host margaux { ... # Standard lines, see above option swap-path "192.168.4.4:/netswapvolume/netswap"; option swap-size 64000; } .... 
+ これは、少なくとも FreeBSD クライアントにおいては、 DHCP/BOOTP オプションコードの 128 は NFS スワップファイルへのパスで、オプションコード 129 は KB 単位のスワップサイズだということです。 もっと古いバージョンの `dhcpd` では `option option-128 "...` という書式が受け付けられましたが、 もはや対応していません。 + 代わりに、[.filename]#/etc/bootptab# では次の書式を使います。 + `T128="192.168.4.4:/netswapvolume/netswap":T129=0000fa00` + [NOTE] ====== [.filename]#/etc/bootptab# では、スワップの大きさは 16 進数で表さなければなりません。 ====== + . NFS スワップファイルサーバ側でスワップファイルを作成します。 + [source,shell] .... # mkdir /netswapvolume/netswap # cd /netswapvolume/netswap # dd if=/dev/zero bs=1024 count=64000 of=swap.192.168.4.6 # chmod 0600 swap.192.168.4.6 .... + _192.168.4.6_ はディスクレスクライアントの IP アドレスです。 . NFS スワップファイルサーバ上で [.filename]#/etc/exports# に下記の行を追加してください。 + [.programlisting] .... /netswapvolume -maproot=0:10 -alldirs margaux .... + それから、上述したように mountd にエクスポートファイルを再読み込みさせてください。 ==== ==== 雑多な問題 ===== 読み取り専用の [.filename]#/usr# で動作させる ディスクレスワークステーションが X を起動するように設定されている場合、 xdm 設定ファイルを調整しなければならないでしょう。 これはデフォルトでエラーファイルを [.filename]#/usr# に置きます。 ===== FreeBSD ではないサーバを使用する ルートファイルシステムを提供するサーバが FreeBSD で動作していない場合、 FreeBSD マシン上でルートファイルシステムを作成し、 `tar` または `cpio` を利用して置きたい場所にコピーしなければならないでしょう。 この状況では、major/minor 整数サイズが異なっていることにより [.filename]#/dev# 内のスペシャルファイルに関する問題が時々おこります。 この問題を解決するには、非 FreeBSD サーバからディレクトリをエクスポートして、 そのディレクトリを FreeBSD マシンでマウントし、 FreeBSD マシン上で `MAKEDEV` を実行して正しいデバイスエントリを作成します (FreeBSD 5.0 およびそれ以降では、man:devfs[5] を使用してユーザに意識させずにデバイスノードを割り当てるので、 これらのバージョンでは `MAKEDEV` は必要ありません)。 [[network-isdn]] == ISDN ISDN 技術とハードウェアに関しては、 http://www.alumni.caltech.edu/~dank/isdn/[ Dan Kegel's ISDN Page] がよい参考になるでしょう。 手軽な ISDN の導入手順は以下のようになります。 * ヨーロッパ在住の方は ISDN カードの節に進んでください。 * ダイヤルアップ専用でない回線上で、 インターネットプロバイダをつかってインターネットに接続するために ISDN を使用することを第一に考えている場合は、 ターミナルアダプタの使用を考えてみてください。 この方法はもっとも柔軟性があり、 プロバイダを変更した場合の問題も少ないでしょう。 * 2 つの LAN を接続する場合や、 ISDN 専用線を使用する場合には、 スタンドアロンなルータまたはブリッジの使用を勧めます。 費用はどの解決法を選ぶかを決める重要な要因です。 以下に、最も安価な方法から、高価な方法まで順に説明していきます。 [[network-isdn-cards]] === ISDN カード FreeBSD の ISDN 実装は、パッシブカードを使用した DSS1/Q.931 (または Euro-ISDN) 
標準だけに対応しています。FreeBSD 4.4 からは、ファームウェアが他の信号プロトコルにも対応している 一部のアクティブカードにも対応しました。 その中には、はじめて対応された一次群速度インタフェース (PRI) ISDN カードもあります。

isdn4bsd は IP over raw HDLC または同期 PPP を利用して他の ISDN ルータに接続できるようにします。 PPP では、カーネル PPP を man:sppp[4] ドライバを修正した `isppp` ドライバとともに利用するか、または ユーザプロセス man:ppp[8] を利用するかのどちらかになります。ユーザ man:ppp[8] を利用すると、二つ以上の ISDN B チャネルを併せて利用できます。 ソフトウェア 300 ボーモデムのような多くのユーティリティとともに、 留守番電話アプリケーションも利用可能です。

FreeBSD が対応している PC ISDN カードの数は増加しており、 ヨーロッパ全域や世界のその他多くの地域でうまく使えることが報告されています。

対応しているパッシブ ISDN カードのほとんどは Infineon (前身は Siemens) の ISAC/HSCX/IPAC ISDN チップセットを備えたカードですが、 Cologne Chip から供給されたチップを備えた ISDN カード (ISA バスのみ)、Winbond W6692 チップを備えた PCI カード、 Tiger300/320/ISAC チップセットを組み合わせたカードの一部、 および AVM Fritz!Card PCI V.1.0 や AVM Fritz!Card PnP のようなベンダ独自のチップセットに基づいたカードもあります。

現在のところ、対応しているアクティブカードは AVM B1 (ISA および PCI) BRI カードと AVM T1 PCI PRI カードです。

isdn4bsd についての文書は FreeBSD システム内の [.filename]#/usr/shared/examples/isdn/# ディレクトリまたは http://www.freebsd-support.de/i4b/[isdn4bsd のウェブサイト]を参照してください。 そこにはヒントや正誤表や http://people.FreeBSD.org/~hm/[isdn4bsd ハンドブック]のような、 さらに多くの文書に対するポインタがあります。

異なる ISDN プロトコルや、現在対応されていない ISDN PC カードに対応することや、その他 isdn4bsd を拡張することに興味があるなら、{hm} に連絡してください。

isdn4bsd のインストール、設定、 そしてトラブルシューティングに関して質問があれば link:{freebsd-isdn-url}[freebsd-isdn] メーリングリストが利用可能です。

=== ISDN ターミナルアダプタ

ターミナルアダプタ (TA) は ISDN で、 通常の電話線におけるモデムに相当するものです。 ほとんどの TA は、標準のヘイズ AT コマンドセットを使用しているので、 単にモデムと置き換えて使うことができます。

TA は、基本的にはモデムと同じように動作しますが、 接続方法は異なり、通信速度も古いモデムよりはるかに速くなります。 crossref:ppp-and-slip[ppp,PPP] の設定を、 モデムの場合と同じように行ってください。 特にシリアル速度を使用できる最高速度に設定するのを忘れないでください。

プロバイダへの接続に TA を使用する最大のメリットは、動的 PPP を行えることです。 最近 IP アドレス空間がますます不足してきているため、 ほとんどのプロバイダは、 固定 IP アドレスを割り当てないようになっています。 ほとんどのスタンドアローンルータは、動的 IP アドレス割り当てに対応していません。

[NOTE]
====
最近の ISDN ルータでは IP アドレスの動的割り当てに対応しているものも多いようです。 ただし制限がある場合もありますので、 詳しくはメーカに問い合わせてください。
====

TA を使用した場合の機能や接続の安定性は、使用している PPP デーモンに完全に依存します。そのため、FreeBSD で PPP の設定が完了していれば、使用している既存のモデムを ISDN の TA に簡単にアップグレードすることができます。ただし、それまでの PPP のプログラムに問題があった場合、その問題は TA に置き換えてもそのまま残ります。
最高の安定性を求めるのであれば、 crossref:ppp-and-slip[userppp,ユーザランド PPP] ではなく、カーネル crossref:ppp-and-slip[ppp,PPP] を使用してください。

以下の TA は、FreeBSD で動作確認ずみです。

* Motorola BitSurfer および Bitsurfer Pro
* Adtran

他の TA もほとんどの場合うまく動作するでしょう。TA のメーカーでは、TA がほとんどの標準モデム AT コマンドセットを受け付けるようにするよう努力しているようです。

外部 TA を使う際の最大の問題点は、 モデムの場合と同じく良いシリアルカードが必要であるということです。 シリアルデバイスの詳細と、 非同期シリアルポートと同期シリアルポートの差を理解するには、extref:{serial-uart}[FreeBSD シリアルハードウェア]チュートリアルを参照してください。

標準の PC シリアルポート (非同期) に接続された TA は 128 Kbs の接続を行っていても、最大通信速度が 115.2 Kbs に制限されてしまいます。128 Kbs の ISDN の性能を最大限に生かすためには TA を同期シリアルカードに接続しなければなりません。

内蔵 TA を購入すれば、 同期/非同期問題を回避できるとは思わないでください。内蔵 TA には、 単に標準 PC シリアルポートのチップが内蔵されているだけです。 内蔵 TA の利点といえば、 シリアルケーブルを買わなくていいということと、 電源コンセントが一つ少なくて済むということくらいでしょう。

同期カードと TA の組合せでも、スタンドアロンのルータと同程度の速度は確保できます。 さらに、386 の FreeBSD マシンと組合せると、 より柔軟な設定が可能です。

同期カード/TA を選ぶか、スタンドアロンルータを選ぶかは、 多分に宗教的な問題です。 メーリングリストでもいくつか議論がありました。議論の全容については、 link:https://www.FreeBSD.org/search/[アーカイブ] を検索してください。

=== スタンドアロン ISDN ブリッジ/ルータ

ISDN ブリッジあるいはルータは、 FreeBSD あるいは他の OS に特有のものではありません。 ルーティングやブリッジング技術に関する詳細は、 ネットワークの参考書をご覧ください。

この節では、 ルータとブリッジのどちらでもあてはまるように記述します。

ローエンド ISDN ルータ/ブリッジ製品は、 価格が下がってきていることもあり、 より広く選択されるようになるでしょう。ISDN ルータは、 ローカルイーサネットネットワークに直接接続し、 自身で他のブリッジ/ルータとの接続を制御する小さな箱です。PPP や他の広く使用されているプロトコルをつかって通信するためのソフトウェアが組み込まれています。

ルータは、完全な同期 ISDN 接続を使用するため、通常の TA と比較してスループットが大幅に向上します。

ISDN ルータ/ブリッジを使用する場合の最大の問題点は、 各メーカーの製品間に相性の問題がまだ存在することです。 インターネットプロバイダとの接続を考えている場合には、 プロバイダと相談することをお勧めします。

事務所の LAN と家庭の LAN の間など、二つの LAN セグメントの間を接続しようとしている場合は、 これはもっともメンテナンスが簡単で、安くあがる解決方法です。 接続の両側の機材を購入するので、 リンクがうまくいくであろうことを保証できます。

たとえば、 家庭のコンピュータや支店のネットワークを本社のネットワークに接続するためには、 以下のような設定が使用できます。

.支店または家庭のネットワーク
[example]
====
ネットワークは 10 Base 2 イーサネット ("thinnet") のバス型トポロジを用いています。ルータとネットワークの間は、 必要に応じて AUI/10BT トランシーバを使って接続してください。

image::isdn-bus.png[10 Base 2 イーサネット]

家庭/支店で一台しかコンピュータを使用しないのであれば、 クロスのツイストペアケーブルを使用して、 直接スタンドアロンルータに接続することも可能です。
====

.本社 LAN や他の LAN
[example]
====
ネットワークは 10 base T イーサネット ("Twisted Pair") のスター型トポロジを用いています。
image::isdn-twisted-pair.png[ISDN ネットワークダイアグラム] ==== ほとんどのルータ/ブリッジの大きな利点は、 別々の二つのサイトに対して、_同時_ にそれぞれ__独立した__二つの PPP 接続が可能であることです。 これは、シリアルポートを 2 つもった特定の (通常は高価な) モデルを除いて、通常の TA では対応していません。 チャネルボンディングや MPP などと混同しないでください。 たとえば、事務所で専用線 ISDN 接続を使用していて、 別の ISDN 回線を購入したくないときには大変便利な機能です。この場合、 事務所のルータは、インターネットに接続するための一つの専用線 B チャネル接続 (64 Kbs) を管理し、 別の B チャネルを他のデータ接続に使用できます。 2 つ目の B チャネルは他の場所とのダイアルイン、 ダイアルアウトに使用したり、バンド幅を増やすために、 1 つ目の B チャネルと動的に結合すること (MPPなど) ができます。 またイーサネットブリッジは、IP パケット以外も中継できます。 IPX/SPX など、使用するすべてのプロトコルを送ることが可能です。 [[network-nis]] == NIS/YP === NIS/YP とは? NIS とは Network Information Services の略で Sun Microsystems によって UNIX(R) の (もともとは SunOS(TM) の) 集中管理のために開発されました。現在では事実上の業界標準になっており、 主要な UNIX(R) ライクシステム (Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, FreeBSD、等々) はすべてこれをサポートしています。 NIS は元々、イエローページといっていましたが、 商標問題から Sun はその名前を変えました。 古い用語 (および yp) はまだよく見られ、使用されています。 NIS は RPC を使ったクライアント/サーバシステムです。 これを使うと NIS ドメイン内のマシン間で、 共通の設定ファイルを共有することができます。 また NIS を使うことでシステム管理者は最小限の設定データで NIS クライアントを立ち上げることができ、 1 ヶ所から設定データの追加、削除、変更が可能です。 NIS は Windows NT(R) のドメインシステムに似ています。 内部の実装は似ても似つかないものですが、 基本的な機能を対比することはできます。 === 知っておくべき用語 / プロセス NIS サーバの立ち上げや NIS クライアントの設定など、 NIS を FreeBSD に導入するにあたって、 目にするであろう用語や重要なユーザプロセスがいくつかあります。 [.informaltable] [cols="1,1", options="header"] |=== | 用語 | 説明 |NIS ドメイン名 |NIS マスタサーバとそのクライアントすべて (スレーブサーバを含む) には NIS ドメイン名がついています。 Windows NT(R) ドメイン名と同様に、NIS ドメイン名は DNS とは何の関係もありません。 |portmap |RPC (Remote Procedure Call, NIS で使用されるネットワークプロトコル) を利用するために実行しておかなければなりません。 `portmap` が動作していなければ、 NIS サーバを起動することも、 NIS クライアントとして動作させることもできません。 |ypbind |NIS クライアントを NIS サーバに "結びつけ" ます。 これは NIS ドメイン名をシステムから取得し RPC を用いてサーバに接続します。`ypbind` は NIS 環境におけるクライアントとサーバ間の通信の中枢です。 クライアントマシンの `ypbind` が停止した場合は、NIS サーバへアクセスすることができなくなります。 |ypserv |は NIS サーバでのみ実行されるべきもので、 NIS サーバプロセスそのものです。man:ypserv[8] が停止した場合、サーバはもはや NIS リクエストに応答することができなくなるでしょう (できれば、後を引き継ぐスレーブサーバがあるとよいでしょう)。 今まで使っていたサーバが機能を停止したとき、 別のサーバに再接続しに行かない NIS の実装もいくつかあります (FreeBSD のものは違います)。 そのような場合に復帰するための唯一の方法は、 サーバプロセス 
(あるいはサーバ全体)、もしくはクライアントの `ypbind` プロセスを再スタートすることです。 |rpc.yppasswdd |NIS マスターサーバで動かすべき、 もう一つのプロセスです。これは NIS クライアントが NIS パスワードを変更することを可能にするデーモンです。 このデーモンが動作していないときは、 ユーザは NIS マスタサーバにログインし、 そこでパスワードを変更しなければなりません。 |=== === 動作のしくみ NIS 環境にあるホストは、 マスターサーバ、スレーブサーバ、クライアントの 3 種類に分類されます。 サーバは、ホストの設定情報の中心的な情報格納庫の役割をします。 マスターサーバは元となる信頼できる情報を保持し、 スレーブサーバは冗長性を確保するためこの情報をミラーします。 そしてクライアントは、サーバから情報の提供を受けて動作します。 この方法を用いることで、数多くのファイルにある情報が共有できます。 よく NIS で共有されるのは、 [.filename]#master.passwd# や [.filename]#group#, [.filename]#hosts# といったファイルです。 クライアント上のプロセスが、 通常ならローカルのファイルにある情報を必要とするときは、 クライアントは代わりに接続している NIS サーバに問い合わせを行います。 ==== マシンの分類 * _NIS マスターサーバ_。 このサーバは Windows NT(R) で言うところのプライマリドメインコントローラにあたります。 すべての NIS クライアントで利用されるファイルを保守します。 [.filename]#passwd# や [.filename]#group#、 その他 NIS クライアントが参照するファイルは、 マスターサーバにあります。 + [NOTE] ==== 一つのマシンが一つ以上の NIS ドメインのマスターサーバになることは可能です。 しかし、ここでは比較的小規模の NIS 環境を対象としているため、 そのような場合については扱いません。 ==== * _NIS スレーブサーバ_。 Windows NT(R) のバックアップドメインコントローラに似たもので、 NIS スレーブサーバは NIS マスターサーバのデータファイルのコピーを保持します。 NIS スレーブサーバは重要な環境で必要とされる冗長性を提供し、 マスターサーバの負荷のバランスをとります。 NIS クライアントは常に最初にレスポンスを返したサーバを NIS サーバとして接続しますが、 これにはスレーブサーバも含まれます。 * _NIS クライアント_。 NIS クライアントは大部分の Windows NT(R) ワークステーションのように、ログオンに際して NIS サーバ (Windows NT(R) ワークステーションの場合は Windows NT(R) ドメインコントローラ) に接続して認証します。 === NIS/YP を使う この節では NIS 環境の立ち上げ例を取り上げます。 [NOTE] ==== この節ではあなたが FreeBSD 3.3 以降を使っているものとします。 ここで与えられる指示は _おそらく_ FreeBSD の 3.0 以降のどのバージョンでも機能するでしょうが、 それを保証するものではありません。 ==== ==== 計画を立てる あなたが大学の小さな研究室の管理人であるとしましょう。 この研究室は 15 台の FreeBSD マシンからなっていて、 現在はまだ集中管理されていません。 すなわち、各マシンは [.filename]#/etc/passwd# と [.filename]#/etc/master.passwd# を各々が持っています。 これらのファイルは手動でお互いに同期させています。 つまり現時点では、新しいユーザをあなたが追加するとき、 `adduser` を 15 ヶ所すべてで実行しなければなりません。 これは明らかに変える必要があるため、 あなたはこのうち 2 台をサーバにして NIS を導入することを決めました。 その結果、研究室の設定はこのようなものになります。 [.informaltable] [cols="1,1,1", options="header"] |=== | マシンの名前 | IP アドレス | 役割 |`ellington` |`10.0.0.2` |NIS マスタ |`coltrane` |`10.0.0.3` |NIS スレーブ |`basie` |`10.0.0.4` |教員用のワークステーション |`bird` 
|`10.0.0.5`
|クライアントマシン

|`cli[1-11]`
|`10.0.0.[6-17]`
|その他のクライアントマシン
|===

もし NIS によるシステム管理の設定を行なうのが初めてなら、 どのようにしたいのか、 ひととおり最後まで考えてみることをお勧めします。 ネットワークの規模によらず、 いくつか決めるべきことがあるからです。

===== NIS ドメイン名を決める

ここでいうドメイン名は、今まであなたが使っていた、 いわゆる "ドメイン名" と呼んでいたものとは違います。 正確には "NIS ドメイン名" と呼ばれます。 クライアントがサーバに情報を要求するとき、 その要求には自分が属する NIS ドメインの名前が含まれています。 これは 1 つのネットワークに複数のサーバがある場合に、 どのサーバが要求を処理すれば良いかを決めるために使われます。 NIS ドメイン名とは、 関連のあるホストをグループ化するための名前である、 と考えると良いでしょう。

組織によってはインターネットのドメイン名を NIS ドメイン名に使っているところがあります。 これはネットワークのトラブルをデバッグするときに混乱の原因となるため、 お勧めできません。 NIS ドメイン名はネットワーク内で一意でなければいけません。そして、 ドメイン名がドメインに含まれるマシンを表すようなものであれば分かり易いです。 たとえば Acme 社のアート (Art) 部門であれば NIS ドメイン名を "acme-art" とすれば良いでしょう。この例では NIS ドメイン名として _test-domain_ を使用します。

しかしながらオペレーティングシステムによっては (特に SunOS(TM))、 NIS ドメイン名をネットワークドメイン名として使うものもあります。 あなたのネットワークにそのような制限のあるマシンが 1 台でもあるときは、NIS のドメイン名としてインターネットのネットワークドメイン名を使わなければ _いけません_。

===== サーバマシンの物理的必要条件

NIS サーバとして使うマシンを選ぶ際には、 いくつか注意すべき点があります。 NIS に関する困ったことの一つに、 クライアントのサーバへの依存度があります。 クライアントが自分の NIS ドメインのサーバに接続できないと、 マシンが使用不能になることがあまりに多いのです。 もし、ユーザやグループに関する情報が得られなければ、 ほとんどのシステムは一時的に停止してしまいます。 こういったことを念頭に置いて、頻繁にリブートされるマシンや、 開発に使われそうなマシンを選ばないようにしなければなりません。 理想的には NIS サーバはスタンドアロンで NIS サーバ専用のマシンにするべきです。 ネットワークの負荷が重くなければ、 他のサービスを走らせているマシンを NIS サーバにしてもかまいません。 ただし NIS サーバが使えなくなると、 _すべての_ クライアントに影響をおよぼす、 という点には注意しなければなりません。

==== NIS サーバ

元となるすべての NIS 情報は、 NIS マスターサーバと呼ばれる 1 台のマシンに格納されます。 この情報が格納されるデータベースを NIS マップと呼びます。 FreeBSD では、このマップは [.filename]#/var/yp/[domainname]# に置かれます。 [.filename]#[domainname]# は、 サーバがサービスする NIS ドメインです。 1 台の NIS サーバが複数のドメインをサポートすることも可能です。 つまり、このディレクトリを各々のドメインごとに作ることができます。 それぞれのドメインは、 独立したマップの集合を持つことになります。

NIS のマスターサーバとスレーブサーバ上では、 `ypserv` デーモンがすべての NIS 要求を処理します。 `ypserv` は NIS クライアントからの要求を受け付け、 ドメイン名とマップ名を対応するデータベースファイルへのパスに変換し、 データをクライアントに返送します。

===== NIS マスターサーバの設定

やりたいことにもよりますが NIS マスターサーバの設定は比較的単純です。 FreeBSD は初期状態で NIS に対応しています。 必要なのは以下の行を [.filename]#/etc/rc.conf# に追加することだけで、 あとは FreeBSD がやってくれます。

[.procedure]
====
[.programlisting]
....
nisdomainname="test-domain" .... . この行はネットワークの設定後に (たとえば再起動後に) NIS のドメイン名を _test-domain_ に設定します。 + [.programlisting] .... nis_server_enable="YES" .... . これは FreeBSD に次にネットワークが立ち上がったとき NIS のサーバプロセスを起動させます。 + [.programlisting] .... nis_yppasswdd_enable="YES" .... . これは `rpc.yppasswdd` デーモンを有効にします。上述したようにこれはユーザが NIS のパスワードをクライアントのマシンから変更することを可能にします。 ==== [NOTE] ==== NIS の設定によっては、 さらに他のエントリを付け加える必要があるかもしれません。 詳細については、下記の <> 節を参照してください。 ==== さて、あとはスーパユーザ権限で `/etc/netstart` コマンドを実行するだけです。 これにより [.filename]#/etc/rc.conf# で定義された値を使ってすべての設定が行なわれます。 ===== NIS マップの初期化 _NIS マップ_ とは [.filename]#/var/yp# ディレクトリにあるデータベースファイルです。 これらは NIS マスタの [.filename]#/etc# ディレクトリの設定ファイルから作られます。 唯一の例外は [.filename]#/etc/master.passwd# ファイルです。これは `root` や他の管理用アカウントのパスワードまでその NIS ドメインのすべてのサーバに伝えたくないという、 もっともな理由によるものです。このため NIS マップの初期化の前に以下を行う必要があります。 [source,shell] .... # cp /etc/master.passwd /var/yp/master.passwd # cd /var/yp # vi master.passwd .... システムに関するアカウント (`bin`, `tty`, `kmem`, `games` など) や、NIS クライアントに伝えたくないアカウント (たとえば `root` や他の UID が 0 (スーパユーザ) のアカウント) をすべて NIS マップから取り除かなければなりません。 [NOTE] ==== [.filename]#/var/yp/master.passwd# が グループまたは誰もが読めるようになっていないようにしてください (モード 600)! 必要なら `chmod` コマンドを使ってください。 ==== すべてが終わったら NIS マップを初期化します! FreeBSD には、これを行うために `ypinit` という名のスクリプトが含まれています (詳細はそのマニュアルページをご覧ください)。 このスクリプトはほとんどの UNIX(R) OS に存在しますが、 すべてとは限らないことを覚えておいてください。 Digital Unix/Compaq Tru64 UNIX では `ypsetup` と呼ばれています。NIS マスタのためのマップを作るためには `-m` オプションを `ypinit` に与えます。上述のステップを完了しているなら、以下を実行して NIS マップを生成します。 [source,shell] .... ellington# ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If you don't, something might not work. At this point, we have to construct a list of this domains YP servers. 
rod.darktech.org is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a . master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? [y/n: y] y [..output from map generation..] NIS Map update completed. ellington has been setup as an YP master server without any errors. .... `ypinit` は [.filename]#/var/yp/Makefile# を [.filename]#/var/yp/Makefile.dist# から作成します。 作成された時点では、そのファイルはあなたが FreeBSD マシンだけからなるサーバが 1 台だけの NIS 環境を扱っていると仮定しています。 _test-domain_ はスレーブサーバを一つ持っていますので [.filename]#/var/yp/Makefile# を編集しなければなりません。 [source,shell] .... ellington# vi /var/yp/Makefile .... 以下の行を (もし既にコメントアウトされていないならば) コメントアウトしなければなりません。 [.programlisting] .... NOPUSH = "True" .... ===== NIS スレーブサーバの設定 NIS スレーブサーバの設定はマスターサーバの設定以上に簡単です。 スレーブサーバにログオンし [.filename]#/etc/rc.conf# ファイルを前回と同様に編集します。唯一の違うところは `ypinit` の実行に `-s` オプションを使わなければいけないことです。 `-s` オプションは NIS マスターサーバの名前を要求し、 コマンドラインは以下のようになります。 [source,shell] .... coltrane# ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If you don't, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... 
ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Don't forget to update map ypservers on ellington. .... この例の場合 [.filename]#/var/yp/test-domain# というディレクトリが必要になります。 NIS マスターサーバのマップファイルのコピーは、 このディレクトリに置いてください。 これらを確実に最新のものに維持する必要があります。 次のエントリをスレーブサーバの [.filename]#/etc/crontab# に追加することで、最新のものに保つことができます。 [.programlisting] .... 20 * * * * root /usr/libexec/ypxfr passwd.byname 21 * * * * root /usr/libexec/ypxfr passwd.byuid .... 
この二行はスレーブサーバにあるマップファイルを、 マスターサーバのマップファイルと同期させるものです。 このエントリは必須というわけではありませんが、マスターサーバは NIS マップに対する変更をスレーブサーバに伝えようとしますし、 サーバが管理するシステムにとってパスワード情報はとても重要なので、 強制的に更新してしまうことはよい考えです。特に、 マップファイルの更新がきちんと行なわれるかどうかわからないくらい混雑するネットワークでは、 重要になります。 スレーブサーバ上でも `/etc/netstart` コマンドを実行して、NIS サーバを再起動してください。 ==== NIS クライアント NIS クライアントは `ypbind` デーモンを使って、特定の NIS サーバとの間に結合 (binding) と呼ばれる関係を成立させます。 `ypbind` はシステムのデフォルトのドメイン (`domainname` コマンドで設定されます) を確認し、RPC 要求をローカルネットワークにブロードキャストします。 この RPC 要求により `ypbind` が結合を成立させようとしているドメイン名が指定されます。 要求されているドメイン名に対してサービスするよう設定されたサーバが ブロードキャストを受信すると、 サーバは `ypbind` に応答し、`ypbind` は応答のあったサーバのアドレスを記録します。複数のサーバ (たとえば一つのマスターサーバと、複数のスレーブサーバ) が利用可能な場合、`ypbind` は、 最初に応答したサーバのアドレスを使用します。 これ以降、クライアントのシステムは、 すべての NIS の要求をそのサーバに向けて送信します。 `ypbind` は、 サーバが順調に動作していることを確認するため、 時々 "ping" をサーバに送ります。 反応が戻ってくるべき時間内に ping に対する応答が来なければ、 `ypbind` は、そのドメインを結合不能 (unbound) として記録し、別のサーバを見つけるべく、 再びブロードキャストパケットの送信を行います。 ===== NIS クライアントの設定 FreeBSD マシンを NIS クライアントにする設定は非常に単純です。 [.procedure] ==== . ネットワークの起動時に NIS ドメイン名を設定して `ypbind` を起動させるために [.filename]#/etc/rc.conf# ファイルを編集して以下の行を追加します。 + [.programlisting] .... nisdomainname="test-domain" nis_client_enable="YES" .... + . NIS サーバから、 利用可能なパスワードエントリをすべて取り込むため、 [.filename]#/etc/master.passwd# からすべてのユーザアカウントを取り除いて、 `vipw` コマンドで以下の行を [.filename]#/etc/master.passwd# の最後に追加します。 + [.programlisting] .... +::::::::: .... + [NOTE] ====== この行によって NIS サーバのパスワードマップにアカウントがある人全員にアカウントが与えられます。 この行を変更すると、 さまざまな NIS クライアントの設定を行なうことが可能です。 詳細は <> を、さらに詳しい情報については、O'Reilly の `Managing NFS and NIS` を参照してください。 ====== + [NOTE] ====== [.filename]#/etc/master.passwd# 内に少なくとも一つのローカルアカウント (つまり NIS 経由でインポートされていないアカウント) を置くべきです。 また、このアカウントは `wheel` グループのメンバーであるべきです。 NIS がどこか調子悪いときには、 リモートからこのアカウントでログインし、 root になって修復するのに利用できます。 ====== + . NIS サーバにあるすべてのグループエントリを取り込むため、 以下の行を [.filename]#/etc/group# に追加します。 + [.programlisting] .... +:*:: ....
==== 上記の手順がすべて完了すれば、 `ypcat passwd` によって NIS サーバの passwd マップが参照できるようになっているはずです。 === NIS セキュリティ 一般にドメイン名さえ知っていれば、 どこにいるリモートユーザでも man:ypserv[8] に RPC を発行して NIS マップの内容を引き出すことができます。 こういった不正なやりとりを防ぐため、 man:ypserv[8] には securenets と呼ばれる機能があります。これは、 アクセスを決められたホストだけに制限するのに使える機能です。 man:ypserv[8] は起動時に [.filename]#/var/yp/securenets# ファイルから securenets に関する情報を読み込みます。 [NOTE] ==== 上記のパス名は `-p` オプションで指定されたパス名によって変わります。 ==== このファイルは、 空白で区切られたネットワーク指定とネットマスクのエントリからなっていて、 "#" で始まる行はコメントとみなされます。 簡単な securenets ファイルの例を以下に示します。 [.programlisting] .... # allow connections from local host -- mandatory 127.0.0.1 255.255.255.255 # allow connections from any host # on the 192.168.128.0 network 192.168.128.0 255.255.255.0 # allow connections from any host # between 10.0.0.0 to 10.0.15.255 10.0.0.0 255.255.240.0 .... man:ypserv[8] が上記のルールの一つと合致するアドレスからの要求を受け取った場合、 処理は正常に行なわれます。 もしアドレスがルールに合致しなければ、 その要求は無視されて警告メッセージがログに記録されます。 また [.filename]#/var/yp/securenets# が存在しない場合、 `ypserv` はすべてのホストからの接続を受け入れます。 `ypserv` は Wietse Venema 氏による tcpwrapper パッケージもサポートしています。 そのため [.filename]#/var/yp/securenets# の代わりに tcpwrapper の設定ファイルを使ってアクセス制御を行なうことも可能です。 [NOTE] ==== これらのアクセス制御機能は一定のセキュリティを提供しますが、 どちらも特権ポートのテストのような "IP spoofing" 攻撃に対して脆弱です。すべての NIS 関連のトラフィックはファイアウォールでブロックされるべきです。 [.filename]#/var/yp/securenets# を使っているサーバは、古い TCP/IP 実装を持つ正当なクライアントへのサービスに失敗することがあります。 これらの実装の中にはブロードキャストのホストビットをすべて 0 でセットしてしまったり、 ブロードキャストアドレスの計算でサブネットマスクを見落としてしまったりするものがあります。 これらの問題にはクライアントの設定を正しく行なえば解決できるものもありますが、 問題となっているクライアントシステムを引退させるか、 [.filename]#/var/yp/securenets# を使わないようにしなければならないものもあります。 このような古風な TCP/IP の実装を持つサーバで [.filename]#/var/yp/securenets# を使うことは実に悪い考えであり、 あなたのネットワークの大部分において NIS の機能喪失を招きます。 tcpwrapper パッケージを使うとあなたの NIS サーバのレイテンシ (遅延) が増加します。特に混雑したネットワークや遅い NIS サーバでは、遅延の増加によって、 クライアントプログラムのタイムアウトが起こるかもしれません。 一つ以上のクライアントシステムがこれらの兆候を示したなら、 あなたは問題となっているクライアントシステムを NIS スレーブサーバにして自分自身に結び付くように強制すべきです。 ==== === 何人かのユーザのログオンを遮断する わたしたちの研究室には `basie` という、 教員専用のマシンがあります。わたしたちはこのマシンを NIS ドメインの外に出したくないのですが、 マスタ NIS サーバの
[.filename]#passwd# ファイルには教員と学生の両方が載っています。 どうしたらいいでしょう? 当該人物が NIS のデータベースに載っていても、 そのユーザがマシンにログオンできないようにする方法があります。 そうするには __-username__ をクライアントマシンの [.filename]#/etc/master.passwd# ファイルの末尾に付け足します。 _username_ はあなたがログインさせたくないと思っているユーザのユーザ名です。 これは `vipw` で行うべきです。 `vipw` は [.filename]#/etc/master.passwd# への変更をチェックし、編集終了後パスワードデータベースを再構築します。 たとえば、ユーザ _bill_ が `basie` にログオンするのを防ぎたいなら、以下のようにします。 [source,shell] .... basie# vipw [add -bill to the end, exit] vipw: rebuilding the database... vipw: done basie# cat /etc/master.passwd root:[password]:0:0::0:0:The super-user:/root:/bin/csh toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin operator:*:2:5::0:0:System &:/:/sbin/nologin bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin news:*:8:8::0:0:News Subsystem:/:/sbin/nologin man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/sbin/nologin bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin +::::::::: -bill basie# .... 
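この `-user` による上書きの挙動は、小さなシェルスクリプトで模式的に確かめられます。以下は実際の libc の処理コードではなく、"明示的な `-user` 行はワイルドカードの `+` 行に優先する" という本文の挙動だけを示す架空のスケッチで、[.filename]#/tmp/master.passwd# も説明用の仮のコピーです。

```shell
# 玩具モデル: 実際の libc の実装ではなく、挙動を説明するための架空の例です。
cat > /tmp/master.passwd <<'EOF'
root:x:0:0::0:0:The super-user:/root:/bin/csh
+:::::::::
-bill
EOF

allowed() {
    # 明示的に除外されたユーザは NIS マップの内容にかかわらず拒否されます
    grep -q "^-$1\$" /tmp/master.passwd && return 1
    # ローカルエントリがあればログインできます
    grep -q "^$1:" /tmp/master.passwd && return 0
    # そうでなければ "+" ワイルドカードが NIS からアカウントを取り込みます
    grep -q '^+' /tmp/master.passwd
}

allowed root && echo "root: login ok"
allowed bill || echo "bill: login denied"
allowed jane && echo "jane: login ok (via NIS)"
```

なお bill のエントリ自体は NIS マップに残っているので、`ypmatch bill passwd` では引き続き表示されます。拒否されるのはこのマシンへのログインだけです。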
[[network-netgroups]] === ネットグループの利用 前節までに見てきた手法は、 極めて少ないユーザ/マシン向けに個別のルールを必要としている場合にはうまく機能します。 しかし大きなネットワークでは、 ユーザに触られたくないマシンへログオンを防ぐのを _忘れるでしょう_ し、 そうでなくとも各マシンを個別に設定して回らなければならず、 __集中__管理という NIS の恩恵を失ってしまいます。 NIS の開発者はこの問題を _ネットグループ_ と呼ばれる方法で解決しました。 その目的と意味合いは UNIX(R) のファイルシステムで使われている一般的なグループと比較できます。 主たる相違は数値 ID が存在しないことと、 ユーザアカウントと別のネットグループを含めたネットグループを定義できることです。 ネットグループは百人/台以上のユーザとマシンを含む、 大きく複雑なネットワークを扱うために開発されました。 あなたがこのような状況を扱わなければならないなら便利なものなのですが、 一方で、この複雑さは単純な例でネットグループの説明をすることをほとんど不可能にしています。 この節の残りで使われている例は、この問題を実演しています。 あなたの行なった、 研究室への NIS の導入の成功が上司の目に止ったとしましょう。 あなたの次の仕事は、あなたの NIS ドメインをキャンパスの他のいくつものマシンを覆うものへ拡張することです。 二つの表は新しいユーザと新しいマシンの名前とその説明を含んでいます。 [.informaltable] [cols="1,1", options="header"] |=== | ユーザの名前 | 説明 |alpha, beta |IT 学科の通常の職員 |charlie, delta |IT 学科の新しい見習い |echo, foxtrott, golf, ... |一般の職員 |able, baker, ... |まだインターン |=== [.informaltable] [cols="1,1", options="header"] |=== | マシンの名前 | 説明 |war, death, famine, pollution |最も重要なサーバ。IT 職員だけがログオンを許されます。 |pride, greed, envy, wrath, lust, sloth |あまり重要でないサーバ。 IT 学科の全員がログオンを許されます。 |one, two, three, four, ... |通常のワークステーション。 _本当の_ 職員だけがログオンを許されます。 |trashcan |重要なデータの入っていないひどく古いマシン。 インターンでもこのマシンの使用を許されます。 |=== もしあなたがこの手の制限を各ユーザを個別にブロックする形で実装するなら、 あなたはそのシステムにログオンすることが許されていない各ユーザについて -_user_ という 1 行を、各システムの [.filename]#passwd# に追加しなければならなくなるでしょう。 もしあなたが 1 エントリでも忘れればトラブルに巻き込まれてしまいます。 最初のセットアップの時にこれを正しく行えるのはありえることかも知れませんが、 遂には連日の業務の間に例の行を追加し__忘れてしまうでしょう__。 結局マーフィーは楽観主義者だったのです。 この状況をネットグループで扱うといくつかの有利な点があります。 各ユーザを別個に扱う必要はなく、 ユーザを一つ以上のネットグループに割り当て、 ネットグループの全メンバのログインを許可したり禁止したりすることができます。 新しいマシンを追加するときはネットグループへログインの制限を定義するだけ、 新しいユーザを追加するときはそのユーザを一つ以上のネットグループへ追加するだけで、 それぞれ行なうことができます。 これらの変更は互いに独立なので、 "ユーザとマシンの組合わせをどうするか" は存在しなくなります。 あなたの NIS のセットアップが注意深く計画されていれば、 マシンへのアクセスを認めるにも拒否するにも中心の設定をたった一カ所変更するだけです。 最初のステップは NIS マップネットグループの初期化です。 FreeBSD の man:ypinit[8] はこのマップをデフォルトで作りませんが、 その NIS の実装はそれが作られさえすればそれをサポートするものです。 空のマップを作るには、単に [source,shell] .... ellington# vi /var/yp/netgroup .... 
とタイプして内容を追加していきます。 わたしたちの例では、すくなくとも IT 職員、IT 見習い、一般職員、 インターンの 4 つのネットグループが必要です。 [.programlisting] .... IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) USERS (,echo,test-domain) (,foxtrott,test-domain) \ (,golf,test-domain) INTERNS (,able,test-domain) (,baker,test-domain) .... `IT_EMP`, `IT_APP` 等はネットグループの名前です。 それぞれの括弧で囲まれたグループが一人以上のユーザアカウントをそれに登録しています。 グループの 3 つのフィールドは . その記述が有効なホスト (群) の名称。 ホスト名を特記しなければそのエントリはすべてのホストで有効です。 もしあなたがホスト名を特記するなら、 あなたは闇と恐怖と全き混乱の領域に入り込んでしまうでしょう。 . このネットグループに所属するアカウントの名称。 . そのアカウントの NIS ドメイン。 もしあなたが一つ以上の NIS ドメインの不幸な仲間なら、 あなたは他の NIS ドメインからあなたのネットグループにアカウントを導入できます。 各フィールドには、ワイルドカードが使えます。 詳細は man:netgroup[5] をご覧ください。 [NOTE] ==== 8 文字以上のネットグループ名は、特にあなたの NIS ドメインで他のオペレーティングシステムを走らせているときは使うべきではありません。 名前には大文字小文字の区別があります。 そのためネットグループ名に大文字を使う事は、 ユーザやマシン名とネットグループ名を区別する簡単な方法です。 (FreeBSD 以外の) NIS クライアントの中には 多数のエントリを扱えないものもあります。 たとえば SunOS(TM) の古い版では 15 以上の _エントリ_ を含むネットグループはトラブルを起こします。 この制限は 15 ユーザ以下のサブネットグループをいくつも作り、 本当のネットグループはこのサブネットグループからなるようにすることで回避できます。 [.programlisting] .... BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...] BIGGRP2 (,joe16,domain) (,joe17,domain) [...] BIGGRP3 (,joe31,domain) (,joe32,domain) BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3 .... 単一のネットグループに 225 人以上のユーザをいれたいときは、 このやり方を繰り返すことができます。 ==== 新しい NIS マップの有効化と配布は簡単です。 [source,shell] .... ellington# cd /var/yp ellington# make .... これで新しい 3 つの NIS マップ [.filename]#netgroup#, [.filename]#netgroup.byhost#, [.filename]#netgroup.byuser# ができるはずです。 新しい NIS マップが利用できるか確かめるには man:ypcat[1] を使います。 [source,shell] .... ellington% ypcat -k netgroup ellington% ypcat -k netgroup.byhost ellington% ypcat -k netgroup.byuser .... 最初のコマンドの出力は [.filename]#/var/yp/netgroup# の内容に似ているはずです。 2 番目のコマンドはホスト別のネットグループを作っていなければ出力されません。 3 番目のコマンドはユーザに対するネットグループのリストを得るのに使えます。 クライアント側の設定は非常に簡単です。 サーバ _war_ を設定するには、 man:vipw[8] を実行して以下の行 [.programlisting] .... +::::::::: .... を [.programlisting] .... +@IT_EMP::::::::: .... 
に入れ替えるだけです。 今、ネットグループ _IT_EMP_ で定義されたユーザのデータだけが _war_ のパスワードデータベースに読み込まれ、 そのユーザだけがログインを許されています。 残念ながらこの制限はシェルの ~ の機能や、 ユーザ名や数値の ユーザ ID の変換ルーチンにも影響します。 つまり、 `cd ~user` はうまく動かず、 `ls -l` はユーザ名のかわりに数値の ID を表示し `find . -user joe -print` は "No such user" で失敗します。 これを避けるためには、すべてのユーザのエントリを _サーバにログインすることを許さずに_ 読み込まなければなりません。 これはもう一行を [.filename]#/etc/master.passwd# に追加することで実現できます。その行は以下の `+:::::::::/sbin/nologin` を含んでおり、 これは "すべてのエントリを読み込むが、読み込まれたエントリのシェルは [.filename]#/sbin/nologin# で置き換えられる" ということを意味します。passwd エントリの他のフィールドを [.filename]#/etc/master.passwd# の既定値から置き換えることも可能です。 [WARNING] ==== `+:::::::::/sbin/nologin` の行が `+@IT_EMP:::::::::` の行より後ろに位置することに注意してください。 さもないと NIS から読み込まれた全ユーザが /sbin/nologin をログインシェルとして持つことになります。 ==== この変更の後では、新しい職員が IT 学科に参加しても NIS マップを一つ書き換えるだけで済みます。 同様にして、あまり重要でないサーバのローカルの [.filename]#/etc/master.passwd# のかつての `+:::::::::` 行を以下のように置き換えます。 [.programlisting] .... +@IT_EMP::::::::: +@IT_APP::::::::: +:::::::::/sbin/nologin .... この行は、一般のワークステーションでは以下のようになります。 [.programlisting] .... +@IT_EMP::::::::: +@USERS::::::::: +:::::::::/sbin/nologin .... これでしばらく順調に運用していましたが、 数週間後、ポリシに変更がありました。 IT 学科はインターンを雇い始め、IT インターンは一般のワークステーションと余り重要ではないサーバを使うことが許され、 IT 見習いはメインサーバへのログインが許されました。 あなたは新たなネットグループ IT_INTERN を追加して新しい IT インターンたちをそのグループに登録し、 すべてのマシンの設定を変えて回ることにしました。 古い諺にこうあります。 "集中管理における過ちは、大規模な混乱を導く"。 いくつかのネットグループから新たなネットグループを作るという NIS の機能は、このような状況に対処するために利用できます。 その方法の一つは、役割別のネットグループを作ることです。 たとえば、重要なサーバへのログイン制限を定義するために _BIGSRV_ というネットグループを作り あまり重要ではないサーバへは _SMALLSRV_ というネットグループを、そして一般のワークステーション用に _USERBOX_ という第 3 のネットグループを 作ることができます。これらのネットグループの各々は、 各マシンにログインすることを許されたネットグループを含みます。 あなたの NIS マップネットグループの新しいエントリは、 以下のようになるはずです。 [.programlisting] .... BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS .... 
このログイン制限の定義法は、 同一の制限を持つマシンのグループを定義できるときには便利なものです。 残念ながらこのようなケースは例外的なものです。 ほとんどの場合、 各マシンに基づくログイン制限の定義機能が必要となるでしょう。 マシンごとのネットグループの定義は、 上述したようなポリシの変更を扱うことができるもうひとつの方法です。 このシナリオでは、各マシンの [.filename]#/etc/master.passwd# は "+" で始まる 2 つの行からなります。 最初のものはそのマシンへのログインを許されたアカウントを追加するもので、 2 番目はその他のアカウントを [.filename]#/sbin/nologin# をシェルとして追加するものです。 マシン名をすべて大文字で記述したものをネットグループの名前として使うのは良い考えです。 言い換えれば、件の行は次のようになるはずです。 [.programlisting] .... +@BOXNAME::::::::: +:::::::::/sbin/nologin .... 一度、各マシンに対してこの作業を済ませてしまえば、 二度とローカルの [.filename]#/etc/master.passwd# を編集する必要がなくなります。 以降のすべての変更は NIS マップの編集で扱うことができます。 以下はこのシナリオに対応するネットグループマップに、 いくつかの便利な定義を追加した例です。 [.programlisting] .... # Define groups of users first IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) DEPT1 (,echo,test-domain) (,foxtrott,test-domain) DEPT2 (,golf,test-domain) (,hotel,test-domain) DEPT3 (,india,test-domain) (,juliet,test-domain) ITINTERN (,kilo,test-domain) (,lima,test-domain) D_INTERNS (,able,test-domain) (,baker,test-domain) # # Now, define some groups based on roles USERS DEPT1 DEPT2 DEPT3 BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS # # And some groups for special tasks # Allow echo and golf to access our anti-virus-machine SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain) # # machine-based netgroups # Our main servers WAR BIGSRV FAMINE BIGSRV # User india needs access to this server POLLUTION BIGSRV (,india,test-domain) # # This one is really important and needs more access restrictions DEATH IT_EMP # # The anti-virus-machine mentioned above ONE SECURITY # # Restrict a machine to a single user TWO (,hotel,test-domain) # [...more groups to follow] ....
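入れ子になったネットグループがどのように展開されるかは、上のマップの一部を使った次のスケッチで確かめられます。実際の解決は ypserv が netgroup マップに対して行うものであり、ここで作る [.filename]#/tmp/netgroup.example# と `expand` 関数は説明のためだけの架空のものです。

```shell
# 架空の例: 入れ子のネットグループを (ホスト,ユーザ,ドメイン) の組に
# 再帰的に展開します。実際の検索は ypserv が行います。
cat > /tmp/netgroup.example <<'EOF'
IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
BIGSRV IT_EMP IT_APP
EOF

expand() {
    # ネットグループ $1 の各メンバを出力し、サブネットグループには再帰します
    awk -v g="$1" '$1 == g { for (i = 2; i <= NF; i++) print $i }' \
        /tmp/netgroup.example |
    while read member; do
        case "$member" in
            '('*')') echo "$member" ;;   # (host,user,domain) の組
            *)       expand "$member" ;; # サブネットグループ
        esac
    done
}

expand BIGSRV
```

サーバ上では `ypmatch BIGSRV netgroup` でマップの生のエントリ (この例では `IT_EMP IT_APP`) を確認できます。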
もしユーザアカウントを管理するのにデータベースの類を使っているなら、 データベースのレポートツールからマップの最初の部分を作れるようにするべきです。 そうすれば、新しいユーザは自動的にマシンにアクセスできるでしょう。 最後に使用上の注意を: マシン別のネットグループを使うことが常に賢明というわけではありません。 あなたが数ダースから数百の同一の環境のマシンを学生の研究室に配置しているのならば、 NIS マップのサイズを手頃な範囲に押さえるために、 マシン別のネットグループのかわりに役割別のネットグループを使うべきです。 === 忘れてはいけないこと NIS 環境にある今、 今までとは違ったやり方が必要なことがいくつかあります。 * 研究室にユーザを追加するときは、それをマスター NIS サーバに _だけ_ 追加しなければならず、さらに _NIS マップを再構築することを忘れてはいけません_。 これを忘れると新しいユーザは NIS マスタ以外のどこにもログインできなくなります。 たとえば、新しくユーザ "jsmith" をラボに登録したいときは以下のようにします。 + [source,shell] .... # pw useradd jsmith # cd /var/yp # make test-domain .... + `pw useradd jsmith` のかわりに `adduser jsmith` を使うこともできます。 * _管理用アカウントを NIS マップから削除してください_。 管理用アカウントやパスワードを、 それらのアカウントへアクセスさせてはいけないユーザが居るかも知れないマシンにまで伝えて回りたいとは思わないでしょう。 * _NIS のマスタとスレーブをセキュアに、 そして機能停止時間を最短に保ってください_。 もし誰かがこれらのマシンをクラックしたり、 あるいは単に電源を落としたりすると、 彼らは実質的に多くの人を研究室へログインできなくしてしまえます。 + これはどの集中管理システムにとってももっとも大きな弱点でしょう。 あなたの NIS サーバを守らなければ怒れるユーザと対面することになるでしょう! === NIS v1 との互換性 FreeBSD の ypserv は、 NIS v1 クライアントを部分的にサポートしています。 FreeBSD の NIS 実装は NIS v2 プロトコルのみを使用していますが、 ほかの実装では、古いシステムとの下位互換性を持たせるため v1 プロトコルをサポートしているものもあります。 そのようなシステムに付いている ypbind デーモンは、 必要がないにもかかわらず NIS v1 のサーバとの結合を成立させようとします (しかも v2 サーバからの応答を受信した後でも、 ブロードキャストをし続けるかも知れません)。 FreeBSD の ypserv は、 クライアントからの通常のリクエストはサポートしていますが、 v1 のマップ転送リクエストはサポートしていないことに注意してください。 つまり FreeBSD の ypserv を、 v1 だけをサポートするような古い NIS サーバと組み合わせて マスターやスレーブサーバとして使うことはできません。 幸いなことに、現在、そのようなサーバが使われていることは ほとんどないでしょう。 [[network-nis-server-is-client]] === NIS クライアントとしても動作している NIS サーバ 複数のサーバが存在し、サーバ自身が NIS クライアントでもあるようなドメインで ypserv が実行される場合には注意が必要です。 一般的に良いとされているのは、 他のサーバと結合をつくるようにブロードキャストさせるのではなく、 サーバをそれ自身に結合させることです。 もし、サーバ同士が依存関係を持っていて、一つのサーバが停止すると、 奇妙なサービス不能状態に陥ることがあります。 その結果、すべてのクライアントはタイムアウトを起こして 他のサーバに結合しようと試みますが、 これにかかる時間はかなり大きく、 サーバ同士がまた互いに結合してしまったりすると、 サービス不能状態はさらに継続することになります。 `ypbind` に `-S` オプションフラグを指定して実行することで、 ホストを特定のサーバに結合することが可能です。 NIS サーバを再起動するたびに、これを手動で行いたくないなら、 次の行を [.filename]#/etc/rc.conf# に追加すればよいでしょう。 [.programlisting] .... 
nis_client_enable="YES" # run client stuff as well nis_client_flags="-S NIS domain,server" .... 詳細については man:ypbind[8] を参照してください。 === パスワード形式 NIS を実装しようとする人の誰もがぶつかる問題の一つに、 パスワード形式の互換性があります。 NIS サーバが DES 暗号化パスワードを使っている場合には、 同様に DES を使用しているクライアントしか対応できません。 たとえば Solaris(TM) の NIS クライアントがネットワーク内にある場合、 ほぼ確実に DES 暗号化パスワードを使用しなければならないでしょう。 サーバとクライアントがどのライブラリを使用しているかは、 [.filename]#/etc/login.conf# を確認してください。 ホストが DES 暗号パスワードを使用するように設定されている場合、 `default` クラスには以下のようなエントリが含まれます。 [.programlisting] .... default:\ :passwd_format=des:\ :copyright=/etc/COPYRIGHT:\ [Further entries elided] .... `passwd_format` 特性について他に利用可能な値は `blf` および `md5` (それぞれ Blowfish および MD5 暗号化パスワード) です。 [.filename]#/etc/login.conf# を変更したときは、 ログイン特性データベースも再構築しなければなりません。 これは `root` 権限で下記のようにコマンドを実行すればできます。 [source,shell] .... # cap_mkdb /etc/login.conf .... [NOTE] ==== すでに [.filename]#/etc/master.passwd# 内に記録されているパスワード形式は、 ログイン特性データベースが再構築された__後__、 ユーザが彼らのパスワードをはじめて変更するまで変更されないでしょう。 ==== 次に、 パスワードが選択した形式で暗号化されることを確実にするために、 さらに [.filename]#/etc/auth.conf# 内の `crypt_default` において、 選択したパスワード形式に高い優先順位がついていることも確認してください。 そうするためには、選択した形式をリストの先頭に置いてください。 たとえば DES 暗号化されたパスワードを使用するときは、 エントリは次のようになります。 [.programlisting] .... crypt_default = des blf md5 .... FreeBSD 上の各 NIS サーバおよびクライアントにおいて上記の手順に従えば、 ネットワーク内でどのパスワード形式が使用されるかが それらのマシン間で整合されているということを確信できます。 NIS クライアント上で問題があれば、 ここから問題となりそうな部分を探すと良いでしょう。 覚えておいてください: 異種混在ネットワークに NIS サーバを配置したいときには、 DES が最大公約数的な標準となるでしょうから、 すべてのシステムで DES を使用しなければならないかもしれません。 [[network-dhcp]] == DHCP === DHCP とは何でしょうか?
DHCP (Dynamic Host Configuration Protocol) は、 システムをネットワークに接続するだけで、 ネットワークでの通信に必要な情報を入手することができる仕組みです。 FreeBSD では ISC (Internet Software Consortium) による DHCP の実装を使用しています。したがって、 ここでの説明のうち実装によって異なる部分は ISC のもの用になっています。 === この節で説明していること この節は ISC DHCP システムのクライアント側およびサーバ側の構成要素の両方について説明します。 クライアント側のプログラムである `dhclient` は FreeBSD のベースシステム内に含まれています。そして、サーバ側の要素は package:net/isc-dhcp3-server[] port から利用可能です。下記の説明の他に、 man:dhclient[8], man:dhcp-options[5] および man:dhclient.conf[5] マニュアルページが役にたつ情報源です。 === DHCP の動作 クライアントとなるマシン上で、 DHCP のクライアントである `dhclient` を実行すると、 まず設定情報の要求をブロードキャストします。デフォルトでは、 このリクエストには UDP のポート 68 を使用します。 サーバは UDP のポート 67 で応答し、クライアントの IP アドレスと、 ネットマスクやルータ、DNS サーバなどの関連する情報を提供します。 これらの情報のすべては DHCP の "リース" の形で送られ、DHCP サーバ管理者によって決められたある一定の時間内でのみ有効になります。 これによって、ネットワークに存在しなくなったホストの IP アドレスは自動的に回収されることになります。 DHCP クライアントはサーバから非常に多くの情報を取得することができます。 man:dhcp-options[5] に非常に大きなリストが載っています。 === FreeBSD への組み込み FreeBSD は ISC の DHCP クライアントである `dhclient` を完全に組み込んでいます。 DHCP クライアントはインストーラと基本システムの両方で提供されています。 ですから DHCP サーバを走らせているネットワーク上ではネットワーク関係の設定についての詳細な知識は必要になりません。 `dhclient` は、3.2 以降のすべての FreeBSD の配布物に含まれています。 DHCP は sysinstall で対応されており、sysinstall でのネットワークインタフェイス設定の際は、 "このインタフェイスの設定として DHCP を試してみますか? 
(Do you want to try DHCP configuration of this interface?)" という質問が最初になされます。 これに同意することで `dhclient` が実行され、 それが成功すればネットワークの設定情報は自動的に取得されます。 システム起動時に DHCP を使ってネットワーク情報を取得するように するには、次の二つを行なう必要があります。 * [.filename]#bpf# デバイスがカーネルに組み込まれていることを確認します。 これを組み込むには、カーネルコンフィグレーションファイルに `pseudo-device bpf` という行を追加し、カーネルを再構築します。 カーネルの構築に関する詳細は、 crossref:kernelconfig[kernelconfig,FreeBSD カーネルのコンフィグレーション] を参照してください。 + [.filename]#bpf# デバイスは、 FreeBSD にはじめから用意されている [.filename]#GENERIC# カーネルに組み込まれていますので、 自分で設定を変えたカスタムカーネルを使っているのでなければ、 DHCP を動作させるためにカーネルを再構築する必要はありません。 + [NOTE] ==== セキュリティに関心のある方向けに注意しておきます。 [.filename]#bpf# デバイスは、パケットスニファ (盗聴プログラム) を動作させることができる (ただし `root` 権限が必要) デバイスです。 [.filename]#bpf# は DHCP を動作させるために __かならず__必要ですが、 セキュリティが非常に重要な場面では DHCP をいつか使うかもしれないというだけで [.filename]#bpf# デバイスをカーネルに追加すべきではないでしょう。 ==== * [.filename]#/etc/rc.conf# を編集して、 次の行を追加してください。 + [.programlisting] .... ifconfig_fxp0="DHCP" .... + [NOTE] ==== で説明されているように `fxp0` の部分を、 動的に設定したいインタフェースの名前で置き換えることを忘れないようにしてください。 ==== + もし、使っている `dhclient` の場所を変更していたり、`dhclient` にフラグを渡したい場合は、 同様に下のように書き加えてください。 + [.programlisting] .... dhcp_program="/sbin/dhclient" dhcp_flags="" .... 
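なお、再起動せずに手動でリースを要求して設定を試すこともできます (`root` 権限が必要です)。以下の `fxp0` は上の例のインタフェース名をそのまま使ったものです。

[source,shell]
....
# dhclient fxp0
....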
DHCP サーバ `dhcpd` は、Ports Collection に package:net/isc-dhcp3-server[] の一部として収録されています。 この port には ISC DHCP サーバと文書が含まれています。 === 関連ファイル * [.filename]#/etc/dhclient.conf# + `dhclient` は設定ファイル [.filename]#/etc/dhclient.conf# を必要とします。 大抵の場合、このファイルはコメントだけであり、 デフォルトが通常使いやすい設定になっています。 この設定ファイルは man:dhclient.conf[5] マニュアルページで説明しています。 * [.filename]#/sbin/dhclient# + `dhclient` は静的にリンクされており、 [.filename]#/sbin# に置かれています。man:dhclient[8] マニュアルページで `dhclient` コマンドについてより詳しく説明しています。 * [.filename]#/sbin/dhclient-script# + `dhclient-script` は FreeBSD 特有の、 DHCP クライアント設定スクリプトです。これについては man:dhclient-script[8] マニュアルページで説明されていますが、 これを編集する必要はほとんど発生しないでしょう。 * [.filename]#/var/db/dhclient.leases# + DHCP クライアントはこのファイルに有効なリースのデータベースをログとして記録します。 man:dhclient.leases[5] にもうすこし詳しい解説があります。 === 参考になる文献 DHCP のプロトコルは http://www.freesoft.org/CIE/RFC/2131/[RFC 2131] に完全に記述されています。また http://www.dhcp.org/[dhcp.org] にも有用な情報源が用意されています。 [[network-dhcp-server]] === DHCP サーバのインストールと設定 ==== この節で説明していること この節は DHCP の ISC (Internet Software Consortium) 実装を用いて FreeBSD システムを DHCP サーバとして動作させる方法の情報を提供します。 DHCP のサーバ部分は FreeBSD の一部として提供されません。 したがって、このサービスを提供するために package:net/isc-dhcp3-server[] port をインストールする必要があるでしょう。 Ports Collection を使用する情報についての詳細は crossref:ports[ports,アプリケーションのインストール - packages と ports] を参照してください。 ==== DHCP サーバのインストール FreeBSD システムを DHCP サーバとして設定するために、man:bpf[4] デバイスがカーネルに組み込まれていることを保証する必要があります。 そうするためには、カーネルコンフィギュレーションファイルに `pseudo-device bpf` を追加して、 カーネルを再構築してください。 カーネルの構築に関する詳細は crossref:kernelconfig[kernelconfig,FreeBSD カーネルのコンフィグレーション] を参照してください。 [.filename]#bpf# デバイスは、 FreeBSD にはじめから用意されている [.filename]#GENERIC# カーネルの一部なので、DHCP を動作させるためにカスタムカーネルを作成する必要はありません。 [NOTE] ==== セキュリティを特に意識する人は、[.filename]#bpf# はパケットスニファ (盗聴プログラム) が正常に (このようなプログラムはさらに特権アクセスを必要としますが) 動作することを可能にするデバイスでもあることに注意してください。 [.filename]#bpf# は DHCP を使用するために必要 _です_。 しかし、セキュリティをとても気にしているなら、 DHCP をいつか使うかもしれないというだけで [.filename]#bpf# デバイスをカーネルに含めるべきではないでしょう。 ==== 次に行わねばならないのは、 package:net/isc-dhcp3-server[] port
によってインストールされた [.filename]#dhcpd.conf# のサンプルを編集することです。 デフォルトでは、これは [.filename]#/usr/local/etc/dhcpd.conf.sample# で、 編集する前にこれを [.filename]#/usr/local/etc/dhcpd.conf# にコピーするべきでしょう。 ==== DHCP サーバの設定 [.filename]#dhcpd.conf# はサブネットおよびホストに関する宣言で構成されます。 例を使って説明するのが最も簡単でしょう。 [.programlisting] .... option domain-name "example.com";<.> option domain-name-servers 192.168.4.100;<.> option subnet-mask 255.255.255.0;<.> default-lease-time 3600;<.> max-lease-time 86400;<.> ddns-update-style none;<.> subnet 192.168.4.0 netmask 255.255.255.0 { range 192.168.4.129 192.168.4.254;<.> option routers 192.168.4.1;<.> } host mailhost { hardware ethernet 02:03:04:05:06:07;<.> fixed-address mailhost.example.com;<.> } .... <.> このオプションは、 デフォルト探索ドメインとしてクライアントに渡されるドメインを指定します。 これが意味するところの詳細については man:resolv.conf[5] を参照してください。 <.> このオプションはクライアントが使用する、 コンマで区切られた DNS サーバのリストを指定します。 <.> クライアントに渡されるネットマスクです。 <.> クライアントは特定のリース期限を要求することもできます。 それ以外の場合は、サーバはこのリース期限値 (秒) でリースを割り当てるでしょう。 <.> これはサーバがリースする時間の最大値です。 クライアントがこれより長いリースを要求しても、 `max-lease-time` 秒だけしか有効にならないでしょう。 <.> このオプションは、リースが受理、またはリリースされたときに DHCP サーバが DNS を更新しようとするかどうかを指定します。 ISC 実装では、このオプションは _必須_ です。 <.> これはどの範囲の IP アドレスが、 クライアントに割り当てるために予約されたプールに使用されるかを示します。 この範囲に含まれている IP アドレスはクライアントに渡されます。 <.> クライアントに供給されるデフォルトゲートウェイを宣言します。 <.> (リクエストが生じた時に DHCP サーバがホストを認識できるように) ホストのハードウェア MAC アドレスを指定します。 <.> ホストに常に同じ IP アドレスを付与することを指定します。 DHCP サーバはリース情報を返す前にホスト名の名前解決をするので、 ここにホスト名を書いても構いません。 [.filename]#dhcpd.conf# を書き終えたら以下のコマンドでサーバを起動できます。 [source,shell] .... # /usr/local/etc/rc.d/isc-dhcpd.sh start .... 
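サーバが起動したかどうかは、プロセスの存在と待ち受けソケット (DHCP サーバは UDP のポート 67 を使用します) を確認するとよいでしょう。以下は確認方法の一例です。

[source,shell]
....
# ps ax | grep dhcpd
# sockstat | grep dhcpd
....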
今後サーバの設定に変更を加える必要が生じた時には、 `SIGHUP` シグナルを dhcpd に送っても、 多くのデーモンがそうであるようには、 設定ファイルが再読み込み _されない_ ことに注意してください。 `SIGTERM` シグナルを送ってプロセスを停止し、 それから上記のコマンドを用いて再起動させる必要があります。 ==== ファイル * [.filename]#/usr/local/sbin/dhcpd# + dhcpd は静的にリンクされ [.filename]#/usr/local/sbin# に置かれます。 dhcpd に関するそれ以上の情報は port とともにインストールされる man:dhcpd[8] マニュアルページにあります。 * [.filename]#/usr/local/etc/dhcpd.conf# + dhcpd はクライアントへのサービス提供をはじめる前に設定ファイル [.filename]#/usr/local/etc/dhcpd.conf# を必要とします。このファイルは、 サーバの稼働に関する情報に加えて、 サービスされているクライアントに提供される情報のすべてを含む必要があります。 この設定ファイルについての詳細は、 port によってインストールされる man:dhcpd.conf[5] マニュアルページを参照してください。 * [.filename]#/var/db/dhcpd.leases# + DHCP サーバは発行したリースのデータベースをこのファイルにログとして保持します。 port によってインストールされる man:dhcpd.leases[5] にはもう少し詳しい説明があります。 * [.filename]#/usr/local/sbin/dhcrelay# + dhcrelay は、DHCP サーバがクライアントからのリクエストを、 別のネットワーク上にある DHCP サーバに転送する高度な環境下で使用されます。 この機能が必要なら、package:net/isc-dhcp3-server[] port をインストールしてください。 port とともに提供される man:dhcrelay[8] マニュアルページにはより詳細な情報が含まれます。 [[network-dns]] == DNS === 概観 FreeBSD はデフォルトでは DNS プロトコルの最も一般的な実装である BIND (Berkeley Internet Name Domain) を使用します。DNS はホスト名を IP アドレスに、そして IP アドレスをホスト名に関連づけるプロトコルです。 たとえば `www.FreeBSD.org` に対する問い合わせは The FreeBSD Project の ウェブサーバの IP アドレスを受け取るでしょう。 その一方で `ftp.FreeBSD.org` に対する問い合わせは、 対応する FTP マシンの IP アドレスを返すでしょう。 同様に、その逆のことも可能です。 IP アドレスに対する問い合わせを行うことで、 そのホスト名を解決することができます。 DNS 検索を実行するために、 システム上でネームサーバを動作させる必要はありません。 DNS は、 個々のドメイン情報を格納およびキャッシュした、 権威のあるルートサーバおよび他の小規模なネームサーバによる多少複雑なシステムによって、 インターネット全体にわたって協調して動作します。 この文書は FreeBSD で安定版として利用されている BIND 8.x について説明します。 FreeBSD では BIND 9.x を package:net/bind9[] port からインストールできます。 RFC1034 および RFC1035 は DNS プロトコルを定義しています。 現在のところ BIND は http://www.isc.org/[Internet Software Consortium (www.isc.org)] によって保守されています。 === 用語 この文書を理解するには DNS 関連の用語をいくつか理解しなければいけません。 [.informaltable] [cols="1,1", frame="none", options="header"] |=== | 用語 | 定義 |正引き DNS |ホスト名から IP アドレスへの対応です。 |オリジン (origin) |特定のゾーンファイルによってカバーされるドメインへの参照です。 |named, BIND, ネームサーバ |FreeBSD 内の BIND ネームサーバパッケージの一般名称です。
|リゾルバ (resolver) |マシンがゾーン情報についてネームサーバに問い合わせるシステムプロセスです。 |逆引き DNS |正引き DNS の逆です。つまり IP アドレスからホスト名への対応です。 |ルートゾーン |インターネットゾーン階層の起点です。 すべてのゾーンはルートゾーンの下に属します。 これはファイルシステムのすべてのファイルがルートディレクトリの下に属することと似ています。 |ゾーン |同じ権威によって管理される個々の DNS ドメイン、 DNS サブドメイン、あるいは DNS の一部分です。 |=== ゾーンの例: * `.` はルートゾーンです。 * `org.` はルートゾーンの下のゾーンです。 * `example.org` は `org.` ゾーンの下のゾーンです。 * `foo.example.org.` はサブドメインで、 `example.org.` の下のゾーンです。 * `1.2.3.in-addr.arpa` は 3.2.1.* の IP 空間に含まれるすべての IP アドレスを参照するゾーンです。 見て分かるように、ホスト名のより詳細な部分はその左側に現れます。 たとえば `example.org.` は `org.` より限定的です。同様に `org.` はルートゾーンより限定的です。 ホスト名の各部分のレイアウトはファイルシステムに非常に似ています。 たとえば [.filename]#/dev# はルートの下であることなどです。 === ネームサーバを実行する理由 ネームサーバは通常二つの形式があります: 権威のあるネームサーバとキャッシュネームサーバです。 権威のあるネームサーバは以下の場合に必要です。 * 問い合わせに対して信頼できる返答をすることで、 ある人が DNS 情報を世界に向けて発信したいとき。 * `example.org` といったドメインが登録されており、 その下にあるホスト名に IP アドレスを割り当てる必要があるとき。 * IP アドレスブロックが (IP からホスト名への) 逆引き DNS エントリを必要とするとき。 * プライマリサーバがダウンしているかまたはアクセスできない場合に、 代わりに問い合わせに対してスレーブと呼ばれるバックアップネームサーバが返答しなければならないとき。 キャッシュネームサーバは以下の場合に必要です。 * ローカルのネームサーバが、 外部のネームサーバに問い合わせするよりも、 キャッシュしてより速く返答できるとき。 * ネットワークトラフィックの総量を減らしたいとき (DNS のトラフィックはインターネットトラフィック全体の 5% 以上を占めることが測定されています) `www.FreeBSD.org` に対する問い合わせを発したとき、 リゾルバは大体の場合上流の ISP のネームサーバに問い合わせをして返答を得ます。 ローカルのキャッシュ DNS サーバがあれば、 問い合わせはキャッシュ DNS サーバによって外部に対して一度だけ発せられます。 情報がローカルに蓄えられるので、 追加の問い合わせはいずれもローカルネットワークの外側にまで確認しなくてもよくなります。 === 動作のしくみ FreeBSD では BIND デーモンは自明な理由から named と呼ばれます。 [.informaltable] [cols="1,1", frame="none", options="header"] |=== | ファイル | 説明 |named |BIND デーモン |`ndc` |ネームデーモンコントロールプログラム |[.filename]#/etc/namedb# |BIND のゾーン情報が置かれるディレクトリ |[.filename]#/etc/namedb/named.conf# |デーモンの設定ファイル |=== ゾーンファイルは通常 [.filename]#/etc/namedb# ディレクトリ内に含まれており、ネームサーバによって処理される DNS ゾーン情報を含んでいます。 === BIND の起動 BIND はデフォルトでインストールされているので、 すべてを設定することは比較的単純です。 named デーモンが起動時に開始されることを保証するには、 [.filename]#/etc/rc.conf# に以下の変更をいれてください。 [.programlisting] .... named_enable="YES" .... デーモンを手動で起動するためには (設定をした後で) [source,shell] .... # ndc start .... 
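起動後、named が応答しているかどうかは `ndc` とリゾルバツールで簡単に確認できます。以下は一例で、問い合わせるホスト名は任意のものに置き換えてください。

[source,shell]
....
# ndc status
# nslookup www.FreeBSD.org 127.0.0.1
....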
=== 設定ファイル ==== `make-localhost` の利用 次のコマンドが [source,shell] .... # cd /etc/namedb # sh make-localhost .... ローカル逆引き DNS ゾーンファイルを [.filename]#/etc/namedb/localhost.rev# に適切に作成することを確認してください。 ==== [.filename]#/etc/namedb/named.conf# [.programlisting] .... // $FreeBSD$ // // 詳細については named(8) マニュアルページを参照してください。プライマリサーバ // を設定するつもりなら、DNS がどのように動作するかの詳細を確実に理解してくださ // い。単純な間違いであっても、影響をうける相手に対する接続を壊したり、無駄な // インターネットトラフィックを大量に引き起こし得ます。 options { directory "/etc/namedb"; // "forwarders" 節に加えて次の行を有効にすることで、ネームサーバに決して自発的 // に問い合わせを発せず、常にそのフォワーダにたいして尋ねるように強制すること // ができます: // // forward only; // あなたが上流のプロバイダ周辺の DNS サーバを利用できる場合、その IP アドレス // をここに入力し、下記の行を有効にしてください。こうすれば、そのキャッシュの // 恩恵にあやかることができ、インターネット全体の DNS トラフィックが減るでしょう。 /* forwarders { 127.0.0.1; }; */ .... コメントが言っている通り、上流のキャッシュの恩恵を受けるために `forwarders` をここで有効にすることができます。 通常の状況では、ネームサーバはインターネットの特定のネームサーバを調べて、 探している返答を見つけるまで再帰的に問い合わせを行います。 これが有効になっていれば、まず上流のネームサーバ (または 与えられたネームサーバ) に問い合わせて、 そのキャッシュを利用するでしょう。 問い合わせをする上流のネームサーバが極度に通信量が多く、 高速であった場合、これを有効にする価値があるかもしれません。 [WARNING] ==== ここに `127.0.0.1` を指定しても動作 _しません_。 上流のネームサーバの IP アドレスに変更してください。 ==== [.programlisting] .... /* * あなたと利用したいネームサーバとの間にファイアウォールがある場合、 * 下記の query-source 指令を有効にする必要があるでしょう。 * 過去の BIND のバージョンは常に 53 番ポートに問い合わせをしますが、 * BIND 8.1 はデフォルトで非特権ポートを使用します。 */ // query-source address * port 53; /* * 砂場内で動作させている場合、ダンプファイルのために異なる場所を指定 * しなければならないかもしれません。 */ // dump-file "s/named_dump.db"; }; // 注意: 下記は将来のリリースで対応されるでしょう。 /* host { any; } { topology { 127.0.0.0/8; }; }; */ // セカンダリを設定することはより簡単な方法で、そのおおまかな姿が下記で説明さ // れています。 // // ローカルネームサーバを有効にする場合、このサーバが最初に尋ねられるように // /etc/resolv.conf に 127.0.0.1 を入力することを忘れないでください。さらに、 // /etc/rc.conf 内で有効にすることも確認してください。 zone "."
{ type hint; file "named.root"; }; zone "0.0.127.IN-ADDR.ARPA" { type master; file "localhost.rev"; }; zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.INT" { type master; file "localhost.rev"; }; // 注意: 下記の IP アドレスを使用しないでください。これはダミーでありデモや文書 // だけを目的としたものです。 // // セカンダリ設定の例です。少なくともあなたのドメインが属するゾーンに対するセカ // ンダリになることは便利かもしれません。プライマリの責を負っている IP アドレス // をネットワーク管理者に尋ねてください。 // // 逆引き参照ゾーン (IN-ADDR.ARPA) を含めることを決して忘れないでください! // (これは ".IN-ADDR.ARPA" を付け加えられたそれぞれの IP アドレスの最初のバイト // の逆順です。) // // プライマリゾーンの設定をはじめる前に DNS および BIND がどのように動作するか // 完全に理解してください。時々自明でない落し穴があります。それに比べるとセカン // ダリを設定するのは単純です。 // // 注意: 下記の例を鵜呑みにして有効にしないでください。:-) 実際の名前とアドレス // を代わりに使用してください。 // // 注意!!! FreeBSD は bind を砂場のなかで動かします (rc.conf 内の named_flags // を参照してください)。セカンダリゾーンを含んだディレクトリは、bind によって // 書き込み可能でなければなりません。次の手順が推奨されます: // // mkdir /etc/namedb/s // chown bind:bind /etc/namedb/s // chmod 750 /etc/namedb/s .... BIND を砂場 (sandbox) で (訳注: chroot をもちいて) 動作させるための詳細は <> を参照してください。 [.programlisting] .... /* zone "example.com" { type slave; file "s/example.com.bak"; masters { 192.168.1.1; }; }; zone "0.168.192.in-addr.arpa" { type slave; file "s/0.168.192.in-addr.arpa.bak"; masters { 192.168.1.1; }; }; */ .... [.filename]#named.conf# の中で、 上記は正引きと逆引きゾーンのためのスレーブエントリの例です。 新しくサービスするそれぞれのゾーンについて、新規のエントリを [.filename]#named.conf# に加えなければいけません。 たとえば `example.org` に対する最もシンプルなゾーンエントリは以下のようになります。 [.programlisting] .... zone "example.org" { type master; file "example.org"; }; .... このゾーンは `type` 命令で示されているようにマスタで、ゾーン情報を `file` 命令で指示された [.filename]#/etc/namedb/example.org# ファイルに保持しています。 [.programlisting] .... zone "example.org" { type slave; file "example.org"; }; .... スレーブの場合、 ゾーン情報は特定のゾーンのマスタネームサーバから転送され、 指定されたファイルに保存されます。 マスタサーバが停止するか到達できない場合には、 スレーブサーバが転送されたゾーン情報を保持していて、 サービスできるでしょう。 ==== ゾーンファイル `example.org` に対するマスタゾーンファイル ([.filename]#/etc/namedb/example.org# に保持されます) の例は以下のようになります。 [.programlisting] .... $TTL 3600 example.org. IN SOA ns1.example.org. admin.example.org.
( 5 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 86400 ) ; Minimum TTL ; DNS Servers @ IN NS ns1.example.org. @ IN NS ns2.example.org. ; Machine Names localhost IN A 127.0.0.1 ns1 IN A 3.2.1.2 ns2 IN A 3.2.1.3 mail IN A 3.2.1.10 @ IN A 3.2.1.30 ; Aliases www IN CNAME @ ; MX Record @ IN MX 10 mail.example.org. .... "." が最後についているすべてのホスト名は正確なホスト名であり、 一方で "." で終了しないすべての行はオリジンが参照されることに注意してください。 たとえば `www` は `www + オリジン` に展開されます。この架空のゾーンファイルでは、 オリジンは `example.org.` なので `www` は `www.example.org.` に展開されます。 ゾーンファイルの書式は次のとおりです。 [.programlisting] .... recordname IN recordtype value .... DNS レコードに使われる最も一般的なものは以下のとおりです。 SOA:: ゾーン権威の起点 NS:: 権威のあるネームサーバ A:: ホストのアドレス CNAME:: 別名としての正規の名称 MX:: メールエクスチェンジャ PTR:: ドメインネームポインタ (逆引き DNS で使用されます) [.programlisting] .... example.org. IN SOA ns1.example.org. admin.example.org. ( 5 ; Serial 10800 ; Refresh after 3 hours 3600 ; Retry after 1 hour 604800 ; Expire after 1 week 86400 ) ; Minimum TTL of 1 day .... `example.org.`:: このゾーンのオリジンでもあるドメイン名 `ns1.example.org.`:: このゾーンに対して権威のあるプライマリネームサーバ `admin.example.org.`:: このゾーンの責任者。@ を置き換えた電子メールアドレスを指定します。 (mailto:admin@example.org[admin@example.org] は `admin.example.org` になります) `5`:: ファイルのシリアル番号です。 これはファイルが変更されるたびに増加させる必要があります。 現在では多くの管理者は `yyyymmddrr` という形式をシリアル番号として使用することを好みます。 2001041002 は最後に修正されたのが 2001/04/10 で、後ろの 02 はその日で二回目に修正されたものであるということを意味するでしょう。 シリアル番号は、 それが更新されたときにスレーブネームサーバに対してゾーンを通知するので重要です。 [.programlisting] .... @ IN NS ns1.example.org. .... これは `NS` エントリです。 このゾーンに対して権威のある返答を返すネームサーバはすべて、 このエントリを一つ有していなければなりません。 ここにある `@` は `example.org.` を意味します。 `@` はオリジンに展開されます。 [.programlisting] .... localhost IN A 127.0.0.1 ns1 IN A 3.2.1.2 ns2 IN A 3.2.1.3 mail IN A 3.2.1.10 @ IN A 3.2.1.30 .... `A` レコードはマシン名を示します。 上記のように `ns1.example.org` は `3.2.1.2` に結びつけられるでしょう。 ふたたびオリジンを示す `@` がここに使用されていますが、これは `example.org` が `3.2.1.30` に結び付けられることを意味しています。 [.programlisting] .... www IN CNAME @ .... 
`CNAME` レコードは通常マシンに別名を与えるときに使用されます。 例では `www` はオリジン、すなわち `example.org` (`3.2.1.30`) のアドレスをふられたマシンへの別名を与えます。 `CNAME` はホスト名の別名、 または複数のマシン間で一つのホスト名をラウンドロビン (訳注: 問い合わせがあるたびに別の IP アドレスを返すことで、 一台にアクセスが集中することを防ぐ手法) するときに用いられます。 [.programlisting] .... @ IN MX 10 mail.example.org. .... `MX` レコードは、 ゾーンに対してどのメールサーバがやってきたメールを扱うことに責任を持っているかを示します。 `mail.example.org` はメールサーバのホスト名で、10 はメールサーバの優先度を示します。 優先度が 3,2 または 1 などのメールサーバをいくつも置くことができます。 `example.org` へ送ろうとしているメールサーバははじめに一番優先度の高いメールサーバに接続しようとします。 そして接続できない場合、二番目に優先度の高いサーバに接続しようとし、 以下、メールが適切に配送されるまで同様に繰り返します。 in-addr.arpa ゾーンファイル (逆引き DNS) に対しても `A` または `CNAME` の代わりに `PTR` エントリが用いられることを除けば、 同じ書式が使われます。 [.programlisting] .... $TTL 3600 1.2.3.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. ( 5 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 3600 ) ; Minimum @ IN NS ns1.example.org. @ IN NS ns2.example.org. 2 IN PTR ns1.example.org. 3 IN PTR ns2.example.org. 10 IN PTR mail.example.org. 30 IN PTR example.org. .... このファイルは上記の架空のドメインの IP アドレスからホスト名への対応を与えます。 === キャッシュネームサーバ キャッシュネームサーバはどのゾーンに対しても権威をもたないネームサーバです。 キャッシュネームサーバは単に自分で問い合わせをし、 後で使えるように問い合わせの結果を覚えておきます。 これを設定するには、ゾーンを何も含まずに、 通常通りネームサーバを設定してください。 [[network-named-sandbox]] === 砂場で named を実行する セキュリティを強めるために man:named[8] を非特権ユーザで実行し、 砂場のディレクトリ内に man:chroot[8] して実行したいと思うかもしれません。 こうすると named デーモンは砂場の外にはまったく手を出すことができません。 named が乗っ取られたとしても、 これによって起こりうる損害が小さくなるでしょう。 FreeBSD にはデフォルトで、そのための `bind` というユーザとグループがあります。 [NOTE] ==== 多くの人々は named を `chroot` するように設定する代わりに、 man:jail[8] 環境内で named を実行することを奨めるでしょう。 この節ではそれは扱いません。 ==== named は砂場の外 (共有ライブラリ、ログソケットなど) にアクセスできないので、 named を正しく動作させるためにいくつもの段階を経る必要があります。 下記のチェックリストにおいては、砂場のパスは [.filename]#/etc/namedb# で、 このディレクトリの内容には何も手を加えていないと仮定します。 `root` 権限で次のステップを実行してください。 * named が存在することを期待しているディレクトリをすべて作成します。 + [source,shell] .... # cd /etc/namedb # mkdir -p bin dev etc var/tmp var/run master slave # chown bind:bind slave var/* <.> .... 
+
<.> named only needs write access to these directories, so that is all we give it.

* Rearrange and create basic zone and configuration files:
+
[source,shell]
....
# cp /etc/localtime etc <.>
# mv named.conf etc && ln -sf etc/named.conf
# mv named.root master
# sh make-localhost && mv localhost.rev localhost-v6.rev master
# cat > master/named.localhost
$ORIGIN localhost.
$TTL 6h
@ IN SOA localhost. postmaster.localhost. (
        1       ; serial
        3600    ; refresh
        1800    ; retry
        604800  ; expiration
        3600 )  ; minimum
        IN NS localhost.
        IN A 127.0.0.1
^D
....
+
<.> This allows named to log the correct time to man:syslogd[8].

* If you are running a version of FreeBSD prior to 4.9-RELEASE, build a statically linked copy of named-xfer and copy it into the sandbox:
+
[source,shell]
....
# cd /usr/src/lib/libisc
# make cleandir && make cleandir && make depend && make all
# cd /usr/src/lib/libbind
# make cleandir && make cleandir && make depend && make all
# cd /usr/src/libexec/named-xfer
# make cleandir && make cleandir && make depend && make NOSHARED=yes all
# cp named-xfer /etc/namedb/bin && chmod 555 /etc/namedb/bin/named-xfer <.>
....
+
After your statically linked `named-xfer` is installed, some cleaning up is required to avoid leaving stale copies of libraries or programs in your source tree:
+
[source,shell]
....
# cd /usr/src/lib/libisc
# make cleandir
# cd /usr/src/lib/libbind
# make cleandir
# cd /usr/src/libexec/named-xfer
# make cleandir
....
+
<.> This step has been reported to fail occasionally. If this happens to you, delete your [.filename]#/usr/obj# tree: this will sweep all the "cruft" out of your source tree. Then try the steps above again; this time they should work.
+
If you are running FreeBSD version 4.9-RELEASE or later, the copy of `named-xfer` in [.filename]#/usr/libexec# is statically linked by default, and you can simply use man:cp[1] to copy it into your sandbox.

* Make a [.filename]#dev/null# that named can see and write to:
+
[source,shell]
....
# cd /etc/namedb/dev && mknod null c 2 2
# chmod 666 null
....

* Symlink [.filename]#/var/run/ndc# to [.filename]#/etc/namedb/var/run/ndc#:
+
[source,shell]
....
# ln -sf /etc/namedb/var/run/ndc /var/run/ndc
....
+
[NOTE]
====
This simply avoids having to specify the `-c` option every time you run man:ndc[8]. Since the contents of [.filename]#/var/run# are deleted on boot, if you find this useful you may wish to add this command to root's crontab, using the `@reboot` option. See man:crontab[5] for more details.
====

* Configure man:syslogd[8] to create an extra [.filename]#log# socket that named can write to. To do this, add `-l /etc/namedb/dev/log` to the `syslogd_flags` variable in [.filename]#/etc/rc.conf#.

* Arrange for named to start and `chroot` itself into the sandbox by adding the following to [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
named_enable="YES"
named_flags="-u bind -g bind -t /etc/namedb /etc/named.conf"
....
+
[NOTE]
====
Note that the configuration file _/etc/named.conf_ is denoted by a full pathname _relative to the sandbox_, i.e. the file named in the line above is actually [.filename]#/etc/namedb/etc/named.conf#.
====

The next step is to edit [.filename]#/etc/namedb/etc/named.conf# so that named knows which zones to load and where to find them on the disk. A commented example follows (anything not specifically commented here is no different from the setup of a DNS server not running in a sandbox):

[.programlisting]
....
options {
        directory "/";<.>
        named-xfer "/bin/named-xfer";<.>
        version "";     // Don't reveal BIND version
        query-source address * port 53;
};
// ndc control socket
controls {
        unix "/var/run/ndc" perm 0600 owner 0 group 0;
};
// Zones follow:
zone "localhost" IN {
        type master;
        file "master/named.localhost";<.>
        allow-transfer { localhost; };
        notify no;
};
zone "0.0.127.in-addr.arpa" IN {
        type master;
        file "master/localhost.rev";
        allow-transfer { localhost; };
        notify no;
};
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.int" {
        type master;
        file "master/localhost-v6.rev";
        allow-transfer { localhost; };
        notify no;
};
zone "." IN {
        type hint;
        file "master/named.root";
};
zone "private.example.net" in {
        type master;
        file "master/private.example.net.db";
        allow-transfer { 192.168.10.0/24; };
};
zone "10.168.192.in-addr.arpa" in {
        type slave;
        masters { 192.168.10.2; };
        file "slave/192.168.10.db";<.>
};
....
<.> The `directory` statement is specified as [.filename]#/#, since all the files named needs are within this directory (recall that this is equivalent to a "normal" (translator's note: not sandboxed) user's [.filename]#/etc/namedb#).

<.> Specifies the full path (from named's point of view) to the `named-xfer` binary. This is necessary since named is compiled to look for `named-xfer` in [.filename]#/usr/libexec# by default.

<.> Specifies the filename (relative to the `directory` statement above) where named can find the zone file for this zone.

<.> Specifies the filename (relative to the `directory` statement above) where named should write a copy of the zone file for this zone after the zone information has been transferred from the master server. This is why we needed to change the ownership of the [.filename]#slave# directory to `bind` in the setup stages above.

After completing the steps above, either reboot your server or restart man:syslogd[8] and start man:named[8], making sure that the new options specified in `syslogd_flags` and `named_flags` take effect. You should now be running a sandboxed copy of named!

=== Security

Although BIND is the most common implementation of DNS, there is always the issue of security: possible and exploitable security holes are found from time to time. It is a good idea to subscribe to http://www.cert.org/[CERT] and crossref:eresources[eresources-mail,freebsd-security-notifications] to stay up to date with current Internet and FreeBSD security issues.

[TIP]
====
If a problem arises, having a copy of named built from the latest sources may mean the problem does not affect you.
====

=== Further reading

BIND/named manual pages: man:ndc[8] man:named[8] man:named.conf[8]

* http://www.isc.org/products/BIND/[Official ISC BIND Page]
* http://www.nominum.com/getOpenSourceResource.php?id=6[BIND FAQ]
* http://www.oreilly.com/catalog/dns4/[O'Reilly DNS and BIND 4th Edition]
* link:ftp://ftp.isi.edu/in-notes/rfc1034.txt[RFC1034 - Domain Names - Concepts and Facilities]
* link:ftp://ftp.isi.edu/in-notes/rfc1035.txt[RFC1035 - Domain Names - Implementation and Specification]

[[network-ntp]]
== NTP

=== Overview

Over time, a computer's clock is prone to drift; as time passes, its time becomes less and less accurate. NTP (Network Time Protocol) is one way to ensure that your clock stays accurate.

Many Internet services rely on, or greatly benefit from, computers' clocks being accurate. For example, a web server may receive requests to send a file only if it has been modified since a certain time. Services such as man:cron[8] run commands at a given time; if the clock is inaccurate, these commands may not run when expected.

FreeBSD ships with the man:ntpd[8] NTP server, which can be used to query other NTP servers in order to set the clock on your machine, or to provide time services to other machines.

=== Choosing appropriate NTP servers

In order to synchronize your clock, you will need to find one or more NTP servers to use. Your network administrator or ISP may have set up an NTP server for this purpose - check their documentation to see if this is the case. There is a http://www.eecis.udel.edu/~mills/ntp/servers.html[list of publicly accessible NTP servers] which you can use to find an NTP server near you. Whichever servers you choose, make sure you understand each server's usage policy, and ask for permission if required.

Choosing several unconnected NTP servers is a good idea in case one of the servers you are using becomes unreachable or its clock becomes unreliable. man:ntpd[8] uses the responses it receives from other servers intelligently - it favors reliable servers over unreliable ones.

=== Configuring your machine

==== Basic configuration

If you only wish to synchronize your clock when the machine boots up, you can use man:ntpdate[8]. This may be appropriate for desktop machines which are frequently rebooted and only require infrequent synchronization, but most machines should run man:ntpd[8].

Using man:ntpdate[8] at boot time is also a good idea for machines that run man:ntpd[8]. The man:ntpd[8] program changes the clock gradually, whereas man:ntpdate[8] sets the clock no matter how great the difference between the machine's current time setting and the correct time.

To enable man:ntpdate[8] at boot time, add `ntpdate_enable="YES"` to [.filename]#/etc/rc.conf#. You will also need to specify all the servers you wish to synchronize with, along with any flags to be passed to man:ntpdate[8], in `ntpdate_flags`.

==== General configuration

NTP is configured by the [.filename]#/etc/ntp.conf# file, in the format described in man:ntp.conf[5]. Here is a simple example:

[.programlisting]
....
server ntplocal.example.com prefer
server timeserver.example.org
server ntp2a.example.net

driftfile /var/db/ntp.drift
....
The `server` option specifies which servers are to be used, with one server listed on each line. If a server is specified with the `prefer` argument, as with `ntplocal.example.com` above, that server is preferred over the others. A response from a preferred server is discarded if it differs significantly from the other servers' responses; otherwise it is used without any consideration of the other responses. The `prefer` argument is normally used for NTP servers that are known to be highly accurate, such as those with special time-monitoring hardware.

The `driftfile` option specifies the file that is used to store the system clock's frequency offset. The man:ntpd[8] program uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time.

The `driftfile` option also specifies which file is used to store information about previous responses from the NTP servers you use. This file contains internal information for NTP and should not be modified by any other process.

==== Controlling access to your server

By default, your NTP server will be accessible to all hosts on the Internet. The `restrict` option in [.filename]#/etc/ntp.conf# allows you to control which machines can access your server.

If you want to deny all machines access to your NTP server, add the following line to [.filename]#/etc/ntp.conf#:

[.programlisting]
....
restrict default ignore
....

If you only want to allow machines within your own network to synchronize their clocks with your server, while ensuring that they are not allowed to configure the server or be used as peers to synchronize against, add:

[.programlisting]
....
restrict 192.168.1.0 mask 255.255.255.0 notrust nomodify notrap
....

where `192.168.1.0` is an IP address on your network and `255.255.255.0` is your network's netmask.

[.filename]#/etc/ntp.conf# can contain multiple `restrict` options. For more details, see the `Access Control Support` subsection of man:ntp.conf[5].

=== Running the NTP server

To ensure the NTP server is started at boot time, add `xntpd_enable="YES"` to [.filename]#/etc/rc.conf#. If you wish to pass additional flags to man:ntpd[8], edit the `xntpd_flags` parameter in [.filename]#/etc/rc.conf#.

To start the server without rebooting your machine, run `ntpd`, being sure to specify any additional parameters from `xntpd_flags` in [.filename]#/etc/rc.conf#. For example:

[source,shell]
....
# ntpd -p /var/run/ntpd.pid
....
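Returning to the `restrict` example above: as a hedged aside (not part of the Handbook text), the network address that belongs in such a line can be derived from any host address on the network by ANDing it with the netmask. The host address `192.168.1.77` below is an invented example:

```shell
# Derive the network address for a "restrict ... mask ..." line by ANDing
# a host address with the netmask (192.168.1.77 is a made-up example host).
ip="192.168.1.77"; mask="255.255.255.0"
IFS=. read -r a b c d <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
net="$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
echo "restrict $net mask $mask notrust nomodify notrap"
```

The printed line is in the form expected by [.filename]#/etc/ntp.conf#, as shown in the section above.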
[NOTE]
====
Under FreeBSD 5.X, various options in [.filename]#/etc/rc.conf# have been renamed: you should replace every instance of `xntpd` in the options above with `ntpd`.
====

=== Using ntpd with a temporary Internet connection

The man:ntpd[8] program does not need a permanent connection to the Internet to function properly. However, if you have a temporary connection that is configured to dial out on demand, it is a good idea to prevent NTP traffic from triggering a dial out or keeping the connection alive. If you are using user PPP, you can use `filter` directives in [.filename]#/etc/ppp/ppp.conf#. For example:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

For more details, see the `PACKET FILTERING` section in man:ppp[8] and the examples in [.filename]#/usr/shared/examples/ppp/#.

[NOTE]
====
Some Internet access providers block low-numbered ports, preventing NTP from functioning correctly since replies never reach your machine.
====

=== Further information

Documentation for the NTP server can be found in [.filename]#/usr/shared/doc/ntp/# in HTML format.

[[network-natd]]
== Network Address Translation (NAT)

[[network-natoverview]]
=== Overview

FreeBSD's Network Address Translation daemon, commonly known as man:natd[8], is a daemon that accepts incoming raw IP packets, changes the source to the local machine, and re-injects these packets back into the outgoing IP packet stream. man:natd[8] does this by changing the source IP address and port such that when data is received back, it is able to determine the original location of the data and forward it back to its original requester.

The most common use of NAT is to perform what is commonly known as Internet connection sharing.

[[network-natsetup]]
=== Setup

Due to the diminishing IP space in IPv4 and the increasing number of users on high-speed consumer lines such as cable or DSL, people increasingly need an Internet connection sharing solution. The ability to connect several computers online through one connection and IP address makes man:natd[8] a reasonable choice.

Most commonly, a user has a machine connected to a cable or DSL line with one IP address and wishes to use this connected computer to provide Internet access to several more computers over a LAN.

To do this, the FreeBSD machine on the Internet must act as a gateway. This gateway machine must have two NICs (one for connecting to the Internet router, the other for connecting to the LAN). All the machines on the LAN are connected through a hub or switch.

image::natd.png[Network Layout]

A setup like this is commonly used to share an Internet connection. One of the machines on the LAN is connected to the Internet.
The rest of the machines access the Internet through that "gateway" machine.

[[network-natdkernconfiguration]]
=== Configuration

The following options must be in the kernel configuration file:

[.programlisting]
....
options IPFIREWALL
options IPDIVERT
....

Additionally, at your discretion, the following may also be suitable:

[.programlisting]
....
options IPFIREWALL_DEFAULT_TO_ACCEPT
options IPFIREWALL_VERBOSE
....

The following must be set in [.filename]#/etc/rc.conf#:

[.programlisting]
....
gateway_enable="YES"
firewall_enable="YES"
firewall_type="OPEN"
natd_enable="YES"
natd_interface="fxp0"
natd_flags=""
....

[.informaltable]
[cols="1,1", frame="none"]
|===

|gateway_enable="YES"
|Sets up the machine to act as a gateway. Running the command `sysctl net.inet.ip.forwarding=1` would have the same effect.

|firewall_enable="YES"
|Enables the firewall rules in [.filename]#/etc/rc.firewall# at boot.

|firewall_type="OPEN"
|This specifies a predefined firewall ruleset that allows anything in. See [.filename]#/etc/rc.firewall# for additional types.

|natd_interface="fxp0"
|Indicates which interface to forward packets through (the interface connected to the Internet).

|natd_flags=""
|Any additional arguments passed to man:natd[8] on boot.
|===

Having the previous options defined in [.filename]#/etc/rc.conf# would run `natd -interface fxp0` at boot. This can also be run manually.

[NOTE]
====
It is also possible to use a configuration file for man:natd[8] to define its options. In this case, the configuration file must be defined by adding the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
natd_flags="-f /etc/natd.conf"
....

The [.filename]#/etc/natd.conf# file contains a list of configuration options, one per line. For example, the next section's case would use the following file:

[.programlisting]
....
redirect_port tcp 192.168.0.2:6667 6667
redirect_port tcp 192.168.0.3:80 80
....
For more information about the configuration file, consult the man:natd[8] manual page about the `-f` option.
====

Each machine and interface behind the LAN should be assigned an IP address number in the private network space as defined by link:ftp://ftp.isi.edu/in-notes/rfc1918.txt[RFC 1918], and have a default gateway of the natd machine's internal IP address.

For example, clients `A` and `B` behind the LAN have IP addresses `192.168.0.2` and `192.168.0.3`, while the natd machine's LAN interface has IP address `192.168.0.1`. Client `A` and `B`'s default gateway must be set to that of the natd machine, `192.168.0.1`. The natd machine's external, or Internet, interface does not require any special modification for man:natd[8] to work.

[[network-natdport-redirection]]
=== Port redirection

The drawback with man:natd[8] is that the LAN clients are not accessible from the Internet. Clients on the LAN can make outgoing connections to the world but cannot receive incoming ones. This presents a problem if you try to run Internet services on one of the LAN client machines. A simple way around this is to redirect selected Internet ports on the natd machine to a LAN client.

For example, an IRC server runs on client `A`, and a web server runs on client `B`. For this to work properly, connections received on ports 6667 (IRC) and 80 (web) must be redirected to the respective machines.

The `-redirect_port` option must be passed to man:natd[8] with the appropriate values. The syntax is as follows:

[.programlisting]
....
-redirect_port proto targetIP:targetPORT[-targetPORT]
  [aliasIP:]aliasPORT[-aliasPORT]
  [remoteIP[:remotePORT[-remotePORT]]]
....

In the example above, the arguments should be:

[.programlisting]
....
-redirect_port tcp 192.168.0.2:6667 6667
-redirect_port tcp 192.168.0.3:80 80
....
This will redirect the proper _tcp_ ports to the LAN client machines.

The `-redirect_port` argument can also be used to indicate port ranges rather than individual ports. For example, _tcp 192.168.0.2:2000-3000 2000-3000_ would redirect all connections received on ports 2000 to 3000 to ports 2000 to 3000 on client `A`.

These options can be used when directly running man:natd[8], placed within the `natd_flags=""` option in [.filename]#/etc/rc.conf#, or passed via a configuration file. For further configuration options, consult man:natd[8].

[[network-natdaddress-redirection]]
=== Address redirection

Address redirection is useful if several IP addresses are available, yet they must be on one machine. With this, man:natd[8] can assign each LAN client its own external IP address. man:natd[8] then rewrites outgoing packets from the LAN clients with the proper external IP address, and redirects all traffic incoming on that particular IP address back to the specific LAN client. This is also known as static NAT.

For example, the IP addresses `128.1.1.1`, `128.1.1.2`, and `128.1.1.3` belong to the natd gateway machine. `128.1.1.1` can be used as the natd gateway machine's external IP address, while `128.1.1.2` and `128.1.1.3` are forwarded back to LAN clients `A` and `B`.

The `-redirect_address` syntax is as follows:

[.programlisting]
....
-redirect_address localIP publicIP
....

[.informaltable]
[cols="1,1", frame="none"]
|===

|localIP
|The internal IP address of the LAN client.

|publicIP
|The external IP address corresponding to the LAN client.
|===

In the example, the arguments would read:

[.programlisting]
....
-redirect_address 192.168.0.2 128.1.1.2
-redirect_address 192.168.0.3 128.1.1.3
....
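As an illustrative sketch (not from the Handbook itself), the two mappings above could be assembled into a single `natd_flags` value with a small shell loop. The address pairs are simply the example addresses from the text:

```shell
# Collect local/public address pairs (taken from the example above)
# into one natd_flags string; purely illustrative.
flags=""
for pair in "192.168.0.2 128.1.1.2" "192.168.0.3 128.1.1.3"; do
    flags="$flags -redirect_address $pair"
done
flags=${flags# }    # drop the leading space
echo "natd_flags=\"$flags\""
```

The printed line could then be pasted into [.filename]#/etc/rc.conf#, though for more than a couple of mappings a man:natd[8] configuration file is usually easier to maintain.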
As with `-redirect_port`, these arguments are placed within the `natd_flags=""` option of [.filename]#/etc/rc.conf#, or passed via a configuration file. With address redirection, there is no need for port redirection, since all data received on a particular IP address is redirected.

The external IP addresses on the natd machine must be active and aliased to the external interface. Look at man:rc.conf[5] to do so.

[[network-inetd]]
== inetd "Super-Server"

[[network-inetd-overview]]
=== Overview

man:inetd[8] is sometimes referred to as the "Internet Super-Server" because it manages connections for several daemons. Programs that provide network services are commonly known as daemons. inetd serves as a managing server for other daemons. When a connection is received by inetd, it determines which daemon the connection is destined for, spawns that daemon, and delegates the socket to it. Running one instance of inetd reduces the overall system load compared to running each daemon individually in stand-alone mode.

Primarily, inetd is used to spawn other daemons, but several trivial protocols, such as chargen, auth, and daytime, are handled directly.

This section covers the basics of configuring inetd through its command-line options and its configuration file, [.filename]#/etc/inetd.conf#.

[[network-inetd-settings]]
=== Settings

inetd is initialized through the [.filename]#/etc/rc.conf# system. The `inetd_enable` option is set to "NO" by default, but is often turned on by sysinstall when the medium security profile is chosen. Placing:

[.programlisting]
....
inetd_enable="YES"
....

or

[.programlisting]
....
inetd_enable="NO"
....

into [.filename]#/etc/rc.conf# will enable or disable inetd at boot time. Additionally, various command-line options can be passed to inetd via the `inetd_flags` option.

[[network-inetd-cmdline]]
=== Command-line options

inetd synopsis:

`inetd [-d] [-l] [-w] [-W] [-c maximum] [-C rate] [-a address | hostname] [-p filename] [-R rate] [configuration file]`

-d:: Turn on debugging.

-l:: Turn on logging of successful connections.

-w:: Turn on TCP Wrapping for external services (default).

-W:: Turn on TCP Wrapping for internal services built into inetd (default).

-c maximum:: Specify the default maximum number of simultaneous invocations of each service; the default is unlimited. May be overridden on a per-service basis with the `max-child` parameter.

-C rate:: Specify the default maximum number of times a service can be invoked from a single IP address in one minute; the default is unlimited. May be overridden on a per-service basis with the `max-connections-per-ip-per-minute` parameter.

-R rate:: Specify the maximum number of times a service can be invoked in one minute; the default is 256. A rate of 0 allows an unlimited number of invocations.

-a:: Specify one specific IP address to bind to. Alternatively, a hostname can be specified, in which case the IPv4 or IPv6 address corresponding to that hostname is used. Usually a hostname is specified when inetd is run inside a man:jail[8], in which case the hostname corresponds to the man:jail[8] environment.
+
When hostname specification is used and both IPv4 and IPv6 bindings are desired, one entry with the appropriate protocol type for each binding is required for each service in [.filename]#/etc/inetd.conf#. For example, a TCP-based service would need two entries, one using "tcp4" for the protocol and the other using "tcp6".

-p:: Specify an alternate file in which to store the process ID.

These options can be passed to inetd using the `inetd_flags` option in [.filename]#/etc/rc.conf#. By default, `inetd_flags` is set to "-wW", which turns on TCP wrapping for inetd's internal and external services. Novice users do not normally need to modify these parameters or even enter them in [.filename]#/etc/rc.conf#.

[NOTE]
====
An external service is a daemon outside of inetd which is invoked when a connection is received for it; an internal service, by contrast, is one that inetd provides within itself.
====

[[network-inetd-conf]]
=== [.filename]#inetd.conf#

Configuration of inetd is controlled through the [.filename]#/etc/inetd.conf# file. When [.filename]#/etc/inetd.conf# is modified, inetd can be forced to re-read its configuration file by sending a HangUP signal to the inetd process as shown:

[[network-inetd-hangup]]
.Sending inetd a HangUP signal
[example]
====
[source,shell]
....
# kill -HUP `cat /var/run/inetd.pid`
....
====

Each line of the configuration file specifies an individual daemon. Comments in the file are preceded by a "#". The format of [.filename]##/etc/inetd.conf## is as follows:

[.programlisting]
....
service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute]] user[:group][/login-class] server-program server-program-arguments
....

An example entry for the ftpd daemon using IPv4:

[.programlisting]
....
ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l
....

service-name:: This is the service name of the particular daemon. It must correspond to a service listed in [.filename]#/etc/services#. This determines which port inetd must listen on. If a new service is being created, it must be placed in [.filename]#/etc/services# first.

socket-type:: Either `stream`, `dgram`, `raw`, or `seqpacket`. `stream` must be used for connection-based TCP daemons, while `dgram` is used for daemons utilizing the UDP transport protocol.

protocol:: One of the following:
+
[.informaltable]
[cols="1,1", options="header"]
|===
| Protocol | Explanation
|tcp, tcp4 |TCP IPv4
|udp, udp4 |UDP IPv4
|tcp6 |TCP IPv6
|udp6 |UDP IPv6
|tcp46 |Both TCP IPv4 and v6
|udp46 |Both UDP IPv4 and v6
|===

{wait|nowait}[/max-child[/max-connections-per-ip-per-minute]]:: `wait|nowait` indicates whether the daemon invoked from inetd is able to handle its own socket or not. `stream` socket daemons, which are usually multi-threaded, should use `nowait`, while `dgram` socket types must use the `wait` option. `nowait` spawns a child daemon for every new socket, whereas `wait` usually hands off multiple sockets to a single daemon.
+
The maximum number of child daemons inetd may spawn can be set using the `max-child` option. If a limit of ten instances of a particular daemon is needed, `/10` would be placed after `nowait`.
+
In addition to `max-child`, another option is available to limit the maximum connections to a particular daemon from any one place: `max-connections-per-ip-per-minute`. A value of ten here would limit any particular IP address to ten connection attempts to a particular service per minute. This is useful to prevent intentional or unintentional resource consumption and Denial of Service (DoS) attacks against a machine.
+
In this field, `wait` or `nowait` is mandatory. `max-child` and `max-connections-per-ip-per-minute` are optional.
+
A stream-type multi-threaded daemon without any `max-child` or `max-connections-per-ip-per-minute` limits would simply be `nowait`.
+
The same daemon with a maximum limit of ten child daemons would read `nowait/10`.
+
Additionally, the same setup with a limit of twenty connections per IP address per minute and a maximum of ten child daemons would read `nowait/10/20`.
+
All of these options are utilized by the default settings of the fingerd daemon, as seen here:
+
[.programlisting]
....
finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -s
....
user:: This is the username that the particular daemon should run as. Most commonly, daemons run as the `root` user. For security purposes, it is common to find some servers running as the `daemon` user, or the least privileged `nobody` user.

server-program:: The full path of the daemon to be executed when a connection is received. If the daemon is a service provided by inetd internally, `internal` should be used.

server-program-arguments:: This works in conjunction with `server-program` by specifying the arguments, starting with argv[0], passed to the daemon on invocation. If mydaemon -d is the command line, `mydaemon -d` would be the value of `server-program-arguments`. Again, if the daemon is an internal service, use `internal` here.

[[network-inetd-security]]
=== Security

Depending on the security profile chosen at install, many of inetd's daemons may be enabled by default. If there is no apparent need for a particular daemon, disable it! Place a "#" in front of the line describing the daemon in question, and send a <<network-inetd-hangup,HangUP signal to inetd>>.

Some daemons, such as fingerd, may not be desired at all, because they provide an attacker with too much information.

Some daemons are not security-conscious and have long, or non-existent, timeouts for connection attempts. This allows an attacker to slowly send connections to a particular daemon, thus saturating the available resources. It may be a good idea to place `ip-per-minute` and `max-child` limitations on certain daemons.

By default, TCP wrapping is turned on. Consult the man:hosts_access[5] manual page for more information on placing TCP restrictions on the various daemons invoked by inetd.

[[network-inetd-misc]]
=== Miscellaneous

daytime, time, echo, discard, chargen, and auth are all services provided internally by inetd.

The auth service provides identity (ident, identd) network services, and is configurable to a certain degree. Consult the man:inetd[8] manual page for more in-depth information.

[[network-plip]]
== Parallel Line IP (PLIP)

PLIP lets us run TCP/IP between parallel ports. It is useful on machines without network cards, or to install on laptops. In this section, we will discuss:

* Creating a parallel (laplink or parallel crossover) cable.
* Connecting two computers with PLIP.

[[network-create-parallel-cable]]
=== Creating a parallel (crossover) cable

You can purchase a parallel (crossover) cable at most computer supply stores. If you cannot, or you just want to know how such a cable is made, the following table shows how to make one out of a normal parallel printer cable.

.Wiring a parallel cable for networking
[cols="1*l,1*l,1*l,1,1*l", options="header"]
|===
| A-name | A-End | B-End | Descr. | Post/Bit

|
....
DATA0
-ERROR
....
|
....
2
15
....
|
....
15
2
....
|Data
|
....
0/0x01
1/0x08
....

|
....
DATA1
+SLCT
....
|
....
3
13
....
|
....
13
3
....
|Data
|
....
0/0x02
1/0x10
....

|
....
DATA2
+PE
....
|
....
4
12
....
|
....
12
4
....
|Data
|
....
0/0x04
1/0x20
....

|
....
DATA3
-ACK
....
|
....
5
10
....
|
....
10
5
....
|Strobe
|
....
0/0x08
1/0x40
....

|
....
DATA4
BUSY
....
|
....
6
11
....
|
....
11
6
....
|Data
|
....
0/0x10
1/0x80
....

|GND
|18-25
|18-25
|GND
|-
|===

[[network-plip-setup]]
=== Setting up PLIP

First, you have to get a laplink cable. Then, confirm that both computers have a kernel with man:lpt[4] driver support:

[source,shell]
....
# grep lp /var/run/dmesg.boot
lpt0: on ppbus0
lpt0: Interrupt-driven port
....

The parallel port must be an interrupt-driven port. Under FreeBSD 4.X, you should have a line similar to the following in your kernel configuration file:

[.programlisting]
....
device ppc0 at isa? irq 7
....

Under FreeBSD 5.X, the [.filename]#/boot/device.hints# file should contain the following lines:

[.programlisting]
....
hint.ppc.0.at="isa"
hint.ppc.0.irq="7"
....

Then check that the kernel configuration file has a `device plip` line, or that the [.filename]#plip.ko# kernel module is loaded. In both cases, the parallel networking interface should appear when you use the man:ifconfig[8] command directly. Under FreeBSD 4.X like this:

[source,shell]
....
# ifconfig lp0
lp0: flags=8810 mtu 1500
....

And under FreeBSD 5.X like this:

[source,shell]
....
# ifconfig plip0
plip0: flags=8810 mtu 1500
....

[NOTE]
====
The device name used for the parallel interface differs between FreeBSD 4.X ([.filename]#lpX#) and FreeBSD 5.X ([.filename]#plipX#).
====

Plug the laplink cable into the parallel interface on both computers.

Configure the network interface parameters on both sites as `root`. For example, to connect the host `host1`, running FreeBSD 4.X, with the host `host2`, running FreeBSD 5.X:

[.programlisting]
....
                 host1 <-----> host2
IP Address    10.0.0.1      10.0.0.2
....

Configure the interface on `host1` by doing:

[source,shell]
....
# ifconfig lp0 10.0.0.1 10.0.0.2
....

Configure the interface on `host2` by doing:

[source,shell]
....
# ifconfig plip0 10.0.0.2 10.0.0.1
....

You now should have a working connection. Please read the man:lp[4] and man:lpt[4] manual pages for more details.

You should also add both hosts to [.filename]##/etc/hosts##:

[.programlisting]
....
127.0.0.1 localhost.my.domain localhost
10.0.0.1 host1.my.domain host1
10.0.0.2 host2.my.domain
....

To confirm the connection works, go to each host and ping the other. For example, on `host1`:

[source,shell]
....
# ifconfig lp0
lp0: flags=8851 mtu 1500
        inet 10.0.0.1 --> 10.0.0.2 netmask 0xff000000
# netstat -r
Routing tables

Internet:
Destination        Gateway            Flags   Refs   Use   Netif Expire
host2              host1              UH        0     0    lp0
# ping -c 4 host2
PING host2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: icmp_seq=0 ttl=255 time=2.774 ms
64 bytes from 10.0.0.2: icmp_seq=1 ttl=255 time=2.530 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=255 time=2.556 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=255 time=2.714 ms

--- host2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.530/2.643/2.774/0.103 ms
....

[[network-ipv6]]
== IPv6

IPv6 (also known as IPng, "IP next generation") is the new version of the well-known IP protocol (also known as IPv4). Like the other current *BSD systems, FreeBSD includes the KAME IPv6 reference implementation, so your FreeBSD system comes with everything you need to experiment with IPv6. This section focuses on getting IPv6 configured and running.

In the early 1990s, people became aware that the IPv4 address space was rapidly shrinking. Given the expansion rate of the Internet, two major concerns arose:

* Running out of addresses. Today this is not so much of a concern anymore, since private address space (`10.0.0.0/8`, `192.168.0.0/24`, etc.) and Network Address Translation (NAT) are being employed.
* Routing table entries were getting too large. This is still a concern today.

IPv6 deals with these and many other issues:

* A 128-bit address space. In other words, theoretically there are 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses available. This means there are approximately 6.67 * 10^27 IPv6 addresses per square meter on our planet.
* Routers only store network aggregation addresses in their routing tables, thus reducing the average size of a routing table to roughly 8192 entries.

There are also many other useful features of IPv6, such as:

* Address autoconfiguration (RFC2462)
* Anycast addresses ("one-out-of-many"; translator's note: a single address answered by several different nodes - see RFC2526)
* Mandatory multicast addresses
* IPsec (IP security)
* Simplified header structure
* Mobile IP
* IPv4-to-IPv6 transition mechanisms

For more information, see:

* The IPv6 overview at http://www.sun.com[Sun.com]
* http://www.ipv6.org[IPv6.org]
* http://www.kame.net[KAME.net]
* http://www.6bone.net[6bone.net]

=== Background on IPv6 addresses

There are several different types of IPv6 addresses: Unicast, Anycast, and Multicast.

Unicast addresses are the well-known addresses. A packet sent to a unicast address arrives exactly at the interface belonging to that address.

Anycast addresses are syntactically indistinguishable from unicast addresses, but they address a group of interfaces. A packet destined for an anycast address arrives at the nearest (in router metric) interface. Anycast addresses may only be used by routers.

Multicast addresses identify a group of interfaces. A packet destined for a multicast address arrives at all interfaces belonging to the multicast group.

[NOTE]
====
The IPv4 broadcast address (usually `xxx.xxx.xxx.255`) is expressed by multicast addresses in IPv6.
====

.Reserved IPv6 addresses
[cols="1,1,1,1", frame="none", options="header"]
|===
| IPv6 address | Prefixlength (bits) | Description | Notes

|`::`
|128 bits
|unspecified
|cf. `0.0.0.0` in IPv4

|`::1`
|128 bits
|loopback address
|cf. `127.0.0.1` in IPv4

|`::00:xx:xx:xx:xx`
|96 bits
|embedded IPv4
|The lower 32 bits are the IPv4 address. Also called an "IPv4 compatible IPv6 address".

|`::ff:xx:xx:xx:xx`
|96 bits
|IPv4 mapped IPv6 address
|The lower 32 bits are the IPv4 address, for hosts which do not support IPv6.

|`fe80::` - `feb::`
|10 bits
|link-local
|cf. the loopback address in IPv4

|`fec0::` - `fef::`
|10 bits
|site-local
|

|`ff::`
|8 bits
|multicast
|

|`001` (base 2)
|3 bits
|global unicast
|All global unicast addresses are assigned from this pool. The first 3 bits are "001".
|===

=== Reading IPv6 addresses

The canonical form is represented as `x:x:x:x:x:x:x:x`, with each "x" being a 16-bit hex value. For example: `FEBC:A574:382B:23C1:AA49:4592:4EFE:9982`.

Often an address has long substrings of all zeros; such a substring can be abbreviated to "::". For example, `fe80::1` corresponds to the canonical form `fe80:0000:0000:0000:0000:0000:0000:0001`.

A third form is to write the last 32 bits in the familiar IPv4 (decimal) style with dots "." as separators. For example, `2002::10.0.0.1` corresponds to the (hexadecimal) canonical representation `2002:0000:0000:0000:0000:0000:0a00:0001`, which in turn is equivalent to writing `2002::a00:1`.

By now, you should be able to understand the following:

[source,shell]
....
# ifconfig
....

[.programlisting]
....
rl0: flags=8943 mtu 1500
        inet 10.0.0.10 netmask 0xffffff00 broadcast 10.0.0.255
        inet6 fe80::200:21ff:fe03:8e1%rl0 prefixlen 64 scopeid 0x1
        ether 00:00:21:03:08:e1
        media: Ethernet autoselect (100baseTX )
        status: active
....
`fe80::200:21ff:fe03:8e1%rl0` is an autoconfigured link-local address. As part of the autoconfiguration, it includes a transformed copy of the Ethernet MAC address.

For further details on the structure of IPv6 addresses, see RFC3513.

=== Getting connected

Currently, there are four ways to connect to other IPv6 hosts and networks:

* Join the experimental 6bone network.
* Get an IPv6 network from your upstream provider. Talk to your Internet provider for instructions.
* Tunnel via IPv6 over IPv4.
* Use the freenet6 port if you are on a dial-up connection.

Here we will discuss how to connect to the 6bone, since it currently seems to be the most popular way.

First, take a look at the 6bone site and find the 6bone connection nearest to you. Write to the responsible person and, with a little luck, you will be given instructions on how to set up your connection. Usually this involves setting up a GRE (gif) tunnel.

[NOTE]
====
The 6bone was an experimental network assigned the IPv6 address prefix `3ffe::` (16 bits); it is scheduled to cease operation in June 2006. Look for other commercial or experimental IPv6 connectivity services instead.
====

Here is a typical example of setting up a man:gif[4] tunnel:

[source,shell]
....
# ifconfig gif0 create
# ifconfig gif0
gif0: flags=8010 mtu 1280
# ifconfig gif0 tunnel MY_IPv4_ADDR HIS_IPv4_ADDR
# ifconfig gif0 inet6 alias MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR
....

Replace the capitalized words with the information you received from your upstream 6bone node.

This establishes the tunnel. Check that the tunnel is working by man:ping6[8]-ing `ff02::1%gif0`. You should receive two ping replies.

[NOTE]
====
In case you are intrigued by the address `ff02::1%gif0`: this is a multicast address. `%gif0` states that the multicast address on the network interface [.filename]#gif0# is to be used. Since we `ping` a multicast address, the other endpoint of the tunnel replies as well.
====

By now, setting up a route to your 6bone uplink should be rather easy:

[source,shell]
....
# route add -inet6 default -interface gif0
# ping6 -n MY_UPLINK
....

[source,shell]
....
# traceroute6 www.jp.FreeBSD.org
(3ffe:505:2008:1:2a0:24ff:fe57:e561) from 3ffe:8060:100::40:2, 30 hops max, 12 byte packets
 1  atnet-meta6  14.147 ms  15.499 ms  24.319 ms
 2  6bone-gw2-ATNET-NT.ipv6.tilab.com  103.408 ms  95.072 ms *
 3  3ffe:1831:0:ffff::4  138.645 ms  134.437 ms  144.257 ms
 4  3ffe:1810:0:6:290:27ff:fe79:7677  282.975 ms  278.666 ms  292.811 ms
 5  3ffe:1800:0:ff00::4  400.131 ms  396.324 ms  394.769 ms
 6  3ffe:1800:0:3:290:27ff:fe14:cdee  394.712 ms  397.19 ms  394.102 ms
....
This output will differ from machine to machine. Now, if you have an IPv6-capable browser such as package:www/mozilla[], you can visit the IPv6 site http://www.kame.net[www.kame.net] and see the dancing turtle.

=== DNS in the IPv6 world

There are two new types of DNS records for IPv6:

* AAAA records
* A6 records

Using AAAA records is straightforward:

[.programlisting]
....
MYHOSTNAME AAAA MYIPv6ADDR
....

Add the above to your primary zone DNS file to assign your hostname to your newly acquired IPv6 address. In case you do not serve your own DNS zones, ask your DNS provider. Current versions of bind (versions 8.3 and 9) support AAAA records.

diff --git a/documentation/content/mn/books/handbook/mac/_index.adoc b/documentation/content/mn/books/handbook/mac/_index.adoc
index d3758da44f..76edd18c52 100644
--- a/documentation/content/mn/books/handbook/mac/_index.adoc
+++ b/documentation/content/mn/books/handbook/mac/_index.adoc
@@ -1,945 +1,943 @@
---
title: Chapter 17. Mandatory Access Control
part: Part III. System Administration
prev: books/handbook/jails
next: books/handbook/audit
showBookMenu: true
weight: 21
params:
  path: "/books/handbook/mac/"
---

[[mac]]
= Mandatory Access Control
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 17
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/mac/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]

ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[mac-synopsis]]
== Synopsis

FreeBSD 5.X introduced new security extensions from the TrustedBSD project, based on the POSIX(R).1e draft. Two of the most significant new security mechanisms are file system Access Control Lists (ACLs) and Mandatory Access Control (MAC) facilities. Mandatory Access Control allows access control modules to be loaded in order to implement new security policies. Some provide protection of a narrow subset of the system, hardening a particular service; others provide comprehensive labeled, multi-level security across all subjects and objects. The mandatory part of the definition comes from the fact that the enforcement of the controls is done by administrators and the system, and is not left up to the discretion of users, as it is with discretionary access control (DAC, the standard file and System V IPC permissions on FreeBSD).

This chapter focuses on the Mandatory Access Control Framework (MAC Framework), and on a set of pluggable security policy modules enabling various security mechanisms.

After reading this chapter, you will know:

* What MAC security policy modules are currently included in FreeBSD, and their associated mechanisms.
* What MAC security policy modules implement, as well as the difference between a labeled and a non-labeled policy.
* How to efficiently configure a system to use the MAC framework.
* How to configure the different security policy modules included with the MAC framework.
* How to implement a more secure environment using the MAC framework and the examples shown.
* How to test the MAC configuration to ensure the framework has been properly implemented.

Before reading this chapter, you should:

* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,UNIX Basics]).
* Be familiar with the basics of kernel configuration and compilation (crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]).
* Have some familiarity with security and how it pertains to FreeBSD (crossref:security[security,Security]).

[WARNING]
====
Improper use of the information in this chapter may cause loss of system access, aggravation of users, or inability to access the features provided by X11.
More importantly, MAC should not be relied upon to completely secure a system.
The MAC framework only augments an existing security policy; without sound security practices and regular security checks, the system will never be completely secure.

It should also be noted that the examples contained within this chapter are just that, examples.
It is not recommended that these particular settings be rolled out on a production system.
Implementing the various security policy modules takes a good deal of thought and testing.
One who does not fully understand exactly how everything works may find himself going back through the entire system and reconfiguring many files or directories.
====

=== What Will Not Be Covered

This chapter covers a broad range of security issues relating to the MAC framework.
The development of new MAC security policy modules will not be covered.
A number of security policy modules included with the MAC framework have specific characteristics provided for both testing and new module development.
These include man:mac_test[4], man:mac_stub[4] and man:mac_none[4].
For more information on these security policy modules and the various mechanisms they provide, please review the manual pages.

[[mac-inline-glossary]]
== Key Terms in This Chapter

Before reading this chapter, a few key terms must be explained.
This will hopefully clear up any confusion that may occur and avoid the abrupt introduction of new terms and information.

* _compartment_: A compartment is a set of programs and data to be partitioned or separated, where users are given explicit access to specific components of a system.
A compartment also represents a grouping, such as a work group, department, project, or topic.
Using compartments, it is possible to implement a need-to-know security policy.
* _high water mark_: A high water mark policy is one which permits the raising of security levels for the purpose of accessing higher level information.
In most cases, the original level is restored after the process is complete.
Currently, the FreeBSD MAC framework does not include a policy for this, but the definition is included for completeness.
* _integrity_: Integrity, as a key concept, is the level of trust which can be placed on data.
As the integrity of the data is elevated, so is the ability to trust that data.
* _label_: A label is a security attribute which can be applied to files, directories, or other items in the system.
It could be considered a confidentiality stamp; when a label is placed on a file, it describes the security properties of that file and will only permit access by files, users, resources, etc. with a similar security setting.
The meaning and interpretation of label values depends on the policy configuration: some policies treat a label as representing the integrity or secrecy of an object, while other policies might use labels to hold rules for access.
* _level_: The increased or decreased setting of a security attribute.
As the level increases, its security is considered to elevate as well.
* _low water mark_: A low water mark policy is one which permits lowering of security levels for the purpose of accessing information which is less secure.
In most cases, the user's original security level is restored after the process is complete.
The only security policy module in FreeBSD to use this is man:mac_lomac[4].
* _multilabel_: The `multilabel` property is a file system option which can be set in single-user mode using the man:tunefs[8] utility, during boot using the man:fstab[5] file, or during the creation of a new file system.
This option permits an administrator to apply different MAC labels to different objects.
It applies only to security policy modules which support labeling.
* _object_: An object, or system object, is an entity through which information flows under the direction of a _subject_.
This includes directories, files, fields, screens, keyboards, memory, magnetic storage, printers, and any other data storage or data moving devices.
Basically, an object is a data container or a system resource; access to an _object_ effectively means access to its data.
* _policy_: A collection of rules which defines how objectives are to be achieved.
A _policy_ usually documents how certain items are to be handled.
This chapter uses the term _policy_ to mean a _security policy_: a collection of rules which controls the flow of data and information and defines who will have access to that data and information.
* _sensitivity_: Usually used when discussing MLS.
A sensitivity level is a term used to describe how important or secret the data should be.
As the sensitivity level increases, so does the importance of the secrecy, or confidentiality, of the data.
* _single label_: A single label is when the entire file system uses one label to enforce access control over the flow of data.
Whenever a file system has this set, which is any time the `multilabel` option is not used, all files conform to the same label setting.
* _subject_: A subject is any active entity that causes information to flow between _objects_, e.g. a user, a user process, or a system process.
On FreeBSD, this is almost always a thread acting in a process on behalf of a user.

[[mac-initial]]
== Explanation of MAC

With all of these new terms in mind, consider how the MAC framework augments the security of the system as a whole.
The various security policy modules provided by the MAC framework could be used to protect the network and file systems, to block users from accessing certain ports and sockets, and more.
Perhaps the best use of the policy modules is to blend them together, loading several security policy modules at a time for a multi-layered security environment.
In a multi-layered security environment, multiple policy modules are in effect to keep security in check.
This is different from a hardening policy, which typically hardens only those elements of a system that are used for specific purposes.
The only downside is administrative overhead in cases of multiple file system labels, setting network access control user by user, and so on.
These downsides are minimal when compared to the lasting effect of the framework; for instance, the ability to pick and choose which policies are required for a specific configuration keeps performance overhead down.
The reduction of support for unneeded policies can increase the overall performance of the system, as well as offer flexibility of choice.
A good implementation would consider the overall security requirements and effectively implement the various security policy modules offered by the framework.
Thus a system utilizing MAC features should at least guarantee that a user will not be permitted to change security attributes at will; all user utilities, programs, and scripts must work within the constraints of the access rules provided by the selected security policy modules; and total control of the MAC access rules rests in the hands of the system administrator.

It is the sole duty of the system administrator to carefully select the correct security policy modules.
Some environments may need to limit access control over the network; in these cases, the man:mac_portacl[4], man:mac_ifoff[4], and even man:mac_biba[4] policy modules might make good starting points.
In other cases, strict confidentiality of file system objects might be required.
Policy modules such as man:mac_bsdextended[4] and man:mac_mls[4] exist for this purpose.

Policy decisions could be made based on network configuration.
Perhaps only certain users should be permitted access to the facilities provided by man:ssh[1] to access the network or the Internet.
The man:mac_portacl[4] module would be the policy module of choice for these situations.
But what should be done in the case of file systems?
Should all access to certain directories be severed from other groups or specific users?
Or should we limit user or utility access to specific files by marking certain objects as classified?

In the file system case, access to objects might be considered confidential for some users, but not for others.
For example, a large development team might be broken up into smaller groups of individuals.
Developers in project A might not be permitted to access objects written by developers in project B.
Yet they might need to access objects created by developers in project C; that is quite the situation indeed.
Using the different security policy modules provided by the MAC framework, users could be divided into these groups and then given access to the appropriate areas without fear of information leakage.

Thus, each security policy module has a unique way of dealing with the overall security of a system.
Module selection should be based on a well thought out security policy.
In many cases, the overall policy may need to be revised and reimplemented on the system.
Understanding the different security policy modules offered by the MAC framework will help administrators choose the best policies for their situations.

The default FreeBSD kernel does not include the option for the MAC framework; thus the following kernel option must be added before trying any of the examples or information in this chapter:

[.programlisting]
....
options MAC
....

The kernel will then require a rebuild and a reinstall.

[CAUTION]
====
While the various manual pages for MAC policy modules state that they may be built into the kernel, it is possible to lock the system out of the network and more.
Implementing MAC is much like implementing a firewall; care must be taken to prevent being completely locked out of the system.
The ability to revert back to a previous configuration should be considered, and implementing MAC remotely should be done with extreme caution.
====

[[mac-understandlabel]]
== Understanding MAC Labels

A MAC label is a security attribute which may be applied to subjects and objects throughout the system.
When setting a label, the user must be able to comprehend what it is, exactly, that is being done.
The attributes available on an object depend on the policy module loaded, and policy modules interpret their attributes in different ways.
If improperly configured due to lack of comprehension, or the inability to understand the implications, the result will be unexpected and perhaps undesired behavior of the system.

The security label on an object is used as a part of a security access control decision by a policy.
With some policies, the label by itself contains all of the information necessary to make a decision; in other models, the labels may be processed as part of a larger rule set, and so on.

For instance, setting the label of `biba/low` on a file will represent a label maintained by the Biba security policy module, with a value of "low".

A few policy modules which support the labeling feature in FreeBSD offer three specific predefined labels.
These are the low, high, and equal labels.
Although they enforce access control in a different manner with each policy module, you can be sure that the low label is the lowest setting, the equal label sets the subject or object to be disabled or unaffected, and the high label enforces the highest setting available in the Biba and MLS policy modules.

Within single label file system environments, only one label may be used on objects.
This enforces one set of access permissions across the entire system, which in many environments may be all that is required.
There are a few cases where multiple labels may be set on objects or subjects in the file system.
For those cases, the `multilabel` option may be passed to man:tunefs[8].

In the case of Biba and MLS, a numeric label may be set to indicate the precise level of hierarchical control.
This numeric level is used to partition or sort information into different groups of, say, classification, permitting access only to that group or a higher group level.
In most cases the administrator will be setting up only a single label to use throughout the file system.

_Hey, wait a minute, this is similar to DAC! I thought MAC gave control strictly to the administrator._
That statement still holds true, to some extent, as `root` is the one in control who configures the policies so that users are placed in the appropriate categories/access levels.
Alas, many policy modules can restrict the `root` user as well.
Basic control over objects will then be released to the group, but `root` may revoke or modify the settings at any time.
This is the hierarchal/clearance model covered by policies such as Biba and MLS.

=== Label Configuration

Virtually all aspects of label policy module configuration are performed using the base system utilities.
These commands provide a simple interface for object or subject configuration, and for the manipulation and verification of that configuration.

All configuration may be done using the man:setfmac[8] and man:setpmac[8] utilities.
The `setfmac` command is used to set MAC labels on system objects, while the `setpmac` command is used to set labels on system subjects.
Observe:

[source,shell]
....
# setfmac biba/high test
....

If no errors occurred with the command above, a prompt will be returned.
The only time these commands are not quiescent is when an error occurs, similarly to the man:chmod[1] and man:chown[8] commands.
In some cases this error may be `Permission denied`, which is usually obtained when the label is being set or modified on a restricted object.
The system administrator may use the following commands to overcome this:

[source,shell]
....
# setfmac biba/high test
Permission denied
# setpmac biba/low setfmac biba/high test
# getfmac test
test: biba/high
....
As seen above, `setpmac` can be used to override the policy module's settings by assigning a different label to the invoked process.
The `getpmac` utility is usually used with currently running processes, such as sendmail: although it takes a process ID in place of a command, the logic is extremely similar.
If users attempt to manipulate a file not within their access, subject to the rules of the loaded policy modules, the `Operation not permitted` error will be displayed by the `mac_set_link` function.

==== Common Label Types

The man:mac_biba[4], man:mac_mls[4] and man:mac_lomac[4] policy modules provide the ability to assign simple labels.
These take the form of high, equal, and low; a short description of what these labels provide follows:

* The `low` label is considered the lowest label setting an object or subject may have.
Setting this on objects or subjects blocks their access to objects or subjects marked high.
* The `equal` label should only be placed on objects considered to be exempt from the policy.
* The `high` label grants an object or subject the highest possible setting.

With respect to each policy module, each of those settings will instate a different information flow directive.
Reading the proper manual pages will further explain the traits of these generic label configurations.

===== Advanced Label Configuration

Numeric grade labels are used in the form of `comparison:compartment+compartment`; thus the following:

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....
may be interpreted as:

"Biba Policy Label"/"Grade 10":"Compartments 2, 3 and 6": ("grade 5 ...")

In this example, the first grade would be considered the "effective grade" with "effective compartments", the second grade is the low grade, and the last one is the high grade.
In most configurations these settings will not be used; indeed, they are offered for more advanced configurations.

When applied to system objects, they will only have a current grade/compartments, as compared to system subjects, which reflect the range of available rights in the system, and network interfaces, where they are used for access control.

The grade and compartments in a subject and object pair are used to construct a relationship referred to as "dominance", in which a subject dominates an object, the object dominates the subject, neither dominates the other, or both dominate each other.
The "both dominate" case occurs when the two labels are equal.
Due to the information flow nature of Biba, you have rights to a set of compartments, "need to know", that might correspond to projects, but objects also have sets of compartments.
Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted.

==== Users and Label Settings

Users are required to have labels so that their files and processes properly interact with the security policy defined on the system.
This is configured in [.filename]#login.conf# through the use of login classes.
Every policy module that uses labels will implement the user class setting.
An example entry containing every policy module setting is displayed below:

[.programlisting]
....
default:\
-	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/shared/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:
....

The `label` option is used to set the user class default label, which will be enforced by MAC.
Users will never be permitted to modify this value; thus it is not optional from the user's point of view.
In a real configuration, however, the administrator will never wish to enable every policy module.
It is recommended that the rest of this chapter be reviewed before any of this configuration is implemented.

[NOTE]
====
Users may change their label after the initial login; however, this change is subject to the constraints of the policy.
The example above tells the Biba policy that a process's minimum integrity is 5, its maximum is 15, and the default effective label is 10.
The process will run at 10 until it chooses to change label, perhaps because the user invokes the setpmac command, which will be constrained by Biba to the range set at login.
====

In all cases, after a modification to [.filename]#login.conf#, the login class capability database must be rebuilt using `cap_mkdb`, and this will be reflected throughout every forthcoming example or discussion.

It is important to note that many sites may have a particularly large number of users requiring several different user classes.
In-depth planning is required, as this may become extremely difficult to manage.
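In practice, an administrator will usually add a dedicated login class for labeled users rather than edit the `default` class.
As a minimal sketch (the class name `biba_users` and the chosen Biba range are hypothetical, not values this chapter prescribes), such an entry might look like:

[.programlisting]
....
biba_users:\
	:label=biba/10(5-15):\
	:tc=default:
....

After editing the file, rebuild the capability database with `cap_mkdb /etc/login.conf`; users can then be assigned to the class, for example with man:pw[8] (`pw usermod jru -L biba_users`, where `jru` is a placeholder user name).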
==== Network Interfaces and Label Settings

Labels may also be set on network interfaces to help control the flow of data across the network.
In all cases they function in the same way the policies function with respect to objects.
Users at high settings in `biba`, for example, will not be permitted to access network interfaces with a label of low.

The `maclabel` option may be passed to `ifconfig` when setting the MAC label on a network interface.
For example:

[source,shell]
....
# ifconfig bge0 maclabel biba/equal
....

will set the MAC label of `biba/equal` on the man:bge[4] interface.
When using a setting similar to `biba/high(low-high)`, the entire label should be quoted; otherwise an error will be returned.

Each policy module which supports labeling has a tunable which may be used to disable the MAC label on network interfaces.
Setting the label to `equal` will have a similar effect.
Review the output of `sysctl`, the policy manual pages, or even the information found later in this chapter for those tunables.

=== Singlelabel or Multilabel?

By default, the system uses the `singlelabel` option.
But what does this mean to an administrator?
There are several differences which, in their own right, offer pros and cons to the flexibility of a system's security model.

`singlelabel` permits only one label, for instance `biba/high`, to be used for each subject or object.
It provides for lower administration overhead, but decreases the flexibility of policies which support labeling.
Many administrators may want to use the `multilabel` option in their security policy.
The `multilabel` option permits each subject or object to have its own independent MAC label, in place of the standard `singlelabel` option which allows only one label throughout the partition.
The `multilabel` and `single` label options are only required for the policies which implement the labeling feature, including the Biba, Lomac, MLS, and SEBSD policies.

In many cases, `multilabel` may not need to be set at all.
Consider the following situation and security model:

* A FreeBSD web server using the MAC framework and a mix of the various policies.
* This machine only requires one label, `biba/high`, for everything in the system.
Here the file system would not require the `multilabel` option, as a single label will always be in effect.
* But, this machine will be a web server and should have the web server run at `biba/low` to prevent write-up capabilities.
The Biba policy and how it works will be discussed later, so if the previous comment was difficult to interpret, just continue reading and return afterwards.
The server could use a separate partition set at `biba/low` for most, if not all, of its runtime state.
Much is lacking from this example, for instance the restrictions on data, configuration, and user settings; however, this is just a quick example to prove the point made above.

If any of the non-labeling policies are to be used, then the `multilabel` option will never be required.
These include the `seeotheruids`, `portacl`, and `partition` policies.

It should also be noted that using `multilabel` with a partition and establishing a security model based on `multilabel` functionality could open the doors for higher administrative overhead, as everything in the file system will have a label.
This includes directories, files, and even device nodes.

The following command will set `multilabel` on a file system so that it may have multiple labels.
This may only be done in single-user mode:

[source,shell]
....
# tunefs -l enable /
....

This is not a requirement for the swap file system.

[NOTE]
====
Some users have experienced problems with setting the `multilabel` flag on the root partition.
If this is the case, please review the <> section of this chapter.
====

[[mac-planning]]
== Planning the Security Configuration

Whenever a new technology is implemented, a planning phase is always a good idea.
During the planning stages, an administrator should in general look at the "big picture", trying to keep in view at least the following:

* The implementation requirements;
* The implementation goals.

For MAC installations, these include:

* How to classify the information and resources available on the target systems.
* Which kinds of information or resources to restrict access to, along with the type of restrictions that should be applied.
* Which MAC module or modules will be required to achieve this goal.

It is always possible to reconfigure and change the system resources and security settings; however, it is quite often very inconvenient to search through the system and fix existing files and user accounts.
Planning helps to ensure a trouble-free and efficient trusted system implementation.
A trial run of the trusted system, including its configuration, is often vital and definitely beneficial _before_ a MAC implementation is used on production systems.
Just letting loose on a system with MAC is like setting up for failure.

Different environments may have explicit needs and requirements.
Establishing an in-depth and complete security profile will decrease the need for changes once the system goes live.
As such, the coming sections will cover the different modules available to administrators, describe their use and configuration, and in some cases provide insight on which situations a module would be most suitable for.
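The modules covered in the sections that follow can be preloaded at boot.
As a sketch of the corresponding [.filename]#/boot/loader.conf# entries, assuming (purely for illustration) that only the seeotheruids and bsdextended policies were selected during planning, using the boot options listed with each module:

[.programlisting]
....
mac_seeotheruids_load="YES"
mac_bsdextended_load="YES"
....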
For instance, a web server might utilize the man:mac_biba[4] and man:mac_bsdextended[4] policies.
In other cases, for a machine with very few local users, man:mac_partition[4] might be the right choice.

[[mac-modules]]
== Module Configuration

Every module included with the MAC framework may either be compiled into the kernel, as noted above, or loaded as a run-time kernel module.
The recommended method is to add the module name to [.filename]#/boot/loader.conf# so that it loads during the initial boot operation.

The following sections will discuss the various MAC modules and cover their features.
Implementing them in a specific environment will also be a consideration of this chapter.
Some modules support the use of labeling, which is controlling access by enforcing a label such as "this is allowed and this is not".
A label configuration file may control how files may be accessed, how network communication can be exchanged, and more.
The previous section showed how the `multilabel` flag could be set on file systems to enable per-file or per-partition access control.
A single label configuration enforces only one label across the system; that is why the `tunefs` option is called `multilabel`.

[[mac-seeotheruids]]
== The MAC seeotheruids Module

Module name: [.filename]#mac_seeotheruids.ko#

Kernel configuration line: `options MAC_SEEOTHERUIDS`

Boot option: `mac_seeotheruids_load="YES"`

The man:mac_seeotheruids[4] module mimics and extends the `security.bsd.see_other_uids` and `security.bsd.see_other_gids` `sysctl` tunables.
This option does not require any labels to be set before configuration, and it can operate transparently with the other modules.
After loading the module, the following `sysctl` tunables may be used to control its features:

* `security.mac.seeotheruids.enabled` enables the module's features and uses the default settings.
These default settings will deny users the ability to view processes and sockets owned by other users.
* `security.mac.seeotheruids.specificgid_enabled` allows a certain group to be exempt from this policy.
To exempt specific groups from this policy, use the `security.mac.seeotheruids.specificgid=XXX` `sysctl` tunable, replacing _XXX_ with the numeric group ID to be exempted.
* `security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific primary groups from this policy.
When using this tunable, `security.mac.seeotheruids.specificgid_enabled` may not be set.

[[mac-bsdextended]]
== The MAC bsdextended Module

Module name: [.filename]#mac_bsdextended.ko#

Kernel configuration line: `options MAC_BSDEXTENDED`

Boot option: `mac_bsdextended_load="YES"`

The man:mac_bsdextended[4] module enforces a file system firewall.
This module's policy provides an extension to the standard file system permissions model, permitting an administrator to create a firewall-like ruleset to protect files, utilities, and directories in the file system hierarchy.
When access to a file system object is attempted, the list of rules is iterated until either a matching rule is located or the end of the list is reached.
This behavior may be changed with the man:sysctl[8] parameter `security.mac.bsdextended.firstmatch_enabled`.
As with other firewall modules in FreeBSD, a file containing the access control rules can be created and read by the system at boot time, using an man:rc.conf[5] variable.
The rule list may be entered using man:ugidfw[8], which has a syntax similar to that of man:ipfw[8].
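To have such a ruleset restored at boot, recent FreeBSD releases ship an rc.d script for man:ugidfw[8] which, when enabled in man:rc.conf[5], reads its rules from [.filename]#/etc/rc.bsdextended#; treat both the variable name and the rules file path below as assumptions to verify against your release:

[.programlisting]
....
ugidfw_enable="YES"
....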
Additional tools can be written using the functions in the man:libugidfw[3] library.

Extreme caution should be taken when working with this module; incorrect use could block access to certain parts of the file system.

=== Examples

After the man:mac_bsdextended[4] module has been loaded, the following command may be used to list the current rule configuration:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

As expected, there are no rules defined.
This means that everything is still completely accessible.
To create a rule which will block all access by users but leave `root` unaffected, simply run the following command:

[source,shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

This is a very bad idea, as it will block all users from issuing even the simplest commands, such as `ls`.
A more sensible list of rules might be:

[source,shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

This will block any and all access, including directory listings, to `_user2_`'s home directory by the user `user1`.
In place of `user1`, `not uid _user2_` could be passed.
This will enforce the same access restrictions for all users in place of just one user.

[NOTE]
====
The `root` user will be unaffected by these changes.
====

This should provide a general idea of how the man:mac_bsdextended[4] module may be used to help fortify a file system.
For more information, see the man:mac_bsdextended[4] and man:ugidfw[8] manual pages.

[[mac-ifoff]]
== The MAC ifoff Module

Module name: [.filename]#mac_ifoff.ko#

Kernel configuration line: `options MAC_IFOFF`

Boot option: `mac_ifoff_load="YES"`

The man:mac_ifoff[4] module exists solely to disable network interfaces on the fly and to keep network interfaces from being brought up during the initial system boot.
It does not require any labels to be set up on the system, nor does it depend on other MAC modules. Most of the control is done through the `sysctl` tunables listed below:

* `security.mac.ifoff.lo_enabled` enables/disables all traffic on the loopback interface (man:lo[4]).
* `security.mac.ifoff.bpfrecv_enabled` enables/disables all traffic on the Berkeley Packet Filter interface (man:bpf[4]).
* `security.mac.ifoff.other_enabled` enables/disables traffic on all other interfaces.

One of the most common uses of man:mac_ifoff[4] is network monitoring in an environment where network traffic should not be permitted during the boot sequence. Another suggested use would be to write a script which uses package:security/aide[] to automatically block network traffic if new or altered files are found in protected directories.

[[mac-portacl]]
== The MAC portacl Module

Module name: [.filename]#mac_portacl.ko#

Kernel configuration line: `MAC_PORTACL`

Boot option: `mac_portacl_load="YES"`

The man:mac_portacl[4] module is used to limit binding to local TCP and UDP ports, using a variety of `sysctl` variables. In essence, man:mac_portacl[4] allows non-`root` users to bind to specified privileged ports, i.e. ports below 1024. Once loaded, this module enables the MAC policy on all sockets. The following tunables are available:

* `security.mac.portacl.enabled` enables/disables the policy completely.
* `security.mac.portacl.port_high` sets the highest port number that man:mac_portacl[4] protection is enabled for.
* `security.mac.portacl.suser_exempt`, when set to a non-zero value, exempts the `root` user from this policy.
* `security.mac.portacl.rules` specifies the actual mac_portacl policy; see below.
The actual `mac_portacl` policy, as specified in the `security.mac.portacl.rules` sysctl, is a text string of the form `rule[,rule,...]`, with as many rules as needed. Each rule is of the form `idtype:id:protocol:port`. The [parameter]#idtype# parameter can be `uid` or `gid`, and determines whether the [parameter]#id# parameter is interpreted as a user ID or a group ID. The [parameter]#protocol# parameter is set to `tcp` or `udp` to determine whether the rule applies to TCP or UDP. The final [parameter]#port# parameter is the port number the specified user or group is allowed to bind to.

[NOTE]
====
Since the ruleset is interpreted directly by the kernel, only numeric values can be used for the user ID, group ID, and port parameters. In other words, user, group, and port service names cannot be used.
====

By default, on UNIX(R)-like systems, ports below 1024 can only be used by, or bound to, privileged processes, i.e. those running as `root`. For man:mac_portacl[4] to allow non-privileged processes to bind to ports below 1024, this standard UNIX(R) restriction has to be disabled. This can be accomplished by setting the man:sysctl[8] variables `net.inet.ip.portrange.reservedlow` and `net.inet.ip.portrange.reservedhigh` to zero. See the examples below, or review the man:mac_portacl[4] manual page, for further information.

=== Examples

The following examples should illuminate the above discussion a little better:

[source,shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0 net.inet.ip.portrange.reservedhigh=0
....

First, we set man:mac_portacl[4] to cover the standard privileged ports and disable the normal UNIX(R) bind restrictions.

[source,shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....
We set `security.mac.portacl.suser_exempt` to a non-zero value so that the `root` user is not restricted by this policy. The man:mac_portacl[4] module has now been set up to behave the same way UNIX(R)-like systems do by default.

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

This allows the user with UID 80 (normally the `www` user) to bind to port 80. It can be used to let the `www` user run a web server without ever having `root` privilege.

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

This permits the user with UID 1001 to bind to TCP ports 110 ("pop3") and 995 ("pop3s"), allowing that user to start a server that accepts connections on ports 110 and 995.

[[mac-partition]]
== The MAC partition Module

Module name: [.filename]#mac_partition.ko#

Kernel configuration line: `options MAC_PARTITION`

Boot option: `mac_partition_load="YES"`

The man:mac_partition[4] policy drops processes into specific "partitions" based on their MAC label. Think of it as a special type of man:jail[8], though that is hardly a worthy comparison.

This is one module that should be added to man:loader.conf[5] so that the policy is loaded and enabled during the boot process.

Most configuration for this policy is done using the man:setpmac[8] utility, explained below. The following `sysctl` tunable is available for this policy:

* `security.mac.partition.enabled` enables the enforcement of MAC process partitions.

When this policy is enabled, users are only permitted to see their own processes, and any others within their partition, but are not permitted to work with utilities outside the scope of that partition. For instance, a user in the `insecure` class above will not be permitted to access `top`, as well as many other commands that must spawn a process.
To set or drop utilities into a partition label, use the `setpmac` utility:

[source,shell]
....
# setpmac partition/13 top
....

This adds `top` to the label set of users in the `insecure` class. Note that all processes spawned by users in the `insecure` class stay in the `partition/13` label.

=== Examples

The following command shows the partition label and the process list:

[source,shell]
....
# ps Zax
....

This next command shows another user's process partition label and that user's currently running processes:

[source,shell]
....
# ps -ZU trhodes
....

[NOTE]
====
Users can see processes in ``root``'s label unless the man:mac_seeotheruids[4] policy is loaded.
====

A really crafty implementation could have all of the services in [.filename]#/etc/rc.conf# disabled, and have a script start them with the proper labeling set.

[NOTE]
====
The following policies support integer settings in place of the three default labels offered. These options, including their limitations, are further explained in the module manual pages.
====

[[mac-mls]]
== The MAC Multi-Level Security Module

Module name: [.filename]#mac_mls.ko#

Kernel configuration line: `options MAC_MLS`

Boot option: `mac_mls_load="YES"`

The man:mac_mls[4] policy controls access between subjects and objects in the system by enforcing a strict information flow policy.

In MLS environments, a "clearance" level is set in the label of each subject or object, along with compartments. Since these clearance or sensitivity levels can reach numbers greater than six thousand, it would be a daunting task for any system administrator to thoroughly configure each subject or object. Thankfully, three "instant" labels are included in this policy.
These labels are `mls/low`, `mls/equal`, and `mls/high`. Since they are described in depth in the manual page, only a brief description is given here:

* The `mls/low` label contains a low configuration, permitting it to be dominated by all other objects. Anything labeled `mls/low` has a low clearance level and is not permitted to access information of a higher level. This label also prevents objects of a higher clearance level from writing to it or passing information to it.
* The `mls/equal` label should be placed on objects that are to be exempted from the policy.
* The `mls/high` label is the highest level of clearance possible. Objects assigned this label hold dominance over all other objects in the system; however, they do not permit the leaking of information to objects of a lower class.

MLS provides:

* A hierarchical security level with a set of non-hierarchical categories;
* Fixed rules: no read up, no write down (a subject can have read access to objects on its own level or below, but not above; similarly, a subject can have write access to objects on its own level or above, but not below);
* Secrecy (preventing inappropriate disclosure of data);
* A basis for the design of systems that concurrently handle data at multiple sensitivity levels (without leaking information between secret and confidential).

The following `sysctl` tunables are available for the configuration of special services and interfaces:

* `security.mac.mls.enabled` is used to enable/disable the MLS policy.
* `security.mac.mls.ptys_equal` labels all man:pty[4] devices as `mls/equal` during creation.
* `security.mac.mls.revocation_enabled` is used to revoke access to objects after their label changes to a label of a lower grade.
* `security.mac.mls.max_compartments` sets the maximum number of compartment levels for objects; essentially, this is the maximum compartment number allowed on the system.

To manipulate MLS labels, man:setfmac[8] is available. To assign a label to an object, issue the following command:

[source,shell]
....
# setfmac mls/5 test
....

To retrieve the MLS label of the file [.filename]#test#, issue the following command:

[source,shell]
....
# getfmac test
....

This is a summary of the MLS policy's features. Another approach is to create a master policy file in [.filename]#/etc# which specifies the MLS policy information, and to feed that file to `setfmac`. This method is explained after all of the policies have been covered.

=== Planned Mandatory Sensitivity

With the Multi-Level Security policy module, an administrator plans for controlling the flow of sensitive information. By default, with its block-read-up, block-write-down nature, the system defaults everything to a low state. Everything is accessible, and the administrator slowly changes this during the configuration stage, augmenting the confidentiality of the information.

Beyond the three basic labels above, an administrator may group users and groups as required in order to block the information flow between them. Some find it easier to refer to clearance levels by recognizable words, for instance classifications such as `Confidential`, `Secret`, and `Top Secret`. Some administrators instead create different groups based on project levels. Regardless of the classification method, a well thought out plan must exist before implementing such a restrictive policy.

Some example situations for this security policy module are an e-commerce web server, a file server holding critical company information, and financial institution environments. The most unlikely place is a workstation with only two or three users.
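The master-file approach mentioned above can be sketched as follows; the file name [.filename]#/etc/mls.contexts# and the label assignments are purely illustrative. The file uses the same layout as the Biba contexts file shown later in this chapter, and would be applied with man:setfsmac[8]:

[.programlisting]
....
# Hypothetical /etc/mls.contexts
/home/projects          mls/5
/home/projects/*        mls/5
/var/log                mls/equal
/var/log/*              mls/equal
....

[source,shell]
....
# setfsmac -ef /etc/mls.contexts /
....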
[[mac-biba]]
== The MAC Biba Module

Module name: [.filename]#mac_biba.ko#

Kernel configuration line: `options MAC_BIBA`

Boot option: `mac_biba_load="YES"`

The man:mac_biba[4] module loads the MAC Biba policy. This policy works much like the MLS policy, except that the rules for information flow are largely reversed. It prevents the downward flow of sensitive information, whereas the MLS policy prevents the upward flow; thus, much of this section applies to both policies.

In Biba environments, an "integrity" label is set on each subject or object. These labels are made up of hierarchical grades and non-hierarchical components. As an object's or subject's grade ascends, so does its integrity.

The supported labels are `biba/low`, `biba/equal`, and `biba/high`, explained below:

* The `biba/low` label is considered the lowest integrity an object or subject may have. Setting it on objects or subjects blocks their write access to objects or subjects labeled higher, although they still have read access.
* The `biba/equal` label should only be placed on objects that are to be exempted from the policy.
* The `biba/high` label permits writing to objects set at a lower label, but does not permit reading those objects. It is recommended that this label be placed on objects that affect the integrity of the entire system.

Biba provides:

* A hierarchical integrity level with a set of non-hierarchical integrity categories;
* Fixed rules: no write up, no read down (the opposite of MLS). A subject can have write access to objects on its own level or below, but not above.
Similarly, a subject can have read access to objects on its own level or above, but not below;
* Integrity (preventing inappropriate modification of data);
* Integrity levels (instead of MLS sensitivity levels).

The following `sysctl` tunables can be used to manipulate the Biba policy:

* `security.mac.biba.enabled` may be used to enable/disable enforcement of the Biba policy on the machine.
* `security.mac.biba.ptys_equal` may be used to disable the Biba policy on man:pty[4] devices.
* `security.mac.biba.revocation_enabled` forces the revocation of access to objects if their label is changed to dominate the subject.

To access the Biba policy settings on system objects, use the `setfmac` and `getfmac` commands:

[source,shell]
....
# setfmac biba/low test
# getfmac test
test: biba/low
....

=== Planned Mandatory Integrity

Integrity, unlike sensitivity, guarantees that information will never be manipulated by untrusted parties. This includes information passed between subjects and objects. It ensures that users are only able to modify, and in some cases even access, information they explicitly need to.

The man:mac_biba[4] security policy module permits an administrator to designate which files and programs a user or users may see and invoke, while assuring that those programs and files are free from threats and trusted by the system for that user or group of users.

During the initial planning phase, an administrator must be prepared to partition users into grades, levels, and areas. Users will be blocked access not only to data, but also to programs and utilities, both before and after they start.
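The fixed rules can be observed directly with the utilities shown above. In this sketch (the file name [.filename]#test# is illustrative), a process forced to `biba/high` with man:setpmac[8] is denied read access to a `biba/low` file, since reading down is not permitted, while a write to the same file would still be allowed:

[source,shell]
....
# setfmac biba/low test
# setpmac biba/high cat test
....

The `cat` invocation should fail with a permission error under the no-read-down rule.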
Once this policy module is enabled, the system defaults to a high label, and it is up to the administrator to configure the different grades and levels for users. Instead of using clearance levels as described above, a good planning method could be built around topics. For instance, only allow developers modification access to the source code repository, source code compiler, and other development utilities, while grouping other users into categories such as testers, designers, or ordinary users, who are only permitted read access.

With its natural security control, a lower-integrity subject is unable to write to a higher-integrity subject, and a higher-integrity subject cannot observe or read a lower-integrity object. Setting a label at the lowest possible grade could make an object inaccessible to subjects. Some prospective environments for this security policy module include a hardened web server, a development and test machine, and a source code repository. A less useful implementation would be a personal workstation, a machine used as a router, or a network firewall.

[[mac-lomac]]
== The MAC LOMAC Module

Module name: [.filename]#mac_lomac.ko#

Kernel configuration line: `options MAC_LOMAC`

Boot option: `mac_lomac_load="YES"`

Unlike the MAC Biba policy, the man:mac_lomac[4] policy permits access to lower-integrity objects only after decreasing the integrity level, so as not to disrupt any integrity rules.
The MAC version of the Low-watermark integrity policy, not to be confused with the older man:lomac[4] implementation, works almost identically to Biba, except that it uses floating labels to support subject demotion via an auxiliary grade compartment. This secondary compartment takes the form `[auxgrade]`. When assigning a lomac policy with an auxiliary grade, it should look a little like `lomac/10[2]`, where the number two (2) is the auxiliary grade.

The MAC LOMAC policy relies on the ubiquitous labeling of all system objects with integrity labels, permitting subjects to read from lower-integrity objects and then downgrading the label on the subject to prevent future writes to higher-integrity objects. This is the `[auxgrade]` option discussed above, so the policy may provide greater compatibility and require less initial configuration than Biba.

=== Examples

Like the Biba and MLS policies, the `setfmac` and `setpmac` utilities may be used to place labels on system objects:

[source,shell]
....
# setfmac /usr/home/trhodes lomac/high[low]
# getfmac /usr/home/trhodes lomac/high[low]
....

Notice that the auxiliary grade here is `low`; this is a feature provided only by the MAC LOMAC policy.

[[mac-implementing]]
== Nagios in a MAC Jail

The following demonstration implements a secure environment using various MAC modules with properly configured policies. This is only a test, and should not be considered the complete answer to everyone's security woes. Merely implementing a policy and then ignoring it never works, and could be disastrous in a production environment.

Before beginning this process, the `multilabel` option must be set on each file system, as stated at the beginning of this chapter.
Failure to do so will result in errors. Ensure that the package:net-mngt/nagios-plugins[], package:net-mngt/nagios[], and package:www/apache22[] ports are all installed, configured, and working correctly.

=== Create an insecure User Class

Begin the procedure by adding the following user class to [.filename]#/etc/login.conf#:

[.programlisting]
....
insecure:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/shared/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=biba/10(10-10):
....

and adding the following line to the `default` user class:

[.programlisting]
....
	:label=biba/high:
....

Once this is done, the following command must be issued to rebuild the database:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

=== Boot Configuration

Do not reboot yet. Add the following lines to [.filename]#/boot/loader.conf# so the required modules are loaded at system startup:

[.programlisting]
....
mac_biba_load="YES"
mac_seeotheruids_load="YES"
....

=== Configure Users

Set the `root` user to the `default` class using:

[source,shell]
....
# pw usermod root -L default
....

All user accounts that are not `root` or system users now require a login class. The login class is required; without it, users will be refused access to common commands such as man:vi[1]. The following `sh` script should do the trick:

[source,shell]
....
# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \
	/etc/passwd`; do pw usermod $x -L default; done;
....
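The man:awk[1] selection used by the script above can be previewed safely before man:pw[8] touches any real accounts. The following portable sh sketch runs the same filter against a sample passwd-style file (the entries below are illustrative, not taken from a real system), printing only the ordinary accounts that would be modified:

```shell
#!/bin/sh
# Same UID filter as the pw loop above, run against sample data:
# keep UIDs >= 1001 and skip the 'nobody' UID 65534.
cat > /tmp/passwd.sample <<'EOF'
root:*:0:0:Charlie &:/root:/bin/csh
daemon:*:1:1:Owner of many system processes:/root:/usr/sbin/nologin
nobody:*:65534:65534:Unprivileged user:/nonexistent:/usr/sbin/nologin
trhodes:*:1001:1001:Tom Rhodes:/home/trhodes:/bin/sh
jane:*:1002:1002:Jane Example:/home/jane:/bin/sh
EOF
awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' /tmp/passwd.sample
```

For the sample data above, only `trhodes` and `jane` are printed; `root`, `daemon`, and `nobody` are correctly skipped.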
Drop the `nagios` and `www` users into the `insecure` class:

[source,shell]
....
# pw usermod nagios -L insecure
# pw usermod www -L insecure
....

=== Create the Contexts File

A contexts file should now be created; the following example file should be placed in [.filename]#/etc/policy.contexts#:

[.programlisting]
....
# This is the default BIBA policy for this system.

# System:
/var/run			biba/equal
/var/run/*			biba/equal

/dev				biba/equal
/dev/*				biba/equal

/var				biba/equal
/var/spool			biba/equal
/var/spool/*			biba/equal

/var/log			biba/equal
/var/log/*			biba/equal

/tmp				biba/equal
/tmp/*				biba/equal
/var/tmp			biba/equal
/var/tmp/*			biba/equal

/var/spool/mqueue		biba/equal
/var/spool/clientmqueue		biba/equal

# For Nagios:
/usr/local/etc/nagios		biba/10
/usr/local/etc/nagios/*		biba/10

/var/spool/nagios		biba/10
/var/spool/nagios/*		biba/10

# For apache
/usr/local/etc/apache		biba/10
/usr/local/etc/apache/*		biba/10
....

This policy enforces security by setting restrictions on the flow of information. In this specific configuration, users, including `root`, should never be allowed to access Nagios. Configuration files and processes that are part of Nagios will be completely self-contained, or jailed.

This file may now be read into the system by issuing the following command:

[source,shell]
....
# setfsmac -ef /etc/policy.contexts /
....

[NOTE]
====
The above file system layout may differ depending on the environment; however, the command must be run on every single file system.
====

The [.filename]#/etc/mac.conf# file requires the following modifications in the main section:

[.programlisting]
....
default_labels file ?biba
default_labels ifnet ?biba
default_labels process ?biba
default_labels socket ?biba
....

=== Enable Networking

Add the following line to [.filename]#/boot/loader.conf#:

[.programlisting]
....
security.mac.biba.trust_all_interfaces=1
....
Then add the following to the network card configuration stored in [.filename]#rc.conf#. If the primary Internet configuration is done via DHCP, this may need to be configured manually after every system boot:

[.programlisting]
....
maclabel biba/equal
....

=== Testing the Configuration

Ensure that the web server and Nagios will not be started on system initialization, and reboot. Ensure that the `root` user cannot access any of the files in the Nagios configuration directory. If `root` can issue man:ls[1] on [.filename]#/var/spool/nagios#, something is wrong; otherwise, a "permission denied" error should be returned.

If all seems well, Nagios, Apache, and Sendmail can now be started in a way that fits the security policy. The following commands make this happen:

[source,shell]
....
# cd /etc/mail && make stop && \
setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \
setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart
....

Double-check that everything is working properly. If not, check the log files for error messages. If needed, use man:sysctl[8] to disable enforcement of the man:mac_biba[4] security policy module and try starting everything again as usual.

[NOTE]
====
The `root` user can change the security enforcement and edit the configuration files without fear. The following command permits the degradation of the security policy to a lower grade for a newly spawned shell:

[source,shell]
....
# setpmac biba/10 csh
....

To block this from happening, force the user into a range using man:login.conf[5]. If man:setpmac[8] attempts to run a command outside of the compartment's range, an error is returned and the command is not executed. In this case, set root to `biba/high(high-high)`.
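As a sketch, the range could be pinned through the `label` entry of the user's login class in man:login.conf[5] (followed by a `cap_mkdb /etc/login.conf` rebuild); the class chosen here is the `default` class used for `root` earlier in this example:

[.programlisting]
....
	:label=biba/high(high-high):
....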
====

[[mac-userlocked]]
== User Lock Down

This example considers a relatively small storage system with fewer than fifty users. Users have login capabilities, and are permitted to store data and access resources.

For this scenario, man:mac_bsdextended[4] can coexist with man:mac_seeotheruids[4], disabling access to system objects while also hiding user processes.

Begin by adding the following line to [.filename]#/boot/loader.conf#:

[.programlisting]
....
mac_seeotheruids_load="YES"
....

The man:mac_bsdextended[4] security policy module may be activated through the following rc.conf variable:

[.programlisting]
....
ugidfw_enable="YES"
....

Default rules stored in [.filename]#/etc/rc.bsdextended# are loaded at system initialization; however, the default entries may need modification. Since this machine is expected only to service users, everything may be left commented out except the last two lines. These will force user-owned system objects to be labeled by default.

Add the required users to this machine and reboot. For testing purposes, try logging in as two different users across two consoles. Run `ps aux` to see whether processes of other users are visible. Try to run man:ls[1] on another user's home directory; it should fail.

Do not test with the `root` user, unless the specific `sysctl`s used to block super-user access have been modified.

[NOTE]
====
When a new user is added, their man:mac_bsdextended[4] rule will not be in the ruleset list. To update the ruleset quickly, simply unload the security policy module and load it again, using the man:kldunload[8] and man:kldload[8] utilities.
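The quick reload might look like this:

[source,shell]
....
# kldunload mac_bsdextended.ko
# kldload mac_bsdextended.ko
....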
====

[[mac-troubleshoot]]
== Troubleshooting the MAC Framework

During the development stage, a few users reported problems with normal configuration. Some of these problems are listed below:

=== The `multilabel` option cannot be enabled on [.filename]#/#

The `multilabel` flag does not stay enabled on my root ([.filename]#/#) partition!

It seems that roughly one out of every fifty users has this problem; indeed, it came up during our initial configuration as well. Further observation of this so-called "bug" suggests that it is the result of either incorrect documentation or a misinterpretation of the documentation. Regardless of why it happens, the following steps may be taken to resolve it:

[.procedure]
====
. Edit [.filename]#/etc/fstab# and set the root partition to `ro`, for read-only.
. Reboot into single-user mode.
. Run `tunefs -l enable` on [.filename]#/#.
. Reboot the system into normal mode.
. Run `mount -urw` [.filename]#/#, change the `ro` back to `rw` in [.filename]#/etc/fstab#, and reboot the system again.
. Double-check the output of `mount` to ensure that `multilabel` has been properly set on the root file system.
====

=== Cannot start an X11 server after MAC

After establishing a secure environment with MAC, I am no longer able to start X!

This could be caused by the MAC `partition` policy, or by a mislabeling in one of the MAC labeling policies. To debug it, try the following:

[.procedure]
====
. Check the error message; if the user is in the `insecure` class, the `partition` policy may be the culprit. Try setting the user's class back to the `default` class and rebuild the database with `cap_mkdb`. If this does not alleviate the problem, go to step two.
. Double-check the label policies.
Ensure that the policies are set correctly for the user in question, the X11 application, and the [.filename]#/dev# entries.
. If neither of these resolves the problem, send the error message and a description of your environment to the TrustedBSD discussion lists located at the http://www.TrustedBSD.org[TrustedBSD] website, or to the {freebsd-questions} mailing list.
====

=== Error: man:_secure_path[3] cannot stat [.filename]#.login_conf#

When I attempt to switch from `root` to another user on the system, the error message `_secure_path: unable to stat .login_conf` appears.

This message is usually shown when the user being switched to has a higher label setting than that of the current user. For instance, suppose a user on the system, `joe`, has a default label of `biba/low`. The `root` user, whose label is `biba/high`, cannot view ``joe``'s home directory. This happens regardless of whether `root` has used `su` to become `joe`: in this scenario, the Biba integrity model does not permit `root` to view objects set at a lower integrity level.

=== The `root` username is broken!

In normal or even single-user mode, `root` is not recognized. `whoami` returns 0 (zero) and `su` returns the error `who are you?`. What could be going on?

This can happen when a labeling policy has been disabled via man:sysctl[8], or when the policy module has been unloaded. If the policy has been disabled, or temporarily disabled, then the login capabilities database needs to be reconfigured with the `label` option removed. Double-check [.filename]#login.conf# to ensure that all `label` options have been removed, and rebuild the database with `cap_mkdb`.

This may also happen if a policy restricts access to [.filename]#master.passwd# or the database.
Usually this is caused by an administrator altering the file under a label that conflicts with the general policy being used by the system. In these cases, the user information would be read by the system, but access would be blocked because the file has inherited the new label. Disable the policy using man:sysctl[8], and everything should return to normal.

diff --git a/documentation/content/mn/books/handbook/network-servers/_index.adoc b/documentation/content/mn/books/handbook/network-servers/_index.adoc
index 03e90cf5ed..7eb541dfe0 100644
--- a/documentation/content/mn/books/handbook/network-servers/_index.adoc
+++ b/documentation/content/mn/books/handbook/network-servers/_index.adoc
@@ -1,2987 +1,2986 @@
---
title: Chapter 30. Network Servers
part: Part IV. Network Communication
prev: books/handbook/mail
next: books/handbook/firewalls
showBookMenu: true
weight: 35
params:
    path: "/books/handbook/network-servers/"
---

[[network-servers]]
= Network Servers
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 30
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/network-servers/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]

ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[network-servers-synopsis]]
== Synopsis

This chapter covers some of the more frequently used network services on UNIX(R) systems.
We will cover how to install, configure, test, and maintain these services. Example configuration files are also included for your benefit.

After reading this chapter, you will know:

* How to manage the inetd daemon.
* How to set up a network file system.
* How to set up a network information server for sharing user accounts.
* How to set up automatic network settings using DHCP.
* How to set up a domain name server.
* How to set up the Apache HTTP Server.
* How to set up a File Transfer Protocol (FTP) server.
* How to set up a file and print server for Windows(R) clients using Samba.
* How to synchronize the time and date, and set up a time server, using the NTP protocol.
* How to configure the standard logging daemon, `syslogd`, to accept logs from remote hosts.

Before reading this chapter, you should:

* Understand the basics of the [.filename]#/etc/rc# scripts.
* Be familiar with basic network terminology.
* Know how to install additional third-party software (crossref:ports[ports,Installing Applications: Packages and Ports]).

[[network-inetd]]
== The inetd "Super-Server"

[[network-inetd-overview]]
=== Overview

man:inetd[8] is sometimes referred to as the "Internet Super-Server" because it manages connections for many services. When a connection is received by inetd, it determines which program the connection is destined for, spawns that process, and delegates the socket to it (the program is invoked with the service socket as its standard input, output, and error descriptors). Running inetd for services that are not heavily used can reduce the overall system load, when compared to running each daemon individually in stand-alone mode.
Primarily, inetd is used to spawn other daemons, but several trivial protocols, such as chargen, auth, and daytime, are handled directly.

This section covers the basics of configuring inetd through its command-line options and its configuration file, [.filename]#/etc/inetd.conf#.

[[network-inetd-settings]]
=== Settings

inetd is initialized through the man:rc[8] system. The `inetd_enable` option is set to `NO` by default, but may be turned on by sysinstall during installation, depending on the configuration chosen by the user. Placing:

[.programlisting]
....
inetd_enable="YES"
....

or

[.programlisting]
....
inetd_enable="NO"
....

into [.filename]#/etc/rc.conf# will enable or disable inetd starting at boot time. Running:

[.programlisting]
....
service inetd rcvar
....

displays the setting that is currently in effect.

Additionally, different command-line options can be passed to inetd via the `inetd_flags` option.

[[network-inetd-cmdline]]
=== Command-Line Options

Like most server daemons, inetd has a number of options that can be passed to it in order to modify its behaviour. See the man:inetd[8] manual page for the full list of options.

Options are passed to inetd using the `inetd_flags` option in [.filename]#/etc/rc.conf#. By default, `inetd_flags` is set to `-wW -C 60`, which turns on TCP wrapping for inetd's services, and prevents any single IP address from requesting any service more than 60 times per minute.

Although we mention rate-limiting options below, novice users will usually not need to modify these parameters. These options are useful should you find that an excessive number of connections is being made from outside. A full list of options can be found in the man:inetd[8] manual page.
-c maximum::
Specify the default maximum number of simultaneous invocations of each service; the default is unlimited. May be overridden on a per-service basis with the `max-child` parameter.

-C rate::
Specify the default maximum number of times a service can be invoked from a single IP address per minute; the default is unlimited. May be overridden on a per-service basis with the `max-connections-per-ip-per-minute` parameter.

-R rate::
Specify the maximum number of times a service can be invoked per minute; the default is 256. A rate of 0 allows an unlimited number of invocations.

-s maximum::
Specify the maximum number of times a service can be invoked from a single IP address at any one time; the default is unlimited. May be overridden on a per-service basis with the `max-child-per-ip` parameter.

[[network-inetd-conf]]
=== [.filename]#inetd.conf#

inetd is configured via the [.filename]#/etc/inetd.conf# file.

When a modification is made to [.filename]#/etc/inetd.conf#, inetd can be forced to re-read its configuration file with the following command:

[[network-inetd-reread]]
.Reloading the inetd Configuration File
[example]
====
[source,shell]
....
# service inetd reload
....
====

Each line of the configuration file specifies an individual daemon. Comments in the file are preceded by a "#". The entries in [.filename]##/etc/inetd.conf## have the following format:

[.programlisting]
....
service-name
socket-type
protocol
{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]
user[:group][/login-class]
server-program
server-program-arguments
....

An example entry for the man:ftpd[8] daemon using IPv4 might read:

[.programlisting]
....
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l
....

service-name::
This is the service name of the daemon in question. It must correspond to a service listed in [.filename]#/etc/services#; this determines which port inetd must listen on.
If a new service is being created, it must be added to [.filename]#/etc/services# first.

socket-type::
Either `stream`, `dgram`, `raw`, or `seqpacket`. `stream` is used for connection-based TCP daemons, while `dgram` is used for daemons utilizing the UDP transport protocol.

protocol::
One of the following:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Protocol
| Explanation

|tcp, tcp4
|TCP IPv4

|udp, udp4
|UDP IPv4

|tcp6
|TCP IPv6

|udp6
|UDP IPv6

|tcp46
|Both TCP IPv4 and v6

|udp46
|Both UDP IPv4 and v6
|===

{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]::
`wait|nowait` indicates whether or not the daemon invoked from inetd is able to handle its own socket. `dgram` socket types must use the `wait` option, while stream socket daemons, which are usually multi-threaded, should use `nowait`. `wait` usually hands off multiple sockets to a single daemon, while `nowait` spawns a child daemon for each new socket.
+
The maximum number of child daemons inetd may spawn can be set using the `max-child` option. If a limit of ten instances of a particular daemon is needed, `/10` would be placed after `nowait`. Specifying `/0` allows an unlimited number of children.
+
In addition to `max-child`, two other options may be enabled which limit the maximum connections from a single place to a particular daemon. `max-connections-per-ip-per-minute` limits the number of connections from any particular IP address per minute; e.g., a value of ten would limit any particular IP address to ten connection attempts to a particular service per minute. `max-child-per-ip` limits the number of children that can be started on behalf of any single IP address at any moment.
These options are useful to prevent intentional or unintentional excessive resource consumption and Denial of Service (DoS) attacks.
+
In this field, either `wait` or `nowait` is mandatory. `max-child`, `max-connections-per-ip-per-minute`, and `max-child-per-ip` are optional.
+
A stream-type multi-threaded daemon without any `max-child`, `max-connections-per-ip-per-minute`, or `max-child-per-ip` limits would simply be: `nowait`.
+
The same daemon with a limit of ten child daemons would read: `nowait/10`.
+
The same daemon with a limit of ten child daemons and a limit of twenty connections per IP address per minute would read: `nowait/10/20`.
+
These options are used in the default settings of the man:fingerd[8] daemon, as seen here:
+
[.programlisting]
....
finger stream  tcp     nowait/3/10 nobody /usr/libexec/fingerd fingerd -s
....
+
Finally, an example of this field with a limit of 100 children in total, and a maximum of 5 for any one IP address, would read: `nowait/100/0/5`.

user::
This is the username that the daemon in question will run as. Most commonly, daemons run as the `root` user. For security purposes, it is common to find some servers running as the `daemon` user, or the least privileged `nobody` user.

server-program::
The full path of the daemon to be executed when a connection is received. If the daemon is a service provided by inetd internally, `internal` should be used.

server-program-arguments::
This works in conjunction with `server-program` by specifying the arguments, starting with `argv[0]`, passed to the daemon on invocation. If `mydaemon -d` is the command line, `mydaemon -d` would be the value of `server-program-arguments`. Again, if the daemon is an internal service, use `internal` here.
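Putting the fields together: the stock [.filename]#/etc/inetd.conf# shipped with FreeBSD contains entries for inetd's internal daytime service along these lines, showing a `wait`-type `dgram` entry next to a `nowait` stream entry (exact spacing may differ between releases):

```
daytime stream  tcp     nowait  root    internal
daytime dgram   udp     wait    root    internal
```

Note that `internal` appears in the server-program field and that the UDP variant, being a `dgram` socket type, must use `wait`.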
[[network-inetd-security]]
=== Security

Depending on the choices made at install time, many of inetd's services may be enabled by default. If there is no apparent need for a particular daemon, disable it. Place a "#" in front of the daemon in question in [.filename]#/etc/inetd.conf#, and then <<network-inetd-reread,reload the inetd configuration>>. Some daemons, such as fingerd, may not be desired at all because they provide information that may be useful to an attacker.

Some daemons are not security-conscious and have long, or non-existent, timeouts for connection attempts. This allows an attacker to slowly send connections to a particular daemon, thus saturating available resources. It may be a good idea to place `max-connections-per-ip-per-minute`, `max-child`, or `max-child-per-ip` limitations on daemons that you find are receiving too many connections.

By default, TCP wrapping is turned on. Consult the man:hosts_access[5] manual page for more information on placing TCP restrictions on the various daemons invoked by inetd.

[[network-inetd-misc]]
=== Miscellaneous

daytime, time, echo, discard, chargen, and auth are all internally provided services of inetd.

The auth service provides identity network services and is configurable to a certain degree, while the others can only be turned on or off.

Consult the man:inetd[8] manual page for more in-depth information on these services.

[[network-nfs]]
== Network File System (NFS)

Among the many different file systems that FreeBSD supports is the Network File System, also known as NFS. NFS allows a system to share directories and files with others over a network.
By using NFS, users and programs can access files on remote systems almost as if they were local files.

Some of the most notable benefits that NFS provides are:

* Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network.
* There is no need for users to have separate home directories on every machine in the network. Home directories can be set up once on an NFS server and then made available throughout the network.
* Storage devices such as floppy disks, CDROM drives, and Zip(R) drives can be used by other machines on the network. This reduces the number of removable media drives needed throughout the network.

=== How NFS Works

NFS consists of two main parts: a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running.

The server has to be running the following daemons:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Daemon
| Description

|nfsd
|The NFS daemon, which services requests from NFS clients.

|mountd
|The NFS mount daemon, which carries out the requests that man:nfsd[8] passes on to it.

|rpcbind
|This daemon allows NFS clients to discover which port the NFS server is using.
|===

The client can also run a daemon known as nfsiod. The nfsiod daemon services the requests coming from the NFS server. It is optional and improves performance, but is not required for normal and correct operation. See the man:nfsiod[8] manual page for more information.

[[network-configuring-nfs]]
=== Configuring NFS

NFS configuration is a relatively straightforward process. The processes that need to be running can all start at boot time with a few modifications to [.filename]#/etc/rc.conf#.
On the NFS server, make sure that the following options are configured in [.filename]#/etc/rc.conf#:

[.programlisting]
....
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_flags="-r"
....

mountd is automatically run whenever the NFS server is enabled.

On the client, make sure this option is enabled in [.filename]#/etc/rc.conf#:

[.programlisting]
....
nfs_client_enable="YES"
....

The [.filename]#/etc/exports# file specifies which file systems NFS should export (sometimes referred to as "share"). Each line in [.filename]#/etc/exports# specifies a file system to be exported and which machines have access to that file system. Along with the machines that have access, access options may also be specified. There are many such options that can be used in this file, but only a few will be mentioned here. You can easily discover the other options by reading the man:exports[5] manual page.

Here are a few example [.filename]#/etc/exports# entries:

The following examples give an idea of how to export file systems, although the settings may be different depending on your environment and network configuration. For instance, to export the [.filename]#/cdrom# directory to three machines, write the following. The three machines in the example either have the same domain name as the server, or have entries in your [.filename]#/etc/hosts# file. The `-ro` flag makes the exported file system read-only. With this flag, the remote system will not be able to write any changes to the exported file system.

[.programlisting]
....
/cdrom -ro host1 host2 host3
....

The following line exports [.filename]#/home# to three hosts specified by IP address. This is a useful setup if you have a private network without a DNS server configured. Optionally, the [.filename]#/etc/hosts# file could be configured for internal hostnames; please review man:hosts[5] for more information.
The `-alldirs` flag allows subdirectories to be mount points. In other words, it will not mount the subdirectories, but will permit the client to mount only the directories that are required or needed.

[.programlisting]
....
/home  -alldirs  10.0.0.2 10.0.0.3 10.0.0.4
....

The following line exports [.filename]#/a# so that two clients from different domains may access the file system. The `-maproot=root` flag allows the `root` user on the remote system to write data on the exported file system as `root`. If the `-maproot=root` flag is not specified, then even if a user has `root` access on the remote system, he will not be able to write to the exported file system.

[.programlisting]
....
/a  -maproot=root  host.example.com box.example.org
....

In order for a client to access an exported file system, the client must have permission to do so. Make sure the client is listed in your [.filename]#/etc/exports# file.

In [.filename]#/etc/exports#, each line represents the export information for one file system to one host. A remote host can only be specified once per file system, and may only have one default entry. For example, assume that [.filename]#/usr# is a single file system. The following [.filename]#/etc/exports# entries would be invalid:

[.programlisting]
....
# Invalid when /usr is one file system
/usr/src   client
/usr/ports client
....

This is because there are two entries exporting the [.filename]#/usr# file system to the host `client`. The correct format for this situation is:

[.programlisting]
....
/usr/src /usr/ports  client
....

The properties of one file system exported to a given host must all occur on one line. Lines without a client specified are treated as a single host. This limits how you can export file systems, but for most people this is not an issue.
The following is an example of a valid export list, where [.filename]#/usr# and [.filename]#/exports# are local file systems:

[.programlisting]
....
# Export src and ports to client01 and client02, but only
# client01 has root privileges on it
/usr/src /usr/ports  -maproot=root    client01
/usr/src /usr/ports                   client02
# The client machines have root and can mount anywhere
# on /exports. Anyone in the world can mount /exports/obj read-only
/exports  -alldirs -maproot=root      client01 client02
/exports/obj -ro
....

To make the changes in [.filename]#/etc/exports# take effect, the mountd daemon must be forced to re-read [.filename]#/etc/exports# whenever it is modified. This can be accomplished either by sending a HUP signal to the running daemon:

[source,shell]
....
# kill -HUP `cat /var/run/mountd.pid`
....

or by invoking the `mountd` man:rc[8] script with the appropriate parameter:

[source,shell]
....
# service mountd onereload
....

Please refer to crossref:config[configtuning-rcd,Using rc(8) Under FreeBSD] for more information on using rc scripts.

Alternatively, rebooting FreeBSD will restart everything properly. A reboot is not necessary, though. Executing the following commands as `root` should start everything up.

On the NFS server:

[source,shell]
....
# rpcbind
# nfsd -u -t -n 4
# mountd -r
....

On the NFS client:

[source,shell]
....
# nfsiod -n 4
....

Now everything should be ready to actually mount a remote file system. In these examples, the server's name will be `server` and the client's name will be `client`. If you only want to temporarily mount a remote file system, or would rather just test the configuration, execute a command like this as `root` on the client:

[source,shell]
....
# mount server:/home /mnt
....

This will mount the [.filename]#/home# directory on the server at [.filename]#/mnt# on the client.
If everything is set up correctly, you should be able to enter [.filename]#/mnt# on the client and see the files that are on the server.

If you want to automatically mount a remote file system each time the computer boots, add the file system to [.filename]#/etc/fstab#. Here is an example:

[.programlisting]
....
server:/home	/mnt	nfs	rw	0	0
....

The man:fstab[5] manual page lists all the available options.

=== Locking

Some applications (e.g., mutt) require file locking to operate correctly. In the case of NFS, rpc.lockd can be used for file locking. To enable it, add the following to [.filename]#/etc/rc.conf# on both client and server (it is assumed that the NFS client and server are configured already):

[.programlisting]
....
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
....

Start the daemons with:

[source,shell]
....
# service lockd start
# service statd start
....

If real locking between the NFS clients and the NFS server is not required, it is possible to let the NFS client do locking locally by passing `-L` to man:mount_nfs[8]. Refer to the man:mount_nfs[8] manual page for further details.

=== Practical Uses

NFS has many practical uses. Some of the more common ones are listed below:

* Set several machines to share a CDROM or other media among them. This is a cheaper and often more convenient method of installing software on multiple machines.
* On large networks, it may be more convenient to configure a central NFS server on which all the user home directories are stored. These home directories can then be exported to the network so that users always have the same home directory, regardless of which workstation they log in to.
* Several machines can share a common [.filename]#/usr/ports/distfiles# directory. That way, when you need to install a port on several machines, you can quickly access the sources without downloading them on each machine.
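The [.filename]#distfiles# case from the list above combines the pieces shown earlier in this section; a minimal sketch, assuming hypothetical client names `client1` and `client2` and a server reachable as `server`, might look like this:

```
# On the server, in /etc/exports: share the distfiles read-write
/usr/ports/distfiles  client1 client2

# On each client, in /etc/fstab: mount it over the local distfiles directory
server:/usr/ports/distfiles  /usr/ports/distfiles  nfs  rw  0  0
```

A fetched source tarball then lands on the server once and is immediately visible to every client building the same port.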
[[network-amd]]
=== Automatic Mounts with amd

man:amd[8] (the automatic mounter daemon) automatically mounts a remote file system whenever a file or directory within that file system is accessed. File systems that are inactive for a period of time will also be automatically unmounted by amd. Using amd provides a simple alternative to the permanent mounts that are usually listed in [.filename]#/etc/fstab#.

amd operates by attaching itself as an NFS server to the [.filename]#/host# and [.filename]#/net# directories. When a file within one of these directories is accessed, amd looks up the corresponding remote mount and automatically mounts it. [.filename]#/net# is used to mount an exported file system from an IP address, while [.filename]#/host# is used to mount an export from a remote hostname.

An access to a file within [.filename]#/host/foobar/usr# would tell amd to mount the [.filename]#/usr# export on the host `foobar`.

.Mounting an Export with amd
[example]
====
You can view the available mounts of a remote host with the `showmount` command. For example, to view the exports of a host named `foobar`:

[source,shell]
....
% showmount -e foobar
Exports list on foobar:
/usr                               10.10.10.0
/a                                 10.10.10.0
% cd /host/foobar/usr
....
====

As seen in the example, `showmount` shows [.filename]#/usr# as an export. When changing directories to [.filename]#/host/foobar/usr#, amd attempts to resolve the hostname `foobar` and automatically mounts the desired export.

amd can be started by the startup scripts by adding the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
amd_enable="YES"
....

Additionally, custom flags can be passed to amd via the `amd_flags` option. By default, `amd_flags` is set to:

[.programlisting]
....
amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map"
....

The [.filename]#/etc/amd.map# file defines the default options with which exports are mounted.
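For reference, the stock FreeBSD [.filename]#/etc/amd.map# is small; it looks roughly like this (the exact mount option string may differ between releases):

```
# /defaults describes how each keyed mount is constructed;
# the * entry supplies the default mount options for every host.
/defaults       type:=host;fs:=${autodir}/${rhost}/host;rhost:=${key}
*               opts:=rw,grpid,resvport,nosuid,nodev
```

The `${key}` variable expands to the hostname (or IP address) component of the path accessed under [.filename]#/host# or [.filename]#/net#.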
The [.filename]#/etc/amd.conf# file defines some of the more advanced features of amd.

Consult the man:amd[8] and man:amd.conf[8] manual pages for more information.

[[network-nfs-integration]]
=== Problems Integrating with Other Systems

Certain Ethernet adapters for ISA PC systems have limitations which can lead to serious network problems, particularly with NFS. This difficulty is not specific to FreeBSD, but FreeBSD systems are affected by it.

The problem occurs nearly always when (FreeBSD) PC systems are networked with high-performance workstations, such as those made by Silicon Graphics, Inc., and Sun Microsystems, Inc. The NFS mount will work fine, and some operations may succeed, but suddenly the server will seem to become unresponsive to the client, even though requests to and from other systems continue to be processed. This happens to the client system, whether the client is the FreeBSD system or the workstation. On many systems, there is no way to shut down the client gracefully once this problem has manifested itself. The only solution is often to reset the client, because the NFS situation cannot be resolved.

Though the "correct" solution is to get a higher-performance, higher-capacity Ethernet adapter for the FreeBSD system, there is a simple workaround that will allow satisfactory operation. If the FreeBSD system is the _server_, include the option `-w=1024` on the mount from the client. If the FreeBSD system is the _client_, then mount the NFS file system with the option `-r=1024`. These options may be specified using the fourth field of the [.filename]#fstab# entry on the client for automatic mounts, or by using the `-o` parameter of the man:mount[8] command for manual mounts.

It should be noted that there is a different problem, sometimes mistaken for this one, which can occur when the NFS servers and clients are on different networks.
If that is the case, make _certain_ that your routers are routing the necessary UDP information, or you will not get anywhere, no matter what else you are doing.

In the following examples, `fastws` is the host (interface) name of a high-performance workstation, and `freebox` is the host (interface) name of a FreeBSD system with a lower-performance Ethernet adapter. Also, [.filename]#/sharedfs# will be the exported NFS file system (see man:exports[5]), and [.filename]#/project# will be the mount point on the client for the exported file system. In all cases, note that additional options, such as `hard` or `soft` and `bg`, may be desirable in your application.

An example of the FreeBSD system (`freebox`) as the client, in [.filename]#/etc/fstab# on `freebox`:

[.programlisting]
....
fastws:/sharedfs /project nfs rw,-r=1024 0 0
....

As a manual mount command on `freebox`:

[source,shell]
....
# mount -t nfs -o -r=1024 fastws:/sharedfs /project
....

An example of the FreeBSD system as the server, in [.filename]#/etc/fstab# on `fastws`:

[.programlisting]
....
freebox:/sharedfs /project nfs rw,-w=1024 0 0
....

As a manual mount command on `fastws`:

[source,shell]
....
# mount -t nfs -o -w=1024 freebox:/sharedfs /project
....

Nearly any 16-bit Ethernet adapter will allow operation without the above restrictions on the read or write size.

For anyone who cares, here is what happens when the failure occurs, which also explains why it is unrecoverable. NFS typically works with a "block" size of 8 K (though it may do fragments of smaller sizes). Since the maximum Ethernet packet is around 1500 bytes, the NFS "block" gets split into multiple Ethernet packets, even though it is still a single unit to the upper-level code, and must be received, assembled, and acknowledged as a unit.
The high-performance workstations can pump out the packets which comprise the NFS unit one right after the other, just as close together as the standard allows. On the smaller, lower-capacity cards, the later packets overrun the earlier packets of the same unit before they can be transferred to the host, so the unit as a whole cannot be reconstructed or acknowledged. As a result, the workstation will time out and retransmit, but it will retransmit the entire 8 K unit, and the process will be repeated, ad infinitum.

By keeping the unit size below the Ethernet packet size limitation, we ensure that any complete Ethernet packet received can be acknowledged individually, avoiding the deadlock.

Overruns may still occur when a high-performance workstation is slamming data out to a PC system, but with the better cards, such overruns are not guaranteed on NFS "units". When an overrun occurs, the units affected will be retransmitted, and there is a fair chance that they will be received, assembled, and acknowledged.

[[network-nis]]
== Network Information System (NIS/YP)

=== What Is It?

NIS, which stands for Network Information Services, was developed by Sun Microsystems to centralize the administration of UNIX(R) (originally SunOS(TM)) systems. It has now essentially become an industry standard; all major UNIX(R)-like systems (Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, FreeBSD, etc.) support NIS.

NIS was formerly known as Yellow Pages, but because of trademark issues, Sun changed the name. The old term (and yp) is still often seen and used.

It is an RPC-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data, and to add, remove, or modify configuration data from a single location.

It is similar to the Windows NT(R) domain system.
Although the internal implementation of the two is not at all similar, the basic functionality can be compared.

=== Terms/Processes You Should Know

There are several terms and several important user processes that you will come across when attempting to implement NIS on FreeBSD, whether you are trying to create an NIS server or act as an NIS client:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term
| Description

|NIS domainname
|An NIS master server and all of its clients (including its slave servers) have an NIS domainname. Similar to a Windows NT(R) domain name, the NIS domainname does not have anything to do with DNS.

|rpcbind
|Must be running in order to enable RPC (Remote Procedure Call, the network protocol used by NIS). If rpcbind is not running, it will be impossible to run an NIS server, or to act as an NIS client.

|ypbind
|"Binds" an NIS client to its NIS server. It takes the NIS domainname from the system and, using RPC, connects to the server. ypbind is the core of client-server communication in an NIS environment; if ypbind dies on a client machine, it will not be able to access the NIS server.

|ypserv
|Should only be running on NIS servers; this is the NIS server process itself. If man:ypserv[8] dies, the server will no longer be able to respond to NIS requests (hopefully, there is a slave server to take over for it). There are some implementations of NIS (but not the FreeBSD one) that do not try to connect to another server if the server it used before dies. Often, the only thing that helps in this case is to restart the server process (or even the whole server) or the ypbind process on the client.

|rpc.yppasswdd
|Another process that should only be running on NIS master servers; this daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the NIS master server and change their passwords there.
|===

=== How Does It Work?
There are three types of hosts in an NIS environment: master servers, slave servers, and clients. Servers act as a central repository for host configuration information. Master servers hold the authoritative copy of this information, while slave servers mirror it for redundancy. The servers supply this information to the clients.

Information in many files can be shared in this manner. The [.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead.

==== Machine Types

* _A NIS master server_. This server, analogous to a Windows NT(R) primary domain controller, maintains the files used by all of the NIS clients. The [.filename]#passwd#, [.filename]#group#, and various other files used by NIS clients live on the master server.
+
[NOTE]
====
It is possible for one machine to be an NIS master server for more than one NIS domain. However, since we are discussing a relatively small-scale NIS environment here, this will not be covered.
====
* _NIS slave servers_. Similar to the Windows NT(R) backup domain controllers, NIS slave servers maintain copies of the NIS master's data files. NIS slave servers provide redundancy. They also help to balance the load of the master server: NIS clients always attach to the NIS server whose reply they get first, and this includes slave servers.
* _NIS clients_. NIS clients, like most Windows NT(R) workstations, authenticate against the NIS server (or the Windows NT(R) domain controller in the Windows NT(R) workstation case) to log on.

=== Using NIS/YP

This section will deal with setting up a sample NIS environment.

==== Planning

Let us assume that you are the administrator of a small university lab. This lab, which consists of 15 FreeBSD machines, currently has no centralized point of administration; each machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#.
These files are kept in sync with each other only through manual intervention; currently, when you add a user to the lab, you must run `adduser` on all 15 machines. Clearly, this has to change, so you have decided to convert the lab to use NIS, using two of the machines as servers.

Therefore, the configuration of the lab now looks something like:

[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Machine name
| IP address
| Machine role

|`ellington`
|`10.0.0.2`
|NIS master

|`coltrane`
|`10.0.0.3`
|NIS slave

|`basie`
|`10.0.0.4`
|Faculty workstation

|`bird`
|`10.0.0.5`
|Client machine

|`cli[1-11]`
|`10.0.0.[6-17]`
|Other client machines
|===

If you are setting up an NIS scheme for the first time, it is a good idea to think through how you want to go about it. No matter what the size of your network, there are a few decisions that need to be made.

===== Choosing an NIS Domain Name

This might not be the "domainname" that you are used to. It is more accurately called the "NIS domainname". When a client broadcasts its requests for information, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domainname as the name for a group of hosts that are related in some way.

Some organizations choose to use their Internet domainname for their NIS domainname. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domainname should be unique within your network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the "acme-art" NIS domain. For our example, we have chosen the name `test-domain`.

However, some operating systems (notably SunOS(TM)) use their Internet domainname as their NIS domainname.
If one or more machines on your network have this restriction, you _must_ use the Internet domainname as your NIS domainname.

===== Physical Server Requirements

There are several things to keep in mind when choosing a machine to use as an NIS server. One of the unfortunate things about NIS is the level of dependency the clients have on the server. If a client cannot contact the server for its NIS domain, very often the machine becomes unusable. The lack of user and group information causes most systems to block up temporarily. With this in mind, you should make sure to choose a machine that will not be prone to being rebooted frequently, or one that might be used for testing.

The NIS server should ideally be a standalone machine whose sole purpose in life is to be an NIS server. If you have a network that is not very heavily used, it is acceptable to put the NIS server on a machine running other services. However, keep in mind that if the NIS server becomes unavailable, it will adversely affect _all_ of your NIS clients.

==== NIS Servers

The canonical copy of all NIS information is stored on the NIS master server. The databases used to store the information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/[domainname]#, where [.filename]#[domainname]# is the name of the NIS domain being served. Since a single NIS server can support several domains at once, it is possible to have several such directories, one for each supported domain. Each domain will have its own independent set of maps.

NIS master and slave servers handle all NIS requests with the `ypserv` daemon. `ypserv` is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client.

===== Setting up an NIS Master Server

Setting up a master NIS server is relatively straightforward. FreeBSD comes with NIS support out-of-the-box. All you need to do is add the following lines to [.filename]#/etc/rc.conf#, and FreeBSD will do the rest for you.
[.procedure]
====
. This line will set the NIS domainname to `test-domain` at network configuration time (for example, after rebooting):
+
[.programlisting]
....
nisdomainname="test-domain"
....

. This line will tell FreeBSD to start up the NIS server processes when the networking is next brought up:
+
[.programlisting]
....
nis_server_enable="YES"
....

. This line will enable the `rpc.yppasswdd` daemon which, as mentioned above, will allow users to change their NIS password from a client machine:
+
[.programlisting]
....
nis_yppasswdd_enable="YES"
....
====

[NOTE]
====
Depending on your NIS setup, you may need to add further entries. See <<network-nis-server-is-client>>, below, for details.
====

After setting up the above entries, run `/etc/netstart` as superuser. It will set up everything for you, using the values you defined in [.filename]#/etc/rc.conf#. Finally, before initializing the NIS maps, start the ypserv daemon manually:

[source,shell]
....
# service ypserv start
....

===== Initializing the NIS Maps

The _NIS maps_ are database files that are kept in [.filename]#/var/yp#. They are generated from configuration files in the [.filename]#/etc# directory of the NIS master, with one exception: [.filename]#/etc/master.passwd#. This is for a good reason: you do not want to propagate passwords for your `root` and other administrative accounts to all the servers in the NIS domain. Therefore, before initializing the NIS maps, you should:

[source,shell]
....
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
....

You should remove all entries regarding system accounts (`bin`, `tty`, `kmem`, `games`, and so on), as well as any accounts that you do not want to be propagated to the NIS clients (for example `root` and any other UID 0 (superuser) accounts).
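As an illustration of the pruning step, the filtering of system accounts can be sketched with awk. This is a hypothetical helper, not part of the Handbook procedure: the sample file below stands in for the copied [.filename]#/var/yp/master.passwd#, and the UID cutoff of 1000 is an assumption you would adjust to your own site's conventions (you still need to review the result by hand).

```shell
# Hypothetical sketch: drop system and superuser accounts from a copy
# of master.passwd before building the NIS maps. The sample data here
# stands in for /var/yp/master.passwd.
cat > /tmp/master.passwd.sample <<'EOF'
root:*:0:0::0:0:The super-user:/root:/bin/csh
bin:*:3:7::0:0:Binaries Commands and Source:/:/sbin/nologin
jsmith:*:1001:1001::0:0:Jane Smith:/home/jsmith:/bin/sh
EOF
# master.passwd fields are colon-separated; field 3 is the UID.
# Keep only ordinary users (UID >= 1000 by this site's assumed policy),
# which also removes root and every other UID 0 account.
awk -F: '$3 >= 1000' /tmp/master.passwd.sample
```

With the sample data above, only the `jsmith` line survives the filter.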
[NOTE]
====
Make sure that [.filename]#/var/yp/master.passwd# is neither group- nor world-readable (mode 600)! Use `chmod` if appropriate.
====

When you have finished, it is time to initialize the NIS maps! FreeBSD includes a script named `ypinit` to do this for you (see its manual page for more information). Note that this script is available on most UNIX(R) operating systems, but not on all. On Digital UNIX/Compaq Tru64 UNIX it is called `ypsetup`. Because we are generating maps for an NIS master, we pass the `-m` option to `ypinit`. To generate the NIS maps, assuming you have already performed the steps above, run:

[source,shell]
....
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server   :  ellington
next host to add:  coltrane
next host to add:  ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct?  [y/n: y] y

[..output from map generation..]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

`ypinit` should have created [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. When created, this file assumes that you are operating in a single-server NIS environment with only FreeBSD machines. Since `test-domain` has a slave server as well, you must edit [.filename]#/var/yp/Makefile#:

[source,shell]
....
ellington# vi /var/yp/Makefile
....

You should comment out the line

[.programlisting]
....
NOPUSH = "True"
....

(if it is not commented out already).

===== Setting up an NIS Slave Server

Setting up an NIS slave server is even simpler than setting up the master. Log on to the slave server and edit [.filename]#/etc/rc.conf# exactly as you did before. The only difference is that we now must use the `-s` option when running `ypinit`. The `-s` option requires the name of the NIS master be passed to it as well, so our command line looks like:

[source,shell]
....
coltrane# ypinit -s ellington test-domain

Server Type: SLAVE Domain: test-domain Master: ellington

Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n]  n

Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred

coltrane has been setup as an YP slave server without any errors.
Don't forget to update map ypservers on ellington.
....

You should now have a directory called [.filename]#/var/yp/test-domain#. Copies of the NIS master server's maps should be in this directory. You will need to make sure that these stay updated. The following [.filename]#/etc/crontab# entries on your slave server should do the job:

[.programlisting]
....
20 *    *    *    *    root   /usr/libexec/ypxfr passwd.byname
21 *    *    *    *    root   /usr/libexec/ypxfr passwd.byuid
....

These two lines force the slave to keep its maps in sync with those on the master server. Although these entries are not mandatory, since the master server attempts to push any map changes to its slaves automatically, frequent updates of the password maps are recommended because other clients depend on the slave server for correct password information. This is especially important on busy networks, where map updates might not always complete.

Now, run the command `/etc/netstart` on the slave server as well; this again starts the NIS server.

==== NIS Clients

An NIS client establishes a binding to a particular NIS server using the `ypbind` daemon.
`ypbind` checks the system's default domain (as set by the `domainname` command), and begins broadcasting RPC requests on the local network. These requests specify the name of the domain for which `ypbind` is attempting to establish a binding. If a server that has been configured to serve the requested domain receives one of the broadcasts, it will respond to `ypbind`, which will record the server's address. If there are several servers available (a master and several slaves, for example), `ypbind` will use the address of the first one to respond. From that point on, the client system will direct all of its NIS requests to that server. `ypbind` will occasionally "ping" the server to make sure it is still up and running. If it fails to receive a reply within a reasonable amount of time, `ypbind` will mark the domain as unbound and begin broadcasting again in the hope of locating another server.

===== Setting Up an NIS Client

Setting up a FreeBSD machine to be an NIS client is fairly straightforward.

[.procedure]
====
. Edit [.filename]#/etc/rc.conf# and add the following lines in order to set the NIS domainname and start `ypbind` at network startup:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....
+
. To import all possible password entries from the NIS server, use `vipw` to remove all user accounts from [.filename]#/etc/master.passwd# and add the following line to the end of the file:
+
[.programlisting]
....
+:::::::::
....
+
[NOTE]
======
This line will afford anyone with a valid account in the NIS server's password maps an account. There are many ways to configure your NIS client by changing this line. See the <<network-netgroups>> section below for more information. For more detailed reading, see the O'Reilly book `Managing NFS and NIS`.
======
+
[NOTE]
======
You should keep at least one local account (i.e., not imported via NIS) in [.filename]#/etc/master.passwd#, and this account should also be a member of the group `wheel`.
If there is a problem with NIS, this account can be used to log in remotely, become `root`, and fix things.
======
+
. To import all possible group entries from the NIS server, add this line to [.filename]#/etc/group#:
+
[.programlisting]
....
+:*::
....
====

To start the NIS client immediately, execute the following commands as superuser:

[source,shell]
....
# /etc/netstart
# service ypbind start
....

After completing these steps, you should be able to run `ypcat passwd` and see the NIS server's passwd map.

=== NIS Security

In general, any remote user can issue an RPC request to man:ypserv[8] and retrieve the contents of your NIS maps, provided the remote user knows your domainname. To prevent such unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. At startup, man:ypserv[8] will attempt to load the securenets information from a file called [.filename]#/var/yp/securenets#.

[NOTE]
====
This path varies depending on the path specified with the `-p` option. This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with "#" are considered to be comments. A sample securenets file might look like this:
====

[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1     255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0      255.255.240.0
....

If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the [.filename]#/var/yp/securenets# file does not exist, `ypserv` will allow connections from any host.

The `ypserv` program also has support for Wietse Venema's TCP Wrapper package.
This allows the administrator to use the TCP Wrapper configuration files for access control instead of [.filename]#/var/yp/securenets#.

[NOTE]
====
While both of these access control mechanisms provide some security, they are both vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at your firewall.

Servers using [.filename]#/var/yp/securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts and/or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of the client systems in question or the abandonment of [.filename]#/var/yp/securenets#.

Using [.filename]#/var/yp/securenets# on a server with such an archaic implementation of TCP/IP is a really bad idea and will lead to loss of NIS functionality for large parts of your network.

The use of the TCP Wrapper package increases the latency of your NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks or with slow NIS servers. If one or more of your client systems suffers from these symptoms, you should convert the client systems in question into NIS slave servers and force them to bind to themselves.
====

=== Barring Some Users from Logging On

In our lab, there is a machine `basie` that is supposed to be used only by faculty. We do not want to take this machine out of the NIS domain, yet the [.filename]#passwd# file on the master NIS server contains accounts for both faculty and students. What can we do?

There is a way to bar specific users from logging on to a machine, even if they are present in the NIS database.
To do this, add `-username`, in the same format as the other entries, to the end of [.filename]#/etc/master.passwd# on the client machine, where _username_ is the username of the user you wish to bar from logging in. The line with the barred user must be before the NIS `+` line that imports the NIS users. It is recommended to use `vipw` for this, since `vipw` will sanity-check your changes to [.filename]#/etc/master.passwd# and automatically rebuild the password database when you finish editing.

For example, if we wanted to bar user `bill` from logging on to `basie`, we would:

[source,shell]
....
basie# vipw
[add -bill::::::::: to the end, exit]
vipw: rebuilding the database...
vipw: done

basie# cat /etc/master.passwd

root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin
operator:*:2:5::0:0:System &:/:/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin
-bill
+:::::::::

basie#
....

[[network-netgroups]]
=== Using Netgroups

The method shown in the previous section works reasonably well if you need special rules for a very small number of users and/or machines.
On larger networks, you _will_ forget to bar some users from logging on to sensitive machines, or you may even have to modify each machine separately, thus losing the main benefit of NIS: _centralized_ administration.

The NIS developers' solution to this problem is called _netgroups_. Their purpose and semantics can be compared to the normal groups used by UNIX(R) file systems. The main differences are the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups.

Netgroups were developed to handle large, complex networks with hundreds of users and machines. On one hand, this is a Good Thing if you are forced to deal with such a situation. On the other hand, this complexity makes it almost impossible to explain netgroups with really simple examples. The example used in the remainder of this section demonstrates this problem.

Let us assume that your successful introduction of NIS in your laboratory caught your superiors' interest. Your next job is to extend your NIS domain to cover some of the other machines on campus. The two tables below contain the names of the new users and new machines, with brief descriptions of them.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| User Name(s)
| Description

|`alpha`, `beta`
|Normal employees of the IT department

|`charlie`, `delta`
|The new apprentices of the IT department

|`echo`, `foxtrott`, `golf`, ...
|Other ordinary employees

|`able`, `baker`, ...
|The current interns
|===

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Machine Name(s)
| Description

|`war`, `death`, `famine`, `pollution`
|Your most important servers. Only the IT employees are allowed to log on to these machines.

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|Less important servers. All members of the IT department are allowed to log on to these machines.

|`one`, `two`, `three`, `four`, ...
|Ordinary workstations.
Only the _real_ employees are allowed to log on to these machines.

|`trashcan`
|A very old machine without any critical data. Even the interns are allowed to use this box.
|===

If you tried to implement these restrictions by blocking each user separately, you would have to add one `-user` line to the [.filename]#passwd# file of each machine for every user who is not allowed to log on to it. If you forget just one entry, you could be in trouble. Handling this situation with netgroups offers several advantages. Each user need not be handled separately; you assign a user to one or more netgroups and allow or forbid logins for all members of a netgroup. If you add a new machine, you only have to define login restrictions for netgroups. If a new user is added, you only have to add the user to one or more netgroups. Those changes are independent of each other: no more "for each combination of user and machine do..." If your NIS setup is planned carefully, you only have to modify exactly one central configuration file to grant or deny access to machines.

The first step is the initialization of the NIS netgroup map. FreeBSD's man:ypinit[8] does not create this map by default, but its NIS implementation will support it once it has been created. To create an empty map, simply type

[source,shell]
....
ellington# vi /var/yp/netgroup
....

and start adding content. For our example, we need at least four netgroups: IT employees, IT apprentices, normal employees, and interns.

[.programlisting]
....
IT_EMP  (,alpha,test-domain)    (,beta,test-domain)
IT_APP  (,charlie,test-domain)  (,delta,test-domain)
USERS   (,echo,test-domain)     (,foxtrott,test-domain) \
        (,golf,test-domain)
INTERNS (,able,test-domain)     (,baker,test-domain)
....

`IT_EMP`, `IT_APP`, etc. are the names of the netgroups. Each bracketed group adds one or more user accounts to it.
The three fields inside a group are:

. The name of the host(s) where the following items are valid. If a hostname is not specified, the entry is valid on all hosts. If a hostname is specified, you will run into obscure, head-spinning oddities.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account. You can import accounts from other NIS domains into your netgroup if you are one of the unlucky fellows with more than one NIS domain.

Each of these fields can contain wildcards. See the man:netgroup[5] manual page for details.

[NOTE]
====
Netgroup names longer than 8 characters should not be used, especially if you have machines running other operating systems within your NIS domain. The names are case sensitive; using capital letters for your netgroup names is an easy way to distinguish between user, machine, and netgroup names.

Some NIS clients (other than FreeBSD) cannot handle netgroups with a large number of entries. For example, some older versions of SunOS(TM) start to cause problems if a netgroup contains more than 15 _entries_. You can circumvent this limit by creating several sub-netgroups with 15 users or fewer, and a real netgroup consisting of the sub-netgroups:

[.programlisting]
....
BIGGRP1  (,joe1,domain)  (,joe2,domain)  (,joe3,domain) [...]
BIGGRP2  (,joe16,domain)  (,joe17,domain) [...]
BIGGRP3  (,joe31,domain)  (,joe32,domain)
BIGGROUP  BIGGRP1 BIGGRP2 BIGGRP3
....

You can repeat this process if you need more than 225 users within a single netgroup.
====

Activating and distributing your new NIS map is easy:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost# and [.filename]#netgroup.byuser#.
Use man:ypcat[1] to check whether your new NIS maps are available:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command will produce no output if you have not created host-specific netgroups. The third command can be used to get the list of netgroups for a given user.

The client setup is quite simple. To configure the server `war`, start man:vipw[8] and replace the line

[.programlisting]
....
+:::::::::
....

with

[.programlisting]
....
+@IT_EMP:::::::::
....

Now, only the data for the users defined in the netgroup `IT_EMP` is imported into `war`'s password database, and only these users are allowed to log in to this machine.

Unfortunately, this limitation also applies to the `~` function of the shell and all routines converting between user names and numerical user IDs. In other words, `cd ~user` will not work, `ls -l` will show the numerical ID instead of the username, and `find . -user joe -print` will fail with `No such user`. To fix this, you will have to import all user entries _without allowing them to log on to your servers_. This can be achieved by adding another line to [.filename]#/etc/master.passwd#. This line should contain `+:::::::::/sbin/nologin`, meaning "import all entries, but replace the shell of the imported entries with [.filename]#/sbin/nologin#". You can replace any field in the `passwd` entry by placing a default value in [.filename]#/etc/master.passwd#.

[WARNING]
====
Make sure that the line `+:::::::::/sbin/nologin` is placed after `+@IT_EMP:::::::::`.
Otherwise, all user accounts imported from NIS will have [.filename]#/sbin/nologin# as their login shell.
====

After this change, you will only have to modify one NIS map when a new employee joins the IT department. You could use a similar approach for the less important servers by replacing the old `+:::::::::` line in their local copies of [.filename]#/etc/master.passwd# with something like this:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/sbin/nologin
....

The corresponding lines for the normal workstations would be:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/sbin/nologin
....

And everything would be fine until there is a policy change a few weeks later: the IT department starts hiring interns. The IT interns are allowed to use the normal workstations and the less important servers; the IT apprentices are now allowed to log on to the main servers. You add a new netgroup `IT_INTERN`, add the new IT interns to this netgroup, and start changing the configuration on each and every machine... As the old saying goes: "Errors in centralized planning lead to global mess."

NIS's ability to create netgroups from other netgroups can be used to prevent situations like this. One possibility is the creation of role-based netgroups. For example, you could create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers, and a third netgroup called `USERBOX` for the normal workstations. Each of these netgroups contains the netgroups that are allowed to log on to that class of machine. The new entries for your NIS netgroup map would look like this:

[.programlisting]
....
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP  ITINTERN
USERBOX   IT_EMP  ITINTERN  USERS
....

This method of defining login restrictions works reasonably well when you can define groups of machines with identical restrictions.
Unfortunately, this is the exception rather than the rule. Most of the time, you will need the ability to define login restrictions on a per-machine basis.

Machine-specific netgroup definitions are the second possibility for dealing with the policy changes outlined above. In this scenario, the [.filename]#/etc/master.passwd# of each machine contains two lines starting with "+". The first of them adds a netgroup with the accounts allowed to log on to this machine; the second adds all other accounts with [.filename]#/sbin/nologin# as their shell. It is a good idea to use the "ALL-CAPS" version of the machine name as the name of the netgroup. In other words, the lines should look like this:

[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/sbin/nologin
....

Once you have completed this task for all your machines, you will never have to modify the local versions of [.filename]#/etc/master.passwd# again. All further changes can be handled by modifying the NIS map. Here is an example of a possible netgroup map for this scenario, with some additional goodies:

[.programlisting]
....
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1   DEPT2     DEPT3
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP    ITINTERN
USERBOX   IT_EMP  ITINTERN  USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus machine
SECURITY  IT_EMP  (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV  (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

If you are using some kind of database to manage your user accounts, you should be able to create the first part of the map with your database's report tools. This way, new users will automatically have access to the machines.

One last word of caution: it may not always be advisable to use machine-based netgroups. If you are deploying a couple of dozen or even hundreds of identical machines for student labs, you should use role-based netgroups instead of machine-based netgroups to keep the size of the NIS map within reasonable limits.

=== Important Things to Remember

There are still a couple of things that you will need to do differently now that you are in an NIS environment.

* Every time you wish to add a user to the lab, you must add it to the master NIS server _only_, and _you must remember to rebuild the NIS maps_.
If this is forgotten, the new user will not be able to log in anywhere except on the NIS master. For example, to add a new user `jsmith` to the lab:
+
[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....
+
`adduser jsmith` can also be run instead of `pw useradd jsmith`.
* _Keep the administration accounts out of the NIS maps_. You do not want to be propagating administrative accounts and passwords to machines that will have users who should not have access to those accounts.
* _Keep the NIS master and slaves secure, and minimize their downtime_. If somebody either hacks or simply turns off these machines, many people will be unable to log in to the lab machines.
+
This is the chief weakness of any centralized administration system. If you do not protect your NIS servers, you will have a lot of angry users!

=== NIS v1 Compatibility

FreeBSD's ypserv has some support for serving NIS v1 clients. FreeBSD's NIS implementation only uses the NIS v2 protocol; however, other implementations include support for the v1 protocol for backwards compatibility with older systems. The ypbind daemons supplied with these systems will try to establish a binding to an NIS v1 server even though they may never actually need it (and they may persist in broadcasting in search of one even after they receive a response from a v2 server). Note that while requests from normal clients are supported, this version of ypserv does not handle v1 map transfer requests; consequently, it cannot be used as a master or slave in conjunction with older NIS servers that only support the v1 protocol. Fortunately, there probably are not any such servers still in use today.

[[network-nis-server-is-client]]
=== NIS Servers That Are Also NIS Clients

Care must be taken when running ypserv in a multi-server domain where the server machines are also NIS clients.
It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable, and the failure mode is still present since the servers might bind to each other all over again.

A host can be forced to bind to a particular server by running `ypbind` with the `-S` flag. To avoid doing this manually each time the NIS server is rebooted, add the following lines to [.filename]#/etc/rc.conf#:

[.programlisting]
....
nis_client_enable="YES" # run client stuff as well
nis_client_flags="-S NIS domain,server"
....

See the man:ypbind[8] manual page for further information.

=== Password Formats

One of the most common issues encountered when setting up NIS is password format incompatibility. If the NIS server uses DES encrypted passwords, it will only support clients that are also using DES. For example, if there are Solaris(TM) NIS clients in the network, then DES encrypted passwords will almost certainly have to be used.

To check which format the servers and clients are using, look at [.filename]#/etc/login.conf#. If the host is configured to use DES encrypted passwords, then the `default` class will contain an entry like this:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

Other possible values for the `passwd_format` capability include `blf` and `md5` (for Blowfish and MD5 encrypted passwords, respectively).

If changes have been made to [.filename]#/etc/login.conf#, the login capability database must be rebuilt,
which is done by running the following command as `root`:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords already in [.filename]#/etc/master.passwd# will not be updated until a user changes their password for the first time _after_ the login capability database is rebuilt.
====

Next, in order to ensure that passwords are encrypted with the chosen format, check that `crypt_default` in [.filename]#/etc/auth.conf# gives precedence to the chosen password format by placing it first in the list. For example, when using DES encrypted passwords, the entry would be:

[.programlisting]
....
crypt_default = des blf md5
....

Having followed the above steps on each of the FreeBSD based NIS servers and clients, you can be sure that they all agree on which password format is used within the network. If there is trouble authenticating on an NIS client, this is a pretty good place to start looking for possible problems. Remember: to deploy an NIS server for a heterogeneous network, DES will probably have to be used on all systems because it is the lowest common standard.

[[network-dhcp]]
== Automatic Network Configuration (DHCP)

=== What Is DHCP?

DHCP, the Dynamic Host Configuration Protocol, describes the means by which a system can connect to a network and obtain the necessary information for communication upon that network. FreeBSD uses the OpenBSD `dhclient` taken from OpenBSD 3.7. All information here regarding `dhclient` applies to both the ISC and OpenBSD DHCP clients. The DHCP server is the one included in the ISC distribution.

=== What This Section Covers

This section describes both the client-side components of the ISC and OpenBSD DHCP clients and the server-side components of the ISC DHCP system. The client-side program, `dhclient`, comes integrated within FreeBSD, and the server-side portion is available from the package:net/isc-dhcp42-server[] port.
The man:dhclient[8], man:dhcp-options[5], and man:dhclient.conf[5] manual pages, in addition to the references below, are useful resources.

=== How It Works

When `dhclient`, the DHCP client, is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests are sent from UDP port 68 to the server's UDP port 67. The server responds with an IP address and other relevant network configuration information such as the netmask, router, and DNS server addresses. All of this information comes in the form of a DHCP "lease" and is only valid for a certain amount of time (configured by the DHCP server maintainer). This way, stale IP addresses of clients no longer connected to the network can be automatically reclaimed.

DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in man:dhcp-options[5].

=== FreeBSD Integration

FreeBSD fully integrates the OpenBSD DHCP client, `dhclient`. DHCP client support is provided within both the installer and the base system, obviating the need for detailed knowledge of network configurations on any network that runs a DHCP server. sysinstall includes support for DHCP. When configuring a network interface within sysinstall, the second question asked is: "Do you want to try DHCP configuration of the interface?". Answering affirmatively will execute `dhclient`, and if successful, will fill in the network configuration information automatically.

There are two things that must be done to have the system use DHCP upon startup:

* The [.filename]#bpf# device must be compiled into the kernel. To do this, add `device bpf` to the kernel configuration file and rebuild the kernel. For more information about building kernels, see crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].
+
The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel that is supplied with FreeBSD, so a custom kernel does not need to be created in order to get DHCP working.
+
[NOTE]
====
Those who are particularly security conscious should note that [.filename]#bpf# is also the device that allows packet sniffers to work correctly (although such programs still need privileged access). [.filename]#bpf# _is_ required to use DHCP, but those who are very sensitive about security probably should not add [.filename]#bpf# to the kernel solely in the expectation of using DHCP at some point in the future.
====
* By default, FreeBSD's DHCP configuration runs in the background, or _asynchronously_. Other startup scripts continue to run while DHCP completes, speeding up system startup.
+
Background DHCP works well when the DHCP server responds quickly to requests and the DHCP configuration process goes quickly. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has finished, they will fail. Using DHCP in _synchronous_ mode prevents this problem, pausing startup until the DHCP configuration has completed.
+
To connect to a DHCP server in the background while other startup continues (asynchronous mode), use the "`DHCP`" value in [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
ifconfig_fxp0="DHCP"
....
+
To pause startup while DHCP completes, use synchronous mode with the "`SYNCDHCP`" value:
+
[.programlisting]
....
ifconfig_fxp0="SYNCDHCP"
....
+
[NOTE]
====
Replace `fxp0` in these examples with the name of the interface to be dynamically configured, as described in crossref:config[config-network-setup,Setting Up Network Interface Cards].
====
+
If `dhclient` is installed in a different location, or if additional flags need to be passed to `dhclient`, also include the following lines (editing as necessary):
+
[.programlisting]
....
dhclient_program="/sbin/dhclient"
dhclient_flags=""
....
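Site-specific client behavior can also be set in [.filename]#/etc/dhclient.conf#, which otherwise usually contains only comments. As an illustrative sketch (the interface name `fxp0` and the option list here are examples chosen for this illustration, not defaults shipped with FreeBSD), a minimal file might look like:

[.programlisting]
....
# Prefer a local resolver over the server-supplied list, and
# limit the options requested from the DHCP server.
interface "fxp0" {
    prepend domain-name-servers 127.0.0.1;
    request subnet-mask, broadcast-address, routers,
        domain-name-servers, domain-name;
}
....

See man:dhclient.conf[5] for the full statement syntax.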
The DHCP server, dhcpd, is included as part of the package:net/isc-dhcp42-server[] port in the Ports Collection. This port contains the ISC DHCP server and its documentation.

=== Files

* [.filename]#/etc/dhclient.conf#
+
`dhclient` requires a configuration file, [.filename]#/etc/dhclient.conf#. Typically the file contains only comments, the defaults being reasonably sane. This configuration file is described by the man:dhclient.conf[5] manual page.
* [.filename]#/sbin/dhclient#
+
`dhclient` is statically linked and resides in [.filename]#/sbin#. The man:dhclient[8] manual page gives more information about `dhclient`.
* [.filename]#/sbin/dhclient-script#
+
`dhclient-script` is the FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any user modification to function properly.
* [.filename]#/var/db/dhclient.leases.interface#
+
The DHCP client keeps a database of valid leases in this file, which is written as a log. man:dhclient.leases[5] gives a slightly longer description.

=== Further Reading

The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Additional resources are available at http://www.dhcp.org/[http://www.dhcp.org/].

[[network-dhcp-server]]
=== Installing and Configuring a DHCP Server

==== What This Section Covers

This section provides information on how to configure a FreeBSD system to act as a DHCP server using the ISC (Internet Systems Consortium) implementation of the DHCP server. The server is not provided as part of FreeBSD, so the package:net/isc-dhcp42-server[] port must be installed to provide this service. See crossref:ports[ports,Installing Applications: Packages and Ports] for more information on using the Ports Collection.
==== DHCP Server Installation

In order to configure a FreeBSD system as a DHCP server, the man:bpf[4] device must be compiled into the kernel. To do this, add `device bpf` to the kernel configuration file and rebuild the kernel. For more information about building kernels, see crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].

The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel that is supplied with FreeBSD, so a custom kernel does not need to be created in order to get DHCP working.

[NOTE]
====
Those who are particularly security conscious should note that [.filename]#bpf# is also the device that allows packet sniffers to work correctly (although such programs still need privileged access). [.filename]#bpf# _is_ required to use DHCP, but those who are very sensitive about security probably should not add [.filename]#bpf# to the kernel solely in the expectation of using DHCP at some point in the future.
====

The next thing to do is edit the sample [.filename]#dhcpd.conf# which was installed with the package:net/isc-dhcp42-server[] port. By default, this is [.filename]#/usr/local/etc/dhcpd.conf.sample#; copy it to [.filename]#/usr/local/etc/dhcpd.conf# before making changes.

==== Configuring the DHCP Server

[.filename]#dhcpd.conf# is comprised of declarations regarding subnets and hosts, and is perhaps most easily explained using an example:

[.programlisting]
....
option domain-name "example.com";<.>
option domain-name-servers 192.168.4.100;<.>
option subnet-mask 255.255.255.0;<.>

default-lease-time 3600;<.>
max-lease-time 86400;<.>
ddns-update-style none;<.>

subnet 192.168.4.0 netmask 255.255.255.0 {
  range 192.168.4.129 192.168.4.254;<.>
  option routers 192.168.4.1;<.>
}

host mailhost {
  hardware ethernet 02:03:04:05:06:07;<.>
  fixed-address mailhost.example.com;<.>
}
....

<.> This option specifies the domain that will be provided to clients as the default search domain.
See man:resolv.conf[5] for more information on what this means.
<.> This option specifies a comma separated list of DNS servers that the client should use.
<.> The netmask that will be provided to clients.
<.> A client may request a specific length of time that a lease will be valid. Otherwise the server will assign a lease with this expiry value (in seconds).
<.> This is the maximum length of time that the server will lease for. Should a client request a longer lease, a lease will be issued, although it will only be valid for `max-lease-time` seconds.
<.> This option specifies whether the DHCP server should attempt to update DNS when a lease is accepted or released. In the ISC implementation, this option is _required_.
<.> This denotes which IP addresses should be used in the pool reserved for allocating to clients. IP addresses between, and including, the ones stated are handed out to clients.
<.> Declares the default gateway that will be provided to clients.
<.> The hardware MAC address of a host (so that the DHCP server can recognize a host when it makes a request).
<.> Specifies that the host should always be given the same IP address. Note that using a host name is correct here, since the DHCP server will resolve the host name itself before returning the lease information.

Once the [.filename]#dhcpd.conf# has been written, enable the DHCP server in [.filename]#/etc/rc.conf# by adding:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Replace the `dc0` interface name with the name of the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests.

Then, start the server by issuing the following command:

[source,shell]
....
# service isc-dhcpd start
....

Should any changes to the configuration of the server ever be needed, it is important to note that sending a `SIGHUP` signal to dhcpd does _not_ result in the configuration being reloaded, as it does with most daemons.
A `SIGTERM` signal must be sent to stop the process; it can then be restarted using the command above.

==== Files

* [.filename]#/usr/local/sbin/dhcpd#
+
dhcpd is statically linked and resides in [.filename]#/usr/local/sbin#. The man:dhcpd[8] manual page installed with the port gives more information about dhcpd.
* [.filename]#/usr/local/etc/dhcpd.conf#
+
dhcpd requires a configuration file, [.filename]#/usr/local/etc/dhcpd.conf#, before it will start providing service to clients. This file needs to contain all the information that should be provided to clients that are being serviced, along with information regarding the operation of the server. This configuration file is described by the man:dhcpd.conf[5] manual page installed by the port.
* [.filename]#/var/db/dhcpd.leases#
+
The DHCP server keeps a database of leases it has issued in this file, which is written as a log. The man:dhcpd.leases[5] manual page, installed by the port, gives a slightly longer description.
* [.filename]#/usr/local/sbin/dhcrelay#
+
dhcrelay is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. If this functionality is required, install the package:net/isc-dhcp42-relay[] port. The man:dhcrelay[8] manual page provided with the port contains more detail.

[[network-dns]]
== Domain Name System (DNS)

=== Overview

FreeBSD utilizes, by default, a version of BIND (Berkeley Internet Name Domain), which is the most common implementation of the DNS protocol. DNS is the protocol through which names are mapped to IP addresses, and vice versa. For example, a query for `www.FreeBSD.org` will receive a reply with the IP address of The FreeBSD Project's web server, whereas a query for `ftp.FreeBSD.org` will return the IP address of the corresponding FTP machine. Likewise, the opposite can happen: a query for an IP address can resolve its hostname. Running a name server is not necessary in order to perform DNS lookups on a system.
FreeBSD currently comes with the BIND9 DNS server software. Our installation provides enhanced security features, a new file system layout, and automated man:chroot[8] configuration. DNS is coordinated across the Internet through a somewhat complex system of authoritative root servers, Top Level Domain (TLD) servers, and other smaller-scale name servers which host and cache individual domain information. BIND is currently maintained by the Internet Systems Consortium http://www.isc.org/[http://www.isc.org/].

=== Terminology

To understand this document, some terms related to DNS must be understood.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term
| Definition

|Forward DNS
|Mapping of hostnames to IP addresses.

|Origin
|Refers to the domain covered in a particular zone file.

|named, BIND
|Common names for the BIND name server package within FreeBSD.

|Resolver
|A system process through which a machine queries a name server for zone information.

|Reverse DNS
|Mapping of IP addresses to hostnames.

|Root zone
|The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory.

|Zone
|An individual domain, subdomain, or portion of the DNS administered by the same authority.
|===

Examples of zones:

* `.` is commonly referred to in documentation as the root zone.
* `org.` is a Top Level Domain (TLD) under the root zone.
* `example.org.` is a zone under the `org.` TLD.
* `1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall under the `192.168.1.*` IP space.

As one can see, the more specific part of a hostname appears to its left. For example, `example.org.` is more specific than `org.`, as `org.` is more specific than the root zone. The layout of each part of a hostname is much like a file system: the [.filename]#/dev# directory falls within the root, and so on.
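The way each label to the left narrows the zone can be sketched with a short man:sh[1] loop that strips one leading label at a time from a fully qualified name (a toy illustration of the hierarchy, not anything BIND itself does):

```shell
# Walk a fully qualified domain name from its most specific
# label toward the root, printing each enclosing zone.
fqdn="www.example.org."
while [ -n "$fqdn" ]; do
    echo "zone: $fqdn"
    fqdn="${fqdn#*.}"   # drop the leftmost label
done
```

Each iteration prints a less specific zone: `www.example.org.`, then `example.org.`, then `org.`.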
=== Reasons to Run a Name Server

Name servers generally come in two forms: an authoritative name server, and a caching name server.

An authoritative name server is needed when:

* one wants to serve DNS information to the world, replying authoritatively to queries.
* a domain, such as `example.org`, is registered and IP addresses need to be assigned to hostnames under it.
* an IP address block requires reverse DNS entries (IP to hostname).
* a backup or second name server, called a slave, will reply to queries.

A caching name server is needed when:

* a local DNS server may cache and respond more quickly than querying an outside name server.

When one queries for `www.FreeBSD.org`, the resolver usually queries the uplink ISP's name server and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Additional queries will be answered by the caching name server without having to go outside the local network again.

=== How It Works

In FreeBSD, the BIND daemon is called named.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File
| Description

|man:named[8]
|The BIND daemon.

|man:rndc[8]
|Name server control utility.

|[.filename]#/etc/namedb#
|Directory where BIND zone information resides.

|[.filename]#/etc/namedb/named.conf#
|Configuration file of the daemon.
|===

Depending on how a given zone is configured on the server, the files related to that zone can be found in the [.filename]#master#, [.filename]#slave#, or [.filename]#dynamic# subdirectories of the [.filename]#/etc/namedb# directory. These files contain the DNS information that will be given out by the name server in response to queries.

=== Starting BIND

Since BIND is installed by default, configuring it is relatively simple.
The default named configuration is that of a basic resolver running in a man:chroot[8] environment, restricted to listening on the local IPv4 loopback address (127.0.0.1). To start the server with this configuration, issue the following command:

[source,shell]
....
# service named onestart
....

To ensure the named daemon is started at boot each time, put the following line into [.filename]#/etc/rc.conf#:

[.programlisting]
....
named_enable="YES"
....

There are, of course, many configuration options for [.filename]#/etc/namedb/named.conf# that are beyond the scope of this document. Those interested in the startup options for named on FreeBSD should take a look at the `named_*` flags in [.filename]#/etc/defaults/rc.conf# and consult the man:rc.conf[5] manual page. The crossref:config[configtuning-rcd,Using rc(8) with FreeBSD] section is also a good read.

=== Configuration Files

Configuration files for named currently reside in the [.filename]#/etc/namedb# directory and will need modification before use unless all that is needed is a simple resolver. This is where most of the configuration will be performed.

==== [.filename]#/etc/namedb/named.conf#

[.programlisting]
....
// $FreeBSD$
//
// Refer to the named.conf(5) and named(8) man pages, and the documentation
// in /usr/shared/doc/bind9 for more details.
//
// If you are going to set up an authoritative server, make sure you
// understand the hairy details of how DNS works.  Even with
// simple mistakes, you can break connectivity for affected parties,
// or cause huge amounts of useless Internet traffic.

options {
	// All file and path names are relative to the chroot directory,
	// if any, and should be fully qualified.
	directory	"/etc/namedb/working";
	pid-file	"/var/run/named/pid";
	dump-file	"/var/dump/named_dump.db";
	statistics-file	"/var/stats/named.stats";

// If named is being used only as a local resolver, this is a safe default.
// For named to be accessible to the network, comment this option, specify
// the proper IP address, or delete this option.
	listen-on	{ 127.0.0.1; };

// If you have IPv6 enabled on this system, uncomment this option for
// use as a local resolver.  To give access to the network, specify
// an IPv6 address, or the keyword "any".
//	listen-on-v6	{ ::1; };

// These zones are already covered by the empty zones listed below.
// If you remove the related empty zones below, comment these lines out.
	disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
	disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
	disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";

// If you've got a DNS server around at your upstream provider, enter
// its IP address here, and enable the line below.  This will make you
// benefit from its cache, thus reduce overall DNS traffic in the Internet.
/*
	forwarders {
		127.0.0.1;
	};
*/

// If the 'forwarders' clause is not empty the default is to 'forward first'
// which will fall back to sending a query from your local server if the name
// servers in 'forwarders' do not have the answer.  Alternatively you can
// force your name server to never initiate queries of its own by enabling the
// following line:
//	forward only;

// If you wish to have forwarding configured automatically based on
// the entries in /etc/resolv.conf, uncomment the following line and
// set named_auto_forward=yes in /etc/rc.conf.  You can also enable
// named_auto_forward_only (the effect of which is described above).
//	include "/etc/namedb/auto_forward.conf";
....

As the comments state, to benefit from an upstream cache, `forwarders` can be enabled here. Under normal circumstances, a name server will recursively query a number of name servers on the Internet until it finds the answer it is looking for.
Having this option enabled will have it query the upstream name server (or the name server provided) first, taking advantage of that server's cache. If the upstream name server in question is a heavily trafficked, fast name server, enabling forwarding may be worthwhile.

[WARNING]
====
`127.0.0.1` will _not_ work here. Change this IP address to a name server at your uplink.
====

[.programlisting]
....
	/*
	   Modern versions of BIND use a random UDP port for each outgoing
	   query by default in order to dramatically reduce the possibility
	   of cache poisoning.  All users are strongly encouraged to utilize
	   this feature, and to configure their firewalls to accommodate it.

	   AS A LAST RESORT in order to get around a restrictive firewall
	   policy you can try enabling the option below.  Use of this option
	   will significantly reduce your ability to withstand cache poisoning
	   attacks, and should be avoided if at all possible.

	   Replace NNNNN in the example with a number between 49160 and 65530.
	*/
	// query-source address * port NNNNN;
};

// If you enable a local name server, don't forget to enter 127.0.0.1
// first in your /etc/resolv.conf so this server will be queried.
// Also, make sure to enable it in /etc/rc.conf.

// The traditional root hints mechanism. Use this, OR the slave zones below.
zone "." { type hint; file "/etc/namedb/named.root"; };

/*	Slaving the following zones from the root name servers has some
	significant advantages:
	1. Faster local resolution for your users
	2. No spurious traffic will be sent from your network to the roots
	3. Greater resilience to any potential root server failure/DDoS

	On the other hand, this method requires more monitoring than the
	hints file to be sure that an unexpected failure mode has not
	incapacitated your server.  Name servers that are serving a lot
	of clients will benefit more from this approach than individual
	hosts.  Use with caution.
To use this mechanism, uncomment the entries below, and comment the hint zone above. As documented at http://dns.icann.org/services/axfr/ these zones: "." (the root), ARPA, IN-ADDR.ARPA, IP6.ARPA, and ROOT-SERVERS.NET are availble for AXFR from these servers on IPv4 and IPv6: xfr.lax.dns.icann.org, xfr.cjr.dns.icann.org */ /* zone "." { type slave; file "/etc/namedb/slave/root.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. }; notify no; }; zone "arpa" { type slave; file "/etc/namedb/slave/arpa.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. }; notify no; }; */ /* Serving the following zones locally will prevent any queries for these zones leaving your network and going to the root name servers. This has two significant advantages: 1. Faster local resolution for your users 2. No spurious traffic will be sent from your network to the roots */ // RFCs 1912 and 5735 (and BCP 32 for localhost) zone "localhost" { type master; file "/etc/namedb/master/localhost-forward.db"; }; zone "127.in-addr.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; }; zone "255.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // RFC 1912-style zone for IPv6 localhost address zone "0.ip6.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; }; // "This" Network (RFCs 1912 and 5735) zone "0.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Private Use Networks (RFCs 1918 and 5735) zone "10.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "16.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "17.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "18.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "19.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "20.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "21.172.in-addr.arpa" { type master; file 
"/etc/namedb/master/empty.db"; }; zone "22.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "23.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "24.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "25.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "26.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "27.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "28.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "29.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "30.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "31.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "168.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Link-local/APIPA (RFCs 3927 and 5735) zone "254.169.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IETF protocol assignments (RFCs 5735 and 5736) zone "0.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // TEST-NET-[1-3] for Documentation (RFCs 5735 and 5737) zone "2.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "100.51.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "113.0.203.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Range for Documentation (RFC 3849) zone "8.b.d.0.1.0.0.2.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Domain Names for Documentation and Testing (BCP 32) zone "test" { type master; file "/etc/namedb/master/empty.db"; }; zone "example" { type master; file "/etc/namedb/master/empty.db"; }; zone "invalid" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.com" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.net" { type master; file 
"/etc/namedb/master/empty.db"; }; zone "example.org" { type master; file "/etc/namedb/master/empty.db"; }; // Router Benchmark Testing (RFCs 2544 and 5735) zone "18.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "19.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IANA Reserved - Old Class E Space (RFC 5735) zone "240.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "241.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "242.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "243.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "244.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "245.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "246.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "247.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "248.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "249.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "250.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "251.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "252.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "253.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "254.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Unassigned Addresses (RFC 4291) zone "1.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.ip6.arpa" { type master; file 
"/etc/namedb/master/empty.db"; }; zone "8.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "c.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "e.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "0.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "1.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "2.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "8.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "0.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "1.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "2.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.e.f.ip6.arpa" { type master; file 
"/etc/namedb/master/empty.db"; }; // IPv6 ULA (RFC 4193) zone "c.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Link Local (RFC 4291) zone "8.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Deprecated Site-Local Addresses (RFC 3879) zone "c.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "e.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "f.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IP6.INT is Deprecated (RFC 4159) zone "ip6.int" { type master; file "/etc/namedb/master/empty.db"; }; // NB: Do not use the IP addresses below, they are faked, and only // serve demonstration/documentation purposes! // // Example slave zone config entries. It can be convenient to become // a slave at least for the zone your own domain is in. Ask // your network administrator for the IP address of the responsible // master name server. // // Do not forget to include the reverse lookup zone! // This is named after the first bytes of the IP address, in reverse // order, with ".IN-ADDR.ARPA" appended, or ".IP6.ARPA" for IPv6. // // Before starting to set up a master zone, make sure you fully // understand how DNS and BIND work. There are sometimes // non-obvious pitfalls. Setting up a slave zone is usually simpler. // // NB: Don't blindly enable the examples below. :-) Use actual names // and addresses instead. 
/* An example dynamic zone
key "exampleorgkey" {
	algorithm hmac-md5;
	secret "sf87HJqjkqh8ac87a02lla==";
};
zone "example.org" {
	type master;
	allow-update {
		key "exampleorgkey";
	};
	file "/etc/namedb/dynamic/example.org";
};
*/

/* Example of a slave reverse zone
zone "1.168.192.in-addr.arpa" {
	type slave;
	file "/etc/namedb/slave/1.168.192.in-addr.arpa";
	masters {
		192.168.1.1;
	};
};
*/
....

The examples above in [.filename]#named.conf# are slave entries for a forward and reverse zone. For each new zone being served, a new zone entry must be added to [.filename]#named.conf#. For example, the simplest zone entry for `example.org` can look like:

[.programlisting]
....
zone "example.org" {
	type master;
	file "master/example.org";
};
....

The zone is a master, as indicated by the `type` statement, and it holds its zone information in [.filename]#/etc/namedb/master/example.org#, as indicated by the `file` statement.

[.programlisting]
....
zone "example.org" {
	type slave;
	file "slave/example.org";
};
....

In the slave case, the zone information for the given zone is transferred from the master name server and saved in the file specified. If and when the master server dies or is unreachable, the slave name server will still have the transferred zone information and will be able to answer queries for it.

==== Zone Files

An example master zone file for `example.org` ([.filename]#/etc/namedb/master/example.org#) follows:

[.programlisting]
....
$TTL 3600        ; 1 hour default TTL
example.org.    IN      SOA     ns1.example.org. admin.example.org. (
                                2006051501      ; Serial
                                10800           ; Refresh
                                3600            ; Retry
                                604800          ; Expire
                                300             ; Negative Response TTL
                                )

; DNS Servers
                IN      NS      ns1.example.org.
                IN      NS      ns2.example.org.

; MX Records
                IN      MX      10      mx.example.org.
                IN      MX      20      mail.example.org.

                IN      A       192.168.1.1

; Machine Names
localhost       IN      A       127.0.0.1
ns1             IN      A       192.168.1.2
ns2             IN      A       192.168.1.3
mx              IN      A       192.168.1.4
mail            IN      A       192.168.1.5

; Aliases
www             IN      CNAME   example.org.
....

Note that every hostname ending in a "." is an exact hostname, whereas names without a trailing "." are relative to the origin. For example, `ns1` is translated into `ns1.example.org.`

A zone file entry has the following format:

[.programlisting]
....
recordname      IN recordtype   value
....

The most commonly used DNS records:

SOA:: start of zone authority

NS:: an authoritative name server

A:: a host address

CNAME:: the canonical name for an alias

MX:: mail exchanger

PTR:: a domain name pointer (used in reverse DNS)

[.programlisting]
....
example.org. IN SOA ns1.example.org. admin.example.org. (
                        2006051501      ; Serial
                        10800           ; Refresh after 3 hours
                        3600            ; Retry after 1 hour
                        604800          ; Expire after 1 week
                        300 )           ; Negative Response TTL
....

`example.org.`:: the domain name, and also the origin for this zone file.

`ns1.example.org.`:: the primary/authoritative name server for this zone.

`admin.example.org.`:: the person responsible for this zone, written as an email address with the "@" replaced. (mailto:admin@example.org[admin@example.org] becomes `admin.example.org`)

`2006051501`:: the serial number of the file. This must be incremented each time the zone file is modified. Nowadays, many administrators prefer a `yyyymmddrr` format for the serial number. `2006051501` would mean the file was last modified on 05/15/2006, and the trailing `01` would mean it was the first revision made that day. The serial number is important, as it alerts slave servers for a zone when the zone information has been updated.

[.programlisting]
....
       IN NS           ns1.example.org.
....

This is an NS record. Every name server that is going to reply authoritatively for the zone must have one of these records.

[.programlisting]
....
localhost       IN      A       127.0.0.1
ns1             IN      A       192.168.1.2
ns2             IN      A       192.168.1.3
mx              IN      A       192.168.1.4
mail            IN      A       192.168.1.5
....

The A record indicates machine names. As seen above, `ns1.example.org` would resolve to `192.168.1.2`.

[.programlisting]
....
                IN      A       192.168.1.1
....

This line assigns the IP address `192.168.1.1` to the current origin, in this case `example.org`.

[.programlisting]
....
www             IN CNAME        @
....

The canonical name record is usually used for giving aliases to a machine. In this example, `www` is an alias for the "master" machine whose name equals the domain name `example.org` (`192.168.1.1`). A CNAME can never be used together with another kind of record for the same hostname.

[.programlisting]
....
               IN MX   10      mail.example.org.
....

The MX record indicates which mail servers are responsible for handling incoming mail for the zone. `mail.example.org` is the hostname of a mail server, and 10 is the priority of that mail server. One zone can have several mail servers, with different priorities such as 10 and 20. A mail server attempting to deliver mail to `example.org` will first try the highest-priority MX (the record with the lowest priority number), then the next highest, and so on, until the mail can be properly delivered.

in-addr.arpa zone files (reverse DNS) use the same format, except that PTR records are used instead of A or CNAME records.

[.programlisting]
....
$TTL 3600

1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. (
                        2006051501      ; Serial
                        10800           ; Refresh
                        3600            ; Retry
                        604800          ; Expire
                        300 )           ; Negative Response TTL

        IN      NS      ns1.example.org.
        IN      NS      ns2.example.org.

1       IN      PTR     example.org.
2       IN      PTR     ns1.example.org.
3       IN      PTR     ns2.example.org.
4       IN      PTR     mx.example.org.
5       IN      PTR     mail.example.org.
....

This file gives the required IP address to hostname mappings for the domain above. All names on the right side of a PTR record must be fully qualified (that is, they must end in a ".").

=== Caching Name Server

A caching name server is a name server whose primary role is to resolve recursive queries.
It simply asks queries of its own, and remembers the answers for later use.

=== DNSSEC

Domain Name System Security Extensions, or DNSSEC for short, is a suite of specifications to protect resolving name servers from forged DNS data, such as spoofed DNS records. With digital signatures, a resolver can verify the integrity of a record. Note that DNSSEC only provides data integrity by digitally signing the Resource Records (RRs). It provides neither confidentiality nor protection against false end-user assumptions; in other words, it cannot stop people from going to `example.net` instead of `example.com`. The only thing DNSSEC does is verify that the data has not been altered in transit. The security of DNS is an important step in securing the Internet in general. For more in-depth details of how DNSSEC works, see the relevant RFCs listed in <>.

The following sections will demonstrate how to enable DNSSEC on an authoritative DNS server and on a recursive (or caching) DNS server running BIND 9. While all versions of BIND 9 support DNSSEC, it is necessary to have at least version 9.6.2 in order to use the signed root zone when validating DNS queries. This is because earlier versions lack the algorithms required to enable validation using the root zone key. Using the latest version of BIND, 9.7 or later, is required to take advantage of automatic key updating for the root key, as well as other features that automatically keep zones signed and their signatures up to date. Where configurations differ between 9.6.2 and 9.7 and later, the differences will be pointed out.
==== Recursive DNS server configuration

Enabling DNSSEC validation of queries performed by a recursive DNS server requires a few changes to [.filename]#named.conf#. Before making these changes, the root zone key, or trust anchor, must be obtained. Currently the root zone key is not available in a file format BIND understands, so it has to be manually converted into the proper format. The key itself can be obtained by querying the root zone for it using dig. By running

[source,shell]
....
% dig +multi +noall +answer DNSKEY . > root.dnskey
....

the key will end up in [.filename]#root.dnskey#. The contents should look something like this:

[.programlisting]
....
.		93910	IN	DNSKEY	257 3 8 (
				AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQ
				bSEW0O8gcCjFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh
				/RStIoO8g0NfnfL2MTJRkxoXbfDaUeVPQuYEhg37NZWA
				JQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaDX6RS6CXp
				oY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3
				LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGO
				Yl7OyQdXfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGc
				LmqrAmRLKBP1dfwhYB4N7knNnulqQxA+Uk1ihz0=
				) ; key id = 19036
.		93910	IN	DNSKEY	256 3 8 (
				AwEAAcaGQEA+OJmOzfzVfoYN249JId7gx+OZMbxy69Hf
				UyuGBbRN0+HuTOpBxxBCkNOL+EJB9qJxt+0FEY6ZUVjE
				g58sRr4ZQ6Iu6b1xTBKgc193zUARk4mmQ/PPGxn7Cn5V
				EGJ/1h6dNaiXuRHwR+7oWh7DnzkIJChcTqlFrXDW3tjt
				) ; key id = 34525
....

Do not be alarmed if the keys obtained differ from those in this example; they might have changed since these instructions were written. This output actually contains two keys. The first key in the listing, with the value 257 after the DNSKEY record type, is the one needed. This value indicates that it is a Secure Entry Point (SEP), commonly known as a Key Signing Key (KSK). The second key, with value 256, is a subordinate key, commonly called a Zone Signing Key (ZSK). More on these key types can be found in <>.

Now the key must be verified and formatted so that BIND can use it.
To verify the key, generate a DS RR set. Create a file containing these RRs with

[source,shell]
....
% dnssec-dsfromkey -f root.dnskey . > root.ds
....

These records use SHA-1 and SHA-256 respectively, and should look similar to the following example, where the longer one uses SHA-256:

[.programlisting]
....
.	IN	DS 19036 8 1 B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E
.	IN	DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
....

The SHA-256 RR can now be compared to the digest in https://data.iana.org/root-anchors/root-anchors.xml[https://data.iana.org/root-anchors/root-anchors.xml]. To be absolutely sure that the key has not been tampered with, the data in the XML file can be verified using the PGP signature in https://data.iana.org/root-anchors/root-anchors.asc[https://data.iana.org/root-anchors/root-anchors.asc].

Next, the key must be properly formatted. This differs slightly between BIND versions 9.6.2 and 9.7 and later. In version 9.7, support was added to automatically track changes to the key and update it as required. This is done using `managed-keys`, as seen in the example below. When using an older version, the key is added using a `trusted-keys` statement and updates must be done manually. For BIND 9.6.2 the format should look like:

[.programlisting]
....
trusted-keys {
	"." 257 3 8
	"AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
	FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
	bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
	X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
	W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
	Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
	QxA+Uk1ihz0=";
};
....

For 9.7 the format will instead be:

[.programlisting]
....
managed-keys {
	"." initial-key 257 3 8
	"AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
	FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
	bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
	X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
	W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
	Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
	QxA+Uk1ihz0=";
};
....

The root key can now be added to [.filename]#named.conf# either directly or by including a file containing the key. After these steps, configure BIND to do DNSSEC validation on queries by editing [.filename]#named.conf# and adding the following to the `options` directive:

[.programlisting]
....
dnssec-enable yes;
dnssec-validation yes;
....

To verify that it is actually working, use dig to make a query for a signed zone using the resolver just configured. A successful reply will contain the `AD` flag, which indicates the data was authenticated. Running a query such as

[source,shell]
....
% dig @resolver +dnssec se ds
....

should return the DS RR for the `.se` zone. In the `flags:` section the `AD` flag should be set, as seen in:

[.programlisting]
....
...
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
...
....

The resolver is now capable of authenticating DNS queries.

[[dns-dnssec-auth]]
==== Authoritative DNS server configuration

In order to get an authoritative name server to serve a DNSSEC-signed zone, a little more work is required. A zone is signed using cryptographic keys, which must first be generated. It is possible to use just one key for this. The preferred method, however, is to have a strong, well-protected Key Signing Key (KSK) that is not rotated very often, together with a Zone Signing Key (ZSK) that is rotated frequently.
Information on recommended operational practices can be found in http://tools.ietf.org/rfc/rfc4641.txt[RFC 4641: DNSSEC Operational Practices]. Practices regarding the root zone can be found in the http://www.root-dnssec.org/wp-content/uploads/2010/06/icann-dps-00.txt[DNSSEC Practice Statement for the Root Zone KSK operator] and the http://www.root-dnssec.org/wp-content/uploads/2010/06/vrsn-dps-00.txt[DNSSEC Practice Statement for the Root Zone ZSK operator].

The KSK is used to build a chain of authority to the data in need of validation, and as such is also called a Secure Entry Point (SEP) key. A message digest of this key, called a Delegation Signer (DS) record, must be published in the parent zone to establish the trust chain. How this is accomplished depends on the owner of the parent zone. The ZSK is used to sign the zone, and only needs to be published there.

To enable DNSSEC for the `example.com` zone depicted in the previous examples, the first step is to use dnssec-keygen to generate the KSK and ZSK key pair. This key pair can use different cryptographic algorithms. It is recommended to use RSA/SHA256 for the keys, and a key length of 2048 bits should be enough. To generate the KSK for `example.com`, run

[source,shell]
....
% dnssec-keygen -f KSK -a RSASHA256 -b 2048 -n ZONE example.com
....

and to generate the ZSK, run

[source,shell]
....
% dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com
....

dnssec-keygen outputs two files, the public and the private keys, with names similar to [.filename]#Kexample.com.+005+nnnnn.key# (public) and [.filename]#Kexample.com.+005+nnnnn.private# (private). The `nnnnn` part of the file name is a five-digit key ID. Keep track of which key ID belongs to which key. This is especially important when more than one key is used in a zone.
It is also possible to rename the keys. For each KSK file, do:

[source,shell]
....
% mv Kexample.com.+005+nnnnn.key Kexample.com.+005+nnnnn.KSK.key
% mv Kexample.com.+005+nnnnn.private Kexample.com.+005+nnnnn.KSK.private
....

For the ZSK files, substitute `ZSK` for `KSK`. The files can now be included in the zone file, using the `$include` statement. It should look something like this:

[.programlisting]
....
$include Kexample.com.+005+nnnnn.KSK.key	; KSK
$include Kexample.com.+005+nnnnn.ZSK.key	; ZSK
....

Finally, sign the zone and tell BIND to use the signed zone file. To sign a zone, dnssec-signzone is used. The command to sign the zone `example.com`, located in [.filename]#example.com.db#, would look similar to

[source,shell]
....
% dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK example.com.db Kexample.com.+005+nnnnn.ZSK.key
....

The key supplied to the `-k` argument is the KSK, and the other key file is the ZSK to be used in the signing. It is possible to supply more than one KSK and ZSK, which will result in the zone being signed with all the supplied keys. This can be needed in order to supply zone data signed with more than one algorithm. The output of dnssec-signzone is a zone file with all the RRs signed. This output will end up in a file with the extension `.signed`, such as [.filename]#example.com.db.signed#. The DS records are also written to a separate file, [.filename]#dsset-example.com#. To use this signed zone, modify the zone directive in [.filename]#named.conf# to use [.filename]#example.com.db.signed#. By default, the signatures are only valid for 30 days, which means that the zone needs to be re-signed after about 15 days at the latest, to be sure that resolvers are not caching records with stale signatures. It is possible to write a script and schedule it with cron to do this. See the relevant manuals for details.
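As one possible sketch of such a re-signing job, a man:crontab[5] entry like the following could be used. The working directory and the five-digit key IDs (`12345` and `54321`) here are placeholders, not values from the examples above, and must be replaced with the actual paths and key file names:

[.programlisting]
....
# Re-sign example.com every 14 days at 03:00.
# The directory and key IDs below are placeholders; substitute your own.
0 3 */14 * * cd /etc/namedb/master && dnssec-signzone -o example.com -k Kexample.com.+005+12345.KSK example.com.db Kexample.com.+005+54321.ZSK.key
....

After each run, the signed zone file is rewritten with fresh signatures, so the script only needs to make `named` reload the zone afterwards, for example with `rndc reload example.com`.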
As with all cryptographic keys, remember to keep the private keys secret. When changing a key, it is best to include the new key in the zone while still signing with the old one, and then move over to using the new key for signing. After these steps are done, the old key can be removed from the zone. Failure to do this might render the DNS data unavailable until the new key has propagated through the DNS hierarchy. For more information on key rollovers and other DNSSEC operational issues, see http://www.ietf.org/rfc/rfc4641.txt[RFC 4641: DNSSEC Operational practices].

==== Automation using BIND 9.7 or later

Beginning with BIND version 9.7, a new feature called _Smart Signing_ was introduced. This feature aims to make the key management and signing process simpler, by automating parts of the task. By putting the keys into a directory called a _key repository_, and using the new option `auto-dnssec`, it is possible to create a dynamic zone which will be re-signed as required. To update this zone, use nsupdate with the new `-l` option. rndc has also grown the ability to sign zones with keys in the key repository, using the `sign` option. To tell BIND to use this automatic signing and zone updating for `example.com`, add the following to [.filename]#named.conf#:

[.programlisting]
....
zone example.com {
	type master;
	key-directory "/etc/named/keys";
	update-policy local;
	auto-dnssec maintain;
	file "/etc/named/dynamic/example.com.zone";
};
....

After making these changes, generate keys for the zone as explained in <>, put those keys in the key repository given as the argument to `key-directory` in the zone configuration, and the zone will be signed automatically.
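As a sketch of how such a zone is then operated (assuming the zone entry above and keys already present in the configured key repository), the zone can be signed on demand with rndc, and records can be added through a local dynamic update session:

[source,shell]
....
# rndc sign example.com
# nsupdate -l
....

`rndc sign` tells `named` to (re-)sign the zone with the keys found in the key repository, while `nsupdate -l` opens an interactive session for dynamic updates against the local server; `named` takes care of signing whatever data is added.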
Updates to a zone set up this way must be done using nsupdate, which takes care of adding the new data to the zone and re-signing it. For more details, see <> and the BIND documentation.

=== Security

Although BIND is the most common implementation of DNS, the issue of security always remains. Possible and exploitable security holes are found from time to time. While FreeBSD automatically drops named into a man:chroot[8] environment, there are several other security mechanisms which could help fend off possible attacks on the DNS service. It is a good idea to read the security advisories published by http://www.cert.org/[CERT] and to subscribe to the {freebsd-security-notifications} to stay up to date with current Internet and FreeBSD security issues.

[TIP]
====
If a problem arises, keeping the sources up to date and having a fresh build of named would not hurt.
====

[[dns-read]]
=== Further Reading

BIND/named manual pages: man:rndc[8] man:named[8] man:named.conf[8] man:nsupdate[8] man:dnssec-signzone[8] man:dnssec-keygen[8]

* https://www.isc.org/software/bind[Official ISC BIND Page]
* https://www.isc.org/software/guild[Official ISC BIND Forum]
* http://www.root-dnssec.org/documentation/[Root DNSSEC]
* http://www.oreilly.com/catalog/dns5/[O'Reilly DNS and BIND 5th Edition]
* http://data.iana.org/root-anchors/draft-icann-dnssec-trust-anchor.html[DNSSEC Trust Anchor Publication for the Root Zone]
* http://tools.ietf.org/html/rfc1034[RFC1034 - Domain Names - Concepts and Facilities]
* http://tools.ietf.org/html/rfc1035[RFC1035 - Domain Names - Implementation and Specification]
* http://tools.ietf.org/html/rfc4033[RFC4033 - DNS Security Introduction and Requirements]
* http://tools.ietf.org/html/rfc4034[RFC4034 - Resource Records for the DNS Security Extensions]
* http://tools.ietf.org/html/rfc4035[RFC4035 - Protocol Modifications for the DNS Security Extensions]
* http://tools.ietf.org/html/rfc4641[RFC4641 - DNSSEC Operational Practices]
* http://tools.ietf.org/html/rfc5011[RFC 5011 - Automated Updates of DNS Security (DNSSEC) Trust Anchors]

[[network-apache]]
== Apache HTTP Server

=== Introduction

FreeBSD is used to run some of the busiest web sites in the world. The majority of web servers on the Internet use the Apache HTTP Server. The Apache software packages should be included on your FreeBSD installation media. If Apache was not installed together with FreeBSD, it can be installed from the package:www/apache22[] port. Once Apache has been installed successfully, it must be configured.

[NOTE]
====
Apache HTTP Server version 2.2.X is the most widely deployed version on FreeBSD, so that is the version covered in this section. For more detailed information about Apache 2.X beyond the scope of this document, please see http://httpd.apache.org/[http://httpd.apache.org/].
====

=== Configuration

On FreeBSD, the main Apache HTTP Server configuration file is [.filename]#/usr/local/etc/apache22/httpd.conf#. This file is a typical UNIX(R) text configuration file, with comment lines beginning with the `#` character. A complete description of all possible configuration options is outside the scope of this book, so only the most frequently modified directives are described here.

`ServerRoot "/usr/local"`:: This specifies the default directory hierarchy for the Apache installation. Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root, and configuration files are stored in [.filename]#etc/apache#.

`ServerAdmin you@your.address`:: The email address to which problems with the server should be reported. This address appears on some server-generated pages, such as error documents.
`ServerName www.example.com`:: `ServerName` allows setting a hostname for the server which differs from the one the host is configured with (for example, using `www` instead of the host's real hostname). This is the name the server presents to clients.

`DocumentRoot "/usr/local/www/apache22/data"`:: `DocumentRoot`: The directory out of which documents will be served to clients. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.

It is always a good idea to make backup copies of the Apache configuration file before making changes. Once the initial configuration is done, Apache can be started.

=== Running Apache

The package:www/apache22[] port installs an man:rc[8] script to aid in starting, stopping, and restarting Apache, which can be found in [.filename]#/usr/local/etc/rc.d/#.

To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache22_enable="YES"
....

If Apache should be started with non-default options, the following line may be added to [.filename]#/etc/rc.conf# to specify them:

[.programlisting]
....
apache22_flags=""
....

The Apache configuration can be tested for errors before starting the `httpd` daemon for the first time, or after making subsequent configuration changes while `httpd` is running. This can be done by the man:rc[8] script directly, or via the man:service[8] utility, by running one of the following commands:

[source,shell]
....
# service apache22 configtest
....

[NOTE]
====
Note that `configtest` is not a man:rc[8] standard, and should not be expected to work for all man:rc[8] startup scripts.
====

If Apache does not report configuration errors, the Apache `httpd` can be started with the same man:service[8] mechanism:

[source,shell]
....
# service apache22 start
....

The `httpd` service can be tested by entering `http://localhost` in a web browser.
If the test is not run on the local machine, substitute the fully qualified domain name of the machine running `httpd`. The default web page that is displayed is [.filename]#/usr/local/www/apache22/data/index.html#.

=== Virtual Hosting

Apache supports two different types of virtual hosting. The first method is name-based virtual hosting. Name-based virtual hosting uses the clients' HTTP/1.1 headers to figure out the hostname. This allows many different domains to share the same IP address.

To set up Apache to use name-based virtual hosting, add an entry like the following to [.filename]#httpd.conf#:

[.programlisting]
....
NameVirtualHost *
....

If the web server is named `www.domain.tld` and a virtual domain `www.someotherdomain.tld` is to be set up, add the following entries to [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
    ServerName www.domain.tld
    DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
    ServerName www.someotherdomain.tld
    DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

Replace the addresses with the addresses to be used, and the paths with the paths where the documents are located.

For more information about setting up virtual hosts, please consult the official Apache documentation at http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache Modules

There are many different Apache modules available to add functionality to the basic server. The FreeBSD Ports Collection provides an easy way to install Apache together with some of its more popular add-on modules.

==== mod_ssl

The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols. This module provides everything necessary to request a signed certificate from a trusted certificate-issuing authority, so a secure web server can be run on FreeBSD.
The mod_ssl module is built by default, but can be enabled by specifying `-DWITH_SSL` at compile time.

==== Language bindings

There are Apache modules for most major scripting languages. These modules typically make it possible to write Apache modules entirely in a scripting language. They are also often used as a persistent interpreter embedded in the server, which avoids the overhead of starting an external interpreter and the startup-time penalty incurred by dynamic websites, as described in the next section.

=== Dynamic Websites

In the last decade, more and more businesses have turned to the Internet in order to enhance their revenue and increase their exposure. This has also increased the need for interactive web content. While some companies, such as Microsoft(R), have introduced solutions in their proprietary products, the open source community answered the call. Modern options for dynamic web content include Django, Ruby on Rails, mod_perl2, and mod_php.

==== Django

Django is a BSD-licensed framework designed to allow developers to write high-performance, elegant web applications quickly. It provides an object-relational mapper, so that data types are developed as Python objects and a rich dynamic database-access API is provided for those objects, without the developer ever having to write SQL. It also provides an extensible template system, so that the logic of the application is separated from the HTML presentation.

Django depends on mod_python, Apache, and an SQL database engine of your choice. The FreeBSD port will install all of these prerequisites for you with the appropriate flags.

[[network-www-django-install]]
.Installing Django with Apache2, mod_python3, and PostgreSQL
[example]
====
[source,shell]
....
# cd /usr/ports/www/py-django; make all install clean -DWITH_MOD_PYTHON3 -DWITH_POSTGRESQL
....
====

Django болон бусад хамаарлууд суулгагдсаны дараа та Django төслийн санг үүсгэх хэрэгтэй бөгөөд өөрийн сайт дээрх тухайн URL дээр өөрийн програмыг дуудахын тулд суулгагдсан Python тайлбарлагчийг ашиглахаар болгож Apache-г тохируулах хэрэгтэй.

[[network-www-django-apache-config]]
.Django/mod_python-д зориулсан Apache-ийн тохиргоо
[example]
====
Та өөрийн вэб програм руу тодорхой URL-уудад зориулсан хүсэлтүүдийг дамжуулахаар Apache-г тохируулахын тулд apache-ийн [.filename]#httpd.conf# файлд мөр нэмэх шаардлагатай:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/dir/to/your/django/packages/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonAutoReload On
    PythonDebug On
</Location>
....
====

==== Ruby on Rails

Ruby on Rails нь бүрэн гүйцэд хөгжүүлэлтийн стекийн боломжийг олгодог бөгөөд вэб хөгжүүлэгчдийг хүчирхэг програмыг хурдан шуурхай, илүү үр бүтээлтэй бичдэг байхаар оновчлогдсон, нээлттэй эхийн вэб тогтолцоо юм. Үүнийг портын системээс хялбараар суулгаж болно.

[source,shell]
....
# cd /usr/ports/www/rubygem-rails; make all install clean
....

==== mod_perl2

Apache/Perl нэгтгэх төсөл Perl програмчлалын хэл ба Apache HTTP Серверийн бүх хүч чадлыг нэгтгэсэн юм. mod_perl2 модулийн тусламжтай Apache модулиудыг тэр чигээр нь Perl дээр бичих боломжтой. Дээр нь, серверт суулгасан шургуу хөрвүүлэгч, гадны хөрвүүлэгч ашиглах илүү ажил болон Perl эхлүүлэх хугацааны алдагдлаас зайлсхийж чадсан юм.

mod_perl2 нь package:www/mod_perl2[] портод байдаг.

==== mod_php

PHP буюу "PHP: Hypertext Preprocessor" бол вэб хөгжүүлэлтэд тусгайлан тохируулсан, энгийн хэрэглээний скрипт хэл юм. HTML дотор суулгах боломжтой түүний синтакс C, Java(TM), ба Perl-с гаралтай. Энэ нь вэб хөгжүүлэгчдэд динамикаар үүсгэгдэх вэб хуудсыг хурдан бичих боломжтой болгох үүднээс тэгсэн хэрэг.

Apache вэб серверийг PHP5-г дэмждэг болгохын тулд, package:lang/php5[] портыг суулгаж эхлэх хэрэгтэй.
Хэрэв package:lang/php5[] портыг анх удаа суулгаж байгаа бол, боломжит `ТОХИРУУЛГУУД` автоматаар дэлгэцэн дээр гарч ирнэ. Хэрэв цэс гарч ирэхгүй бол, өөрөөр хэлбэл package:lang/php5[] портыг өмнө нь хэзээ нэгэн цагт суулгаж байсан бол, тохируулгуудын харилцах цонхыг гаргаж ирэхийн тулд дараах тушаалыг:

[source,shell]
....
# make config
....

портын сан дотор өгөх хэрэгтэй.

Тохируулгуудын харилцах цонхонд, mod_php5-г Apache-н ачаалах боломжтой модуль байдлаар бүтээхийн тулд `APACHE` тохируулгыг идэвхжүүлнэ.

[NOTE]
====
Олон сайтууд PHP4-г янз бүрийн шалтгааны улмаас (өөрөөр хэлбэл, нийцтэй байдал эсвэл аль хэдийн үйлчилгээнд гаргачихсан вэб програмууд) ашигласаар байна. Хэрэв mod_php4-г mod_php5-н оронд ашиглах шаардлагатай бол, package:lang/php4[] портыг ашиглаарай. package:lang/php4[] порт нь package:lang/php5[] портод байдаг тохиргооны болон бүтээх үеийн олон тохируулгуудыг дэмждэг.
====

Энэ нь динамик PHP програмыг дэмжихэд шаардлагатай модулиудыг суулгаж тохируулна. Доорх мөрүүд [.filename]#/usr/local/etc/apache22/httpd.conf# файл дотор нэмэгдсэн эсэхийг шалгаарай:

[.programlisting]
....
LoadModule php5_module libexec/apache22/libphp5.so
....

[.programlisting]
....
AddType application/x-httpd-php .php
AddType application/x-httpd-php-source .phps

DirectoryIndex index.php index.html
....

Үүний дараа, PHP модулийг ачаалахын тулд, дараах тушаалыг өгч серверийг дахин ачаалах хэрэгтэй:

[source,shell]
....
# apachectl graceful
....

Дараа, PHP-н хувилбарыг дээшлүүлэх үедээ, `make config` тушаалыг өгөх шаардлагагүй; идэвхжүүлсэн `ТОХИРУУЛГУУД` FreeBSD Портуудын тогтолцоонд автоматаар хадгалагдсан байгаа.

FreeBSD-н PHP дэмжлэг нь дээд зэргээр модульчлагдсан тул үндсэн суулгац нь маш хязгаарлагдмал байдаг. package:lang/php5-extensions[] портыг ашиглан дэмжлэг нэмэх нь үнэхээр амархан асуудал. PHP өргөтгөлийг суулгах явцад, энэ порт танд цэсээс тогтсон интерфэйсийг санал болгоно.
Өөрөөр, өргөтгөлүүдийг нэг нэгээр нь харгалзах портуудаас суулгаж болно. Жишээлбэл, PHP5-д MySQL өгөгдлийн сангийн серверийн дэмжлэгийг нэмэхийн тулд, [.filename]#databases/php5-mysql# портыг суулгахад хангалттай. Ямар нэг өргөтгөл суулгасны дараа, тохиргооны өөрчлөлтийг хүчин төгөлдөр болгохын тулд Apache серверийг дахин ачаалах шаардлагатайг анхаарна уу: [source,shell] .... # apachectl graceful .... [[network-ftp]] == Файл Дамжуулах Протокол (FTP) === Удиртгал File Transfer Protocol буюу Файл Дамжуулах Протокол (FTP) нь хэрэглэгчдэд FTP серверээс файлыг авах болон тавих хялбар замыг бий болгодог. FreeBSD үндсэн систем дотроо FTP сервер програм ftpd-г агуулж байдаг. Энэ нь FreeBSD дээр FTP серверийг босгох, удирдах ажлыг төвөггүй болгодог. === Тохиргоо Тохиргоо хийхийн өмнөх хамгийн чухал алхам бол ямар дансууд FTP серверт хандах эрхтэй байх вэ гэдгийг шийдэх байдаг. Ердийн FreeBSD систем нь янз бүрийн дэмонуудад хэрэглэгддэг олон тооны системийн дансуудтай байдаг ба гадны хэрэглэгчид эдгээр дансыг ашиглан нэвтрэх ёсгүй. [.filename]#/etc/ftpusers# файл дотор FTP хандалт зөвшөөрөгдөөгүй хэрэглэгчдийн жагсаалтыг хадгална. Анхдагч байдлаар, дээр дурдсан системийн дансууд энэ файлд байна. FTP хандалтыг зөвшөөрөх ёсгүй өөр хэрэглэгчдийг ч мөн энэ файлд нэмж болно. Зарим хэрэглэгчдийн FTP хэрэглэхийг нь бүр болиулчихалгүйгээр, зөвхөн зарим нэг эрхийг нь хязгаарлаж бас болно. Үүнийг [.filename]#/etc/ftpchroot# файлын тусламжтай гүйцэтгэж болно. Энэ файл дотор FTP хандалтыг нь хязгаарлах хэрэглэгчид болон бүлгүүдийн жагсаалт байна. man:ftpchroot[5] заавар хуудсанд бүх мэдээлэл байгаа тул энд дурдсангүй. Хэрэв сервертээ нийтийн FTP хандалтыг зөвшөөрөх хүсэлтэй байгаа бол, FreeBSD систем дээрээ `ftp` нэртэй хэрэглэгч нэмэх хэрэгтэй. Ингэснээр хэрэглэгчид таны FTP сервер рүү `ftp` эсвэл `anonymous` гэсэн нэрээр ямар ч нэвтрэх үг шаардагдахгүйгээр (тогтсон заншил ёсоор хэрэглэгч цахим шуудангийн хаягаа нэвтрэх үгийн оронд хэрэглэх шаардлагатай) нэвтрэн орох болно. 
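The [.filename]#/etc/ftpchroot# mechanism mentioned above needs nothing more than one entry per line. A hedged sketch follows; the user and group names are hypothetical:

[.programlisting]
....
# Confine these accounts to their home directory when they use FTP
webuser1
@ftpgroup
....

See man:ftpchroot[5] for the full entry syntax, including optional per-user directories.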
Нийтийн хэрэглэгч системд орж ирэхэд FTP сервер түүний эрхийг зөвхөн `ftp` хэрэглэгчийн гэрийн сан дотор хязгаарлахын тулд man:chroot[2]-г дуудна. FTP харилцагчдад зориулсан мэндчилгээний үгнүүдийг агуулсан хоёр текст файл байдаг. [.filename]#/etc/ftpwelcome# файл дотор байгааг нэвтрэлт хүлээх мөр гарахаас өмнө хэрэглэгчдэд дэлгэцэн дээр хэвлэнэ. Амжилттай нэвтэрч орсны дараа [.filename]#/etc/ftpmotd# файл дотор байгааг дэлгэцэн дээр хэвлэнэ. Энэ файлын зам нь нэвтэрч орсон орчинтой харьцангуйгаар авсан зам гэдгийг анхаарна уу, тиймээс нийтийн хэрэглэгчдийн хувьд [.filename]#~ftp/etc/ftpmotd# файлыг хэвлэх болно. FTP серверийн тохиргоог зохих ёсоор хийсний дараа, [.filename]#/etc/inetd.conf# файл дотор идэвхжүүлэх хэрэгтэй. Үүний тулд, ftpd гэсэн мөрний өмнөх "#" тэмдэгтийг арилгахад хангалттай: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... <> хэсэгт тайлбарласан ёсоор энэ тохиргооны файлд өөрчлөлт оруулсны дараа inetd-г дахин ачаалах шаардлагатай. Өөрийн систем дээр inetd-г идэвхжүүлэх талаар дэлгэрэнгүйг <>-с үзнэ үү. Мөн ftpd-ийг дангаар нь ажиллуулахаар тохируулж болно. Энэ тохиолдолд [.filename]#/etc/rc.conf# файлд тохирох хувьсагчийг тохируулахад хангалттай байдаг: [.programlisting] .... ftpd_enable="YES" .... Дээрх хувьсагчийг тохируулсны дараа сервер дараачийн ачаалалт хийхэд ажиллах боломжтой болох бөгөөд эсвэл дараах тушаалыг `root` эрхээр ажиллуулан эхлүүлж болно: [source,shell] .... # service ftpd start .... Одоо та дараах тушаалыг өгөн FTP сервер рүү нэвтрэн орж болно: [source,shell] .... % ftp localhost .... === Арчилгаа ftpd дэмон бүртгэл хөтлөхдөө man:syslog[3]-г ашигладаг. Анхдагч байдлаар, системийн бүртгэлийн дэмон FTP-тэй холбоотой зурвасуудыг [.filename]#/var/log/xferlog# файлд бичнэ. FTP бүртгэлийн файлын байршлыг өөрчлөхийн тулд [.filename]#/etc/syslog.conf# файл дотор, дараах мөрийг засах хэрэгтэй: [.programlisting] .... ftp.info /var/log/xferlog .... 
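Once transfers are being logged, [.filename]#/var/log/xferlog# can be summarized with ordinary text tools. The sketch below creates a tiny sample log under [.filename]#/tmp# so it is self-contained; the record layout is only an approximation of real xferlog lines:

[source,shell]
....
# Illustrative sketch: count transfer records in an xferlog-style file.
# On a real server the file would be /var/log/xferlog.
cat > /tmp/xferlog.sample <<'EOF'
Mon Jan  1 10:00:00 2024 1 host1 1024 /pub/file1.txt b _ o a user@example.com ftp 0 * c
Mon Jan  1 10:05:00 2024 2 host2 2048 /pub/file2.txt b _ o a user@example.com ftp 0 * c
EOF
# Number of recorded transfers
wc -l < /tmp/xferlog.sample
....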
Нийтийн FTP сервер ажиллуулахад тохиолдох болзошгүй асуудлуудын талаар мэдлэгтэй байгаарай. Ялангуяа, нийтийн хэрэглэгчдэд файл байршуулахыг зөвшөөрөх тухайд сайн бодох хэрэгтэй. Таны FTP сайт лицензгүй програм хангамжуудыг наймаалцдаг талбар болох, эсвэл түүнээс ч муу зүйл тохиолдохыг үгүйсгэхгүй. Хэрэв нийтийн FTP байршуулалтыг зөвшөөрөх шаардлагатай бол, файлуудыг нягталж үзэхээс нааш бусад нийтийн хэрэглэгчид тэдгээр файлыг унших эрхгүй байхаар тохируулж өгөх хэрэгтэй. [[network-samba]] == Microsoft(R) Windows(R) харилцагчдад зориулсан Файл болон Хэвлэх Үйлчилгээ (Samba) === Ерөнхий Агуулга Samba бол Microsoft(R) Windows(R) харилцагчдад файл болон хэвлэх үйлчилгээг үзүүлдэг, өргөн хэрэглэгддэг нээлттэй эхийн програм хангамжийн багц юм. Ийм төрлийн харилцагчид FreeBSD файлын орчинд холбогдож, файлуудыг өөрийн дискэн дээр байгаа юм шиг, эсвэл FreeBSD хэвлэгчийг өөрийн дотоод хэвлэгч шиг хэрэглэх боломжтой болдог. Samba програм хангамжийн багцууд таны FreeBSD суулгах дискэнд орсон байгаа. Хэрэв та анх FreeBSD суулгахдаа Samba-г хамт суулгаагүй бол, package:net/samba34[] порт эсвэл багцаас суулгаж болно. === Тохиргоо Samba-н анхдагч тохиргооны файл [.filename]#/usr/local/shared/examples/samba34/smb.conf.default# гэж суугдсан байдаг. Энэ файлыг [.filename]#/usr/local/etc/smb.conf# нэртэй хуулаад, Samba-г ашиглаж эхлэхээсээ өмнө өөртөө тааруулан засварлах ёстой. [.filename]#smb.conf# файл нь Windows(R) харилцагчтай хуваалцах хүсэлтэй "файл системийн хэсэг" ба хэвлэгчийн тодорхойлолт гэх зэрэг Samba-н ажиллах үеийн тохиргооны мэдээллийг агуулж байдаг. Samba багц дотор [.filename]#smb.conf# файл дээр ажиллах хялбар арга замыг хангасан swat нэртэй вэб дээр суурилсан хэрэгсэл хамт ирдэг. ==== Samba-г Вэбээр Удирдах Хэрэгсэл (SWAT) Samba Web Administration Tool буюу Samba-г Вэбээр Удирдах Хэрэгсэл (SWAT) нь inetd-н дэмон хэлбэрээр ажиллана. 
Тиймээс <> дээр харуулсан шиг inetd-г идэвхжүүлж Samba-г swat ашиглан тохируулахын өмнө [.filename]#/etc/inetd.conf# доторх дараах мөрийг ил гаргах шаардлагатай: [.programlisting] .... swat stream tcp nowait/400 root /usr/local/sbin/swat swat .... <> хэсэгт тайлбарласан ёсоор, энэ тохиргооны файлд өөрчлөлт оруулсны дараа inetd-ийн тохиргоог дахин ачаалах шаардлагатай. swat-г [.filename]#inetd.conf# дотор идэвхжүүлсний дараа, вэб хөтөч ашиглан http://localhost:901[http://localhost:901] хаяганд холбогдоно. Та эхлээд системийн `root` дансаар нэвтэрч орох ёстой. Samba-н тохиргооны үндсэн хуудсанд амжилттай нэвтэрч орсон бол, системийн баримтуудаар аялах, эсвэл menu:Globals[] цэсэн дээр дарж тохиргоог хийх боломжтой болно. menu:Globals[] хэсэг [.filename]#/usr/local/etc/smb.conf# файлын `[global]` хэсэгт байгаа хувьсагчдад харгалзана. ==== Глобал тохиргоо swat-г хэрэглэж байгаа эсвэл [.filename]#/usr/local/etc/smb.conf#-г гараараа засаж байгаа аль нь ч бай, Samba-г тохируулах явцад тааралдах хамгийн эхний директивууд бол: `workgroup`:: Энэ нь сервер рүү хандах компьютеруудын NT Домэйн-Нэр эсвэл Ажлын бүлгийн-Нэр. `netbios name`:: Энэ директив Samba серверийн NetBIOS нэрийг заана. Анхдагч байдлаар, хостын DNS нэрийн эхний хэсэгтэй адил байна. `серверийн мөр`:: Энэ директив `net view` тушаалын хариуд гарч ирэх эсвэл зарим сүлжээний хэрэгслүүд дээр энэ серверийг төлөөлж гарах мөрийг заана. ==== Аюулгүй байдлын Тохиргоо [.filename]#/usr/local/etc/smb.conf# доторх хамгийн чухал хоёр тохиргоо бол аюулгүй байдлын загвар, болон харилцагчдын нэвтрэх үгийн арын шугамны хэлбэр юм. Дараах директивүүд эдгээр тохируулгуудыг хянана: `security`:: Энд хамгийн элбэг хэрэглэгддэг хоёр сонголт бол `security = share` ба `security = user` юм. Хэрэв танай харилцагч нар FreeBSD машин дээр хэрэглэдэг хэрэглэгчийн нэртэй ижил нэрийг ашигладаг бол, user түвшний аюулгүй байдлыг сонгохыг хүсэж байж магадгүй. 
Энэ бол аюулгүй байдлын анхдагч бодлого бөгөөд эх үүсвэрт хандахаас өмнө харилцагчийг системд нэвтэрч орохыг шаардана. + share түвшний аюулгүй байдалд, харилцагчид эх үүсвэрт хандахаас өмнө хүчин төгөлдөр хэрэглэгчийн нэр болон нэвтрэх үгээр сервер рүү нэвтрэн орох шаардлагагүй байдаг. Энэ бол Samba-н хуучин хувилбаруудын хувьд аюулгүй байдлын анхдагч загвар байсан. `passdb backend`:: + ++ Samba-д хэд хэдэн төрлийн арын шугамны магадлах загварууд байдаг. Харилцагчдыг LDAP, NIS+, SQL өгөгдлийн сан, эсвэл хувиргасан нэвтрэх үгийн файлаар магадлаж болно. Анхдагч магадлах арга бол `smbpasswd` бөгөөд бид зөвхөн энэ талаар авч үзэх болно. Анхдагч `smbpasswd` арын шугамыг хэрэглэж байгаа гэж үзвэл, Samba харилцагчдыг магадлахын тулд [.filename]#/usr/local/etc/samba/smbpasswd# файлыг эхлээд үүсгэх ёстой. Хэрэв UNIX(R) хэрэглэгчийн эрхээр Windows(R) харилцагчаас ханддаг байх шаардлагатай бол, дараах тушаалыг хэрэглэнэ: [source,shell] .... # smbpasswd -a username .... [NOTE] ==== Энэ үед санал болгодог арын мэдээллийн сан нь `tdbsam` бөгөөд хэрэглэгчийн бүртгэлийг нэмэхийн тулд дараах тушаалыг ашиглах ёстой: [source,shell] .... # pdbedit -a -u username .... ==== Тохируулгуудын талаар нэмэлт мэдээллийг http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/[Албан ёсны Samba HOWTO]-с олж авна уу. Энд цухас дурдсан үндсэн мэдлэгтэйгээр Samba-г ажиллуулж эхлэх чадвартай байх ёстой. === Samba-г Эхлүүлэх нь package:net/samba34[] портод Samba-г удирдахад зориулсан шинэ эхлэл скрипт орсон байгаа. Энэ скриптийг идэвхжүүлэхийн тулд, өөрөөр хэлбэл энэ скриптийг ашиглан Samba-г эхлүүлэх, зогсоох болон дахин эхлүүлдэг болохын тулд, [.filename]#/etc/rc.conf# файл дотор дараах мөрийг нэмж бичих хэрэгтэй: [.programlisting] .... samba_enable="YES" .... Эсвэл илүү нарийнаар доор дурдсан шиг тохируулж болно: [.programlisting] .... nmbd_enable="YES" .... [.programlisting] .... smbd_enable="YES" .... [NOTE] ==== Ингэснээр мөн Samba-г систем ачаалах үед автоматаар эхлүүлдэг болгоно. 
==== Үүний дараа хүссэн үедээ Samba-г эхлүүлэхийн тулд дараах тушаалыг өгөхөд хангалттай: [source,shell] .... # service samba start Starting SAMBA: removing stale tdbs : Starting nmbd. Starting smbd. .... rc скриптийг ашиглах талаар дэлгэрэнгүй мэдээллийг crossref:config[configtuning-rcd,FreeBSD дээр rc(8) ашиглах нь] хэсгээс авна уу. Samba нь үнэн хэрэгтээ гурван тусдаа дэмоноос тогтоно. nmbd ба smbd дэмонууд [.filename]#samba# скриптээр эхлүүлдэг болохыг та анзаарах болно. Хэрэв [.filename]#smb.conf# дотор winbind нэр тайлах үйлчилгээг идэвхжүүлсэн бол winbindd дэмон бас ажиллаж эхэлсэн болохыг харж болно. Samba-г хүссэн үедээ зогсоохын тулд дараах тушаалыг өгөхөд хангалттай: [source,shell] .... # service samba stop .... Samba бол Microsoft(R) Windows(R) сүлжээтэй өргөн хүрээнд нэгдмэл ажиллах боломжийг олгодог нарийн төвөгтэй програмын цогц юм. Энд тайлбарласан үндсэн суулгацаас хальсан функцуудын талаар дэлгэрэнгүй мэдээллийг http://www.samba.org[http://www.samba.org] хаягаар орж авна уу. [[network-ntp]] == ntpd-р Цаг Тааруулах нь === Ерөнхий Агуулга Цаг хугацаа өнгөрөхөд компьютерийн цаг зөрөх хандлагатай байдаг. Network Time Protocol буюу Сүлжээний Цагийн Протоколыг(NTP) цагийг зөв байлгах, зөв ажиллуулахад хэрэглэдэг. Олон тооны Интернэт үйлчилгээнүүд компьютерийн цагаас хамаарч, эсвэл хүртэж ажилладаг. Жишээлбэл, вэб сервер тодорхой цагаас хойш өөрчлөлт орсон файлуудыг илгээх хүсэлт хүлээн авсан байж болох юм. Дотоод сүлжээний орчинд, нэг файл серверээр үйлчлүүлж байгаа компьютеруудын хувьд файлын цагийн тамга дүйж байхын тулд тэдгээрийн цагууд хоорондоо тохирч байх ёстой. man:cron[8] зэрэг үйлчилгээнүүд тодорхой цагт тушаалыг гүйцэтгэхийн тулд системийн цагт бүрэн найдаж ажилладаг. FreeBSD man:ntpd[8] NTP серверийн хамт ирдэг. man:ntpd[8] NTP нь таны машины цагийг тааруулахын тулд бусад NTP серверүүдээс асуух эсвэл бусдад цагийн мэдээллийг түгээх үйлчилгээг үзүүлдэг. 
=== Зохимжтой NTP Серверийг Сонгох нь Цагаа тааруулахын тулд, та нэг болон түүнээс дээш тооны NTP серверийг хэрэглэх хэрэгтэй болно. Танай сүлжээний администратор эсвэл ISP үүнд зориулсан NTP сервертэй байж болох юм-тийм эсэхийг тэдний заавраас шалгана уу. http://support.ntp.org/bin/view/Servers/WebHome[нийтэд зориулсан NTP серверүүдийн онлайн жагсаалт]ыг ашиглан өөртөө ойрхон байгаа NTP серверийг олно уу. Сонгож авсан серверийнхээ ашиглах журмыг судлаарай. Мөн хэрэв шаардлагатай бол зөвшөөрөл аваарай. Таны сонгосон сервер холбогдох боломжгүй, эсвэл цаг нь бүрэн итгэж болохооргүй үе гарах тул, хоорондоо хамааралгүй хэд хэдэн NTP серверүүдийг сонгох нь хамгийн зөв сонголт болдог. man:ntpd[8] бусад серверээс хүлээн авсан хариултуудыг маш ухаалгаар хэрэглэдэг-итгэж болох серверүүдийг илүү авч үздэг. === Өөрийн Машиныг Тохируулах нь ==== Үндсэн Тохиргоо Хэрэв та машин асахад цагаа тааруулах хүсэлтэй байгаа бол, man:ntpdate[8]-г ашиглаж болно. Энэ нь олон дахин тааруулах шаардлагагүй, ойр ойрхон асааж унтраадаг ширээний компьютерийн хувьд зохимжтой байж болох юм. Гэхдээ ихэнх машины хувьд man:ntpd[8]-г ажиллуулах нь зүйтэй. Систем ачаалах үед man:ntpdate[8]-г ашиглах нь man:ntpd[8] ажиллаж байгаа машинуудын хувьд зөв санаа юм. Учир нь man:ntpd[8] програм нь цагийг алгуур өөрчилдөг байхад, man:ntpdate[8] машины одоогийн цаг болон зөв цагын хооронд хир их ялгаа байгааг үл хайхран цагийг тааруулдаг. man:ntpdate[8]-г систем ачаалах үед идэвхжүүлэхийн тулд, `ntpdate_enable="YES"` гэсэн мөрийг [.filename]#/etc/rc.conf# файлд нэмэх хэрэгтэй. Мөн цаг авах гэж байгаа бүх серверүүд болон man:ntpdate[8]-д өгөх тугуудыг `ntpdate_flags`-д зааж өгөх хэрэгтэй. ==== Ерөнхий Тохиргоо NTP-г [.filename]#/etc/ntp.conf# файлын тусламжтай, man:ntp.conf[5]-д заасан хэлбэрээр тохируулна. Доор хялбар жишээг үзүүлэв: [.programlisting] .... server ntplocal.example.com prefer server timeserver.example.org server ntp2a.example.net driftfile /var/db/ntp.drift .... 
`server` тохируулгаар ямар серверүүдийг ашиглахыг заана. Нэг мөрөнд нэг серверийг бичнэ. Хэрэв аль нэг серверийг `prefer` гэсэн аргументаар онцолсон бол, `ntplocal.example.com` шиг, тэр серверийг бусдаас илүүд үзнэ. Илүүд үзсэн серверээс ирсэн хариу бусад серверүүдийн хариунаас мэдэгдэхүйцээр зөрж байгаа үед хариуг тоохгүй өнгөрөөнө. Түүнээс бусад тохиолдолд бусад серверийн хариуг үл харгалзан тэр серверийн хариуг хэрэглэх болно. `prefer` аргументийг ер нь өндөр нарийвчлалтай, тусгай цаг хянадаг тоног төхөөрөмж дээр тулгуурласан NTP серверийн хувьд хэрэглэнэ. `driftfile` тохируулгаар ямар файлд системийн цагийн алдах зөрүү утгыг хадгалж байгааг заана. man:ntpd[8] програм энэ утгыг ашиглан цагийн алдсан зөрүүг автоматаар нөхнө. Ингэснээр цагийн бүх гадаад эх үүсвэрүүдтэй холбоо тогтоох боломжгүй болсон үед, хэсэг хугацааны туршид ч гэсэн цагийг харьцангуй зөв ажиллуулах боломжийг олгоно. `driftfile` тохируулгаар ямар файлд таны зааж өгсөн NTP серверүүдийн өмнөх хариунуудын тухай мэдээллийг хадгалж байгааг заана. Энэ файлд NTP-н дотоод үйл ажиллагааны мэдээллийг хадгалдаг. Энэ мэдээллийг өөр ямар ч процесс өөрчлөх ёсгүй. ==== Өөрийн Сервер рүү Хандах Хандалтыг Хянах нь Анхдагч байдлаар, таны NTP сервер рүү Интернэтэд байгаа бүх хост хандах боломжтой. [.filename]#/etc/ntp.conf# файл дотор `restrict` тохируулгаар ямар машинууд таны сервер рүү хандаж болохыг хянаж болно. Хэрэв та өөрийн NTP сервер рүү хэнийг ч хандуулахыг хүсэхгүй байгаа бол [.filename]#/etc/ntp.conf# файл дотор дараах мөрийг нэмэх хэрэгтэй: [.programlisting] .... restrict default ignore .... [NOTE] ==== Энэ нь таны серверээс өөрийн чинь локал тохиргоонд жагсаагдсан аль ч сервер үрүү хандах боломжийг бас хаана. Хэрэв та өөрийн NTP серверийг гадаад NTP сервертэй синхрончлох хэрэгтэй бол ямар нэг серверийг зөвшөөрөх ёстой. Дэлгэрэнгүй мэдээллийг man:ntp.conf[5] гарын авлагаас үзнэ үү. 
==== Хэрэв та зөвхөн өөрийн сүлжээнд байгаа машинуудыг таны сервертэй цагаа тааруулахыг зөвшөөрөөд, гэхдээ таны серверийн тохиргоог өөрчлөх болон тэгш эрхтэй серверүүд шиг цагийн мэдээллийг хуваахыг зөвшөөрөхгүй бол дээр дурдсаны оронд: [.programlisting] .... restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap .... гэсэн мөрийг бичнэ үү. Энд `192.168.1.0` нь таны сүлжээний IP хаяг, `255.255.255.0` нь таны сүлжээний баг болно. [.filename]#/etc/ntp.conf# дотор олон тооны `restrict` тохируулгууд байж болно. Илүү дэлгэрэнгүй мэдээллийг man:ntp.conf[5]-н `Хандалтыг Удирдах Дэмжлэг` дэд хэсгээс үзнэ үү. === NTP Серверийг Ажиллуулах нь NTP серверийг систем ачаалах үед эхлүүлэхийн тулд, `ntpd_enable="YES"` гэсэн мөрийг [.filename]#/etc/rc.conf# файлд нэмж бичих хэрэгтэй. Хэрэв man:ntpd[8]-д нэмэлт тугуудыг өгөх хүсэлтэй бол, [.filename]#/etc/rc.conf# файлд байгаа `ntpd_flags` параметрийг засах хэрэгтэй. Машиныг дахин ачаалалгүйгээр серверийг эхлүүлэхийн тулд, `ntpd` тушаалыг [.filename]#/etc/rc.conf#-д заасан `ntpd_flags` нэмэлт параметрүүдийн хамтаар өгөх хэрэгтэй. Жишээлбэл: [source,shell] .... # ntpd -p /var/run/ntpd.pid .... === ntpd-г Түр зуурын Интернэт Холболттой үед Хэрэглэх нь man:ntpd[8] програм зөв ажиллахын тулд байнгын Интернэт холболт шаардлагагүй. Гэхдээ, хэрэгцээтэй үедээ гадагшаа залгадаг тийм төрлийн түр зуурын холболттой бол, NTP трафикийг гадагшаа залгах болон холболтыг бариад байхаас сэргийлэх нь чухал. Хэрэв та PPP хэрэглэдэг бол, [.filename]#/etc/ppp/ppp.conf# файл дотор байгаа `filter` директивийг ашиглаж болно. Жишээ нь: [.programlisting] .... set filter dial 0 deny udp src eq 123 # Prevent NTP traffic from initiating dial out set filter dial 1 permit 0 0 set filter alive 0 deny udp src eq 123 # Prevent incoming NTP traffic from keeping the connection open set filter alive 1 deny udp dst eq 123 # Prevent outgoing NTP traffic from keeping the connection open set filter alive 2 permit 0/0 0/0 .... 
Дэлгэрэнгүй мэдээллийг man:ppp[8]-н `PACKET FILTERING` хэсгээс болон [.filename]#/usr/shared/examples/ppp/#-д байгаа жишээнүүдээс авч болно. [NOTE] ==== Зарим Интернэт үйлчилгээ үзүүлэгчид бага дугаартай портуудыг хаасан байдаг бөгөөд ингэснээр хариу нь таны машинд хэзээ ч хүрэхгүй болж NTP ажиллахгүй болдог. ==== === Цааших Мэдээлэл NTP серверийн баримтжуулалтыг HTML хэлбэрээр [.filename]#/usr/shared/doc/ntp/#-с олж үзэж болно. [[network-syslogd]] == `syslogd` ашиглан алсын хост руу бүртгэх нь Системийн бүртгэлтэй ажиллах нь аюулгүй байдлын болоод системийг удирдах ажиллагааны чухал асуудал юм. Хостууд дунд зэргийн эсвэл том сүлжээнд тархсан эсвэл тэдгээр нь төрөл бүрийн олон янзын сүлжээний хэсэг болсон байх тохиолдолд эдгээр олон хостын бүртгэлийн файлуудыг монитор хийх нь ихээхэн төвөгтэй болдог. Энэ тохиолдолд алсаас бүртгэхийг тохируулах нь бүх л процессийг илүү тухтай болгодог. Тусгайлан заасан бүртгэх хост руу төвлөрүүлэн бүртгэх нь бүртгэлийн файлын удирдлагатай холбоотой зарим хүндрэлүүдийг багасгаж чаддаг. man:syslogd[8] болон man:newsyslog[8] зэрэг FreeBSD-ийн эх хэрэгслүүдийг ашиглан бүртгэлийн файлын цуглуулга, нийлүүлэлт болон багасгалтыг нэг газар тохируулж болдог. Дараах жишээ тохиргоонд `logserv.example.com` гэж нэрлэгдсэн хост `A` локал сүлжээнээс бүртгэлийн мэдээллийг цуглуулах болно. `logclient.example.com` гэж нэрлэгдсэн хост `B` бүртгэлийн мэдээллийг сервер систем рүү дамжуулах болно. Жинхэнэ тохиргоонд эдгээр хостууд зохих дамжуулах болон буцах DNS эсвэл [.filename]#/etc/hosts# файлд оруулгууд шаардана. Тэгэхгүй бол өгөгдлийг сервер хүлээн авахгүй татгалзах болно. === Бүртгэлийн серверийн тохиргоо Бүртгэлийн серверүүд нь алсын хостуудаас бүртгэлийн мэдээллийг хүлээн авахаар тохируулагдсан машинууд юм. Ихэнх тохиолдолд энэ нь тохиргоог хялбар болгох зорилготой бөгөөд зарим тохиолдолд энэ нь удирдлагыг арай сайжруулж байгаа хэлбэр байж болох юм. Аль ч шалтгаан байсан гэсэн үргэлжлүүлэхээсээ өмнө цөөн хэдэн шаардлагыг дурдъя. 
Зөв тохируулсан бүртгэлийн сервер дараах хамгийн бага шаардлагыг хангасан байх шаардлагатай:

* Клиент болон сервер дээр 514-р порт руу UDP-г дамжуулах боломжийг бүрдүүлэх галт хананы дүрэм;
* Клиент машинаас алсын мэдэгдлүүдийг хүлээн авахаар syslogd тохируулагдсан байх;
* syslogd сервер болон бүх клиент машинууд нь дамжуулах болон буцах DNS-ийн хувьд зөв оруулгуудтай эсвэл [.filename]#/etc/hosts# файлд зөв тохируулсан байх шаардлагатай.

Бүртгэлийн серверийг тохируулахын тулд клиент нь [.filename]#/etc/syslog.conf#-д нэмэгдсэн байх ёстой бөгөөд бүртгэх боломжийг зааж өгсөн байх шаардлагатай:

[.programlisting]
....
+logclient.example.com
*.*     /var/log/logclient.log
....

[NOTE]
====
Төрөл бүрийн дэмжигдсэн, байгаа _facility_ буюу _боломжуудын_ талаарх дэлгэрэнгүй мэдээллийг man:syslog.conf[5] гарын авлагын хуудаснаас олж болно.
====

Нэмсэний дараа бүх `facility` мэдэгдлүүд өмнө заасан [.filename]#/var/log/logclient.log# файл руу бүртгэгдэх болно.

Сервер машин дараах тохиргоог бас [.filename]#/etc/rc.conf# файлдаа хийсэн байх шаардлагатай:

[.programlisting]
....
syslogd_enable="YES"
syslogd_flags="-a logclient.example.com -v -v"
....

Эхний тохиргоо нь `syslogd` демоныг эхлүүлэхийг заах бөгөөд хоёр дахь нь клиентийн өгөгдлийг энэ сервер дээр хүлээн авахыг зөвшөөрнө. Сүүлийн `-v -v` хэсэг нь бүртгэж байгаа мэдэгдлүүдийн гаралтыг илүү дэлгэрэнгүй болгоно. Энэ нь facility-г тохируулахад ихээхэн ашигтай байдаг. Администраторууд ямар төрлийн мэдэгдлүүд ямар facility-р бүртгэгдэж байгааг хянах боломжийг энэ нь бүрдүүлдэг.

Олон клиентээс бүртгэлийг хүлээн авахын тулд олон `-a` сонголтыг зааж өгч болно. IP хаягууд болон бүхэл сүлжээний блокийг бас зааж өгч болох бөгөөд боломжит сонголтуудын бүх жагсаалтыг man:syslogd[8] гарын авлагын хуудаснаас үзнэ үү.

Төгсгөлд нь бүртгэлийн файлыг үүсгэх хэрэгтэй. Хэрэглэгсэн арга нь хамаагүй боловч man:touch[1] үүнтэй адил тохиолдлуудад сайн ажилладаг:

[source,shell]
....
# touch /var/log/logclient.log
....
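Because this file will hold records from another host, it is reasonable (a local precaution, not something syslogd requires) to create it readable by root only. The sketch below uses a scratch path under [.filename]#/tmp# so it is self-contained; on the server the path would be [.filename]#/var/log/logclient.log#:

[source,shell]
....
#!/bin/sh
# Create the client log with mode 600 so only the owner can read it.
# LOG points at a scratch path purely for illustration.
LOG=/tmp/logclient.log
install -m 600 /dev/null "$LOG"
ls -l "$LOG"
....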
Энэ үед `syslogd` демоныг дахин ажиллуулж шалгах ёстой:

[source,shell]
....
# service syslogd restart
# pgrep syslog
....

Хэрэв PID буцаагдвал сервер нь амжилттай дахин эхэлсэн гэсэн үг бөгөөд клиентийн тохиргоо ажиллаж эхэлнэ. Хэрэв сервер дахин эхлээгүй бол ямар нэг зүйл болсон эсэхийг [.filename]#/var/log/messages# файл дахь мэдэгдлүүдээс шалгаарай.

=== Клиентийн бүртгэлийн тохиргоо

Бүртгэл илгээгч клиент нь өөр дээрээ хуулбараа үлдээхээс гадна бас бүртгэлийн сервер рүү бүртгэлийн мэдээллийг явуулдаг машин юм. Бүртгэлийн серверүүдийн нэгэн адил клиентүүд нь бас хамгийн бага шаардлагыг хангасан байх ёстой:

* man:syslogd[8] нь бүртгэлийн сервер хүлээн авах ёстой заасан төрлийн мэдэгдлүүдийг бүртгэлийн сервер рүү илгээхээр тохируулагдсан байх ёстой;
* Галт хана UDP пакетуудыг 514-р порт руу зөвшөөрөх ёстой;
* Дамжуулах болон буцах DNS тохируулагдсан эсвэл [.filename]#/etc/hosts# файл зохих оруулгуудтай байх шаардлагатай.

Клиентийн тохиргоо нь серверийнхтэй харьцуулах юм бол арай зөөлөн байдаг. Клиент машин нь [.filename]#/etc/rc.conf# файлдаа дараахийг нэмж өгсөн байх шаардлагатай байдаг:

[.programlisting]
....
syslogd_enable="YES"
syslogd_flags="-s -v -v"
....

Өмнө дурдсаны адил эдгээр тохиргоонууд нь `syslogd` демоныг ачаалж эхлэхэд эхлүүлэхийг заах бөгөөд бүртгэх мэдэгдлүүдийг дэлгэрэнгүйгээр харуулах болно. `-s` сонголт нь бусад хостуудаас бүртгэлийг энэ клиент хүлээн авахаас сэргийлдэг.

Facility нь мэдэгдэл үүсгэгдэж байгаа тэр системийн хэсгийг тайлбарладаг. Жишээ нь ftp болон ipfw нь хоёулаа facility юм. Эдгээр хоёр үйлчилгээний хувьд бүртгэлийн мэдэгдлүүд үүсэхэд ихэвчлэн дээрх хоёр хэрэгслийг бүртгэлийн мэдэгдэл бүртээ агуулсан байдаг. Facility нь бүртгэлийн мэдэгдэл ямар чухлыг тэмдэглэхэд хэрэглэгдэх дараалал эсвэл түвшинтэй байдаг. Хамгийн түгээмэл нь `warning` ба `info` юм. Боломжит бүх facility болон дарааллуудын жагсаалтыг man:syslog[3] гарын авлагын хуудаснаас үзнэ үү.
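To make the facility/level idea concrete, a client's [.filename]#/etc/syslog.conf# can forward only selected facilities instead of everything. The hostname below is the example server used throughout this section; the selectors chosen are just one plausible policy:

[.programlisting]
....
# Forward FTP transfer records and anything at warning or above;
# all other messages stay in the local log files only.
ftp.info                        @logserv.example.com
*.warning                       @logserv.example.com
....

See man:syslog.conf[5] for the full selector syntax.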
Бүртгэлийн серверийг клиентийн [.filename]#/etc/syslog.conf# файлд заасан байх шаардлагатай. Энэ жишээн дээр алсын сервер рүү бүртгэлийн өгөгдлийг илгээхийн тулд `@` тэмдгийг ашигласан бөгөөд доор дурдсан мөртэй төстэй харагдана: [.programlisting] .... *.* @logserv.example.com .... Нэмсэний дараа өөрчлөлтийг хүчинтэй болгохын тулд `syslogd`-г дахин эхлүүлэх шаардлагатай: [source,shell] .... # service syslogd restart .... Сүлжээгээр бүртгэлийн мэдэгдлүүдийг илгээж байгаа эсэхийг тест хийхийн тулд клиент дээр man:logger[1]-г ашиглаж мэдэгдлийг `syslogd` руу илгээнэ: [source,shell] .... # logger "Test message from logclient" .... Энэ мэдэгдэл клиент дээрх [.filename]#/var/log/messages# болон сервер дээрх [.filename]#/var/log/logclient.log# файлд одоо орсон байх ёстой. === Бүртгэлийн серверүүдийг дибаг хийх Зарим тохиолдолд хэрэв бүртгэлийн сервер дээр мэдэгдлүүд нь хүлээн авагдаагүй бол дибаг хийх шаардлагатай байж болох юм. Хэд хэдэн шалтгаанаас болж ийм байдалд хүрч болох юм. Хамгийн түгээмэл хоёр нь сүлжээний холболтын болон DNS-тэй холбоотой асуудлууд юм. Эдгээр тохиолдлуудыг тест хийхийн тулд хоёр хост хоёулаа [.filename]#/etc/rc.conf# файлд заагдсан хостын нэрээрээ нэг нэгэн рүүгээ хүрч чадаж байгааг шалгах хэрэгтэй. Хэрэв энэ зөв ажиллаж байгаа бол [.filename]#/etc/rc.conf# файлд `syslogd_flags` тохиргоог өөрчлөх шаардлагатай болно. Дараах жишээн дээр [.filename]#/var/log/logclient.log# нь хоосон бөгөөд [.filename]#/var/log/messages# файл нь амжилтгүй болсон шалтгааныг харуулна. Дибаг хийж байгаа гаралтыг илүү дэлгэрэнгүй харуулахын тулд дараах жишээтэй төстэйгөөр `syslogd_flags` тохируулгыг өөрчилж дахин ачаалах хэрэгтэй: [.programlisting] .... syslogd_flags="-d -a logclien.example.com -v -v" .... [source,shell] .... # service syslogd restart .... Доор дурдсантай төстэй дибаг өгөгдөл дахин ачаалсны дараа дэлгэц дээр хурдан гарч өнгөрнө: [source,shell] .... 
logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel Logging to FILE /var/log/messages syslogd: kernel boot file is /boot/kernel/kernel cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com; rejected in rule 0 due to name mismatch. .... Мэдэгдлүүд нэр зөрснөөс болоод дамжихгүй байгааг эндээс харж болно. Тохиргоог алхам алхмаар дахин шалгасны дараа [.filename]#/etc/rc.conf# дахь дараах мөр буруу бичигдсэн бөгөөд асуудалтай байгааг олж харна: [.programlisting] .... syslogd_flags="-d -a logclien.example.com -v -v" .... Энэ мөр `logclien` биш `logclient` гэдгийг агуулсан байх ёстой. Зөв болгож засан дахин ачаалсны дараа хүлээж байсан үр дүнгээ харах болно: [source,shell] .... # service syslogd restart logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel syslogd: kernel boot file is /boot/kernel/kernel logmsg: pri 166, flags 17, from logserv.example.com, msg Dec 10 20:55:02 logserv.example.com syslogd: exiting on signal 2 cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com; accepted in rule 0. logmsg: pri 15, flags 0, from logclient.example.com, msg Dec 11 02:01:28 trhodes: Test message 2 Logging to FILE /var/log/logclient.log Logging to FILE /var/log/messages .... Энэ үед мэдэгдлүүдийг зөв хүлээн аван зөв файлд бичих болно. === Аюулгүй байдлын хувьд бодолцох зүйлс Сүлжээний аль ч үйлчилгээний нэгэн адил энэ тохиргоог хийхээсээ өмнө аюулгүй байдлын шаардлагуудыг бодолцох ёстой. Заримдаа бүртгэлийн файлууд нь локал хост дээр идэвхжүүлсэн үйлчилгээнүүд, хэрэглэгчдийн бүртгэл болон тохиргооны өгөгдлийн талаарх эмзэг өгөгдлүүдийг агуулсан байж болох юм. 
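Where log files hold sensitive data, restricting who can read them matters, and man:newsyslog[8] can apply a mode each time it rotates a log. A hedged [.filename]#/etc/newsyslog.conf# entry follows; the path matches the earlier examples, while the count and size values are arbitrary:

[.programlisting]
....
# logfilename            [owner:group]  mode  count  size  when  flags
/var/log/logclient.log                  600   7      100   *     JC
....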
Network data sent from the client to the server is neither encrypted nor password protected. If encryption is needed, package:security/stunnel[] can be used to transmit the data over an encrypted tunnel.

Local security is also an issue. Log files are not encrypted during use or after log rotation. Local users may access these files to gain additional insight into the system configuration. In those cases, setting proper permissions on these files is critical. man:newsyslog[8] supports setting permissions on newly created and rotated log files. Setting log files to mode `600` should prevent unwanted snooping by local users.
diff --git a/documentation/content/nl/books/handbook/mac/_index.adoc b/documentation/content/nl/books/handbook/mac/_index.adoc
index 1cb83858c8..6285b8a0a8 100644
--- a/documentation/content/nl/books/handbook/mac/_index.adoc
+++ b/documentation/content/nl/books/handbook/mac/_index.adoc
@@ -1,943 +1,941 @@
---
title: Chapter 17. Mandatory Access Control (MAC)
part: Part III. System Administration
prev: books/handbook/jails
next: books/handbook/audit
showBookMenu: true
weight: 21
params:
  path: "/books/handbook/mac/"
---

[[mac]]
= Mandatory Access Control (MAC)
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 17
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/mac/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]
ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[mac-synopsis]]
== Synopsis

In FreeBSD 5.X, new security extensions were introduced from the TrustedBSD project, based on the POSIX(R).1e draft. Two of the most significant new security mechanisms are file system Access Control Lists (ACLs) and Mandatory Access Control (MAC) facilities. Mandatory Access Control allows new access control modules to be loaded, implementing new security policies. Some of these provide protection for a narrow subset of the system, hardening a particular service, while others provide comprehensive labeled security across all subjects and objects.
The mandatory part of the definition comes from the fact that the enforcement of the controls is performed by administrators and the system, and is not left up to the discretion of users, as is the case with discretionary access control (DAC, the standard file and System V IPC permissions on FreeBSD).

This chapter focuses on the Mandatory Access Control Framework (MAC Framework) and a set of pluggable security policy modules which enable various security mechanisms.

After reading this chapter, you will know:

* What MAC security policy modules are currently included in FreeBSD and the mechanisms associated with them.
* What MAC security policy modules implement, as well as the difference between labeled and non-labeled policies.
* How to efficiently configure a system to use the MAC framework.
* How to configure the policies of the different security policy modules included with the MAC framework.
* How to implement a more secure environment using the MAC framework and the examples shown.
* How to test the MAC configuration to ensure the framework has been properly implemented.

Before reading this chapter, you should:

* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,UNIX® Basics]);
* Be familiar with the basics of kernel configuration and compilation (crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]);
* Have some familiarity with security and how it applies to FreeBSD (crossref:security[security,Security]).

[WARNING]
====
Improper use of the information in this chapter may cause loss of system access, aggravation of users, or the inability to access the features provided by X11. More importantly, MAC should not be relied upon as the sole means of securing a system.
The MAC framework only augments an existing security policy; without sound security practices and regular security checks, a system is never completely secure.

It should also be noted that the examples contained in this chapter are just that, examples. It is not recommended to roll them out on a production system. Implementing the various security policy modules takes a good deal of thought and testing. Someone who does not fully understand exactly how everything works may well end up going back through the entire system, top to bottom and back again, reconfiguring many files and directories.
====

=== What Will Not Be Covered

This chapter covers a broad range of security topics relating to the MAC framework. The development of new MAC security policy modules will not be covered. A number of modules included with the MAC framework have specific characteristics intended for testing and for developing new modules. These include man:mac_test[4], man:mac_stub[4] and man:mac_none[4]. For more information on these security policy modules and the various mechanisms they provide, refer to their manual pages.

[[mac-inline-glossary]]
== Key Terms in This Chapter

Before reading this chapter, a few key terms must be explained. This will hopefully clear up any confusion and avoid the abrupt introduction of new terms and information.

* _compartment_: a compartment is a set of programs and data to be partitioned or separated, where users must be given explicit access on a system. A compartment also represents a grouping, such as a work group, department, project, or topic. Using compartments, it is possible to implement a need-to-know security policy.
* _high-watermark_: a high-watermark policy is one which permits the raising of security levels for the purpose of accessing information present at a higher level. In most cases, the original level is restored after the process is complete. Currently, the FreeBSD MAC framework does not include a policy for this, but the definition is included for completeness.
* _integrity_: integrity, as a key concept, is the level of trust that can be placed in data. As the integrity of data is raised, so is the trust that can be placed in that data.
* _label_: a label is a security attribute which can be applied to files, directories, or other items in a system. It can be considered a confidentiality stamp; when a label is placed on a file, it describes the security properties of that specific file and only permits access by files, users, resources, and so on, with a similar security setting. The meaning and interpretation of label values depends on the policy configuration: while some policies treat a label as representing the integrity or secrecy of an object, other policies may use labels to hold rules for access.
* _level_: the increased or decreased setting of a security attribute. As the level increases, its security is considered to be elevated as well.
* _low-watermark_: a low-watermark policy is one which permits lowering security levels for the purpose of accessing information which is less secure. In most cases, the original security level of the user is restored after the process is complete. The only security policy module in FreeBSD to use this is man:mac_lomac[4].
* _multilabel_: the `multilabel` property is a file system option which can be set in single-user mode using man:tunefs[8], during boot via the man:fstab[5] file, or during the creation of a new file system. This option permits an administrator to apply different MAC labels to different objects. It only applies to security policy modules which support labeling.
* _object_: an object or system object is an entity through which information flows under the direction of a _subject_. This includes directories, files, fields, screens, keyboards, memory, magnetic storage, printers, and any other conceivable device through which data can be moved or stored. Basically, an object is a data container or a system resource; access to an _object_ effectively means access to its data.
* _policy_: a collection of rules which defines how objectives are to be achieved. A _policy_ usually documents how certain items are to be handled. In this chapter, the term _policy_ is taken in this context to mean a _security policy_; that is, a collection of rules which controls the flow of data and information and defines who has access to which data and information.
* _sensitivity_: usually used when discussing MLS. A sensitivity level is a term used to describe how important or secret the data should be. As the sensitivity level increases, so does the importance of the secrecy, or confidentiality, of the data.
* _single label_: a single label is used when an entire file system uses one label to enforce access control over the flow of data.
When this is set for a file system, which is the case whenever the `multilabel` option is not used, all files conform to the same label setting.
* _subject_: a subject is any active entity that causes information to flow between _objects_; for example, a user, a user process, a system process, and so on. On FreeBSD, this is almost always a thread acting in a process on behalf of a user.

[[mac-initial]]
== Explanation of MAC

With all of these new terms in mind, consider how the MAC framework can augment the security of a system as a whole. The various security policy modules provided by the MAC framework could be used to protect the network and file systems, to block users from accessing certain ports and sockets, and more. Perhaps the best use of the policy modules is to combine them, by loading several security policy modules at a time, in order to provide a multi-layered security environment. In a multi-layered security environment, multiple policy modules are in effect to keep security in check. This approach differs from a hardening policy, which typically hardens elements of a system that are used only for specific purposes. Its only downside is the administrative overhead in cases of multiple file system labels, configuring network access per user, and so on. These downsides are minimal when compared to the lasting effect of the framework. For instance, the ability to pick and choose which policies are required for a specific configuration keeps the administrative burden as low as possible.
Reducing support for unneeded policies can increase the overall availability of the system as well as the flexibility of choice. A good implementation considers the overall security requirements and then effectively implements the various security policy modules offered by the framework.

A system using MAC features should thus at least guarantee that a user is not permitted to change security attributes at will. All user utilities and scripts must work within the constraints of the access rules provided by the selected security policy modules. The foregoing also implies that total control of the MAC access rules lies with the system administrator.

It is the duty of the system administrator to carefully select the correct security policy modules. For some environments it may be necessary to limit access over the network. In such cases, the man:mac_portacl[4], man:mac_ifoff[4], and even man:mac_biba[4] policy modules make good starting points. In other cases, strict confidentiality of file system objects might be required. Policy modules such as man:mac_bsdextended[4] and man:mac_mls[4] exist for this purpose.

Policy decisions could be made based on network configuration. Perhaps only certain users should be permitted to use man:ssh[1] to access the network or the Internet. In that case, the man:mac_portacl[4] policy module is the right choice. But what should be done for file systems? Should all access to certain directories be severed from other user groups or specific users, or should access by users or utilities to specific files be configured by marking certain objects as classified?
In the file system case, access to objects might be considered confidential for some users, but not for others. For example, a large development team might be broken up into smaller groups of individuals. Developers in project A should not be permitted to access objects written by developers in project B, yet they might need to access objects written by developers in project C; that is quite a situation. Using the different security policy modules provided by the MAC framework, users can be divided into these groups and then given access to the appropriate areas without fear of information leakage.

Thus, each security policy module has a unique way of dealing with the overall security of a system. Module selection should be based on a well thought out security policy, which in many cases will need to be revised and reimplemented on the system. Understanding the different security policy modules offered by the MAC framework helps administrators choose the best policies for their situations.

The default FreeBSD kernel does not include support for the MAC framework, so the following kernel option must be added before trying any of the examples or information in this chapter:

[.programlisting]
....
options MAC
....

Then the kernel must be rebuilt and reinstalled.

[CAUTION]
====
While the various manual pages for MAC policy modules state that they may be built into the kernel, it is possible to lock the system out of the network and more. Implementing MAC is much like implementing a firewall, and care must be taken to prevent being completely locked out of the system.
The ability to revert to a previous configuration should be considered, and implementing MAC remotely should be done with extreme caution.
====

[[mac-understandlabel]]
== Understanding MAC Labels

A MAC label is a security attribute which may be applied to subjects and objects throughout the system. When setting a label, the user must be able to comprehend exactly what is being done. The attributes available on an object depend on the policy module loaded, and policy modules interpret their attributes in quite different ways. If a policy is improperly configured due to lack of comprehension, the result may be unexpected and perhaps undesired behavior of the system.

The security label on an object is used as part of a security access control decision by a policy. With some policies, the label by itself contains all information necessary to make a decision; in other models, the labels may be processed as part of a larger rule set, and so on.

For instance, setting the label `biba/low` on a file represents a label maintained by the Biba security policy module, with a value of "low".

A few policy modules which support the labeling feature in FreeBSD offer three specific predefined labels: low, high, and equal. Although they enforce access control in different ways with each policy module, it is guaranteed that the `low` label is the lowest possible setting, the `equal` label sets the subject or object to be disabled or unaffected, and the `high` label enforces the highest possible setting available in the Biba and MLS policy modules.

Within single-label file system environments, only one label may be used on objects.
This enforces one set of access permissions across the entire system, which for many environments is all that is required. There are, however, a few cases where multiple labels may be set on subjects or objects in the file system. For those cases, the `multilabel` option may be passed to man:tunefs[8].

In the case of Biba and MLS, a numeric label may be set to indicate the precise level of hierarchical control. This numeric level is used to partition or sort information into different groups for classification, permitting access only to that group or to a higher group level. In most cases the administrator will only be setting up a single label to use throughout the file system.

_Hold on, this sounds just like DAC! I thought MAC gave control strictly to the administrator._ That still holds true; `root` is still the one in control and the one who configures the policies so that users are placed in the appropriate categories and/or access levels. Moreover, many policy modules can restrict `root` as well. Control over objects is then released to a group, but `root` may revoke or modify the settings at any time. This is the hierarchical/clearance model covered by policies such as Biba and MLS.

=== Label Configuration

Virtually all aspects of label policy configuration are performed using base system utilities. These commands provide a simple interface for object or subject configuration, and for the manipulation and verification of that configuration. All configuration may be done with the man:setfmac[8] and man:setpmac[8] utilities. `setfmac` is used to set MAC labels on system objects, while `setpmac` is used to set labels on system subjects:

[source,shell]
....
# setfmac biba/high test
....
If the above command completed without error, the prompt is simply returned. These commands produce no output unless an error occurred, much like the man:chmod[1] and man:chown[8] commands. In some cases the error may be `Permission denied`, which usually occurs when the label is being set or modified on a restricted object. The system administrator may use the following commands to overcome this:

[source,shell]
....
# setfmac biba/high test
Permission denied
# setpmac biba/low setfmac biba/high test
# getfmac test
test: biba/high
....

As seen above, `setpmac` can be used to override a policy module's settings by assigning a different label to the invoked process. The `getpmac` utility is usually applied to currently running processes, such as sendmail: although it takes a process ID in place of a command, the logic is the same. If users attempt to manipulate a file outside their access, subject to the rules of the loaded policy modules, the `Operation not permitted` error is returned by the `mac_set_link` function.

==== Common Label Types

The man:mac_biba[4], man:mac_mls[4] and man:mac_lomac[4] policy modules provide the ability to assign simple labels. These can be high, equal, and low, described as follows:

* The `low` label is the lowest possible label setting an object or subject may have. Setting this on objects or subjects blocks their access to objects or subjects marked high.
* The `equal` label should only be placed on objects that are to be exempt from the policy.
* The `high` label grants an object or subject the highest possible setting.

With respect to each policy module, each of those settings results in a different information flow directive.
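As an illustrative sketch of the three predefined labels (the file names here are hypothetical), each could be assigned with `setfmac` in the same way as shown earlier:

[source,shell]
....
# setfmac biba/low public-notes.txt
# setfmac biba/equal scratch.txt
# setfmac biba/high payroll.txt
....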
Reading the relevant manual pages will further explain the traits of these default label configurations.

===== Advanced Label Configuration

These are the labels with numeric grades used in the form `comparison:compartment+compartment`:

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....

This may be interpreted as:

"Biba Policy Label"/"Grade 10":"Compartments 2, 3 and 6": ("grade 5 ...")

In this example, the first grade is the "effective grade" with the "effective compartments", the second grade is the low grade, and the last one is the high grade. In most configurations these settings are not used; indeed, they are settings for advanced configurations.

When applied to system objects, they only have a current grade/compartments, as opposed to system subjects, for which they reflect the range of available rights in the system, and to network interfaces, where they are used for access control.

The grade and compartments in a subject and object pair are used to construct a relationship known as "dominance", in which a subject dominates an object, neither dominates the other, or both dominate each other. The "both dominate" case occurs when the two labels are equal. Due to the information flow nature of Biba, a user has rights to a set of compartments ("need to know") that might correspond to projects, but objects also have a set of compartments. Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted.

==== Users and Label Settings

Users are required to have labels themselves so that their files and processes properly interact with the security policy defined on the system. This is configured through [.filename]#login.conf# by use of login classes.
Every policy module that uses labels implements the user class setting. An example entry containing a setting for every policy module is shown below:

[.programlisting]
....
default:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/shared/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:
....

The `label` option is used to set the login class default label which is enforced by MAC. Users are never permitted to modify this value, so from the user's perspective it is not optional. In a real configuration, however, no administrator will want to enable every policy module. It is strongly recommended to read the rest of this chapter before implementing any of the above configuration.

[NOTE]
====
Users may change their label after the initial login, subject to the constraints of the policy. The example above tells the Biba policy that a process's minimum integrity is 5, its maximum is 15, and the default effective label is 10. The process runs at 10 until it changes label, perhaps because the user uses `setpmac`, which at login is constrained by Biba to the configured range.
====

In all cases, after a modification to [.filename]#login.conf#, the login class capability database must be rebuilt using `cap_mkdb`. This also applies to every forthcoming example and discussion.
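Rebuilding the database is a single command; as a sketch, assuming the system-wide [.filename]#/etc/login.conf# was the file that was edited:

[source,shell]
....
# cap_mkdb /etc/login.conf
....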
It is important to note that many sites may have a particularly large number of users, requiring several different login classes. In-depth planning is required, as this can otherwise become extremely difficult to maintain.

==== Network Interfaces and Label Settings

Labels may also be set on network interfaces to help control the flow of data across the network. In all cases they operate in the same way the policies operate with respect to objects. Users at high settings in `biba`, for example, are not permitted to access interfaces with a low label. `maclabel` may be passed to `ifconfig` when setting the MAC label on a network interface:

[source,shell]
....
# ifconfig bge0 maclabel biba/equal
....

This example sets the MAC label `biba/equal` on the man:bge[4] interface. When using a setting similar to `biba/high(low-high)`, the full label must be specified, otherwise an error is returned.

Each policy module which supports labeling has a tunable which may be used to disable the MAC label on network interfaces. Setting the label to `equal` has a similar effect. These tunables can be reviewed in the output of `sysctl`, in the policy manual pages, and even later in this chapter.

=== Singlelabel or Multilabel?

By default, the system uses the `singlelabel` option. What does this mean to an administrator? There are a number of differences which, in their own right, offer pros and cons for the flexibility of a system's security model.

With `singlelabel`, only one label, for instance `biba/high`, may be used for each subject or object. This provides for lower administration overhead, but decreases the flexibility of policies which support labeling.
Many administrators will want to use the `multilabel` option in their security model. The `multilabel` option permits each subject or object to have its own independent MAC label, in place of the standard `singlelabel` option, which allows only one label throughout a partition. The `multilabel` and `single` label options are only required for the policies which implement the labeling feature, including the Biba, Lomac, MLS, and SEBSD policies.

In many cases, `multilabel` may not need to be set at all. Consider the following situation and security model:

* A FreeBSD web server using the MAC framework and a mix of the various policies.
* This machine only requires one label, `biba/high`, for everything in the system. Here the file system does not require the `multilabel` option, as a single label is always in effect.
* But, because this machine will act as a web server, the web server should be run at `biba/low` to prevent write-up capabilities.

The Biba policy and how it works is discussed later, so if the previous comment was hard to understand, just read on and come back to it afterwards. The server could use a separate partition set at `biba/low` for most, if not all, of its runtime state. Much is lacking from this example, such as the restrictions on data and user settings; it is only a quick example to support the statement above.

If any of the non-labeling policies are used, the `multilabel` option is never required. These include the `seeotheruids`, `portacl` and `partition` policies.

Using `multilabel` on a partition and establishing a security model based on `multilabel` functionality opens the door for higher administrative overhead, as everything in the file system is given a label.
This includes directories, files, and even device nodes.

The following command sets `multilabel` on a file system so that it may have multiple labels. This may only be done in single-user mode:

[source,shell]
....
# tunefs -l enable /
....

This is not a requirement for the swap file system.

[NOTE]
====
Some users have experienced problems with setting the `multilabel` flag on the root partition. If this is the case, please review the <> of this chapter.
====

[[mac-planning]]
== Planning the Security Configuration

Whenever a new technology is implemented, a planning phase is always a good idea. During the planning stages, an administrator should in general look at the "big picture", trying to keep in view at least the following:

* The implementation requirements;
* The implementation goals.

For MAC installations, these include:

* How to classify information and resources available on the target systems.
* Which information or resources to restrict access to, along with the type of restrictions that should be applied.
* Which MAC module(s) will be required to achieve this goal.

It is always possible to reconfigure and change the system resources and security settings; however, it is often very inconvenient to search through the system and fix existing files and user accounts. Planning helps to ensure a trouble-free and efficient trusted system implementation. A trial run of the trusted system, including its configuration, is often vital and definitely beneficial _before_ a MAC implementation is used on production systems. Simply letting loose on a system with MAC is a recipe for failure.

Different environments may have different needs and requirements.
Establishing an in-depth and complete security profile will decrease the need for changes once the system goes live. The following sections therefore cover the different modules available to administrators, describe their use and configuration, and in some cases provide insight on which situations they are best suited for. For instance, a web server might deploy the man:mac_biba[4] and man:mac_bsdextended[4] policies, while in other cases, such as a machine with very few local users, man:mac_partition[4] might be a good choice.

[[mac-modules]]
== Module Configuration

Every module included with the MAC framework may, as noted above, be either compiled into the kernel or loaded as a run-time kernel module. The recommended method is to add the module name to [.filename]#/boot/loader.conf# so that it loads during the initial boot.

The following sections discuss the various MAC modules and cover their features. Deploying them in a specific environment is also a consideration of this chapter. Some modules support the use of labeling, which is controlling access by enforcing a label such as "this is allowed and this is not". A label configuration file may control how files may be accessed, how network communication may be exchanged, and more. The previous section showed how the `multilabel` flag can be set on file systems to enable per-file or per-partition access control. A single-label configuration enforces only one label across the system, which is why the `tunefs` option is called `multilabel`.
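For example, drawing on the boot options listed in the following sections, loading two of the policy modules at boot might look like this in [.filename]#/boot/loader.conf#:

[.programlisting]
....
mac_seeotheruids_load="YES"
mac_bsdextended_load="YES"
....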
[[mac-seeotheruids]] == MAC-module seeotheruids Modulenaam: [.filename]#mac_seeotheruids.ko# Kernelinstelling: `options MAC_SEEOTHERUIDS` Opstartoptie: `mac_seeotheruids_load="YES"` De module man:mac_seeotheruids[4] imiteert de `sysctl`-tunables `security.bsd.see_other_uids` en `security.bsd.see_other_gids` en breidt deze uit. Voor dit beleid hoeven geen labels ingesteld te worden en het werkt transparant samen met de andere modules. Na het laden van de module kunnen de volgende `sysctl`-tunables gebruikt worden om de opties te beheren: * `security.mac.seeotheruids.enabled` schakelt de opties van de module in en gebruikt de standaardinstellingen. Deze standaardinstellingen ontzeggen gebruikers de mogelijkheid processen en sockets te zien die eigendom zijn van andere gebruikers. * `security.mac.seeotheruids.specificgid_enabled` staat toe dat een bepaalde groep niet onder dit beleid valt. Om bepaalde groepen van dit beleid uit te sluiten, kan de `sysctl`-tunable `security.mac.seeotheruids.specificgid=XXX` gebruikt worden. In het bovenstaande voorbeeld dient _XXX_ vervangen te worden door het numerieke ID van een groep die uitgesloten moet worden van de beleidsinstelling. * `security.mac.seeotheruids.primarygroup_enabled` wordt gebruikt om specifieke primaire groepen uit te sluiten van dit beleid. Als deze tunable wordt gebruikt, mag `security.mac.seeotheruids.specificgid_enabled` niet gebruikt worden. [[mac-bsdextended]] == MAC-module bsdextended Modulenaam: [.filename]#mac_bsdextended.ko# Kernelinstelling: `options MAC_BSDEXTENDED` Opstartoptie: `mac_bsdextended_load="YES"` De module man:mac_bsdextended[4] dwingt de bestandssysteemfirewall af. Het beleid van deze module biedt een uitbreiding van het standaard rechtenmodel voor bestandssystemen, waardoor een beheerder een firewallachtige verzameling met regels kan maken om bestanden, programma's en mappen in de bestandssysteemhiërarchie te beschermen.
Wanneer geprobeerd wordt om toegang tot een object in het bestandssysteem te krijgen, wordt de lijst met regels doorlopen totdat er òf een overeenkomstige regel is gevonden òf het einde van de lijst is bereikt. Dit gedrag kan veranderd worden met de man:sysctl[8]-parameter `security.mac.bsdextended.firstmatch_enabled`. Net zoals bij andere firewall-modules in FreeBSD kan een bestand met regels voor toegangscontrole worden aangemaakt en tijdens het opstarten door het systeem worden gelezen door een man:rc.conf[5]-variabele te gebruiken. De lijst met regels kan ingevoerd worden met het hulpprogramma man:ugidfw[8], dat een syntaxis heeft die lijkt op die van man:ipfw[8]. Meer hulpprogramma's kunnen geschreven worden met de functies in de bibliotheek man:libugidfw[3]. Bij het werken met deze module dient bijzondere voorzichtigheid in acht te worden genomen. Verkeerd gebruik kan toegang tot bepaalde delen van het bestandssysteem blokkeren. === Voorbeelden Nadat de module man:mac_bsdextended[4] is geladen, kunnen met het volgende commando de huidige regels getoond worden: [source,shell] .... # ugidfw list 0 slots, 0 rules .... Zoals verwacht zijn er geen regels ingesteld. Dit betekent dat alles nog steeds volledig toegankelijk is. Om een regel te maken die alle toegang voor alle gebruikers behalve `root` ontzegt: [source,shell] .... # ugidfw add subject not uid root new object not uid root mode n .... Dit is een slecht idee, omdat het voorkomt dat alle gebruikers ook maar het meest eenvoudige commando kunnen uitvoeren, zoals `ls`. Een betere lijst met regels zou kunnen zijn: [source,shell] .... # ugidfw set 2 subject uid gebruiker1 object uid gebruiker2 mode n # ugidfw set 3 subject uid gebruiker1 object gid gebruiker2 mode n .... Hiermee wordt alle toegang, inclusief het tonen van mapinhoud, tot de thuismap van `_gebruiker2_` ontzegd voor de gebruikersnaam `gebruiker1`. In plaats van `gebruiker1` zou `not uid _gebruiker2_` kunnen worden opgegeven.
Hierdoor worden dezelfde restricties als hierboven actief voor alle gebruikers in plaats van voor slechts één gebruiker. [NOTE] ==== De gebruiker `root` blijft onaangetast door deze wijzigingen. ==== Met deze informatie zou een basisbegrip moeten zijn ontstaan over hoe de module man:mac_bsdextended[4] gebruikt kan worden om een bestandssysteem te beschermen. Meer informatie staat in de hulppagina's van man:mac_bsdextended[4] en man:ugidfw[8]. [[mac-ifoff]] == MAC-module ifoff Modulenaam: [.filename]#mac_ifoff.ko# Kernelinstelling: `options MAC_IFOFF` Opstartoptie: `mac_ifoff_load="YES"` De module man:mac_ifoff[4] bestaat alleen om netwerkinterfaces tijdens het draaien uit te schakelen en om te verhinderen dat netwerkinterfaces tijdens het initiële opstarten worden geactiveerd. Er hoeven geen labels ingesteld te worden, noch is deze module afhankelijk van andere MAC-modules. Het meeste beheer wordt gedaan met de `sysctl`-tunables die hieronder zijn vermeld. * `security.mac.ifoff.lo_enabled` schakelt alle verkeer op het teruglusinterface (man:lo[4]) in of uit. * `security.mac.ifoff.bpfrecv_enabled` schakelt alle verkeer op het Berkeley Packet Filterinterface (man:bpf[4]) in of uit. * `security.mac.ifoff.other_enabled` schakelt alle verkeer op alle andere interfaces in of uit. man:mac_ifoff[4] wordt het meest gebruikt om netwerken te monitoren in een omgeving waar netwerkverkeer niet toegestaan zou moeten zijn tijdens het opstarten. Een ander voorgesteld gebruik zou het schrijven van een script zijn dat package:security/aide[] gebruikt om automatisch netwerkverkeer te blokkeren wanneer het nieuwe of veranderde bestanden in beschermde mappen vindt. [[mac-portacl]] == MAC-module portacl Modulenaam: [.filename]#mac_portacl.ko# Kernelinstelling: `options MAC_PORTACL` Opstartoptie: `mac_portacl_load="YES"` De module man:mac_portacl[4] wordt gebruikt om het binden aan lokale TCP- en UDP-poorten te begrenzen door een waaier aan `sysctl`-variabelen te gebruiken.
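Net als de andere modules kan man:mac_portacl[4] ook zonder herstart geladen worden met man:kldload[8], uitgaande van de hierboven genoemde modulenaam:

[source,shell]
....
# kldload mac_portacl
....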
In essentie stelt man:mac_portacl[4] niet-`root`-gebruikers in staat om aan gespecificeerde geprivilegieerde poorten te binden, dus poorten lager dan 1024. Eenmaal geladen zal deze module het MAC-beleid op alle sockets aanzetten. De volgende tunables zijn beschikbaar: * `security.mac.portacl.enabled` schakelt het beleid volledig in of uit. * `security.mac.portacl.port_high` stelt het hoogste poortnummer in waarvoor man:mac_portacl[4] bescherming biedt. * `security.mac.portacl.suser_exempt` sluit de gebruiker `root` uit van dit beleid wanneer het op een waarde anders dan nul wordt ingesteld. * `security.mac.portacl.rules` specificeert het eigenlijke beleid van `mac_portacl`; zie onder. Het eigenlijke beleid van `mac_portacl`, zoals gespecificeerd in de sysctl `security.mac.portacl.rules`, is een tekststring van de vorm `regel[,regel,...]` met zoveel regels als nodig. Elke regel heeft de vorm `idtype:id:protocol:poort`. De parameter [parameter]#idtype# kan `uid` of `gid` zijn en wordt gebruikt om de parameter [parameter]#id# als respectievelijk een gebruikers-ID of groeps-ID te interpreteren. De parameter [parameter]#protocol# bepaalt of de regel op TCP of UDP wordt toegepast door de parameter op `tcp` of `udp` in te stellen. De laatste parameter [parameter]#poort# is het poortnummer waaraan de gespecificeerde gebruiker of groep zich mag binden. [NOTE] ==== Aangezien de regelverzameling direct door de kernel wordt geïnterpreteerd, kunnen alleen numerieke waarden gebruikt worden voor de gebruikers-ID, de groeps-ID en de poort. Voor gebruikers, groepen en poortdiensten kunnen dus geen namen gebruikt worden. ==== Standaard kunnen op UNIX(R)-achtige systemen poorten lager dan 1024 alleen aan geprivilegieerde processen gebonden worden, dus aan processen die als `root` draaien.
Om man:mac_portacl[4] toe te staan om ongeprivilegieerde processen aan poorten lager dan 1024 te laten binden, moet deze standaard UNIX(R)-beperking uitgezet worden. Dit kan bereikt worden door de man:sysctl[8]-variabelen `net.inet.ip.portrange.reservedlow` en `net.inet.ip.portrange.reservedhigh` op nul te zetten. Zie de onderstaande voorbeelden of bekijk de handleidingpagina van man:mac_portacl[4] voor meer informatie. === Voorbeelden De volgende voorbeelden zouden de bovenstaande discussie wat moeten toelichten: [source,shell] .... # sysctl security.mac.portacl.port_high=1023 # sysctl net.inet.ip.portrange.reservedlow=0 net.inet.ip.portrange.reservedhigh=0 .... Eerst wordt man:mac_portacl[4] ingesteld om de standaard geprivilegieerde poorten te dekken en worden de normale bindbeperkingen van UNIX(R) uitgeschakeld. [source,shell] .... # sysctl security.mac.portacl.suser_exempt=1 .... De gebruiker `root` zou niet beperkt moeten worden door dit beleid, stel `security.mac.portacl.suser_exempt` dus in op een waarde anders dan nul. De module man:mac_portacl[4] is nu ingesteld om zich op dezelfde manier te gedragen als UNIX(R)-achtige systemen zich standaard gedragen. [source,shell] .... # sysctl security.mac.portacl.rules=uid:80:tcp:80 .... Sta de gebruiker met UID 80 (normaliter de gebruiker `www`) toe om zich aan poort 80 te binden. Dit kan gebruikt worden om de gebruiker `www` toe te staan een webserver te draaien zonder ooit `root`-rechten te hebben. [source,shell] .... # sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995 .... Sta de gebruiker met UID 1001 toe om zich aan de TCP-poorten 110 ("pop3") en 995 ("pop3s") te binden. Dit staat deze gebruiker toe om een server te starten die verbindingen accepteert op de poorten 110 en 995.
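Merk op dat iedere toewijzing aan `security.mac.portacl.rules` de volledige regelverzameling vervangt; het tweede voorbeeld hierboven heft de regel voor UID 80 dus weer op. Om meerdere regels tegelijk actief te hebben, dienen ze in één kommagescheiden string gecombineerd te worden, bijvoorbeeld:

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80,uid:1001:tcp:110,uid:1001:tcp:995
....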
[[mac-partition]] == MAC-module partition Modulenaam: [.filename]#mac_partition.ko# Kernelinstelling: `options MAC_PARTITION` Opstartoptie: `mac_partition_load="YES"` Het beleid man:mac_partition[4] plaatst processen in specifieke "partities" gebaseerd op hun MAC-label. Zie dit als een speciaal soort man:jail[8], hoewel dit nauwelijks een waardige vergelijking is. Dit is een module die aan het bestand man:loader.conf[5] dient te worden toegevoegd, zodat het beleid tijdens het opstartproces wordt geladen en aangezet. De meeste configuratie van dit beleid wordt gedaan met het gereedschap man:setpmac[8], dat hieronder wordt uitgelegd. De volgende `sysctl`-tunable is beschikbaar voor dit beleid: * `security.mac.partition.enabled` zet het afdwingen van MAC-procespartities aan. Wanneer dit beleid aanstaat, mogen gebruikers alleen hun eigen processen zien, en elk ander proces in hun partitie, maar mogen ze niet met gereedschappen buiten deze partitie werken. Bijvoorbeeld, een gebruiker in de klasse `insecure` heeft geen toegang tot het commando `top`, noch tot vele andere commando's die een proces moeten starten. Gebruik het gereedschap `setpmac` om programma's van een partitielabel te voorzien: [source,shell] .... # setpmac partition/13 top .... Dit zal het commando `top` toevoegen aan het label dat voor gebruikers in de klasse `insecure` gebruikt wordt. Merk op dat alle processen gestart door gebruikers in de klasse `insecure` in het label `partition/13` zullen blijven. === Voorbeelden Het volgende commando laat het partitielabel en de proceslijst zien: [source,shell] .... # ps Zax .... Het volgende commando laat het procespartitielabel van een andere gebruiker en de momenteel draaiende processen van die gebruiker zien: [source,shell] .... # ps -ZU trhodes .... [NOTE] ==== Gebruikers kunnen processen in het label van `root` zien, tenzij het beleid man:mac_seeotheruids[4] is geladen.
==== Een echt vakkundige implementatie zou alle diensten in [.filename]#/etc/rc.conf# uitzetten en deze door een script met de juiste labels laten starten. [NOTE] ==== De volgende beleiden ondersteunen integerinstellingen in plaats van de drie standaardlabels die aangeboden worden. Deze opties, inclusief hun beperkingen, worden verder uitgelegd in de handleidingpagina's van de modules. ==== [[mac-mls]] == MAC-module Multi-Level Security Modulenaam: [.filename]#mac_mls.ko# Kernelinstelling: `options MAC_MLS` Opstartoptie: `mac_mls_load="YES"` Het beleid man:mac_mls[4] beheert toegang tussen subjecten en objecten in het systeem door een strikt beleid voor informatiestromen af te dwingen. In MLS-omgevingen wordt een "toestemmingsniveau" ingesteld in het label van elk subject of object, samen met compartimenten. Aangezien deze toestemmings- of gevoeligheidsniveaus getallen groter dan zesduizend kunnen bereiken, zou het voor elke systeembeheerder een afschrikwekkende taak zijn om elk subject of object grondig te configureren. Gelukkig worden er al drie "kant-en-klare" labels bij dit beleid geleverd. Deze labels zijn `mls/low`, `mls/equal` en `mls/high`. Aangezien deze labels uitgebreid in de handleidingpagina worden beschreven, worden ze hier slechts kort beschreven: * Het label `mls/low` bevat een lage configuratie die het toestaat om door alle andere objecten te worden gedomineerd. Alles dat met `mls/low` is gelabeld heeft een laag toestemmingsniveau en heeft geen toegang tot informatie van een hoger niveau. Ook voorkomt dit label dat objecten van een hoger toestemmingsniveau informatie naar hen schrijven of aan hen doorgeven. * Het label `mls/equal` dient geplaatst te worden op objecten die geacht worden te zijn uitgesloten van het beleid. * Het label `mls/high` is het hoogst mogelijke toestemmingsniveau. Objecten waaraan dit label is toegekend zijn dominant over alle andere objecten in het systeem; ze mogen echter geen informatie lekken naar objecten van een lagere klasse.
MLS biedt: * Een hiërarchisch beveiligingsniveau met een verzameling niet-hiërarchische categorieën; * Vaste regels: niet naar boven lezen, niet naar beneden schrijven (een subject kan leestoegang hebben tot objecten op zijn eigen niveau of daaronder, maar niet daarboven. Evenzo kan een subject schrijftoegang hebben tot objecten op zijn eigen niveau of daarboven, maar niet daaronder.); * Geheimhouding (voorkomt ongeschikte openbaarmaking van gegevens); * Een basis voor het ontwerp van systemen die gelijktijdig gegevens op verschillende gevoeligheidsniveaus behandelen (zonder informatie tussen geheim en vertrouwelijk te lekken). De volgende `sysctl`-tunables zijn beschikbaar voor de configuratie van speciale diensten en interfaces: * `security.mac.mls.enabled` wordt gebruikt om het MLS-beleid in en uit te schakelen. * `security.mac.mls.ptys_equal` labelt alle man:pty[4]-apparaten als `mls/equal` wanneer ze worden aangemaakt. * `security.mac.mls.revocation_enabled` wordt gebruikt om toegang tot objecten in te trekken nadat hun label in een lagere graad verandert. * `security.mac.mls.max_compartments` wordt gebruikt om het maximale aantal compartimentniveaus voor objecten in te stellen; in feite het maximale compartimentnummer dat op een systeem is toegestaan. Het commando man:setfmac[8] kan gebruikt worden om de MLS-labels te manipuleren. Gebruik het volgende commando om een label aan een object toe te kennen: [source,shell] .... # setfmac mls/5 test .... Gebruik het volgende commando om het MLS-label van het bestand [.filename]#test# op te vragen: [source,shell] .... # getfmac test .... Dit is een samenvatting van de mogelijkheden van het beleid MLS. Een andere manier is om een meesterbeleidsbestand in [.filename]#/etc# aan te maken dat de MLS-informatie bevat en om dat bestand aan het commando `setfmac` te geven. Deze methode wordt uitgelegd nadat alle beleiden zijn behandeld.
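Ter illustratie, uitgaande van de bovenstaande commando's op een voorbeeldbestand [.filename]#test#: `getfmac` toont het toegekende label in de vorm `bestandsnaam: label`:

[source,shell]
....
# getfmac test
test: mls/5
....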
=== Verplichte Gevoeligheid plannen Met de beleidsmodule voor meerlaagse beveiliging plant een beheerder het beheren van gevoelige informatiestromen. Standaard plaatst het systeem, door zijn aard van naar boven lezen blokkeren en naar beneden schrijven blokkeren, alles in een lage toestand. Alles is beschikbaar en een beheerder verandert dit langzaam tijdens de configuratiefase, waarbij de vertrouwelijkheid van de informatie toeneemt. Naast de bovengenoemde drie basislabels kan een beheerder gebruikers en groepen naar behoefte indelen om de informatiestroom tussen hen te blokkeren. Het is misschien gemakkelijker om naar de informatie te kijken in toestemmingsniveaus waarvoor bekende woorden bestaan, zoals `Vertrouwelijk`, `Geheim` en `Strikt Geheim`. Sommige beheerders zullen verschillende groepen aanmaken gebaseerd op verschillende projecten. Ongeacht de classificatiemethode moet er een goed overwogen plan bestaan voordat zo'n beperkend beleid wordt geïmplementeerd. Enkele voorbeeldsituaties voor deze beveiligingsbeleidsmodule zijn een e-commerce-webserver, een bestandsserver die kritieke bedrijfsinformatie bevat, en omgevingen van financiële instellingen. De meest onwaarschijnlijke plaats zou een persoonlijk werkstation met slechts twee of drie gebruikers zijn. [[mac-biba]] == MAC-module Biba Modulenaam: [.filename]#mac_biba.ko# Kernelinstelling: `options MAC_BIBA` Opstartoptie: `mac_biba_load="YES"` De module man:mac_biba[4] laadt het beleid MAC Biba. Dit beleid werkt vrijwel zoals dat van MLS, behalve dat de regels voor de informatiestroom enigszins zijn omgedraaid. Dat wil zeggen dat dit beleid de neerwaartse stroom van gevoelige informatie voorkomt, terwijl het beleid MLS de opwaartse stroom van gevoelige informatie voorkomt; veel van deze sectie is dus op beide beleiden toepasbaar. In Biba-omgevingen wordt een "integriteitslabel" op elk subject of object ingesteld. Deze labels bestaan uit hiërarchische graden en niet-hiërarchische componenten.
Een graad van een object of subject stijgt samen met de integriteit. Ondersteunde labels zijn `biba/low`, `biba/equal` en `biba/high`, zoals hieronder uitgelegd: * Het label `biba/low` wordt gezien als de laagste integriteit die een object of subject kan hebben. Dit instellen op objecten of subjecten zal hun schrijftoegang tot objecten of subjecten die als hoog zijn gemarkeerd blokkeren. Ze hebben echter nog steeds leestoegang. * Het label `biba/equal` dient alleen geplaatst te worden op objecten die geacht worden te zijn uitgesloten van het beleid. * Het label `biba/high` staat schrijven naar objecten met een lager label toe, maar sluit het lezen van dat object uit. Het wordt aangeraden om dit label te plaatsen op objecten die de integriteit van het gehele systeem beïnvloeden. Biba biedt: * Hiërarchische integriteitsniveaus met een verzameling niet-hiërarchische integriteitscategorieën; * Vaste regels: niet naar boven schrijven, niet naar beneden lezen (het tegenovergestelde van MLS). Een subject kan schrijftoegang hebben tot objecten op hetzelfde niveau of daaronder, maar niet daarboven. Evenzo kan een subject leestoegang hebben tot objecten op hetzelfde niveau of daarboven, maar niet daaronder; * Integriteit (voorkomt oneigenlijk wijzigen van gegevens); * Integriteitsniveaus (in plaats van de gevoeligheidsniveaus van MLS). De volgende `sysctl`-tunables kunnen gebruikt worden om het Biba-beleid te manipuleren: * `security.mac.biba.enabled` kan gebruikt worden om het afdwingen van het Biba-beleid op de doelmachine aan en uit te zetten. * `security.mac.biba.ptys_equal` kan gebruikt worden om het Biba-beleid op man:pty[4]-apparaten uit te zetten. * `security.mac.biba.revocation_enabled` dwingt het herroepen van toegang tot objecten af als het label is veranderd zodat het het subject domineert. Gebruik de commando's `setfmac` en `getfmac` om de instellingen van het Biba-beleid op systeemobjecten te benaderen: [source,shell] ....
# setfmac biba/low test # getfmac test test: biba/low .... === Verplichte Integriteit plannen Integriteit, anders dan gevoeligheid, garandeert dat de informatie nooit door onvertrouwde gebruikers zal worden gemanipuleerd. Dit geldt ook voor informatie die tussen subjecten, objecten, of beide wordt doorgegeven. Het verzekert dat gebruikers alleen de informatie kunnen wijzigen, en in sommige gevallen zelfs alleen kunnen benaderen, die ze expliciet nodig hebben. De beveiligingsbeleidsmodule man:mac_biba[4] stelt een beheerder in staat om te bepalen welke bestanden en programma's een gebruiker of gebruikers mogen zien en draaien, terwijl zij verzekert dat die programma's en bestanden vrij zijn van dreigingen en door het systeem vertrouwd worden voor die gebruiker of groep gebruikers. Tijdens de initiële planningsfase moet een beheerder bereid zijn om gebruikers in gradaties, niveaus en gebieden in te delen. Gebruikers zal toegang tot niet alleen gegevens, maar ook tot programma's en hulpmiddelen ontzegd worden, zowel voordat als nadat ze beginnen. Het systeem zal standaard een hoog label instellen nadat deze beleidsmodule is ingeschakeld, en het is aan de beheerder om de verschillende gradaties en niveaus voor gebruikers in te stellen. In plaats van de hierboven beschreven toestemmingsniveaus te gebruiken, kan een goede planningsmethode op onderwerpen gebaseerd zijn. Bijvoorbeeld: geef alleen ontwikkelaars veranderingstoegang tot het broncoderepository, de broncodecompiler en andere ontwikkelgereedschappen. Andere gebruikers zouden in groepen zoals testers, ontwerpers of gewone gebruikers worden ingedeeld en zouden alleen leestoegang hebben. Door zijn natuurlijke beveiligingsbeheer kan een subject van lagere integriteit niet schrijven naar een subject van hogere integriteit, en kan een subject van hogere integriteit geen subject van lagere integriteit observeren of lezen. Het instellen van een label op de laagst mogelijke graad kan objecten ontoegankelijk maken voor subjecten.
Sommige succesvolle omgevingen voor deze beveiligingsbeleidsmodule zijn een beperkte webserver, een ontwikkel- en testmachine, en broncoderepositories. Minder nuttige implementaties zouden een persoonlijk werkstation, een machine gebruikt als router, of een netwerkfirewall zijn. [[mac-lomac]] == MAC-module LOMAC Modulenaam: [.filename]#mac_lomac.ko# Kernelinstelling: `options MAC_LOMAC` Opstartoptie: `mac_lomac_load="YES"` In tegenstelling tot het beleid MAC Biba staat het beleid man:mac_lomac[4] toegang tot objecten van lagere integriteit slechts toe nadat het integriteitsniveau is verlaagd, om de integriteitsregels niet te verstoren. De MAC-versie van het laagwatermarkeringsintegriteitsbeleid, niet te verwarren met de oudere implementatie van man:lomac[4], werkt bijna hetzelfde als Biba, met de uitzondering dat er drijvende labels worden gebruikt om subjectdegradatie via een hulpcompartiment met graden te ondersteunen. Dit tweede compartiment heeft de vorm `[hulpgraad]`. Wanneer een lomac-beleid met een hulpgraad wordt toegekend, dient het er ongeveer uit te zien als `lomac/10[2]`, waarbij het getal twee (2) de hulpgraad is. Het beleid MAC LOMAC berust op het overal labelen van alle systeemobjecten met integriteitslabels, waarbij subjecten wordt toegestaan om te lezen van objecten van lage integriteit en om daarna het label op het subject te degraderen om toekomstig schrijven naar objecten van hoge integriteit te voorkomen. Dit is de hierboven besproken optie `[hulpgraad]`, waardoor het beleid grotere compatibiliteit biedt en minder initiële configuratie vereist dan Biba. === Voorbeelden Net zoals bij de beleiden Biba en MLS kunnen de commando's `setfmac` en `setpmac` gebruikt worden om labels op systeemobjecten te plaatsen: [source,shell] .... # setfmac /usr/home/trhodes lomac/high[low] # getfmac /usr/home/trhodes lomac/high[low] .... Merk op dat de hulpgraad hier `low` is; dit is een mogelijkheid die alleen door het beleid MAC LOMAC wordt geboden.
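Op vergelijkbare wijze kan met man:setpmac[8] een proces onder een LOMAC-label gestart worden; onderstaande schets start bijvoorbeeld een nieuwe shell met een hulpgraad:

[source,shell]
....
# setpmac lomac/high[low] csh
....

Processen die vanuit deze shell worden gestart erven dit label, totdat het door het lezen van objecten van lage integriteit wordt gedegradeerd.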
[[mac-implementing]] == Nagios in een MAC-jail De volgende demonstratie implementeert een veilige omgeving door verschillende MAC-modules met juist ingestelde beleiden te gebruiken. Dit is slechts een test en dient niet gezien te worden als het volledige antwoord op de beveiligingszorgen van iedereen. Gewoon een beleid implementeren en het verder negeren werkt nooit en kan rampzalig zijn in een productieomgeving. Voordat met dit proces wordt begonnen, moet de optie `multilabel` zijn geactiveerd op elk bestandssysteem zoals vermeld aan het begin van dit hoofdstuk. Nalatigheid zal in fouten resulteren. Zorg er ook voor dat de ports package:net-mgmt/nagios-plugins[], package:net-mgmt/nagios[] en package:www/apache22[] allemaal geïnstalleerd en geconfigureerd zijn en correct werken. === Gebruikersklasse `insecure` maken Begin de procedure door de volgende gebruikersklasse toe te voegen aan het bestand [.filename]#/etc/login.conf#: [.programlisting] .... insecure:\ :copyright=/etc/COPYRIGHT:\ :welcome=/etc/motd:\ :setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\ :path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\ :manpath=/usr/shared/man /usr/local/man:\ :nologin=/usr/sbin/nologin:\ :cputime=1h30m:\ :datasize=8M:\ :vmemoryuse=100M:\ :stacksize=2M:\ :memorylocked=4M:\ :memoryuse=8M:\ :filesize=8M:\ :coredumpsize=8M:\ :openfiles=24:\ :maxproc=32:\ :priority=0:\ :requirehome:\ :passwordtime=91d:\ :umask=022:\ :ignoretime@:\ :label=biba/10(10-10): .... Voeg de volgende regel toe aan de standaard gebruikersklasse: [.programlisting] .... :label=biba/high: .... Wanneer dit voltooid is, moet het volgende commando gedraaid worden om de database te herbouwen: [source,shell] .... # cap_mkdb /etc/login.conf .... === Opstartinstellingen Start nog niet opnieuw op, maar voeg alleen de volgende regels toe aan [.filename]#/boot/loader.conf#, zodat de benodigde modules worden geladen tijdens de systeeminitialisatie: [.programlisting] ....
mac_biba_load="YES" mac_seeotheruids_load="YES" .... === Gebruikers instellen Stel de gebruiker `root` in op de standaardklasse met: [source,shell] .... # pw usermod root -L default .... Alle gebruikersaccounts die geen `root`- of systeemgebruikers zijn, hebben nu een aanmeldklasse nodig. De aanmeldklasse is nodig, omdat gebruikers anders geen toegang hebben tot gewone commando's als man:vi[1]. Het volgende `sh`-script zou moeten werken: [source,shell] .... # for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \ /etc/passwd`; do pw usermod $x -L default; done; .... Laat de gebruikers `nagios` en `www` in de klasse `insecure` vallen: [source,shell] .... # pw usermod nagios -L insecure .... [source,shell] .... # pw usermod www -L insecure .... === Het contextbestand aanmaken Nu dient een contextbestand aangemaakt te worden; het volgende voorbeeld dient geplaatst te worden in [.filename]#/etc/policy.contexts#. [.programlisting] .... # Dit is het standaard-BIBA-beleid voor dit systeem. # Systeem: /var/run biba/equal /var/run/* biba/equal /dev biba/equal /dev/* biba/equal /var biba/equal /var/spool biba/equal /var/spool/* biba/equal /var/log biba/equal /var/log/* biba/equal /tmp biba/equal /tmp/* biba/equal /var/tmp biba/equal /var/tmp/* biba/equal /var/spool/mqueue biba/equal /var/spool/clientmqueue biba/equal # Voor Nagios: /usr/local/etc/nagios biba/10 /usr/local/etc/nagios/* biba/10 /var/spool/nagios biba/10 /var/spool/nagios/* biba/10 # Voor Apache: /usr/local/etc/apache biba/10 /usr/local/etc/apache/* biba/10 .... Dit beleid zal beveiliging afdwingen door beperkingen aan de informatiestroom te stellen. In deze specifieke configuratie mogen gebruikers, inclusief `root`, nooit toegang hebben tot Nagios. Instellingenbestanden en processen die deel zijn van Nagios zullen geheel op zichzelf staan of in een jail zitten. Dit bestand kan nu door het systeem worden ingelezen door het volgende commando uit te voeren: [source,shell] ....
# setfsmac -ef /etc/policy.contexts / .... [NOTE] ==== De bovenstaande indeling van het bestandssysteem kan afhankelijk van de omgeving verschillen; het commando moet echter op elk bestandssysteem gedraaid worden. ==== Het bestand [.filename]#/etc/mac.conf# dient als volgt in de hoofdsectie gewijzigd te worden: [.programlisting] .... default_labels file ?biba default_labels ifnet ?biba default_labels process ?biba default_labels socket ?biba .... === Het netwerk activeren Voeg de volgende regel toe aan [.filename]#/boot/loader.conf#: [.programlisting] .... security.mac.biba.trust_all_interfaces=1 .... En voeg het volgende toe aan de instellingen van de netwerkkaart in [.filename]#rc.conf#. Als de primaire internetconfiguratie via DHCP wordt gedaan, kan het nodig zijn om dit handmatig te configureren telkens nadat het systeem is opgestart: [.programlisting] .... maclabel biba/equal .... === De configuratie testen Controleer dat de webserver en Nagios niet tijdens de systeeminitialisatie worden gestart, en start opnieuw op. Controleer dat de gebruiker `root` geen enkel bestand in de instellingenmap van Nagios kan benaderen. Als `root` het commando man:ls[1] op [.filename]#/var/spool/nagios# kan uitvoeren, is er iets verkeerd. Anders zou er een fout "Permission denied" teruggegeven moeten worden. Als alles er goed uitziet, kunnen Nagios, Apache en Sendmail nu gestart worden op een manier die past in het beveiligingsbeleid. De volgende commando's zorgen hiervoor: [source,shell] .... # cd /etc/mail && make stop && \ setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \ setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart .... Controleer nogmaals om er zeker van te zijn dat alles juist werkt. Indien niet, controleer dan de logbestanden of de foutmeldingen.
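Bij aanhoudende problemen kan het afdwingen van het Biba-beleid tijdelijk uitgezet worden met de eerder in dit hoofdstuk genoemde tunable, waarna de diensten op de gewone manier gestart kunnen worden:

[source,shell]
....
# sysctl security.mac.biba.enabled=0
....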
Gebruik het hulpprogramma man:sysctl[8] om de beveiligingsbeleidsmodule man:mac_biba[4] uit te schakelen en probeer dan alles, zoals gewoonlijk, opnieuw op te starten. [NOTE] ==== De gebruiker `root` kan zonder angst de afgedwongen beveiliging veranderen en de instellingenbestanden bewerken. Het volgende commando staat toe om het beveiligingsbeleid naar een lagere graad te degraderen voor een nieuw voortgebrachte shell: [source,shell] .... # setpmac biba/10 csh .... Om te voorkomen dat dit gebeurt, kan de gebruiker via man:login.conf[5] in een bereik worden gedwongen. Als man:setpmac[8] probeert om een commando buiten het bereik van het compartiment te draaien, zal er een fout worden teruggegeven en wordt het commando niet uitgevoerd. Zet in dit geval `root` op `biba/high(high-high)`. ==== [[mac-userlocked]] == Gebruikers afsluiten Dit voorbeeld gaat over een relatief klein opslagsysteem met minder dan vijftig gebruikers. Gebruikers kunnen zich aanmelden, en mogen zowel gegevens opslaan als bronnen benaderen. Voor dit scenario kunnen man:mac_bsdextended[4] en man:mac_seeotheruids[4] naast elkaar bestaan en zowel toegang tot systeemobjecten als tot gebruikersprocessen ontzeggen. Begin door de volgende regel aan [.filename]#/boot/loader.conf# toe te voegen: [.programlisting] .... mac_seeotheruids_load="YES" .... De beveiligingsbeleidsmodule man:mac_bsdextended[4] kan met de volgende variabele in [.filename]#rc.conf# geactiveerd worden: [.programlisting] .... ugidfw_enable="YES" .... De standaardregels in [.filename]#/etc/rc.bsdextended# zullen tijdens de systeeminitialisatie worden geladen; het kan echter nodig zijn om de standaardregels te wijzigen. Aangezien van deze machine alleen verwacht wordt dat deze gebruikers bedient, kunnen alle regels uitgecommentarieerd blijven behalve de laatste twee. Deze forceren het standaard laden van systeemobjecten die eigendom zijn van gebruikers. Voeg de benodigde gebruikers toe aan deze machine en start opnieuw op.
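Na het herstarten kan met man:ugidfw[8] gecontroleerd worden welke regels daadwerkelijk geladen zijn:

[source,shell]
....
# ugidfw list
....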
Probeer voor testdoeleinden u op twee consoles als verschillende gebruikers aan te melden. Draai het commando `ps aux` om te zien of processen van andere gebruikers zichtbaar zijn. Probeer man:ls[1] te draaien op de thuismap van een andere gebruiker; dit zou moeten mislukken. Probeer niet te testen met de gebruiker `root`, tenzij de specifieke ``sysctl``'s om supergebruikertoegang te blokkeren zijn aangepast. [NOTE] ==== Wanneer een nieuwe gebruiker is toegevoegd, zit de man:mac_bsdextended[4]-regel van die gebruiker niet in de lijst van regelverzamelingen. Om de regelverzameling snel bij te werken, kan simpelweg de beveiligingsbeleidsmodule worden herladen met de gereedschappen man:kldunload[8] en man:kldload[8]. ==== [[mac-troubleshoot]] == Problemen oplossen met het MAC-raamwerk Tijdens de ontwikkeling hebben een aantal gebruikers problemen aangegeven met normale instellingen. Hieronder worden een aantal van die problemen beschreven: === De optie `multilabel` kan niet ingeschakeld worden op [.filename]#/# De vlag `multilabel` blijft niet ingeschakeld op de rootpartitie ([.filename]#/#)! Het lijkt er inderdaad op dat een paar procent van de gebruikers dit probleem heeft. Nadere analyse van het probleem doet vermoeden dat deze zogenaamde "bug" het resultaat is van òfwel onjuiste documentatie òfwel verkeerde interpretatie van de documentatie. Hoe het probleem ook is ontstaan, met de volgende stappen is het te verhelpen: [.procedure] ==== . Wijzig [.filename]#/etc/fstab# en stel de rootpartitie in op `ro` voor alleen-lezen. . Herstart in enkelegebruikersmodus. . Draai `tunefs -l enable` op [.filename]#/#. . Herstart in normale modus. . Draai `mount -urw` [.filename]#/#, wijzig `ro` terug in `rw` in [.filename]#/etc/fstab# en start het systeem opnieuw. . Controleer de uitvoer van `mount` om zeker te zijn dat `multilabel` juist is ingesteld op het rootbestandssysteem.
====

=== X11-server start niet na MAC

Na het instellen van een beveiligde omgeving met MAC start X niet meer! Dit kan komen door de MAC-beleidseenheid `partition` of door een verkeerde labeling door een van de MAC-labelbeleidseenheden. Probeer als volgt te debuggen:

[.procedure]
====
. Controleer de foutmelding. Als de gebruiker in de klasse `insecure` zit, kan de beleidseenheid `partition` het probleem zijn. Zet de klasse voor de gebruiker terug naar de klasse `default` en herbouw de database met het commando `cap_mkdb`. Ga naar stap twee als hiermee het probleem niet is opgelost.
. Controleer de labelbeleidseenheden nog een keer. Controleer of het beleid voor de bewuste gebruiker, de X11-applicatie en de onderdelen van [.filename]#/dev# juist is ingesteld.
. Als geen van beide methodes het probleem oplost, stuur dan de foutmelding en een beschrijving van de omgeving naar de TrustedBSD-discussielijsten van de http://www.TrustedBSD.org[TrustedBSD] website of naar de {freebsd-questions} mailinglijst.
====

=== Error: man:_secure_path[3] cannot stat [.filename]#.login_conf#

Bij het wisselen van de gebruiker `root` naar een andere gebruiker in het systeem verschijnt de foutmelding `_secure_path: unable to stat .login_conf`. Deze melding komt meestal voor als de gebruiker een hogere labelinstelling heeft dan de gebruiker waarnaar wordt gewisseld. Als bijvoorbeeld gebruiker `joe` een standaardlabel `biba/low` heeft, dan kan gebruiker `root`, die een label `biba/high` heeft, de thuismap van `joe` niet zien. Dit gebeurt ongeacht of `root` met `su` de identiteit van `joe` heeft aangenomen. In dit scenario staat het integriteitsmodel van Biba niet toe dat `root` objecten van een lager integriteitsniveau kan zien.

=== De gebruikersnaam `root` is stuk!

In normale, of zelfs in enkelegebruikersmodus, wordt `root` niet herkend. Het commando `whoami` geeft 0 (nul) terug en `su` heeft als resultaat `who are you?`.
Wat is er aan de hand? Dit kan gebeuren als een labelbeleid is uitgeschakeld, ofwel door man:sysctl[8], ofwel doordat de beleidsmodule niet meer is geladen. Als de beleidseenheid (tijdelijk) is uitgeschakeld, dan moet de database met aanmeldmogelijkheden opnieuw worden ingesteld, waarbij de optie `label` wordt verwijderd. Er dient voor gezorgd te worden dat het bestand [.filename]#login.conf# wordt ontdaan van alle opties met `label`, waarna de database opnieuw gebouwd kan worden met `cap_mkdb`.

Dit kan ook gebeuren als een beleid toegang verhindert tot het bestand of de database [.filename]#master.passwd#. Meestal wordt dit veroorzaakt door een beheerder die het bestand verandert onder een label dat conflicteert met het globale beleid dat op het systeem gebruikt wordt. In deze gevallen wordt de gebruikersinformatie door het systeem gelezen en wordt de toegang geblokkeerd omdat het bestand het nieuwe label erft. Zet het beleid uit door middel van man:sysctl[8] en alles zou weer normaal moeten zijn.

diff --git a/documentation/content/nl/books/handbook/network-servers/_index.adoc b/documentation/content/nl/books/handbook/network-servers/_index.adoc
index 7537d40740..6e2c3a50ca 100644
--- a/documentation/content/nl/books/handbook/network-servers/_index.adoc
+++ b/documentation/content/nl/books/handbook/network-servers/_index.adoc
@@ -1,3005 +1,3004 @@
---
title: Hoofdstuk 29. Netwerkdiensten
part: Deel IV.
Netwerkcommunicatie prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 34 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Netwerkdiensten :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 29 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Overzicht Dit hoofdstuk behandelt een aantal veelgebruikte netwerkdiensten op UNIX(R) systemen. Er wordt ingegaan op de installatie, het instellen, testen en beheren van verschillende typen netwerkdiensten. Overal in dit hoofdstuk staan voorbeeldbestanden met instellingen waar de lezer zijn voordeel mee kan doen. 
Na het lezen van dit hoofdstuk weet de lezer:

* Hoe om te gaan met de inetd-daemon;
* Hoe een netwerkbestandssysteem opgezet kan worden;
* Hoe een netwerkinformatiedienst (NIS) opgezet kan worden voor het delen van gebruikersaccounts;
* Hoe automatische netwerkinstellingen gemaakt kunnen worden met DHCP;
* Hoe een domeinnaamserver opgezet kan worden;
* Hoe een Apache HTTP Server opgezet kan worden;
* Hoe een File Transfer Protocol (FTP) Server opgezet kan worden;
* Hoe een bestands- en printserver voor Windows(R)-cliënten opgezet kan worden met Samba;
* Hoe datum en tijd gesynchroniseerd kunnen worden en hoe een tijdserver opgezet kan worden met het NTP-protocol;
* Hoe de standaard log-daemon `syslogd` ingesteld kan worden om logs van hosts op afstand te accepteren.

Veronderstelde voorkennis:

* Basisbegrip van de scripts in [.filename]#/etc/rc#;
* Bekendheid met basale netwerkterminologie;
* Kennis van de installatie van software van derde partijen (crossref:ports[ports,Applicaties installeren: pakketten en ports]).

[[network-inetd]]
== De inetd "Super-Server"

[[network-inetd-overview]]
=== Overzicht

man:inetd[8] wordt soms de "Internet Super-Server" genoemd, omdat het verbindingen voor meerdere diensten beheert. Als inetd een verbinding ontvangt, bepaalt het voor welk programma de verbinding bedoeld is, splitst het dat proces af en delegeert het de socket (het programma wordt gestart met de socket van de dienst als zijn standaardinvoer, -uitvoer en -foutbeschrijvingen). Het draaien van inetd voor servers die niet veel gebruikt worden kan de algehele werklast verminderen in vergelijking met het draaien van elke daemon individueel in stand-alone modus.

inetd wordt primair gebruikt om andere daemons aan te roepen, maar het handelt een aantal triviale protocollen direct af, zoals chargen, auth en daytime.

In deze paragraaf worden de basisinstellingen van inetd behandeld, met de opties vanaf de commandoregel en met het instellingenbestand [.filename]#/etc/inetd.conf#.
[[network-inetd-settings]] === Instellingen inetd wordt gestart door het man:rc[8]-systeem. De optie `inetd_enable` staat standaard op `NO`, maar kan tijdens de installatie door sysinstall worden aangezet. Door het plaatsen van [.programlisting] .... inetd_enable="YES" .... of [.programlisting] .... inetd_enable="NO" .... in [.filename]#/etc/rc.conf# wordt inetd bij het opstarten van een systeem wel of niet ingeschakeld. Het commando: [source,shell] .... # service inetd rcvar .... kan gedraaid worden om de huidige effectieve instellingen weer te geven. Dan kunnen er ook nog een aantal commandoregelopties aan inetd meegegeven worden met de optie `inetd_flags`. [[network-inetd-cmdline]] === Commandoregelopties Zoals de meeste serverdaemons heeft inetd een aantal opties die doorgegeven kunnen worden om het gedrag aan te passen. Zie de handleidingpagina man:inetd[8] voor een volledige lijst van de opties. Opties kunnen door middel van de optie `inetd_flags` in [.filename]#/etc/rc.conf# aan inetd worden doorgegeven. Standaard staat `inetd_flags` ingesteld op `-wW -C 60`, dat TCP-wrapping aanzet voor de diensten van inetd, en voorkomt dat elk enkelvoudig IP-adres enige dienst meer dan 60 keer per minuut opvraagt. Ook al worden er hieronder rate-limiting opties besproken, beginnende gebruikers kunnen blij zijn met het feit dat deze parameters gewoonlijk niet hoeven te worden aangepast. Deze opties kunnen interessant zijn wanneer er een buitensporige hoeveelheid verbindingen worden opgezet. Een volledige lijst van opties staat in de hulppagina man:inetd[8]. -c maximum:: Geeft het maximale aantal gelijktijdige verzoeken voor iedere dienst aan. De standaard is ongelimiteerd. Kan per dienst ter zijde geschoven worden met de parameter `max-child`. -C rate:: Geeft het maximale aantal keren aan dat een dienst vanaf een bepaald IP-adres per minuut aangeroepen kan worden. Kan per dienst ter zijde geschoven worden met de parameter `max-connections-per-ip-per-minute`. 
-R rate::
Geeft het maximale aantal keren aan dat een dienst per minuut aangeroepen kan worden. De standaard is 256. De instelling `0` geeft aan dat er geen limiet is.

-s maximum::
Specificeert het maximale aantal gelijktijdige aanroepen van een dienst vanaf een enkelvoudig IP-adres; de standaard is onbeperkt. Kan per dienst worden overstemd met de parameter `max-child-per-ip`.

[[network-inetd-conf]]
=== [.filename]#inetd.conf#

De instellingen van inetd worden beheerd in [.filename]#/etc/inetd.conf#.

Als er een wijziging wordt aangebracht in [.filename]#/etc/inetd.conf#, dan kan inetd gedwongen worden om de instellingen opnieuw in te lezen door dit commando te draaien:

[[network-inetd-reread]]
.Het instellingenbestand van inetd herladen
[example]
====
[source,shell]
....
# service inetd reload
....
====

Iedere regel in het bestand met instellingen heeft betrekking op een individuele daemon. Commentaar wordt voorafgegaan door een `#`. De opmaak van elke regel van [.filename]##/etc/inetd.conf## is als volgt:

[.programlisting]
....
service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments
....

Een voorbeeldregel voor de daemon man:ftpd[8] met IPv4 kan eruitzien als:

[.programlisting]
....
ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l
....

service-name::
Dit is de dienstnaam van een daemon. Die moet overeenkomen met een dienst uit [.filename]#/etc/services#. Hiermee kan de poort waarop inetd moet luisteren aangegeven worden. Als er een nieuwe dienst wordt gemaakt, moet die eerst in [.filename]#/etc/services# gezet worden.

socket-type::
Dit is `stream`, `dgram`, `raw` of `seqpacket`. `stream` moet gebruikt worden voor verbindingsgebaseerde TCP-daemons, terwijl `dgram` wordt gebruikt voor daemons die gebruik maken van het transportprotocol UDP.
protocol::
Een van de volgende:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Protocol | Toelichting

|tcp, tcp4
|TCP IPv4

|udp, udp4
|UDP IPv4

|tcp6
|TCP IPv6

|udp6
|UDP IPv6

|tcp46
|Zowel TCP IPv4 als v6

|udp46
|Zowel UDP IPv4 als v6
|===

{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]::
`wait|nowait` geeft aan of de daemon die door inetd wordt aangesproken zijn eigen sockets kan afhandelen of niet. `dgram`-sockettypen moeten de optie `wait` gebruiken, terwijl streamsocket-daemons, die meestal multi-threaded zijn, de optie `nowait` horen te gebruiken. `wait` geeft meestal meerdere sockets aan een daemon, terwijl `nowait` een kinddaemon draait voor iedere nieuwe socket.
+
Het maximum aantal kinddaemons dat inetd mag voortbrengen kan ingesteld worden met de optie `max-child`. Als een limiet van tien instanties van een bepaalde daemon gewenst is, dan zou er `/10` achter `nowait` gezet worden. Door `/0` wordt een onbeperkt aantal kinderen toegestaan.
+
Naast `max-child` zijn er nog twee andere opties waarmee het maximale aantal verbindingen van een bepaalde plaats naar een daemon ingesteld kan worden. `max-connections-per-ip-per-minute` beperkt het aantal verbindingen per minuut voor enig IP-adres; een waarde van tien betekent hier dat er van ieder IP-adres maximaal tien verbindingen per minuut naar een bepaalde dienst tot stand gebracht kunnen worden. `max-child-per-ip` beperkt het aantal kindprocessen dat namens enig IP-adres op enig moment gestart kan worden. Deze opties kunnen nuttig zijn om bedoeld en onbedoeld buitensporig bronnengebruik van en Denial of Service (DoS)-aanvallen op een machine te voorkomen.
+
In dit veld is één van `wait` of `nowait` verplicht. `max-child`, `max-connections-per-ip-per-minute` en `max-child-per-ip` zijn optioneel.
+
Een stream-type multi-threaded daemon zonder één van de limieten `max-child`, `max-connections-per-ip-per-minute` of `max-child-per-ip` is eenvoudigweg: `nowait`.
+
Dezelfde daemon met een maximale limiet van tien daemons zou zijn: `nowait/10`.
+
Dezelfde instellingen met een limiet van twintig verbindingen per IP-adres per minuut en een totaal maximum van tien kinddaemons zou zijn: `nowait/10/20`.
+
Deze opties worden allemaal gebruikt door de standaardinstellingen van de daemon man:fingerd[8]:
+
[.programlisting]
....
finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -s
....
+
Als afsluiting: een voorbeeld van dit veld met een maximum van 100 kinderen in totaal en een maximum van 5 voor enig IP-adres zou zijn: `nowait/100/0/5`.

user::
Dit is de gebruikersnaam waaronder een daemon draait. Daemons draaien meestal als de gebruiker `root`. Om veiligheidsredenen draaien sommige daemons onder de gebruiker `daemon` of de gebruiker met de minste rechten: `nobody`.

server-program::
Het volledige pad van de daemon die uitgevoerd moet worden als er een verbinding wordt ontvangen. Als de daemon een dienst is die door inetd intern wordt geleverd, dan moet de optie `internal` gebruikt worden.

server-program-arguments::
Deze optie werkt samen met de optie `server-program` en hierin worden de argumenten ingesteld, beginnend met `argv[0]`, die bij het starten aan de daemon worden meegegeven. Als `mijndaemon -d` de commandoregel is, dan zou `mijndaemon -d` de waarde van `server-program-arguments` zijn. Ook hier geldt dat als de daemon een interne dienst is, de optie `internal` gebruikt moet worden.

[[network-inetd-security]]
=== Beveiliging

Afhankelijk van de keuzes die tijdens de installatie zijn gemaakt, kunnen veel van de diensten van inetd standaard ingeschakeld zijn. Het is verstandig te overwegen om een daemon die niet noodzakelijk is uit te schakelen. Plaats een `#` voor de daemon in [.filename]##/etc/inetd.conf## en <<network-inetd-reread,herlaad de instellingen van inetd>>. Sommige daemons, zoals fingerd, zijn wellicht helemaal niet gewenst omdat ze informatie geven die nuttig kan zijn voor een aanvaller.
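Ter illustratie: het uitschakelen van bijvoorbeeld fingerd komt neer op het uitcommentariëren van de bijbehorende regel in [.filename]#/etc/inetd.conf#, waarna de instellingen opnieuw worden ingelezen:

[.programlisting]
....
#finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -s
....

[source,shell]
....
# service inetd reload
....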
Sommige daemons zijn zich niet echt bewust van beveiliging en hebben lange of niet bestaande timeouts voor verbindingspogingen. Hierdoor kan een aanvaller langzaam veel verbindingen maken met een daemon en zo beschikbare bronnen verzadigen. Het is verstandig voor die daemons de limietopties `max-connections-per-ip-per-minute`, `max-child` of `max-child-per-ip` te gebruiken als ze naar uw smaak teveel verbindingen hebben. TCP-wrapping staat standaard aan. Er staat meer informatie over het zetten van TCP-restricties op de verschillende daemons die door inetd worden aangesproken in man:hosts_access[5]. [[network-inetd-misc]] === Allerlei daytime, time, echo, discard, chargen en auth zijn allemaal interne diensten van inetd. De dienst auth biedt identiteitsnetwerkdiensten en is tot op een bepaald niveau instelbaar, terwijl de anderen eenvoudigweg aan of uit staan. Meer diepgaande informatie staat in man:inetd[8]. [[network-nfs]] == Netwerkbestandssysteem (NFS) Het Netwerkbestandssysteem (Network File System) is een van de vele bestandssystemen die FreeBSD ondersteunt. Het staat ook wel bekend als NFS. Met NFS is het mogelijk om mappen en bestanden met anderen in een netwerk te delen. Door het gebruik van NFS kunnen gebruikers en programma's bij bestanden op andere systemen op bijna dezelfde manier als bij hun eigen lokale bestanden. De grootste voordelen van NFS zijn: * Lokale werkstations gebruiken minder schijfruimte omdat veel gebruikte data op één machine opgeslagen kan worden en nog steeds toegankelijk is voor gebruikers via het netwerk; * Gebruikers hoeven niet op iedere machine een thuismap te hebben. Thuismappen kunnen op de NFS server staan en op het hele netwerk beschikbaar zijn; * Opslagapparaten als floppydisks, CD-ROM drives en Zip(R) drives kunnen door andere machines op een netwerk gebruikt worden. Hierdoor kan het aantal drives met verwijderbare media in een netwerk verkleind worden. 
=== Hoe NFS werkt NFS bestaat uit tenminste twee hoofdonderdelen: een server en een of meer cliënten. De cliënt benadert de gegevens die op een servermachine zijn opgeslagen via een netwerk. Om dit mogelijk te maken moeten er een aantal processen ingesteld en gestart worden. Op de server moeten de volgende daemons draaien: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Daemon | Beschrijving |nfsd |De NFS-daemon die verzoeken van de NFS cliënten afhandelt. |mountd |De NFS koppeldaemon die doorgestuurde verzoeken van man:nfsd[8] uitvoert. |rpcbind |Deze daemon geeft voor NFS-cliënten aan welke poort de NFS-server gebruikt. |=== Op de cliënt kan ook een daemon draaien: nfsiod. De daemon nfsiod handelt verzoeken van de NFS-server af. Dit is optioneel en kan de prestaties verbeteren, maar het is niet noodzakelijk voor een normale en correcte werking. Meer informatie staat in man:nfsiod[8]. [[network-configuring-nfs]] === NFS instellen NFS instellen gaat redelijk rechtlijnig. Alle processen die moeten draaien kunnen meestarten bij het opstarten door een paar wijzigingen in [.filename]#/etc/rc.conf#. Op de NFS server dienen de volgende opties in [.filename]#/etc/rc.conf# te staan: [.programlisting] .... rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" .... mountd start automatisch als de NFS server is ingeschakeld. Op de cliënt dient de volgende optie in [.filename]#/etc/rc.conf# te staan: [.programlisting] .... nfs_client_enable="YES" .... In het bestand [.filename]#/etc/exports# staat beschreven welke bestandssystemen NFS moet exporteren (soms heet dat ook wel delen of "sharen"). Iedere regel in [.filename]#/etc/exports# slaat op een bestandssysteem dat wordt geëxporteerd en welke machines toegang hebben tot dat bestandssysteem. Samen met machines die toegang hebben, kunnen ook toegangsopties worden aangegeven. Er zijn veel opties beschikbaar, maar hier worden er maar een paar beschreven. 
Alle opties staan beschreven in man:exports[5]. Nu volgen een aantal voorbeelden voor [.filename]#/etc/exports#:

Het volgende voorbeeld geeft een beeld van hoe een bestandssysteem te exporteren is, hoewel de instellingen afhankelijk zijn van de omgeving en het netwerk. Het exporteert bijvoorbeeld de map [.filename]#/cdrom# naar drie machines die dezelfde domeinnaam hebben als de server (vandaar dat de machinenamen geen domeinachtervoegsel hebben) of in [.filename]#/etc/hosts# staan. De vlag `-ro` exporteert het bestandssysteem als alleen-lezen. Door die vlag kan een ander systeem niet schrijven naar het geëxporteerde bestandssysteem.

[.programlisting]
....
/cdrom -ro host1 host2 host3
....

Het volgende voorbeeld exporteert [.filename]#/home# naar drie hosts op basis van IP-adres. Dit heeft zin als er een privaat netwerk bestaat zonder dat er een DNS-server is ingesteld. Optioneel kan [.filename]#/etc/hosts# gebruikt worden om interne hostnamen in te stellen. Er is meer informatie te vinden in man:hosts[5]. Met de vlag `-alldirs` mogen submappen ook koppelpunten zijn. De submap wordt dan niet feitelijk aangekoppeld, maar de cliënt koppelt alleen de submappen aan die verplicht of nodig zijn.

[.programlisting]
....
/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4
....

Het volgende voorbeeld exporteert [.filename]#/a# zo dat twee cliënten uit verschillende domeinen bij het bestandssysteem mogen. Met de vlag `-maproot=root` mag de gebruiker `root` op het andere systeem als `root` gegevens naar het geëxporteerde bestandssysteem schrijven. Als de vlag `-maproot=root` niet wordt gebruikt, dan kan een gebruiker geen bestanden wijzigen op het geëxporteerde bestandssysteem, zelfs niet als die gebruiker daar `root` is.

[.programlisting]
....
/a -maproot=root host.example.com box.example.org
....

Om een cliënt toegang te geven tot een geëxporteerd bestandssysteem, moet die cliënt daar rechten voor hebben. De cliënt moet daarvoor genoemd worden in [.filename]#/etc/exports#.
In [.filename]#/etc/exports# staat iedere regel voor de exportinformatie van één bestandssysteem naar één host. Per bestandssysteem mag een host maar één keer genoemd worden en mag maar één standaard hebben. Stel bijvoorbeeld dat [.filename]#/usr# een enkel bestandssysteem is. Dan is de volgende [.filename]#/etc/exports# niet geldig: [.programlisting] .... # Werkt niet als /usr 1 bestandssysteem is /usr/src client /usr/ports client .... Eén bestandssysteem, [.filename]#/usr#, heeft twee regels waarin exports naar dezelfde host worden aangegeven, `client`. In deze situatie is de juiste instelling: [.programlisting] .... /usr/src /usr/ports client .... De eigenschappen van een bestandssysteem dat naar een bepaalde host wordt geëxporteerd moeten allemaal op één regel staan. Regels waarop geen cliënt wordt aangegeven worden behandeld als een enkele host. Dit beperkt hoe bestandssysteem geëxporteerd kunnen worden, maar dat blijkt meestal geen probleem te zijn. Het volgende voorbeeld is een geldige exportlijst waar [.filename]#/usr# en [.filename]#/exports# lokale bestandssystemen zijn: [.programlisting] .... # Exporteer src en ports naar client01 en client02, # maar alleen client01 heeft er rootprivileges /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # De cliëntmachines hebben rootrechten en kunnen overal aankoppelen # op /exports. Iedereen in de wereld kan /exports/obj als alleen-lezen aankoppelen. /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... De daemon mountd moet gedwongen worden om het bestand [.filename]#/etc/exports# te controleren steeds wanneer het is aangepast, zodat de veranderingen effectief kunnen worden. Dit kan worden bereikt door òfwel een HUP-signaal naar de draaiende daemon te sturen: [source,shell] .... # kill -HUP `cat /var/run/mountd.pid` .... of door het man:rc[8] script `mountd` met de juiste parameter aan te roepen: [source,shell] .... # service mountd onereload .... 
Raadpleeg crossref:config[configtuning-rcd,Gebruik van rc met FreeBSD] voor meer informatie over het gebruik van rc-scripts. Het is ook mogelijk een machine te herstarten, zodat FreeBSD alles netjes in kan stellen, maar dat is niet nodig. Het uitvoeren van de volgende commando's als `root` hoort hetzelfde resultaat te hebben. Op de NFS server: [source,shell] .... # rpcbind # nfsd -u -t -n 4 # mountd -r .... Op de NFS cliënt: [source,shell] .... # nfsiod -n 4 .... Nu is alles klaar om feitelijk het netwerkbestandssysteem aan te koppelen. In de volgende voorbeelden is de naam van de server `server` en de naam van de cliënt is `client`. Om een netwerkbestandssysteem slechts tijdelijk aan te koppelen of om alleen te testen, kan een commando als het onderstaande als `root` op de cliënt uitgevoerd worden: [source,shell] .... # mount server:/home /mnt .... Hiermee wordt de map [.filename]#/home# op de server aangekoppeld op [.filename]#/mnt# op de cliënt. Als alles juist is ingesteld, zijn nu in [.filename]#/mnt# op de cliënt de bestanden van de server zichtbaar. Om een netwerkbestandssysteem iedere keer als een computer opstart aan te koppelen, kan het bestandssysteem worden toegevoegd aan het bestand [.filename]#/etc/fstab#: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... Alle beschikbare opties staan in man:fstab[5]. === Op slot zetten Voor sommige applicaties (b.v. mutt) is het nodig dat bestanden op slot staan om correct te werken. In het geval van NFS, kan rpc.lockd worden gebruikt voor het op slot zetten van bestanden. Voeg het volgende toe aan het bestand [.filename]#/etc/rc.conf# op zowel de cliënt als de server om het aan te zetten (het wordt aangenomen dat de NFS-cliënt en -server reeds zijn geconfigureerd): [.programlisting] .... rpc_lockd_enable="YES" rpc_statd_enable="YES" .... Start de applicatie met: [source,shell] .... # service lockd start # service statd start .... 
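Of de vergrendelingsdiensten daadwerkelijk bij de RPC-portmapper geregistreerd zijn, kan bijvoorbeeld met man:rpcinfo[8] gecontroleerd worden; de hostnaam `server` is hier slechts een voorbeeld:

[source,shell]
....
# rpcinfo -p server | egrep 'nlockmgr|status'
....

Als `nlockmgr` en `status` in de uitvoer verschijnen, zijn rpc.lockd en rpc.statd op de server actief.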
Als echt op slot zetten tussen de NFS-cliënten en de NFS-server niet nodig is, is het mogelijk om de NFS-cliënt bestanden lokaal op slot te laten zetten door `-L` aan man:mount_nfs[8] door te geven. In de handleidingpagina man:mount_nfs[8] staan verdere details.

=== Mogelijkheden voor gebruik

NFS is voor veel doeleinden in te zetten. Een aantal voorbeelden:

* Een aantal machines een CD-ROM of andere media laten delen. Dat is goedkoper en vaak ook handiger, bijvoorbeeld bij het installeren van software op meerdere machines;
* Op grote netwerken kan het praktisch zijn om een centrale NFS-server in te richten waarop alle thuismappen staan. Die thuismappen kunnen dan geëxporteerd worden, zodat gebruikers altijd dezelfde thuismap hebben, op welk werkstation ze zich ook aanmelden;
* Meerdere machines kunnen een gezamenlijke map [.filename]#/usr/ports/distfiles# hebben. Dan is het mogelijk om een port op meerdere machines te installeren zonder op iedere machine de broncode te hoeven downloaden.

[[network-amd]]
=== Automatisch aankoppelen met amd

man:amd[8] (de automatic mounter daemon) koppelt automatisch netwerkbestandssystemen aan als er aan een bestand of map binnen dat bestandssysteem wordt gerefereerd. amd ontkoppelt ook bestandssystemen die een bepaalde tijd niet gebruikt worden. Het gebruik van amd is een aantrekkelijk en eenvoudig alternatief voor permanente koppelingen, die meestal in [.filename]#/etc/fstab# staan.

amd werkt door zichzelf als NFS-server te koppelen aan de mappen [.filename]#/host# en [.filename]#/net#. Als binnen die mappen een bestand wordt geraadpleegd, dan zoekt amd de bijbehorende netwerkkoppeling op en koppelt die automatisch aan. [.filename]#/net# wordt gebruikt om een geëxporteerd bestandssysteem van een IP-adres aan te koppelen, terwijl [.filename]#/host# wordt gebruikt om een geëxporteerd bestandssysteem van een hostnaam aan te koppelen.
Het raadplegen van een bestand in [.filename]#/host/foobar/usr# geeft amd aan dat het moet proberen de export [.filename]#/usr# op de host `foobar` aan te koppelen.

.Een export aankoppelen met amd
[example]
====
De beschikbare koppelingen van een netwerkhost zijn te bekijken met `showmount`. Om bijvoorbeeld de koppelingen van de host `foobar` te bekijken:

[source,shell]
....
% showmount -e foobar
Exports list on foobar:
/usr                               10.10.10.0
/a                                 10.10.10.0
% cd /host/foobar/usr
....
====

Zoals in het bovenstaande voorbeeld te zien is, toont `showmount` [.filename]#/usr# als een export. Als er naar de map [.filename]#/host/foobar/usr# wordt gegaan, probeert amd de hostnaam `foobar` te resolven en de gewenste export automatisch aan te koppelen.

amd kan door de opstartscripts gestart worden door de volgende regel in [.filename]#/etc/rc.conf# te plaatsen:

[.programlisting]
....
amd_enable="YES"
....

Er kunnen ook nog opties aan amd meegegeven worden met de optie `amd_flags`. Standaard staat `amd_flags` ingesteld op:

[.programlisting]
....
amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map"
....

In het bestand [.filename]#/etc/amd.map# staan de standaardinstellingen waarmee exports aangekoppeld worden. In het bestand [.filename]#/etc/amd.conf# staan een aantal van de meer gevorderde instellingen van amd. In man:amd[8] en man:amd.conf[5] staat meer informatie.

[[network-nfs-integration]]
=== Problemen bij samenwerking met andere systemen

Bepaalde Ethernet-adapters voor ISA PC-systemen kennen limieten die tot serieuze netwerkproblemen kunnen leiden, in het bijzonder met NFS. Dit probleem is niet specifiek voor FreeBSD, maar het kan op FreeBSD wel voorkomen. Het probleem ontstaat bijna altijd als (FreeBSD) PC-systemen netwerken met hoog presterende werkstations, zoals die van Silicon Graphics, Inc. en Sun Microsystems, Inc.
De NFS-koppeling werkt prima en wellicht lukken een aantal acties ook, maar dan ineens lijkt de server niet meer te reageren voor de cliënt, hoewel verzoeken van en naar andere systemen gewoon verwerkt worden. Dit gebeurt op een cliëntsysteem, of de cliënt nu het FreeBSD-systeem is of het werkstation. Op veel systemen is er geen manier om de cliënt netjes af te sluiten als dit probleem is ontstaan. Vaak is de enige mogelijkheid een reset van de cliënt, omdat het probleem met NFS niet opgelost kan worden.

Hoewel de enige "correcte" oplossing de aanschaf van een snellere en betere Ethernet-adapter voor het FreeBSD-systeem is, is er een eenvoudige manier om het probleem heen te werken zodat de situatie werkbaar blijft. Als FreeBSD de _server_ is, kan de optie `-w=1024` gebruikt worden bij het aankoppelen door de cliënt. Als het FreeBSD-systeem de _cliënt_ is, dan dient het NFS-bestandssysteem aangekoppeld te worden met de optie `-r=1024`. Deze opties kunnen in het vierde veld van een regel in [.filename]#fstab# staan voor automatische aankoppelingen, en bij handmatige aankoppelingen met man:mount[8] kan de parameter `-o` gebruikt worden.

Soms wordt een ander probleem voor dit probleem versleten, als servers en cliënten zich op verschillende netwerken bevinden. Als dat het geval is, dan dient _vastgesteld_ te worden dat routers de benodigde UDP-informatie op de juiste wijze routeren, omdat er anders nooit NFS-verkeer gerouteerd kan worden.

In de volgende voorbeelden is `fastws` de host(interface)naam van een hoog presterend werkstation en `freebox` de host(interface)naam van een FreeBSD-systeem met een Ethernet-adapter die mindere prestaties levert. [.filename]#/sharedfs# is het geëxporteerde NFS-bestandssysteem (zie man:exports[5]) en [.filename]#/project# is het koppelpunt voor het geëxporteerde bestandssysteem op de cliënt.

[NOTE]
====
In sommige gevallen kunnen applicaties beter draaien als extra opties als `hard` of `soft` en `bg` gebruikt worden.
====

Voorbeelden voor het FreeBSD-systeem (`freebox`) als de cliënt, in [.filename]#/etc/fstab# op `freebox`:

[.programlisting]
....
fastws:/sharedfs /project nfs rw,-r=1024 0 0
....

Als een handmatig aankoppelcommando op `freebox`:

[source,shell]
....
# mount -t nfs -o -r=1024 fastws:/sharedfs /project
....

Voorbeelden voor het FreeBSD-systeem als de server, in [.filename]#/etc/fstab# op `fastws`:

[.programlisting]
....
freebox:/sharedfs /project nfs rw,-w=1024 0 0
....

Als een handmatig aankoppelcommando op `fastws`:

[source,shell]
....
# mount -t nfs -o -w=1024 freebox:/sharedfs /project
....

Bijna iedere 16-bit Ethernet-adapter werkt zonder de hierboven beschreven restricties op de lees- en schrijfgrootte.

Voor wie het wil weten wordt nu beschreven wat er gebeurt als de fout ontstaat, wat ook duidelijk maakt waarom die niet hersteld kan worden. NFS werkt meestal met een "block"-grootte van 8 K (hoewel het mogelijk is dat er kleinere fragmenten worden verwerkt). Omdat de maximale grootte van een Ethernetpakket rond de 1500 bytes ligt, wordt een NFS-"block" opgesplitst in meerdere Ethernetpakketten, hoewel het hoger in de code nog steeds één eenheid is, en wordt het ontvangen, samengevoegd en _bevestigd_ als één eenheid. De hoog presterende werkstations kunnen de pakketten waaruit een NFS-eenheid bestaat bijzonder snel naar buiten pompen. Op de kaarten met minder capaciteit worden de eerdere pakketten door de latere pakketten van dezelfde eenheid ingehaald voordat ze bij de host zijn aangekomen en daarom kan de eenheid niet worden samengesteld en bevestigd. Als gevolg daarvan ontstaat er op het werkstation een timeout en probeert het de eenheid opnieuw te sturen, maar dan weer de hele eenheid van 8 K, waardoor het proces zich herhaalt, ad infinitum. Door de grootte van de eenheid kleiner te houden dan de grootte van een Ethernetpakket, is het zeker dat elk compleet aangekomen Ethernetpakket bevestigd kan worden, zodat de deadlock niet ontstaat.
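Ter illustratie van de bovenstaande verklaring, uitgaande van een standaard Ethernet-MTU van ongeveer 1500 bytes:

[.programlisting]
....
NFS-blok van 8 K:   8192 bytes  ->  wordt opgesplitst in ca. 6 Ethernetpakketten
Met -r/-w=1024:     1024 bytes  ->  past (inclusief headers) in 1 Ethernetpakket
....

Iedere eenheid past dan in één pakket en kan dus afzonderlijk bevestigd worden.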
Overruns may still occur when a high-performance workstation is hammering a PC system, but with the better network cards such overruns are at least not guaranteed on NFS "units". When an overrun occurs, the units affected are retransmitted, and there is a fair chance that they will then be received, reassembled, and acknowledged.

[[network-nis]]
== Network Information System (NIS/YP)

=== What Is It?

NIS, which stands for Network Information Services, was developed by Sun Microsystems to centralize administration of UNIX(R) (originally SunOS(TM)) systems. It has since essentially become an industry standard; all major UNIX(R)-like systems (Solaris(TM), HP-UX, AIX(R), Linux(R), NetBSD, OpenBSD, FreeBSD, and so forth) support NIS.

NIS was formerly known as Yellow Pages, but because of trademark issues Sun changed the name. The old term (and yp) is still often seen and used.

It is an RPC-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location.

It is comparable to the Windows NT(R) domain system; although the internal implementations of the two are not at all alike, the basic functionality can be compared.

=== Terms/Processes You Should Know

There are several terms and several important user processes that play a role when implementing NIS on FreeBSD, both when creating an NIS server and when setting up a system to act as an NIS client:

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term
| Description

|NIS domainname
|An NIS master server and all of its clients (including its slave servers) have an NIS domainname.
Similar to a Windows NT(R) domainname, the NIS domainname has nothing to do with DNS.

|rpcbind
|Must be running in order to enable RPC (Remote Procedure Call), a network protocol used by NIS. If rpcbind is not running, it will be impossible to run an NIS server, and a machine cannot act as an NIS client either.

|ypbind
|"Binds" an NIS client to its NIS server. It does this by taking the NIS domainname of the system and using RPC to connect to the server. ypbind is the core of client-server communication in an NIS environment; if ypbind dies on a client machine, that machine will no longer be able to reach the NIS server.

|ypserv
|Should only be running on NIS servers; this is the NIS server process itself. If man:ypserv[8] dies, the server can no longer respond to NIS requests (hopefully, there is a slave server to take over). There are some implementations of NIS (but not the one on FreeBSD) that do not try to connect to another server when the server they were previously using dies. In that case, often the only thing that helps is restarting the server process (or even the whole server) or the ypbind process on the client.

|rpc.yppasswdd
|Another process that should only be running on NIS master servers; this is a daemon that allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the NIS master server and change their passwords there.

|===

=== How Does It Work?

There are three types of hosts in an NIS environment: master servers, slave servers, and clients. Servers act as the central repository for host configuration information. Master servers hold the authoritative copy of this information, while slave servers mirror it for redundancy. Clients rely on the servers to provide this information to them.

In this way, information from many files can be shared.
The files [.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in one of these local files, it instead makes a query to the NIS server it is bound to.

==== Machine Types

* An _NIS master server_.
This server, analogous to a Windows NT(R) primary domain controller, maintains the files used by all of the NIS clients. The [.filename]#passwd#, [.filename]#group#, and other files used by the NIS clients live on the master server.
+
[NOTE]
====
It is possible for one machine to be the NIS master server for more than one NIS domain. However, that will not be covered in this introduction, which assumes a relatively small-scale environment.
====
* _NIS slave servers_.
Similar to Windows NT(R) backup domain controllers, NIS slave servers maintain copies of the NIS master's data files. NIS slave servers provide the redundancy that is needed in important environments. They also help to balance the load of the master server: NIS clients always attach to the NIS server that responds first, and that includes slave-server replies.
* _NIS clients_.
NIS clients, like most Windows NT(R) workstations, authenticate against the NIS server (or the Windows NT(R) domain controller in the case of Windows NT(R) workstations) when logging on.

=== Using NIS/YP

This section covers setting up a sample NIS environment.

==== Planning

Assume the role of the administrator of a small university lab. This lab, which consists of FreeBSD machines, currently has no centralized point of administration; every machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These files are kept in sync with each other only through manual intervention.
Currently, when a user is added to the lab, `adduser` has to be run on all 15 machines. Clearly, this has to change, so the decision has been made to convert the lab to NIS, using two of the machines as servers.

The lab now looks something like this:

[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Machine name
| IP address
| Machine role

|`ellington`
|`10.0.0.2`
|NIS master

|`coltrane`
|`10.0.0.3`
|NIS slave

|`basie`
|`10.0.0.4`
|Faculty workstation

|`bird`
|`10.0.0.5`
|Client machine

|`cli[1-11]`
|`10.0.0.[6-17]`
|Other client machines

|===

When setting up an NIS layout for the first time, it pays to think through how it should be organized. No matter how large the network is, a few decisions have to be made.

===== Choosing an NIS Domainname

This may not be the "domainname" you are used to, which is why it is called the "NIS domainname". When a client broadcasts its requests for information, it includes the name of the NIS domain it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domainname as the name of a group of hosts that are related in some way.

Some organizations choose to use their Internet domainname as their NIS domainname. This is not recommended, as it can cause confusion when trying to debug network problems. The NIS domainname should be unique within the network, and it is helpful if it describes the group of machines it represents. For example, the finance department of Acme Inc. might use the NIS domainname "acme-fin". For this example, the name `test-domain` is used.

However, some operating systems (notably SunOS(TM)) use their NIS domainname as their Internet domainname.
If one or more machines on the network have this restriction, the Internet domainname _must_ be used as the NIS domainname.

===== Physical Server Requirements

There are several things to keep in mind when choosing a machine to use as an NIS server. One of the unfortunate things about NIS is the level of dependency the clients have on the server. If a client cannot contact the server for its NIS domain, that machine very often becomes unusable; the lack of user and group information causes most systems to freeze up. Therefore, choose a machine that will not be rebooted frequently and is not used for development. Ideally, the NIS server is a standalone machine whose sole purpose in life is to be an NIS server. If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services; just keep in mind that if the NIS server becomes unavailable, this adversely affects _all_ NIS clients.

==== NIS Servers

The canonical copies of all NIS information are stored on a single machine called the NIS master server. The databases used to store the information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/[domainname]#, where [.filename]#[domainname]# is the name of the NIS domain being served. A single NIS server can support several domains at once, so it is possible to have several such directories, one for each supported domain. Each domain has its own independent set of maps.

On NIS master and slave servers, all NIS requests are handled by the `ypserv` daemon. `ypserv` is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client.
===== Setting Up an NIS Master Server

Setting up a master NIS server can be relatively straightforward, depending on your needs; FreeBSD comes with support for NIS out of the box. All that is needed is to add the following lines to [.filename]#/etc/rc.conf#, and FreeBSD does the rest:

[.procedure]
====
[.programlisting]
....
nisdomainname="test-domain"
....
. This line sets the NIS domainname to `test-domain` when the network is configured (for example, after rebooting).
+
[.programlisting]
....
nis_server_enable="YES"
....
+
. This tells FreeBSD to start up the NIS server processes the next time the networking is started.
+
[.programlisting]
....
nis_yppasswdd_enable="YES"
....
. This enables the `rpc.yppasswdd` daemon which, as mentioned above, allows users to change their NIS password from a client machine.
====

[NOTE]
====
Depending on the NIS setup, additional entries may be needed. The section <> provides more details.
====

After setting up the above entries, run `/etc/netstart` as the superuser. It sets everything up, using the values configured in [.filename]#/etc/rc.conf#. As a last step, before initializing the NIS maps, start the ypserv daemon manually:

[source,shell]
....
# service ypserv start
....

===== Initializing the NIS Maps

The _NIS maps_ are database files that are kept in the [.filename]#/var/yp# directory. They are generated from configuration files in the [.filename]#/etc# directory of the NIS master, with one exception: [.filename]#/etc/master.passwd#. There is a good reason for this: it is not desirable to propagate the passwords for `root` and other administrative accounts to all the servers in the NIS domain. Therefore, before initializing the NIS maps, run:

[source,shell]
....
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
....
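The interactive edit in the last step, pruning administrative entries from the copy, can also be expressed non-interactively. A hedged sketch, where the list of account names is an assumption and must be adapted to the local system:

[source,shell]
....
# Hypothetical helper: filter a master.passwd stream, dropping the named
# system accounts and any other UID-0 entry. Extend the name list as needed.
prune_passwd() {
    awk -F: '
        $1 ~ /^(root|toor|daemon|bin|tty|kmem|games)$/ { next }  # named accounts
        $3 == 0 { next }                                         # other UID-0 entries
        { print }
    '
}

# Example usage against the copy made above:
#   prune_passwd < /var/yp/master.passwd > /var/yp/master.passwd.new
#   mv /var/yp/master.passwd.new /var/yp/master.passwd
#   chmod 600 /var/yp/master.passwd   # neither group- nor world-readable
....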
Then remove all of the system accounts (`bin`, `tty`, `kmem`, `games`, and so on), as well as any other accounts that should not be propagated to the NIS clients (for example `root` and any other UID 0 (superuser) accounts).

[NOTE]
====
[.filename]#/var/yp/master.passwd# should be neither group- nor world-readable (mode 600)! Use `chmod` to adjust the permissions if necessary.
====

When this is done, the NIS maps can be initialized. FreeBSD includes a script named `ypinit` to do this (see its manual page for more information). This script is available on most UNIX(R) operating systems, although not on all of them; on Digital UNIX/Compaq Tru64 UNIX it is called `ypsetup`. Since maps for an NIS master are being generated, pass the option `-m` to `ypinit`. Assuming the preceding steps have been completed, the NIS maps can be generated as follows:

[source,shell]
....
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server   :  ellington
next host to add:  coltrane
next host to add:  ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct?  [y/n: y] y

[..output from map generation..]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....
`ypinit` should have created [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. When created, this file assumes a single-server NIS environment with only FreeBSD machines. Since `test-domain` has a slave server as well, [.filename]#/var/yp/Makefile# must be edited:

[source,shell]
....
ellington# vi /var/yp/Makefile
....

If the following line is not already commented out, it should be:

[.programlisting]
....
NOPUSH = "True"
....

===== Setting Up an NIS Slave Server

Setting up an NIS slave server is even simpler than setting up the master. Log on to the slave server and edit [.filename]#/etc/rc.conf# exactly as for the master server. The only difference is that the option `-s` must now be used when running `ypinit`. The `-s` option also requires the name of the NIS master, so the command looks like this:

[source,shell]
....
coltrane# ypinit -s ellington test-domain

Server Type: SLAVE Domain: test-domain Master: ellington

Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n]  n

Ok, please remember to go back and redo manually whatever fails.
If you don't, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred

coltrane has been setup as an YP slave server without any errors.
Don't forget to update map ypservers on ellington.
....

There should now be a directory called [.filename]#/var/yp/test-domain# containing copies of the NIS master server's maps. These must stay up to date. The following entries in [.filename]#/etc/crontab# on the slave servers take care of that:

[.programlisting]
....
20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuid
....

These two lines force the slave to synchronize its maps with the maps on the master server. This is not strictly required, because the master server automatically attempts to propagate any changes to its NIS maps to its slaves.
However, because of the importance of correct password information on the clients that depend on the slave server, it is recommended to force frequent updates of the password maps in particular. This is even more important on busy networks, where map updates may not always complete.

Now run `/etc/netstart` on the slave server as well, which in turn starts the NIS server.

==== NIS Clients

An NIS client establishes what is called a binding to a particular NIS server using the `ypbind` daemon. `ypbind` checks the system's default domain (as set with `domainname`) and begins broadcasting RPC requests on the local network. These requests specify the name of the domain for which `ypbind` is attempting to establish a binding. If a server that has been configured to serve the requested domain receives one of the broadcasts, it responds to `ypbind`, which records the server's IP address. If several servers are available (a master and several slaves, for example), `ypbind` uses the address of the first one to respond. From that point on, the client directs all of its NIS requests to that server. `ypbind` occasionally "pings" the server to make sure it is still up and running. If a ping goes unanswered within a reasonable amount of time, `ypbind` marks the domain as unbound and begins broadcasting again, in the hope of locating another server.

===== Setting Up an NIS Client

Setting up a FreeBSD machine to be an NIS client is fairly straightforward:

[.procedure]
====
. Edit [.filename]#/etc/rc.conf# and add the following lines to set the NIS domainname and to start `ypbind` when the network starts:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....
+
. To import all possible account entries from the NIS server, remove all user accounts from [.filename]#/etc/master.passwd# and use `vipw` to add the following line to the end of the file:
+
[.programlisting]
....
+:::::::::
....
+
[NOTE]
======
This line grants every valid account in the NIS server's password maps access. There are many ways to configure the NIS client by changing this line; see the section <> below for more information. Very detailed information can be found in the book `Managing NFS and NIS`, published by O'Reilly.
======
+
[NOTE]
======
At least one local account (i.e., not imported via NIS) must be kept in [.filename]#/etc/master.passwd#, and it should also be a member of the group `wheel`. If something goes wrong with NIS, this account can be used to log in over the network, become `root`, and fix the system.
======
+
. To import all possible group entries from the NIS server, add the following line to [.filename]#/etc/group#:
+
[.programlisting]
....
+:*::
....
====

To start the NIS client immediately, execute the following commands as the superuser:

[source,shell]
....
# /etc/netstart
# service ypbind start
....

After completing these steps, `ypcat passwd` should show the NIS server's passwd map.

=== NIS Security

In general, any remote user can issue an RPC request to man:ypserv[8] and retrieve the contents of the NIS maps, provided the remote user knows the domainname. To prevent such unauthorized transactions, man:ypserv[8] supports a feature called "securenets", which can be used to restrict access to a given set of hosts. At startup, man:ypserv[8] attempts to load the securenets information from a file called [.filename]#/var/yp/securenets#.
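The check that securenets performs is a simple masked comparison: a request's source address matches an entry when address AND mask equals network AND mask. A minimal shell illustration of that comparison (a hedged sketch; the addresses are examples only, not part of the real implementation):

[source,shell]
....
# Convert a dotted quad to a single integer.
ip2int() {
    set -- $(IFS=.; echo $1)
    echo $(( $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 ))
}

# matches <client> <network> <mask> -- succeed when the client falls
# inside the network/mask pair, as a securenets entry would allow.
matches() {
    c=$(ip2int "$1"); n=$(ip2int "$2"); m=$(ip2int "$3")
    [ $(( c & m )) -eq $(( n & m )) ]
}

matches 192.168.128.42 192.168.128.0 255.255.255.0 && echo "accepted"
matches 10.99.0.1      192.168.128.0 255.255.255.0 || echo "ignored (and logged)"
....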
[NOTE]
====
This path may vary, depending on the path specified with the `-p` option.
====

This file contains rules consisting of a network specification and a network mask, separated by white space. Lines starting with `#` are treated as comments. A sample securenets file might look like this:

[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1     255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0      255.255.240.0
....

If man:ypserv[8] receives a request from an address that matches one of these rules, the request is processed normally. If a request matches none of the rules, it is ignored and a warning message is logged. If the file [.filename]#/var/yp/securenets# does not exist, `ypserv` accepts connections from any host.

The `ypserv` program also supports Wietse Venema's TCP Wrapper package. This allows an administrator to use the TCP Wrapper configuration files for access control instead of [.filename]#/var/yp/securenets#.

[NOTE]
====
While both of these access control mechanisms provide some security, they, like the privileged port test, are vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at the firewall.

Servers using [.filename]#/var/yp/securenets# may fail to serve legitimate NIS clients with very old TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts and/or fail to observe the subnet mask when calculating the broadcast address.
While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of the client systems in question from NIS, or force the abandonment of [.filename]#/var/yp/securenets#. Using [.filename]#/var/yp/securenets# on a server with such an archaic TCP/IP implementation is a really bad idea and will lead to the loss of NIS functionality for large parts of the network.

Using the TCP Wrapper package increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially on busy networks or with slow NIS servers. If one or more client systems suffer from these symptoms, they should be converted into NIS slave servers and forced to bind to themselves.
====

=== Barring Some Users from Logging On

In the lab, there is the machine `basie`, which is supposed to be a faculty-only workstation. It is not desirable to remove this machine from the NIS domain, yet the [.filename]#passwd# file on the master NIS server contains accounts for both faculty and students. What can be done?

There is a way to bar specific users from logging on to a machine, even if they are present in the NIS database. To do this, add `-username` with the correct number of colons (as in the other entries) to the end of [.filename]#/etc/master.passwd# on the client machine, where _username_ is the username of the user who is not to be allowed to log in. The line with the barred user must come before the `+` line that admits the NIS users. This is preferably done with `vipw`, since `vipw` sanity-checks the changes to [.filename]#/etc/master.passwd# and also rebuilds the password database after editing.
For example, to bar the user `bill` from logging on to `basie`:

[source,shell]
....
basie# vipw
[add -bill::::::::: to the end, exit]
vipw: rebuilding the database...
vipw: done

basie# cat /etc/master.passwd

root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin
operator:*:2:5::0:0:System &:/:/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin
-bill:::::::::
+:::::::::

basie#
....

[[network-netgroups]]
=== Using Netgroups

The method shown in the previous section works reasonably well if special rules are needed for only a small number of users and/or machines. On larger networks, it simply _will_ happen that some users are forgotten when barring them from logging on to sensitive machines, or it may even become necessary to modify each machine separately, thereby losing the main benefit of NIS: _centralized_ administration.

The NIS developers' solution to this problem is called _netgroups_. Their purpose and semantics can be compared to the normal groups used by UNIX(R) file systems. The main differences are the lack of a numeric ID and the ability to define a netgroup that includes both users and other netgroups.
Netgroups were developed to handle large, complex networks with hundreds of users and machines. On the one hand, this is a Good Thing; on the other hand, this complexity makes it almost impossible to explain netgroups with a few simple examples. The rest of this section illustrates that problem.

Assume that the successful introduction of NIS in the lab has caught the interest of a central administration group, whose next job is to extend the NIS domain to cover a number of other machines on campus. The two tables below contain the names of the new users and the new machines, along with brief descriptions of each.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| User names
| Description

|`alpha`, `beta`
|Ordinary employees of the IT department

|`charlie`, `delta`
|Junior employees of the IT department

|`echo`, `foxtrott`, `golf`, ...
|Ordinary employees

|`able`, `baker`, ...
|Interns

|===

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Machine names
| Description

|`war`, `death`, `famine`, `pollution`
|The most important servers. Only senior employees of the IT department are allowed to log on to these machines.

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|Less important servers. All members of the IT department are allowed to log on to these machines.

|`one`, `two`, `three`, `four`, ...
|Ordinary workstations. Only _real_ employees are allowed to log on to these machines.

|`trashcan`
|A very old machine without any critical data. Even the interns are allowed to use this box.

|===

If these restrictions were implemented by barring each user separately, a `-user` line would have to be added to each system's [.filename]#passwd# for every user who is not allowed to log on to that system. Forgetting even one line can mean trouble.
It may be feasible to do this correctly during the initial setup of a machine, but the entries _will_ be forgotten for new users during day-to-day operations. After all, Murphy was an optimist.

Handling this situation with netgroups offers several advantages. Each user need not be handled separately; a user can be assigned to one or more netgroups, and logins can be allowed or forbidden for all members of such a group. When a new machine is added, login restrictions only need to be defined for the netgroups. When a new user is added, the user only needs to be added to the appropriate netgroups. These changes are independent of each other: no more "for each combination of user and machine, do the following...". If the NIS setup is planned carefully, only one central configuration file needs to be modified to grant or deny access to machines.

The first step is the initialization of the NIS map netgroup. FreeBSD's man:ypinit[8] does not create this map by default, but its NIS implementation supports it once it has been created. An empty map is created like this:

[source,shell]
....
ellington# vi /var/yp/netgroup
....

Now it can be filled with content. In this example, at least four netgroups are needed: IT employees, IT juniors, ordinary employees, and interns.

[.programlisting]
....
IT_MW   (,alpha,test-domain)    (,beta,test-domain)
IT_APP  (,charlie,test-domain)  (,delta,test-domain)
USERS   (,echo,test-domain)     (,foxtrott,test-domain) \
        (,golf,test-domain)
STAGS   (,able,test-domain)     (,baker,test-domain)
....

`IT_MW`, `IT_APP`, and so on are the names of the netgroups. Each group in parentheses adds one or more user accounts to the group. The three fields within a group are:

. The name of the host or hosts on which the following items are valid.
If no hostname is specified, the entry is valid on all hosts. If a hostname is specified, one enters a dark, spooky, and confusing realm.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account. Accounts may be imported into a netgroup from other NIS domains, in case an administrator is unfortunate enough to have more than one NIS domain.

Each of these fields may contain wildcards; see man:netgroup[5] for details.

[NOTE]
====
A netgroup name should not be longer than eight characters, especially when other operating systems are used within the NIS domain. The names are case-sensitive; using capital letters for netgroup names is an easy way to distinguish between user, machine, and netgroup names.

Some NIS clients (other than those running FreeBSD) cannot handle netgroups with a large number of entries. For example, some older versions of SunOS(TM) start to cause trouble if a netgroup contains more than 15 _entries_. This limit can be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups:

[.programlisting]
....
BIGGRP1  (,joe1,domain)  (,joe2,domain)  (,joe3,domain)  [...]
BIGGRP2  (,joe16,domain)  (,joe17,domain)  [...]
BIGGRP3  (,joe31,domain)  (,joe32,domain)
BIGGROUP  BIGGRP1 BIGGRP2 BIGGRP3
....

This process can be repeated if more than 225 users are needed within a single netgroup.
====

Activating and distributing the new NIS map is easy:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

This generates the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost#, and [.filename]#netgroup.byuser#. Use man:ypcat[1] to check whether the new NIS maps are available:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....
The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command produces no output unless host-specific netgroups have been defined. The third command can be used to get the list of netgroups for a user.

Setting up a client is fairly straightforward. To configure the server `war`, only the following line has to be replaced using man:vipw[8]:

[.programlisting]
....
+:::::::::
....

Replace the line above with the one below:

[.programlisting]
....
+@IT_MW:::::::::
....

Now only the users in the netgroup `IT_MW` are imported into the password database of the host `war`, so that only those users are allowed to log in.

Unfortunately, this limitation also applies to the `~` function of the shell and to all routines that convert between user names and numerical user IDs. In other words, `cd ~user` will not work, `ls -l` will show the numerical ID instead of the user name, and `find . -user joe -print` will fail with the message `No such user`. To fix this, all users have to be imported _without allowing them to log in to the server_. This can be done by adding another line to [.filename]#/etc/master.passwd#:

[.programlisting]
....
+:::::::::/sbin/nologin
....

This line means "import all users, but replace the shell with [.filename]#/sbin/nologin#". Any field of a `passwd` entry can be replaced by a default value in [.filename]#/etc/master.passwd#.

[WARNING]
====
The line `+:::::::::/sbin/nologin` must come after `+@IT_MW:::::::::`. Otherwise, all users imported from NIS will get [.filename]#/sbin/nologin# as their login shell.
====

After this change, only one NIS map needs to be modified when a new employee joins the IT department.
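Taken together, the NIS-related entries at the end of [.filename]#/etc/master.passwd# on `war` then look like this (the local system accounts above them are omitted; the ordering is the one required by the warning above):

[.programlisting]
....
+@IT_MW:::::::::
+:::::::::/sbin/nologin
....

The first line imports the accounts that may actually log in; the second imports everyone else with a disabled login shell, so that name/ID lookups still work.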
The same approach can be used for the less important servers by replacing the old `+:::::::::` in their local copies of [.filename]#/etc/master.passwd# with something like this:

[.programlisting]
....
+@IT_MW:::::::::
+@IT_APP:::::::::
+:::::::::/sbin/nologin
....

The corresponding lines for the normal workstations are:

[.programlisting]
....
+@IT_MW:::::::::
+@USERS:::::::::
+:::::::::/sbin/nologin
....

And all would be well until a policy change a few weeks later: the IT department starts hiring interns. The IT interns are allowed to use the normal workstations and the less important servers, and the IT apprentices are allowed to log in to the main servers. That can be done by creating a new netgroup `ITSTAG`, adding the new IT interns to that netgroup, and then changing the configuration on each and every machine... As the old saying goes: "Errors in centralized planning lead to global mess."

These situations can be avoided by using NIS's ability to include netgroups inside other netgroups. It is possible to create role-based netgroups. For example, a netgroup `BIGSRV` could be created to restrict logins to the important servers, another netgroup `SMALLSRV` for the less important servers, and a third netgroup `USERBOX` for the normal workstations. Each of these netgroups can contain the netgroups that are allowed to log in to those machines. The new entries in the NIS map netgroup then look like this:

[.programlisting]
....
BIGSRV    IT_MW  IT_APP
SMALLSRV  IT_MW  IT_APP  ITSTAG
USERBOX   IT_MW  ITSTAG  USERS
....

This method of defining login restrictions works reasonably well when groups of machines with identical restrictions can be defined. Unfortunately, that turns out to be the exception rather than the rule. Most of the time, it must be possible to define per machine who may and who may not log in.
Machine-specific netgroups are therefore the other way to deal with the policy change outlined above. In this scenario, [.filename]#/etc/master.passwd# on each machine contains two lines starting with "+". The first adds the netgroup with the accounts that may log in to the machine, and the second adds all other accounts with [.filename]#/sbin/nologin# as their shell. It is a good idea to use the machine name in "ALL CAPS" as the name of the netgroup. The lines look roughly like this:

[.programlisting]
....
+@MACHINENAME:::::::::
+:::::::::/sbin/nologin
....

Once this has been done for all machines, the local versions of [.filename]#/etc/master.passwd# never have to be modified again. All further changes can then be made by modifying the NIS map. Here is an example of a possible netgroup map for the scenario described, with a few additions:

[.programlisting]
....
# Define the user groups first
IT_MW     (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITSTAG    (,kilo,test-domain)     (,lima,test-domain)
D_STAGS   (,able,test-domain)     (,baker,test-domain)
#
# Now a number of role-based groups
USERS     DEPT1   DEPT2   DEPT3
BIGSRV    IT_MW   IT_APP
SMALLSRV  IT_MW   IT_APP  ITSTAG
USERBOX   IT_MW   ITSTAG  USERS
#
# And a group for special tasks.
# Allow echo and golf access to the anti-virus machine.
SECURITY  IT_MW   (,echo,test-domain)  (,golf,test-domain)
#
# Machine-based netgroups
# Main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server.
POLLUTION BIGSRV  (,india,test-domain)
#
# This one is really important and needs stricter access requirements.
DEATH     IT_MW
#
# The anti-virus machine mentioned above.
ONE       SECURITY
#
# A machine that may only be used by a single user.
TWO       (,hotel,test-domain)
# [...more groups follow]
....

If some kind of database is used to manage the user accounts, it should at least be possible to create the first part of the map with the database's reporting tools. That way, new users automatically get access to the machines.

One last word of caution: it is not always advisable to use machine-based netgroups. When deploying dozens or even hundreds of identical machines for, say, student labs, it is better to use role-based netgroups instead of machine-based netgroups to keep the size of the NIS map within reasonable limits.

=== Important things to remember

A number of things work differently in an NIS environment.

* Whenever a user has to be added, that user must be added to the master NIS server _only_, and _it must not be forgotten to rebuild the NIS maps_. If this is forgotten, the new user cannot log in anywhere except on the NIS master. For example, to add a new user `jsmith`:
+
[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....
+
`adduser jsmith` can also be used instead of `pw useradd jsmith`.
* _The administrative accounts must be kept out of the NIS maps._ It would be unfortunate if administrative accounts and passwords propagated to machines whose users should not have access to that information.
* _The NIS master and slaves must be kept secure, and their downtime must be minimized._ If one of these machines is hacked or simply switched off, quite a lot of people will, in theory, no longer be able to log in.
+
This is the most important weakness of any centralized administration system.
If the NIS servers are not protected properly, a lot of users will be angry!

=== NIS v1 compatibility

FreeBSD's ypserv offers some support for NIS v1 clients. FreeBSD's NIS implementation only uses the NIS v2 protocol, but other implementations include support for the v1 protocol for backwards compatibility with older systems. The ypbind daemons supplied with those systems try to establish a binding to an NIS v1 server even though they may never actually need one (and they may keep broadcasting even after receiving a response from a v2 server). Note that although support for normal client calls is provided, this version of ypserv cannot handle v1 map transfer requests. Consequently, ypserv cannot be used as a master or slave in conjunction with older NIS servers that only support the v1 protocol. Fortunately, not many of those servers are still in use today.

[[network-nis-server-is-client]]
=== NIS servers that are also NIS clients

Care must be taken when running ypserv in a multi-server domain where the server machines are also NIS clients. It is generally better to force the servers to bind to themselves than to allow them to broadcast bind requests and risk becoming bound to each other. Strange failure modes can occur when one server goes down and other servers depend on it. Eventually all the clients will time out and bind to another server, but the delay involved can be considerable and the failure mode is still present, since the servers may bind to each other all over again.

A host can be bound to a particular server by passing `ypbind` the `-S` flag.
To avoid having to do this manually after every reboot, the following lines can be added to [.filename]#/etc/rc.conf# of the NIS server:

[.programlisting]
....
nis_client_enable="YES" # also start the client part
nis_client_flags="-S NIS domain,server"
....

See man:ypbind[8] for more information.

=== Password formats

One of the most common problems when implementing NIS is password format compatibility. If an NIS server uses DES-encrypted passwords, it can only support clients that also use DES. For example, if there are Solaris(TM) NIS clients in the network, the passwords will almost certainly have to be DES-encrypted.

Which format the clients and servers use can be seen in [.filename]#/etc/login.conf#. If a host uses DES-encrypted passwords, the `default` class contains an entry like this:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
	[Further entries elided]
....

Other possible values for `passwd_format` are `blf` and `md5` (for Blowfish- and MD5-encrypted passwords, respectively).

If changes have been made to [.filename]#/etc/login.conf#, the login capability database must be rebuilt by running the following command as `root`:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords already in [.filename]#/etc/master.passwd# is not updated until a user changes their password for the first time _after_ the login capability database has been rebuilt.
====

Next, to make sure that passwords are encrypted in the chosen format, check that `crypt_default` in [.filename]#/etc/auth.conf# gives precedence to the chosen format. To do so, place the chosen format first in the list.
For example, when DES-encrypted passwords are used, the entry should look like this:

[.programlisting]
....
crypt_default = des blf md5
....

Having followed the above steps on all FreeBSD-based NIS servers and clients, it is certain that they all agree on which password format is used within the network. If there is trouble authenticating on an NIS client, this is a good place to start looking for the source of the problem. Remember: when deploying an NIS server in a heterogeneous environment, DES will probably have to be used on all systems, because it is the lowest common denominator.

[[network-dhcp]]
== Automatic network configuration (DHCP)

=== What is DHCP?

DHCP, the Dynamic Host Configuration Protocol, describes how a system can connect to a network and obtain the information it needs to communicate on that network. FreeBSD uses the OpenBSD `dhclient`, taken from OpenBSD 3.7. All information given here about `dhclient` applies to both the ISC and the OpenBSD DHCP client. The DHCP server is the one included in the ISC distribution.

=== What this section covers

This section describes the client-side components of the ISC and OpenBSD DHCP clients and the server-side components of the ISC DHCP system. The client program, `dhclient`, is included in FreeBSD by default, and the server is available through the package:net/isc-dhcp42-server[] port. In addition to the information below, the man:dhclient[8], man:dhcp-options[5], and man:dhclient.conf[5] manual pages are useful resources.

=== How it works

When `dhclient`, the DHCP client, is executed on a client machine, it begins broadcasting requests for configuration information. By default, these requests are made on UDP port 68.
The server answers on UDP 67, giving the client an IP address and other relevant network information such as a netmask, router, and DNS servers. All of this information comes in the form of a DHCP "lease" and is only valid for a certain amount of time (configured by the administrator of the DHCP server). This way, IP addresses of clients that are no longer connected to the network (stale) can be automatically reclaimed.

DHCP clients can obtain a great deal of information from the server. An exhaustive list can be found in man:dhcp-options[5].

=== FreeBSD integration

FreeBSD fully integrates the OpenBSD DHCP client, `dhclient`. DHCP client support is provided within both the installer and the base system, so detailed knowledge of network configuration is unnecessary on any network that runs a DHCP server.

DHCP is supported by sysinstall. When configuring a network interface within sysinstall, the second question asked is: "Do you want to try DHCP configuration of the interface?" Answering affirmatively will execute `dhclient`, and if successful, the network configuration information is filled in automatically.

Two things are required to use DHCP at system startup:

* The [.filename]#bpf# device must be compiled into the kernel. This can be done by adding `device bpf` to the kernel configuration file and rebuilding the kernel. More information about building kernels is available in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD kernel].
+
The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel that is supplied with FreeBSD, so if a custom kernel is not in use, no new kernel needs to be built to get DHCP working.
+
[NOTE]
====
For those who are particularly security conscious, it should be pointed out that [.filename]#bpf# is also the device that allows packet sniffers to work (although such programs still have to run as `root`). [.filename]#bpf# _is_ required for DHCP, but if security is a major concern, [.filename]#bpf# should probably not be included in a kernel merely in the expectation that DHCP may be used at some point in the future.
====
* By default, DHCP configuration on FreeBSD runs in the background, or _asynchronously_. Other startup scripts continue while DHCP completes, which speeds up system startup.
+
Background DHCP works well when the DHCP server responds quickly to requests and the DHCP configuration process goes quickly. On some systems, however, DHCP can take a long time to complete. If network services attempt to run before DHCP has finished, they will fail. Running DHCP in _synchronous_ mode prevents this problem, pausing startup until the DHCP configuration has completed.
+
To connect to a DHCP server in the background while other startup scripts continue (asynchronous mode), use the value "`DHCP`" in [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
ifconfig_fxp0="DHCP"
....
+
To pause startup until DHCP completes, use synchronous mode with the value "`SYNCDHCP`":
+
[.programlisting]
....
ifconfig_fxp0="SYNCDHCP"
....
+
[NOTE]
====
Replace _fxp0_ as shown in these examples with the name of the interface to be dynamically configured, as described in crossref:config[config-network-setup,Configuring network interface cards].
====
+
If a different location for `dhclient` is in use, or if extra parameters must be passed to `dhclient`, also add something like the following:
+
[.programlisting]
....
dhclient_program="/sbin/dhclient"
dhclient_flags=""
....
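Although [.filename]#/etc/dhclient.conf# usually needs no changes, a short sketch shows the kind of tuning it allows. The interface name _fxp0_ and the prepended resolver address below are illustrative assumptions, not required values:

[.programlisting]
....
# /etc/dhclient.conf -- illustrative sketch; the defaults are usually fine
interface "fxp0" {
        # Always try a local resolver first, before the servers in the lease
        prepend domain-name-servers 127.0.0.1;
        # Limit which options are requested from the DHCP server
        request subnet-mask, broadcast-address, routers,
                domain-name, domain-name-servers;
}
....

See man:dhclient.conf[5] for the full list of available statements.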
The DHCP server, dhcpd, is included as part of the package:net/isc-dhcp42-server[] port in the Ports Collection. This port contains the ISC DHCP server and its documentation.

=== Files

* [.filename]#/etc/dhclient.conf#
+
`dhclient` requires a configuration file, [.filename]#/etc/dhclient.conf#. Typically the file contains only comments, as the defaults are reasonably sane. This configuration file is described in man:dhclient.conf[5].
* [.filename]#/sbin/dhclient#
+
`dhclient` is statically linked and resides in [.filename]#/sbin#. More information about `dhclient` can be found in man:dhclient[8].
* [.filename]#/sbin/dhclient-script#
+
`dhclient-script` is the FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any modification to work properly.
* [.filename]#/var/db/dhclient.leases.interface#
+
The DHCP client keeps a database of valid leases in this file, which is written as a log. man:dhclient.leases[5] gives a slightly longer description.

=== Further reading

The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. An additional informational resource has been set up at http://www.dhcp.org/[http://www.dhcp.org/].

[[network-dhcp-server]]
=== Installing and configuring a DHCP server

==== What this section covers

This section describes how a FreeBSD system can be configured to act as a DHCP server using the ISC (Internet Systems Consortium) implementation of the DHCP server. The server is not supplied as part of FreeBSD, so the package:net/isc-dhcp42-server[] port must be installed to provide this service. See crossref:ports[ports,Installing applications: packages and ports] for more information about the Ports Collection.

==== DHCP server installation

In order to configure a FreeBSD system as a DHCP server, the man:bpf[4] device must be compiled into the kernel.
To do this, add `device bpf` to the kernel configuration file and rebuild the kernel. More information about building kernels is available in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD kernel].

The [.filename]#bpf# device is already part of the [.filename]#GENERIC# kernel that is supplied with FreeBSD, so a custom kernel usually does not need to be built to get DHCP working.

[NOTE]
====
It should be noted that [.filename]#bpf# is also the device that allows packet sniffers to work (although the programs that use it require privileged access). [.filename]#bpf# _is_ required for DHCP, but if security is a concern, it is probably not wise to include [.filename]#bpf# in a kernel merely because DHCP may be used at some point in the future.
====

Next, the sample [.filename]#dhcpd.conf# installed by the package:net/isc-dhcp42-server[] port has to be edited. By default, this is [.filename]#/usr/local/etc/dhcpd.conf.sample#, and it should be copied to [.filename]#/usr/local/etc/dhcpd.conf# before making any changes.

==== Configuring the DHCP server

[.filename]#dhcpd.conf# consists of declarations regarding subnets and hosts, and is perhaps most easily explained using an example:

[.programlisting]
....
option domain-name "example.com"; <.>
option domain-name-servers 192.168.4.100; <.>
option subnet-mask 255.255.255.0; <.>

default-lease-time 3600; <.>
max-lease-time 86400; <.>
ddns-update-style none; <.>

subnet 192.168.4.0 netmask 255.255.255.0 {
  range 192.168.4.129 192.168.4.254; <.>
  option routers 192.168.4.1; <.>
}

host mailhost {
  hardware ethernet 02:03:04:05:06:07; <.>
  fixed-address mailhost.example.com; <.>
}
....

<.> This option specifies the domain that will be provided to clients as the default search domain.
See man:resolv.conf[5] for more information about what this means.
<.> This option specifies a comma-separated list of DNS servers that the client should use.
<.> The netmask that will be provided to clients.
<.> A client may request a lease of a specific duration. Otherwise, the server assigns a lease with this expiry time (in seconds).
<.> This is the maximum lease duration that the server will allow. If a client requests a longer lease, it will still be granted, but it will only be valid for `max-lease-time` seconds.
<.> This option specifies whether the DHCP server should attempt to update DNS when a lease is accepted or released. In the ISC implementation, this option is _required_.
<.> This defines which IP addresses are in the pool reserved for allocation to clients. IP addresses between, and including, the stated addresses are handed out to clients.
<.> Declares the default gateway that will be provided to clients.
<.> The hardware MAC address of a host, so that the DHCP server can recognize a host when it makes a request.
<.> Specifies a host that should always be given the same IP address. A host name can be used here, since the DHCP server resolves the host name itself before returning the lease information.

Once the [.filename]#dhcpd.conf# has been written, enable the DHCP server in [.filename]#/etc/rc.conf# by adding the following:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Replace the interface name `dc0` with the interface (or interfaces, separated by whitespace) on which the DHCP server should listen for DHCP requests from clients.

Then, start the server by issuing the following command:

[source,shell]
....
# service isc-dhcpd start
....
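Before starting (or restarting) the server, it can be useful to let dhcpd check the configuration file for syntax errors. The `-t` flag of the ISC dhcpd only parses the configuration and exits; the paths below are the ones used by the port:

[source,shell]
....
# /usr/local/sbin/dhcpd -t -cf /usr/local/etc/dhcpd.conf
....

Any errors in the configuration are reported, and the server should not be started until they have been fixed.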
Should any changes to the configuration be made later, it is important to note that sending a `SIGHUP` signal to dhcpd does _not_ cause the configuration to be reloaded, as it does with most daemons. A `SIGTERM` signal must be sent to stop the process, after which the daemon can be restarted using the command described above.

==== Files

* [.filename]#/usr/local/sbin/dhcpd#
+
dhcpd is statically linked and resides in [.filename]#/usr/local/sbin#. The man:dhcpd[8] manual page installed with the port gives more information about dhcpd.
* [.filename]#/usr/local/etc/dhcpd.conf#
+
dhcpd requires a configuration file, [.filename]#/usr/local/etc/dhcpd.conf#, before it will start providing service to clients. The file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server. This configuration file is described in the man:dhcpd.conf[5] manual page installed with the port.
* [.filename]#/var/db/dhcpd.leases#
+
The DHCP server keeps a database of leases it has issued in this file, which is written as a log. The man:dhcpd.leases[5] manual page that comes with the port gives a slightly longer description.
* [.filename]#/usr/local/sbin/dhcrelay#
+
dhcrelay is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. If this functionality is required, it can be made available by installing the package:net/isc-dhcp42-relay[] port. The man:dhcrelay[8] manual page provided with the port contains more detail.

[[network-dns]]
== Domain Name System (DNS)

=== Overview

By default, FreeBSD uses a version of BIND (Berkeley Internet Name Domain), which is the most common implementation of the DNS protocol. DNS is the protocol through which names are mapped to IP addresses, and vice versa.
For example, a query for `www.FreeBSD.org` is answered with the IP address of the web server of the FreeBSD Project, and a query for `ftp.FreeBSD.org` is answered with the IP address of the corresponding FTP machine. The opposite can happen as well: a query for an IP address can return the corresponding host name. It is not necessary to run a name server to perform DNS lookups on a system.

FreeBSD currently comes with the BIND9 DNS server software by default. Our installation provides enhanced security features, a new file system layout, and automated man:chroot[8] configuration.

DNS is coordinated on the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information.

Currently, BIND is maintained by the Internet Systems Consortium, https://www.isc.org/[https://www.isc.org/].

=== Terminology

To understand this document, some terms related to DNS must be understood.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Term
| Definition

|Forward DNS
|Mapping of host names to IP addresses.

|Origin
|Refers to the domain covered by a particular zone file.

|named, BIND
|Common names for the BIND name server package within FreeBSD.

|Resolver
|A system process through which a machine queries a name server for zone information.

|Reverse DNS
|Mapping of IP addresses to host names.

|Root zone
|The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory.

|Zone
|An individual domain, subdomain, or portion of the DNS administered by the same authority.
|===

Examples of zones:

* `.` is how the root zone is usually referred to in documentation.
* `org.` is a Top Level Domain (TLD) under the root zone.
* `example.org.` is a zone under the `org.` TLD.
* `1.168.192.in-addr.arpa` is a zone referencing all IP addresses that fall under the `192.168.1.*` IP address space.

As one can see, the more specific part of a host name appears to its left. For example, `example.org.` is more specific than `org.`, and `org.` is more specific than the root zone. The layout of each part of a host name is much like a file system: the [.filename]#/dev# directory falls within the root, and so on.

=== Reasons to run a name server

Name servers generally come in two forms: authoritative name servers and caching (also known as resolving) name servers.

An authoritative name server is needed when:

* One wants to serve DNS information to the world, replying authoritatively to queries.
* A domain, such as `example.org`, is registered and IP addresses need to be assigned to host names under it.
* An IP address block requires reverse DNS entries (IP to host name).
* A backup or second name server, called a slave, must reply to queries.

A caching name server is needed when:

* A local DNS server may cache and respond more quickly than a name server located farther away. When a query is made for `www.FreeBSD.org`, the resolver usually queries the name server of the ISP providing the uplink and receives the reply. With a local, caching DNS server, the query only has to be made once by the caching DNS server to the outside world. Additional queries do not have to leave the local network, since the information is already cached locally.

=== How it works

In FreeBSD, the BIND daemon is called named.

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File
| Description

|man:named[8]
|The BIND daemon.

|man:rndc[8]
|Name server control utility.
|[.filename]#/etc/namedb#
|Directory where BIND zone information resides.

|[.filename]#/etc/namedb/named.conf#
|Configuration file of the daemon.
|===

Depending on how a given zone is configured on the server, the files related to that zone can be found in the [.filename]#master#, [.filename]#slave#, or [.filename]#dynamic# subdirectories of the [.filename]#/etc/namedb# directory. These files contain the DNS information that will be given out by the name server in response to queries.

=== Starting BIND

Since BIND is installed by default, configuring it is relatively simple.

The default named configuration is that of a basic resolving name server, run in a man:chroot[8] environment, and restricted to listening on the local IPv4 loopback address (127.0.0.1). To start the server once with this configuration, use the following command:

[source,shell]
....
# service named onestart
....

To ensure that the named daemon is started at boot each time, put the following line into [.filename]#/etc/rc.conf#:

[.programlisting]
....
named_enable="YES"
....

There are obviously many configuration options for [.filename]#/etc/namedb/named.conf# that are beyond the scope of this document. If you are interested in the startup options for named on FreeBSD, take a look at the `named_*` flags in [.filename]#/etc/defaults/rc.conf# and consult the man:rc.conf[5] manual page. The crossref:config[configtuning-rcd,Using rc with FreeBSD] section is also a good read.

=== Configuration files

Configuration files for named currently reside in [.filename]#/etc/namedb# and will need modification before use, unless all that is needed is a simple resolver. This is where most of the configuration is performed.

==== [.filename]#/etc/namedb/named.conf#

[.programlisting]
....
// $FreeBSD$
//
// Refer to the named.conf(5) and named(8) manual pages, and the
// documentation in /usr/share/doc/bind9 for more details.
//
// If you are going to set up an authoritative server, make sure you
// thoroughly understand how DNS and BIND work.  Even simple mistakes
// can break the service for affected parties, or cause huge amounts
// of useless Internet traffic.

options {
	// All file and path names are relative to the chroot directory,
	// if any, and should be fully qualified.
	directory	"/etc/namedb/working";
	pid-file	"/var/run/named/pid";
	dump-file	"/var/dump/named_dump.db";
	statistics-file	"/var/stats/named.stats";

	// If named is being used only as a local resolver, this is a safe
	// default.  For named to be accessible to the network, comment this
	// option, specify the proper IP address, or delete this option.
	listen-on	{ 127.0.0.1; };

	// If you have IPv6 enabled on this system, uncomment this option for
	// use as a local resolver.  To give access to the network, specify
	// an IPv6 address, or the keyword "any".
//	listen-on-v6	{ ::1; };

	// These zones are already covered by the empty zones listed below.
	// If you remove the related empty zones below, comment these lines
	// out.
	disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
	disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
	disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";

	// If there is a DNS server available at an upstream provider, its
	// IP address can be entered on the line below, which can then be
	// enabled.  This takes advantage of its cache, thus reducing
	// overall DNS traffic on the Internet.
/*
	forwarders {
		127.0.0.1;
	};
*/

// If the 'forwarders' clause is not empty, the default is to use
// "forward first", which will fall back to sending a query from your
// local server if the name servers in 'forwarders' do not know the
// answer.  Alternatively, you can force your name server to never
// initiate queries of its own by enabling the following line:
//	forward only;

// If you wish to have forwarding configured automatically based on
// the entries in /etc/resolv.conf, uncomment the following line and
// set named_auto_forward=yes in /etc/rc.conf.  You can also enable
// named_auto_forward_only (the effect of which is described above).
//	include "/etc/namedb/auto_forward.conf";
....

As the comment says, enabling `forwarders` makes it possible to benefit from an uplink's cache. Under normal circumstances, a name server will recursively query the Internet, asking certain name servers until it finds the answer it is looking for. With this option enabled, it will query the uplink's name server (or the name server provided) first, taking advantage of that server's cache. If the uplink name server in question is a heavily trafficked, fast name server, enabling this may be well worth it.

[WARNING]
====
`127.0.0.1` will _not_ work here. Change this IP address to a name server at your uplink.
====

[.programlisting]
....
	/*
	   Modern versions of BIND use a random UDP port for each outgoing
	   query by default in order to dramatically reduce the possibility
	   of cache poisoning.  All users are strongly encouraged to utilize
	   this feature, and to configure their firewalls to accommodate it.

	   AS A LAST RESORT in order to get around a restrictive firewall
	   policy you can try enabling the option below.
	   Use of this option will significantly reduce your ability to
	   withstand cache poisoning attacks, and should be avoided if at
	   all possible.

	   Replace NNNNN in the example with a number between 49160 and 65530.
	*/
	// query-source address * port NNNNN;
};

// If you use a local name server, do not forget to put 127.0.0.1
// first in /etc/resolv.conf so this server will be queried.
// Also, make sure it is enabled in /etc/rc.conf.

// The traditional root hints mechanism.  Use this, OR the slave zones
// below.
zone "." { type hint; file "/etc/namedb/named.root"; };

/*	Slaving the following zones from the root name servers has some
	significant advantages:
	1. Faster local resolution for your users
	2. No spurious traffic will be sent from your network to the roots
	3. Greater resilience to any potential root server failure/DDoS

	On the other hand, this method requires more monitoring than the
	hints file to be sure that an unexpected failure mode has not
	incapacitated your server.  Name servers that are serving a lot of
	clients will benefit more from this approach than individual hosts.
	Use with caution.

	To use this mechanism, uncomment the entries below, and comment
	the hint zone above.

	As documented at http://dns.icann.org/services/axfr/ these zones:
	"." (the root), ARPA, IN-ADDR.ARPA, IP6.ARPA, and ROOT-SERVERS.NET
	are available for AXFR from these servers on IPv4 and IPv6:
	xfr.lax.dns.icann.org, xfr.cjr.dns.icann.org
*/
zone "." {
	type slave;
	file "/etc/namedb/slave/root.slave";
	masters {
		192.5.5.241;	// F.ROOT-SERVERS.NET.
	};
	notify no;
};
zone "arpa" {
	type slave;
	file "/etc/namedb/slave/arpa.slave";
	masters {
		192.5.5.241;	// F.ROOT-SERVERS.NET.
	};
	notify no;
};

/*	Serving the following zones locally will prevent any queries for
	these zones leaving your network and going to the root name
	servers.  This has two significant advantages:
	1.
	   Faster local resolution for your users
	2. No spurious traffic will be sent from your network to the roots
*/

// RFCs 1912 and 5735 (and BCP 32 for localhost)
zone "localhost"	{ type master; file "/etc/namedb/master/localhost-forward.db"; };
zone "127.in-addr.arpa"	{ type master; file "/etc/namedb/master/localhost-reverse.db"; };
zone "255.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };

// RFC 1912-style zone for IPv6 localhost address
zone "0.ip6.arpa"	{ type master; file "/etc/namedb/master/localhost-reverse.db"; };

// "This" network (RFCs 1912 and 5735)
zone "0.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };

// Private-use networks (RFCs 1918 and 5735)
zone "10.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "16.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "17.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "18.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "19.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "20.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "21.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "22.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "23.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "24.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "25.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "26.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "27.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "28.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "29.172.in-addr.arpa"	{ type master; file "/etc/namedb/master/empty.db"; };
zone "30.172.in-addr.arpa"	{ type master; file
"/etc/namedb/master/empty.db"; }; zone "31.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "168.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Lokale link/APIPA (RFCs 3927 en 5735) zone "254.169.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IETF protocol-toewijzingen (RFCs 5735 en 5736) zone "0.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // TEST-NET-[1-3] voor documentatie (RFCs 5735 en 5737) zone "2.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "100.51.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "113.0.203.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6-bereik voor documentatie (RFC 3849) zone "8.b.d.0.1.0.0.2.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Domeinnamen voor documentatie en testen (BCP 32) zone "test" { type master; file "/etc/namedb/master/empty.db"; }; zone "example" { type master; file "/etc/namedb/master/empty.db"; }; zone "invalid" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.com" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.net" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.org" { type master; file "/etc/namedb/master/empty.db"; }; // Router benchmarken (RFC 2544 en 5735) zone "18.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "19.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } // Gereserveerd door IANA - oude ruimte van klasse E (RFC 5735) zone "240.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "241.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "242.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "243.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "244.in-addr.arpa" { type master; file 
"/etc/namedb/master/empty.db"; } zone "245.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "246.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "247.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "248.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "249.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "250.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "251.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "252.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "253.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "254.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; } // Niet-toegewezen IPv6-adressen (RFC 4291) zone "1.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "2.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "3.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "4.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "5.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "6.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "7.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "8.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "9.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "a.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "b.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "c.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "d.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "e.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "0.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "1.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone 
"2.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "3.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "4.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "5.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "6.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "7.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "8.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "9.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "a.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "b.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "0.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "1.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "2.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "3.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "4.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "5.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "6.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "7.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } // IPv6 ULA (RFC 4193) zone "c.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "d.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } // IPv6 lokale link (RFC 4291) zone "8.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "9.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "a.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "b.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } // IPv6 verouderde site-lokale adressen (RFC 3879) zone "c.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "d.e.f.ip6.arpa" { type master; file 
"/etc/namedb/master/empty.db"; } zone "e.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } zone "f.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; } // IP6.INT is verouderd (RFC 4159) zone "ip6.int" { type master; file "/etc/namedb/master/empty.db"; } // NB: De IP-adressen hieronder zijn bedoeld als voorbeeld en dienen // niet gebruikt te worden! // // Voorbeeld instellingen voor slaafzones. Het kan handig zijn om // tenminste slaaf te worden voor de zone waar de host onderdeel van // uitmaakt. Bij uw netwerkbeheerder kan het IP-adres van de // verantwoordelijke meester-naamserver nagevraagd worden. // // Vergeet niet om de omgekeerde lookup-zone op te nemen! // Dit is genoemd na de eerste bytes van het IP-adres, in omgekeerde // volgorde, met daarachter ".IN-ADDR.ARPA", of "IP6.ARPA" voor IPv6. // // Het is van groot belang om de werking van DNS en BIND te begrijpen // voordat er een meester-zone wordt opgezet. Er zijn nogal wat // onverwachte valkuilen. Het opzetten van een slaafzone is // gewoonlijk eenvoudiger. // // NB: Zet de onderstaande voorbeelden niet blindelings aan. :-) // Gebruik in plaats hiervan echte namen en adressen. /* Een voorbeeld van een dynamische zone key "exampleorgkey" { algorithm hmac-md5; secret "sf87HJqjkqh8ac87a02lla=="; }; zone "example.org" { type master; allow-update { key "exampleorgkey"; }; file "/etc/namedb/dynamic/example.org"; }; */ /* Voorbeeld van een omgekeerde slaafzone zone "1.168.192.in-addr.arpa" { type slave; file "/etc/namedb/slave/1.168.192.in-addr.arpa"; masters { 192.168.1.1; }; }; */ .... In [.filename]#named.conf# zijn dit voorbeelden van slaafregels voor een voorwaartse en een omgekeerde zone. Voor iedere nieuwe zone die wordt aangeboden dient een nieuwe instelling voor de zone aan [.filename]#named.conf# toegevoegd te worden. De eenvoudigste instelling voor de zone `example.org` kan er als volgt uitzien: [.programlisting] .... 
zone "example.org" { type master; file "master/example.org"; }; .... De zone is een master, zoals aangegeven door het statement `type`, waarvan de zoneinformatie in [.filename]#/etc/namedb/example.org# staat, zoals het statement `file` aangeeft. [.programlisting] .... zone "example.org" { type slave; file "slave/example.org"; }; .... In het geval van de slaaf wordt de zoneinformatie voor een zone overgedragen van de master naamserver en opgeslagen in het ingestelde bestand. Als de masterserver het niet meer doet of niet bereikbaar is, dan heeft de slaveserver de overgedragen zoneinformatie nog en kan het die aanbieden. ==== Zonebestanden Een voorbeeldbestand voor een masterzone voor `example.org` (bestaande binnen [.filename]#/etc/namedb/master/example.org#) ziet er als volgt uit: [.programlisting] .... $TTL 3600 ; 1 uur standaard TTL example.org. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serienummer 10800 ; Verversen 3600 ; Opnieuw proberen 604800 ; Verlopen 300 ; Negatieve antwoord-TTL ) ; DNS Servers IN NS ns1.example.org. IN NS ns2.example.org. ; MX Records IN MX 10 mx.example.org. IN MX 20 mail.example.org. IN A 192.168.1.1 ; Machinenamen localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mail IN A 192.168.1.4 mx IN A 192.168.1.5 ; Aliases www IN CNAME example.org. .... Iedere hostnaam die eindigt op een "." is een exacte hostnaam, terwijl alles zonder een "." op het einde relatief is aan de oorsprong. Zo wordt `ns1` bijvoorbeeld vertaald naar `ns1.example.org.`. De regels in een zonebestand volgen de volgende opmaak: [.programlisting] .... recordnaam IN recordtype waarde .... De meest gebruikte DNS-records: SOA:: begin van autoriteit (start of authority) NS:: een bevoegde (autoratieve) name server A:: een hostadres CNAME:: de canonieke naam voor een alias MX:: mail exchanger PTR:: een domeinnaam pointer (gebruikt in omgekeerde DNS) [.programlisting] .... example.org. IN SOA ns1.example.org. admin.example.org. 
                        (
                        2006051501      ; Serial
                        10800           ; Refresh after 3 hours
                        3600            ; Retry after 1 hour
                        604800          ; Expire after 1 week
                        300 )           ; Negative Response TTL
....

`example.org.`:: the domain name, also the origin for this zone file.

`ns1.example.org.`:: the primary/authoritative name server for this zone.

`admin.example.org.`:: the person responsible for this zone, with the "@" of the email address replaced. (mailto:admin@example.org[admin@example.org] becomes `admin.example.org`)

`2006051501`:: the serial number of the file. This must be incremented each time the zone file is modified. Nowadays, many administrators prefer a `yyyymmddrr` format for the serial number. `2006051501` would mean the file was last modified on 2006-05-15, with the trailing `01` indicating the first modification of the zone file that day. The serial number is important, as it alerts slave name servers that a zone has been updated.

[.programlisting]
....
                IN NS           ns1.example.org.
....

The above is an NS entry. Every name server that is going to reply authoritatively for the zone must have one of these entries.

[.programlisting]
....
localhost       IN      A       127.0.0.1
ns1             IN      A       192.168.1.2
ns2             IN      A       192.168.1.3
mx              IN      A       192.168.1.4
mail            IN      A       192.168.1.5
....

An A record indicates a machine name. As seen above, `ns1.example.org` would resolve to `192.168.1.2`.

[.programlisting]
....
                IN      A       192.168.1.1
....

This line assigns IP address `192.168.1.1` to the current origin, in this case `example.org`.

[.programlisting]
....
www             IN CNAME        @
....

A canonical name record is usually used for giving aliases to a machine. In the example, `www` is an alias for the "master" machine whose name is the same as the domain name `example.org` (`192.168.1.1`). CNAMEs can never be used together with another kind of record for the same hostname.

[.programlisting]
....
                IN MX   10      mail.example.org.
....
MX records indicate which mail servers are responsible for handling incoming mail for the zone. `mail.example.org` is the hostname of a mail server, and 10 is the priority of that mail server. It is possible to configure several mail servers, with priorities of 10, 20 and so on. A mail server attempting to deliver mail to `example.org` will first try the MX with the highest priority (the record with the lowest priority number), then the second highest, and so on, until the mail can be delivered.

For in-addr.arpa zone files (reverse DNS), the same format is used, except with PTR entries instead of A or CNAME.

[.programlisting]
....
$TTL 3600

1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. (
                        2006051501      ; Serial
                        10800           ; Refresh
                        3600            ; Retry
                        604800          ; Expire
                        300 )           ; Negative Response TTL

        IN      NS      ns1.example.org.
        IN      NS      ns2.example.org.

1       IN      PTR     example.org.
2       IN      PTR     ns1.example.org.
3       IN      PTR     ns2.example.org.
4       IN      PTR     mx.example.org.
5       IN      PTR     mail.example.org.
....

This file gives the proper IP address to hostname mappings for the example domain above. It is worth noting that all names on the right side of a PTR record need to be fully qualified (i.e., end in a ".").

=== Caching name server

A caching name server is a name server whose primary role is to resolve recursive queries. It simply asks queries of its own, and remembers the answers for later use.

=== DNSSEC

Domain Name System Security Extensions, or DNSSEC, is a suite of specifications to protect resolving name servers from forged DNS data, such as spoofed DNS records. By using digital signatures, a resolver can verify the integrity of a record. Note that DNSSEC only provides integrity via digitally signing the Resource Records (RRs).
It provides neither confidentiality nor protection against false end-user assumptions. This means that it cannot protect people against visiting `example.net` instead of `example.com`. The only thing DNSSEC does is authenticate that the data has not been compromised in transit. The security of DNSSEC is an important step in securing the Internet in general. The relevant RFCs are a good place to start for more in-depth details of how DNSSEC works. See the list in <<dns-read>>.

The following sections will demonstrate how to enable DNSSEC for an authoritative DNS server and a recursive (or caching) DNS server running BIND 9. While all versions of BIND 9 support DNSSEC, it is necessary to have at least version 9.6.2 in order to be able to use the signed root zone when validating DNS queries. This is because earlier versions lack the required algorithms to enable validation using the root zone key. It is strongly recommended to use the latest version of BIND 9.7 to take advantage of automatic key updating for the root key, as well as other features to keep zones signed and keys up to date. Where configurations differ between 9.6.2 and 9.7 and later, differences will be pointed out.

==== Recursive DNS server configuration

Enabling DNSSEC validation of queries performed by a recursive DNS server requires a few changes to [.filename]#named.conf#. Before making these changes, the root zone key, or trust anchor, must be acquired. Currently the root zone key is not available in a file format BIND understands, so it has to be manually converted into the proper format. The key itself can be obtained by querying the root zone for it using dig. By running

[source,shell]
....
% dig +multi +noall +answer DNSKEY . > root.dnskey
....
the key is saved in [.filename]#root.dnskey#. The contents should look something like this:

[.programlisting]
....
. 93910 IN DNSKEY 257 3 8 (
	AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQ
	bSEW0O8gcCjFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh
	/RStIoO8g0NfnfL2MTJRkxoXbfDaUeVPQuYEhg37NZWA
	JQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaDX6RS6CXp
	oY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3
	LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGO
	Yl7OyQdXfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGc
	LmqrAmRLKBP1dfwhYB4N7knNnulqQxA+Uk1ihz0=
	) ; key id = 19036
. 93910 IN DNSKEY 256 3 8 (
	AwEAAcaGQEA+OJmOzfzVfoYN249JId7gx+OZMbxy69Hf
	UyuGBbRN0+HuTOpBxxBCkNOL+EJB9qJxt+0FEY6ZUVjE
	g58sRr4ZQ6Iu6b1xTBKgc193zUARk4mmQ/PPGxn7Cn5V
	EGJ/1h6dNaiXuRHwR+7oWh7DnzkIJChcTqlFrXDW3tjt
	) ; key id = 34525
....

Do not be alarmed if the keys obtained differ from this example. They might have changed since these instructions were last updated. The output actually contains two keys. The first key, with the value 257 after the DNSKEY record type, is the one needed. This value indicates that it is a Secure Entry Point (SEP), commonly known as a Key Signing Key (KSK). The second key, with value 256, is a subordinate key, commonly known as a Zone Signing Key (ZSK). More on the different key types comes up in <<dns-dnssec-auth>>.

Now the key must be verified and formatted so that BIND can use it. To verify the key, generate a DS RR set. Create a file containing these RRs with

[source,shell]
....
% dnssec-dsfromkey -f root.dnskey . > root.ds
....

These records use SHA-1 and SHA-256 respectively, and should look similar to the following example, where the longer record uses SHA-256.

[.programlisting]
....
. IN DS 19036 8 1 B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E
. IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
....
The SHA-256 RR can now be compared to the digest in https://data.iana.org/root-anchors/root-anchors.xml[https://data.iana.org/root-anchors/root-anchors.xml]. To be absolutely sure that the key has not been tampered with, the data in the XML file can be verified using the PGP signature in https://data.iana.org/root-anchors/root-anchors.asc[https://data.iana.org/root-anchors/root-anchors.asc].

Next, the key must be properly formatted. This differs a little between BIND versions 9.6.2 and 9.7 and later. In version 9.7, support was added to automatically track changes to the key and update it as needed. This is done with `managed-keys`, as seen in the following example. When using the older version, the key is added with a `trusted-keys` statement and has to be updated manually. For BIND 9.6.2 the format looks like:

[.programlisting]
....
trusted-keys {
	"." 257 3 8
	"AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
	FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
	bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
	X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
	W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
	Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
	QxA+Uk1ihz0=";
};
....

For version 9.7 the format will instead be:

[.programlisting]
....
managed-keys {
	"." initial-key 257 3 8
	"AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
	FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
	bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
	X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
	W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
	Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
	QxA+Uk1ihz0=";
};
....

The root key can now be added to [.filename]#named.conf#, either directly or by including a file containing the key.
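The digest comparison described above lends itself to a small script. The following sketch is purely illustrative: the [.filename]#root.ds# and [.filename]#root-anchors.xml# contents are written out here as stand-in sample data taken from the example records above, not the current trust anchor, and the file names are assumptions.

```shell
#!/bin/sh
# Sketch: compare the locally generated SHA-256 DS digest against the
# digest published in root-anchors.xml.  The sample files below stand in
# for real dnssec-dsfromkey output and a downloaded root-anchors.xml.
workdir=$(mktemp -d) && cd "$workdir" || exit 1

cat > root.ds <<'EOF'
. IN DS 19036 8 1 B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E
. IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
EOF

cat > root-anchors.xml <<'EOF'
<TrustAnchor><KeyDigest><Digest>49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5</Digest></KeyDigest></TrustAnchor>
EOF

# Field 6 of a DS record is the digest type; 2 means SHA-256.
local_digest=$(awk '$6 == 2 { print $NF }' root.ds)
iana_digest=$(sed -n 's/.*<Digest>\(.*\)<\/Digest>.*/\1/p' root-anchors.xml)

if [ "$local_digest" = "$iana_digest" ]; then
    echo "root key digest verified"
else
    echo "MISMATCH - do not use this key" >&2
fi
```

With real data, `root.ds` would come from the dnssec-dsfromkey step and `root-anchors.xml` from IANA; the PGP verification step still needs to be done separately.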
After these steps, configure BIND to perform DNSSEC validation of queries by editing [.filename]#named.conf# and adding the following to the `options` directive:

[.programlisting]
....
dnssec-enable yes;
dnssec-validation yes;
....

To verify that it is actually working, use dig to make a query for a signed zone using the resolver just configured. A successful reply will contain the `AD` flag to indicate that the data was authenticated. Running a query such as

[source,shell]
....
% dig @resolver +dnssec se ds
....

should return the DS RR set for the `.se` zone. In the `flags:` section the `AD` flag should be set, as seen in:

[.programlisting]
....
...
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
...
....

The resolver is now capable of authenticating DNS queries.

[[dns-dnssec-auth]]
==== Authoritative DNS server configuration

Getting an authoritative name server to serve a DNSSEC-signed zone takes a little more work. A zone is signed using cryptographic keys, which must be generated. It is possible to use just one key for this. The preferred method, however, is to have a strong, well-protected Key Signing Key (KSK) that is not rotated very often and a Zone Signing Key (ZSK) that is rotated more frequently. Information on recommended operational practices can be found in http://tools.ietf.org/rfc/rfc4641.txt[RFC 4641: DNSSEC Operational Practices]. Practices regarding the root zone can be found in the http://www.root-dnssec.org/wp-content/uploads/2010/06/icann-dps-00.txt[DNSSEC Practice Statement for the Root Zone KSK operator] and the http://www.root-dnssec.org/wp-content/uploads/2010/06/vrsn-dps-00.txt[DNSSEC Practice Statement for the Root Zone ZSK operator]. The KSK is used to build a chain of authority to the data in need of validation, and as such is also called a Secure Entry Point (SEP) key.
A message digest of this key, called a Delegation Signer (DS) record, must be published in the parent zone to establish a chain of trust. How this is accomplished depends on the owner of the parent zone. The ZSK is used to sign the zone, and only needs to be published there.

To enable DNSSEC for the `example.com` zone depicted in the previous examples, the first step is to use dnssec-keygen to generate the KSK and ZSK key pair. This key pair can utilize different cryptographic algorithms. It is recommended to use RSA/SHA-256 for the keys; a key length of 2048 bits should be sufficient. To generate the KSK for `example.com`:

[source,shell]
....
% dnssec-keygen -f KSK -a RSASHA256 -b 2048 -n ZONE example.com
....

and to generate the ZSK:

[source,shell]
....
% dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com
....

dnssec-keygen creates two files, the public and the private keys, in files named similar to [.filename]#Kexample.com.+005+nnnnn.key# (public) and [.filename]#Kexample.com.+005+nnnnn.private# (private). The `nnnnn` part of the file name is a five-digit key ID. Keep track of which key ID belongs to which key. This is particularly important when having more than one key in a zone. It is also possible to rename the keys. For each KSK file:

[source,shell]
....
% mv Kexample.com.+005+nnnnn.key Kexample.com.+005+nnnnn.KSK.key
% mv Kexample.com.+005+nnnnn.private Kexample.com.+005+nnnnn.KSK.private
....

For ZSK files, substitute `KSK` with `ZSK` where necessary. The files can now be included in the zone file, using the `$include` statement. It should look something like this:

[.programlisting]
....
$include Kexample.com.+005+nnnnn.KSK.key   ; KSK
$include Kexample.com.+005+nnnnn.ZSK.key   ; ZSK
....
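Since keeping track of which key ID belongs to which key matters once a zone has several keys, a small helper can label the generated [.filename]#.key# files by their DNSKEY flags field (257 for a KSK/SEP, 256 for a ZSK, as described earlier). This is an illustrative sketch only: the sample key files, their key IDs, and their contents are made-up stand-ins for real dnssec-keygen output.

```shell
#!/bin/sh
# Sketch: classify dnssec-keygen public key files as KSK or ZSK by the
# flags field of the DNSKEY record they contain.  The key material below
# is fake placeholder data for demonstration purposes.
workdir=$(mktemp -d) && cd "$workdir" || exit 1

cat > Kexample.com.+008+12345.key <<'EOF'
example.com. IN DNSKEY 257 3 8 AwEAAc...examplekeydata...
EOF
cat > Kexample.com.+008+54321.key <<'EOF'
example.com. IN DNSKEY 256 3 8 AwEAAc...examplekeydata...
EOF

for keyfile in K*.key; do
    # Field 4 of a DNSKEY record is the flags field.
    flags=$(awk '$3 == "DNSKEY" { print $4 }' "$keyfile")
    case $flags in
        257) kind=KSK ;;
        256) kind=ZSK ;;
        *)   kind=unknown ;;
    esac
    echo "$keyfile: $kind"
done
```

Run against a real key directory, this makes it easy to see which key ID to pass to the signing step below without opening each file by hand.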
Finally, sign the zone and tell BIND to use the signed zone file. To sign a zone, dnssec-signzone is used. The command to sign the zone `example.com`, located in [.filename]#example.com.db#, would look similar to

[source,shell]
....
% dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK example.com.db Kexample.com.+005+nnnnn.ZSK.key
....

The key supplied to the `-k` argument is the KSK, and the other key file is the ZSK to be used in the signing. It is possible to supply more than one KSK and ZSK, which will result in the zone being signed with all of the supplied keys. This can be needed in order to supply zone data signed using more than one algorithm. The output of dnssec-signzone is a zone file with all RRs signed. This output ends up in a file with the extension `.signed`, such as [.filename]#example.com.db.signed#. The DS records are also written to a separate file, [.filename]#dsset-example.com#. To use this signed zone, just modify the zone directive in [.filename]#named.conf# to use [.filename]#example.com.db.signed#. By default, the signatures are only valid for 30 days, which means that the zone needs to be resigned in about 15 days to be sure that resolvers are not caching records with stale signatures. It is possible to make a script and a cron job to do this. See the relevant manuals for details.

Be sure to keep the private keys confidential, as with all cryptographic keys. When changing a key, it is best to include the new key in the zone while still signing with the old one, and then move over to signing with the new key. After these steps are done, the old key can be removed from the zone.
Failure to do this may render the DNS data temporarily unavailable until the new key has propagated through the DNS hierarchy. For more information on key rollovers and other DNSSEC operational practices, see http://www.ietf.org/rfc/rfc4641.txt[RFC 4641: DNSSEC Operational practices].

==== Automation using BIND 9.7 or later

Version 9.7 of BIND introduced a new feature called _Smart Signing_. This feature aims to make the key management and signing process simpler by automating parts of these tasks. By putting the keys into a _key repository_ and using the new option `auto-dnssec`, it is possible to create a dynamic zone which is resigned as needed. To update this zone, use nsupdate with the new option `-l`. rndc can now also sign zones with keys from the key repository, using the option `sign`. To tell BIND to use this automatic signing and zone updating for `example.com`, add the following to [.filename]#named.conf#:

[.programlisting]
....
zone example.com {
	type master;
	key-directory "/etc/named/keys";
	update-policy local;
	auto-dnssec maintain;
	file "/etc/named/dynamic/example.com.zone";
};
....

After making these changes, generate the keys for the zone as explained in <<dns-dnssec-auth>>, put those keys in the key repository given as the argument to `key-directory` in the zone configuration, and the zone will be signed automatically. Updates to a zone configured this way must be done with nsupdate, which will take care of re-signing the zone with the newly added data. For further details, see <> and the BIND documentation.

=== Security

Although BIND is the most common implementation of DNS, there is always the issue of security.
Possible and exploitable security holes are sometimes found. While FreeBSD automatically drops named into a man:chroot[8] environment, there are several other security mechanisms in place which could help to ward off possible attacks on the DNS service. It is always a good idea to read the http://www.cert.org/[CERT] security advisories and to subscribe to the {freebsd-security-notifications} to stay up to date with the current Internet and FreeBSD security issues.

[TIP]
====
If a problem arises, updating the sources and rebuilding named may help.
====

[[dns-read]]
=== Further reading

BIND/named manual pages: man:rndc[8] man:named[8] man:named.conf[5] man:nsupdate[8] man:dnssec-signzone[8] man:dnssec-keygen[8]

* https://www.isc.org/software/bind/[Official ISC BIND page]
* https://www.isc.org/software/guild/[Official ISC BIND forum]
* http://www.oreilly.com/catalog/dns5/[O'Reilly DNS and BIND 5th Edition]
* http://www.root-dnssec.org/documentation/[Root DNSSEC]
* http://data.iana.org/root-anchors/draft-icann-dnssec-trust-anchor.html[DNSSEC Trust Anchor Publication for the Root Zone]
* http://tools.ietf.org/html/rfc1034[RFC1034 - Domain Names - Concepts and Facilities]
* http://tools.ietf.org/html/rfc1035[RFC1035 - Domain Names - Implementation and Specification]
* http://tools.ietf.org/html/rfc4033[RFC4033 - DNS Security Introduction and Requirements]
* http://tools.ietf.org/html/rfc4034[RFC4034 - Resource Records for the DNS Security Extensions]
* http://tools.ietf.org/html/rfc4035[RFC4035 - Protocol Modifications for the DNS Security Extensions]
* http://tools.ietf.org/html/rfc4641[RFC4641 - DNSSEC Operational Practices]
* http://tools.ietf.org/html/rfc5011[RFC5011 - Automated Updates of DNS Security (DNSSEC Trust Anchors)]

[[network-apache]]
== Apache HTTP server

=== Overview

FreeBSD is used to run some of the busiest websites in the world.
The majority of web servers on the Internet use the Apache HTTP Server. Apache software packages are included on the FreeBSD installation media. If Apache was not installed together with the original installation of FreeBSD, it can be installed from the port package:www/apache22[]. Once Apache has been installed successfully, it must be configured.

[NOTE]
====
This section covers version 2.2.X of the Apache HTTP Server because that is the version most widely used on FreeBSD. More detailed information about Apache 2.X beyond the scope of this document can be found at http://httpd.apache.org/[http://httpd.apache.org/].
====

=== Configuration

The main configuration file for the Apache HTTP Server on FreeBSD is [.filename]#/usr/local/etc/apache22/httpd.conf#. This file is a typical UNIX(R) text-based configuration file in which comment lines begin with the `#` character. Exhaustively describing all of the possible options is outside the scope of this book, so only the most frequently used directives are described here.

`ServerRoot "/usr/local"`::
This specifies the default directory hierarchy for the Apache installation. Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root, and configuration files are stored in [.filename]#etc/apache22#.

`ServerAdmin beheerder@beheer.adres`::
The address to which problems with the server should be emailed. This address appears on some server-generated pages, such as error documents.

`ServerName www.example.com`::
`ServerName` allows setting a host name which is sent back to clients if the name of the server is different from the one the host is configured with (for example, use `www` instead of the host's real name).

`DocumentRoot "/usr/local/www/apache22/data"`::
`DocumentRoot`: the directory from which documents are served.
By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations. It is always a good idea to make backup copies of the Apache configuration file before making changes. Once the correct settings have been made, Apache can be started.

=== Running Apache

The port package:www/apache22[] installs a man:rc[8] script that helps with starting, stopping, and restarting Apache, and it can be found in [.filename]#/usr/local/etc/rc.d/#.

To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache22_enable="YES"
....

If Apache should be started with non-default options, the following line can be added to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache22_flags=""
....

The Apache configuration can be tested for errors before starting the `httpd` daemon for the first time, or after making changes to the configuration while `httpd` is running. This can be done directly by the man:rc[8] script, or by the man:service[8] utility, with the following command:

[source,shell]
....
# service apache22 configtest
....

[NOTE]
====
It is important to note that `configtest` is not a man:rc[8] standard; do not expect it to work for all man:rc[8] startup scripts.
====

If Apache does not report configuration errors, `httpd` can be started with man:service[8]:

[source,shell]
....
# service apache22 start
....

The `httpd` service can be tested by entering `http://localhost` in a web browser, replacing _localhost_ with the fully qualified domain name of the machine running `httpd`, if it is not the local machine. The default web page that is displayed is [.filename]#/usr/local/www/apache22/data/index.html#.
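The same check can be performed from the command line with man:fetch[1], for example (a sketch, assuming Apache is running on the local machine):

[source,shell]
....
% fetch -q -o - http://localhost/
....

With `-o -`, the retrieved page is written to standard output instead of to a file, so the contents of the default page can be inspected directly.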
=== Virtual hosting

Apache supports two different types of Virtual Hosting. The first method is Name-based Virtual Hosting. Name-based Virtual Hosting uses the clients' HTTP/1.1 headers to figure out the host name. This allows many different domains to share the same IP address. To set up Apache to use Name-based Virtual Hosting, an entry like the following can be added to [.filename]#httpd.conf#:

[.programlisting]
....
NameVirtualHost *
....

If a web server is named `www.domein.tld` and a virtual domain for `www.anderdomein.tld` should be added, the following entries can be added to [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
    ServerName www.domein.tld
    DocumentRoot /www/domein.tld
</VirtualHost>

<VirtualHost *>
    ServerName www.anderdomein.tld
    DocumentRoot /www/anderdomein.tld
</VirtualHost>
....

The addresses and paths in this example can of course be changed in real deployments. More information about setting up virtual hosts can be found in the official Apache documentation at http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache modules

There are many different Apache modules that add functionality to the basic server. The FreeBSD Ports Collection provides an easy way to install Apache together with most of the popular add-on modules.

==== mod_ssl

The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols. This module provides everything needed to request a signed certificate from a trusted certificate authority in order to run a secure web server on FreeBSD. The mod_ssl module is not built by default, but can be enabled by specifying `-DWITH_SSL` at compile time.

==== Language bindings

There are Apache modules available for most major scripting languages.
These modules typically make it possible to write Apache modules entirely in a scripting language. They are also often used as a persistent interpreter embedded in the server, which avoids the overhead of starting an external interpreter and the startup penalty for dynamic websites, as described in the next section.

=== Dynamic websites

In the last decade, more and more businesses have turned to the Internet in order to increase their revenue and visibility. This has also increased the need for interactive web content. While some companies, such as Microsoft(R), have introduced solutions for their own proprietary products, the open source community has also answered the call. Modern options for dynamic web content include Django, Ruby on Rails, mod_perl2, and mod_php.

==== Django

Django is a BSD-licensed framework designed to allow developers to write high-performance, elegant web applications quickly. It provides an object-relational mapper so that data types can be developed as Python objects, and provides rich dynamic database access for those objects without the developer ever having to write SQL. It also provides an extensible template system so that application logic is separated from the HTML presentation. Django depends on mod_python, Apache, and an SQL database engine of choice. The FreeBSD port will install all of these prerequisites with the appropriate flags.

[[network-www-django-install]]
.Installing Django with Apache2, mod_python3 and PostgreSQL
[example]
====
[source,shell]
....
# cd /usr/ports/www/py-django; make all install clean -DWITH_MOD_PYTHON3 -DWITH_POSTGRESQL
....
====

Once Django and these prerequisites are installed, a Django project directory needs to be created, and then Apache must be configured to use the embedded Python interpreter to call the application for specific URLs on the site.

[[network-www-django-apache-config]]
.Apache configuration for Django/mod_python
[example]
====
A few lines must be added to the Apache [.filename]#httpd.conf# file to configure Apache to pass requests for certain URLs to the web application:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/map/naar/uw/django-pakketten/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mijnsite.settings
    PythonAutoReload On
    PythonDebug On
</Location>
....
====

==== Ruby on Rails

Ruby on Rails is another open source web framework that provides a full development stack and is optimized to make web developers more productive and capable of writing powerful applications quickly. It can be installed easily from the ports system.

[source,shell]
....
# cd /usr/ports/www/rubygem-rails; make all install clean
....

==== mod_perl2

The Apache/Perl integration project brings together the full power of the Perl programming language and the Apache HTTP Server. With the mod_perl2 module it is possible to write Apache modules entirely in Perl. In addition, a persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time. mod_perl2 is available in the port package:www/mod_perl2[].

==== mod_php

PHP, also known as "PHP: Hypertext Preprocessor", is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of allowing web developers to write dynamically generated pages quickly.
To add support for PHP5 to the Apache web server, begin by installing the port package:lang/php5[]. When the port package:lang/php5[] is installed for the first time, the available `OPTIONS` are displayed automatically. If no menu is displayed, because the port package:lang/php5[] has been installed some time in the past, it is always possible to bring the options dialog up again by running

[source,shell]
....
# make config
....

in the port directory. In the options dialog, check that the `APACHE` option is set, so that mod_php5 is built as a loadable module for the Apache web server.

[NOTE]
====
A lot of sites are still using PHP4 for various reasons (compatibility issues, or already deployed web applications). If mod_php4 is needed instead of mod_php5, use the port package:lang/php4[]. The port package:lang/php4[] supports many of the configuration and build-time options of the port package:lang/php5[].
====

This will install and configure the modules required to support dynamic PHP applications. Check that the following sections have been added to [.filename]#/usr/local/etc/apache22/httpd.conf#:

[.programlisting]
....
LoadModule php5_module        libexec/apache/libphp5.so
....

[.programlisting]
....
AddModule mod_php5.c
    <IfModule mod_php5.c>
        DirectoryIndex index.php index.html
    </IfModule>

    <IfModule mod_php5.c>
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps
    </IfModule>
....

Once completed, a simple call to `apachectl` for a graceful restart is needed to load the PHP module:

[source,shell]
....
# apachectl graceful
....

For future upgrades of PHP, `make config` will not be required; the selected `OPTIONS` are saved automatically by the FreeBSD Ports framework. PHP support in FreeBSD is extremely modular, so the base install is very limited.
It is very easy to add support by using the port package:lang/php5-extensions[]. This port provides a menu-driven interface for installing PHP extensions. Alternatively, individual extensions can be installed using the appropriate port. For instance, to add support for the MySQL database server to PHP5, simply install the port [.filename]#databases/php5-mysql#.

After installing an extension, the Apache server must be reloaded to pick up the new configuration changes:

[source,shell]
....
# apachectl graceful
....

[[network-ftp]]
== File Transfer Protocol (FTP)

=== Overview

The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. FreeBSD includes FTP server software, ftpd, in the base system. This makes setting up and administering an FTP server on FreeBSD very straightforward.

=== Configuration

The most important configuration step is deciding which accounts are allowed access to the FTP server. A normal FreeBSD system has a number of system accounts used for daemons, but unknown users should not be allowed to make use of those accounts. [.filename]#/etc/ftpusers# contains a list of users who are denied FTP access. By default, it includes the aforementioned system accounts, but it is also possible to add users who should not be allowed FTP access.

It may also be desirable to restrict the FTP access of some users without preventing it completely. This can be accomplished with [.filename]#/etc/ftpchroot#. This file lists users and groups that are subject to FTP access restrictions. man:ftpchroot[5] contains all of the details not described here.

To enable anonymous FTP access to a server, a user `ftp` must be created on the FreeBSD system.
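Such an account could be created with man:pw[8], for example (a sketch; the home directory and shell shown here are assumptions, not requirements):

[source,shell]
....
# pw useradd ftp -c "Anonymous FTP" -d /var/ftp -s /usr/sbin/nologin
....

Giving the account a non-interactive shell such as [.filename]#/usr/sbin/nologin# ensures it can only be used for FTP, not for regular logins.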
Users can then log on to the server with the username `ftp` or `anonymous` and with any password (by convention, this should be the user's email address). The FTP server calls man:chroot[2] on anonymous logins, so that only the home directory of the `ftp` user is accessible.

There are two text files in which welcome messages for FTP clients can be placed. The contents of [.filename]#/etc/ftpwelcome# are displayed before users see a login prompt. After a successful login, the contents of [.filename]#/etc/ftpmotd# are displayed. The path mentioned is relative to the login environment, so for anonymous users [.filename]#~ftp/etc/ftpmotd# is displayed.

Once the FTP server has been configured correctly, it must be enabled in [.filename]#/etc/inetd.conf#. There, the comment character `#` in front of the existing ftpd line must be removed:

[.programlisting]
....
ftp	stream	tcp	nowait	root	/usr/libexec/ftpd	ftpd -l
....

As explained in <>, the inetd configuration must be reloaded after this configuration file has been changed. Details about enabling inetd on a system can be found in <>.

Alternatively, ftpd can also be started as a standalone service. In that case, it is sufficient to set the appropriate variable in [.filename]#/etc/rc.conf#:

[.programlisting]
....
ftpd_enable="YES"
....

After setting the above variable, the standalone server will be started at the next reboot, or it can be started manually by running the following command as `root`:

[source,shell]
....
# service ftpd start
....

The FTP server can now be logged into with:

[source,shell]
....
% ftp localhost
....

=== Maintaining

The ftpd daemon uses man:syslog[3] to log messages. By default, the system log daemon places messages about FTP in [.filename]#/var/log/xferlog#.
The location of the FTP log can be changed by modifying the following line in [.filename]#/etc/syslog.conf#:

[.programlisting]
....
ftp.info      /var/log/xferlog
....

Be aware of the dangers involved in running an anonymous FTP server. This applies in particular to allowing uploads. It is quite possible for an FTP site to become a forum for the exchange of unlicensed commercial software, or worse. If anonymous uploads are nevertheless required, then the permissions on those files should be set so that they cannot be read by other anonymous users until they have been reviewed by an administrator.

[[network-samba]]
== File and print services for Microsoft(R) Windows(R) clients (Samba)

=== Overview

Samba is a popular open source software package that provides file and print services to Microsoft(R) Windows(R) clients. Those clients can then use space on a FreeBSD file system as if it were a local disk, and use FreeBSD printers as if they were local printers.

Samba software packages should be included on the FreeBSD installation media. If Samba was not installed together with the base installation, it can still be installed with the package:net/samba34[] port or package.

=== Configuration

A default Samba configuration file is installed as [.filename]#/usr/local/shared/examples/samba34/smb.conf.default#. This file must be copied to [.filename]#/usr/local/etc/smb.conf# and customized before Samba can be used. [.filename]#smb.conf# contains the configuration for Samba, such as the settings for printers and the "file system shares" that are shared with Windows(R) clients. The Samba package includes a web-based administration tool called swat, which provides a simple way of configuring [.filename]#smb.conf#.
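Copying the example file into place can be done as follows, after which it still needs to be customized:

[source,shell]
....
# cp /usr/local/shared/examples/samba34/smb.conf.default /usr/local/etc/smb.conf
....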
==== Using the Samba web administration tool (SWAT)

The Samba Web Administration Tool (SWAT) runs as a daemon from inetd. Therefore, inetd must be enabled as described in <>, and the following line in [.filename]#/etc/inetd.conf# must be uncommented before swat can be used to configure Samba:

[.programlisting]
....
swat   stream  tcp     nowait/400      root    /usr/local/sbin/swat    swat
....

As explained in <>, the inetd configuration must be reloaded after this configuration file has been changed.

Once swat has been enabled in [.filename]#inetd.conf#, the tool can be used by pointing a browser at http://localhost:901[http://localhost:901]. Log in with the system's `root` account.

After logging in successfully to the main Samba configuration page, it is possible to browse the system documentation, or begin by clicking on the menu:Globals[] tab. The menu:Globals[] section corresponds to the `[global]` section in [.filename]#/usr/local/etc/smb.conf#.

==== Global settings

Whether Samba is configured by editing [.filename]#/usr/local/etc/smb.conf# directly or with swat, the first settings to configure are the following:

`workgroup`::
NT Domain name or Workgroup name for the computers that will be connecting to the server.

`netbios name`::
This sets the NetBIOS name by which the Samba server will be known. By default, it is the first component of the host's DNS name.

`server string`::
This sets the string that is displayed when the `net view` command and some other commands that make use of the server's descriptive text string are used.

==== Security settings

Two of the most important settings in [.filename]#/usr/local/etc/smb.conf# are the security model chosen and the password for client users.
These are controlled by the following settings:

`security`::
The two most common options here are `security = share` and `security = user`. If the clients have usernames that match their usernames on the FreeBSD machine, it is advisable to choose user-level security. This is the default security policy and it requires clients to first log in before they can access shared resources.
+
With share-level security, a client does not need to log in to the server with a valid username and password before it is possible to attempt to connect to a shared resource. This was the default security model for older versions of Samba.

`passdb backend`::
Samba supports several different backend authentication models. Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file. The default authentication method is `smbpasswd`; that is all that will be covered here.

Assuming that the default `smbpasswd` backend is used, [.filename]#/usr/local/etc/samba/smbpasswd# must be created to allow Samba to authenticate clients. To give UNIX(R) user accounts access from Windows(R) clients, use the following command:

[source,shell]
....
# smbpasswd -a username
....

[NOTE]
====
The recommended backend is now `tdbsam`, and the following command should be used to add user accounts:

[source,shell]
....
# pdbedit -a -u username
....
====

The http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection[Official Samba HOWTO] provides more information about configuration options. With the basics outlined here, it should be possible to get Samba up and running.

=== Starting Samba

The port package:net/samba34[] adds a new startup script which can be used to control Samba.
To enable this script, so that it can be used for example to start, stop, or restart Samba, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
samba_enable="YES"
....

Or, for fine-grained control:

[.programlisting]
....
nmbd_enable="YES"
....

[.programlisting]
....
smbd_enable="YES"
....

[NOTE]
====
This also configures Samba to start automatically at system boot.
====

It is then possible to start Samba at any time by typing:

[source,shell]
....
# service samba start
Starting SAMBA: removing stale tdbs :
Starting nmbd.
Starting smbd.
....

Refer to crossref:config[configtuning-rcd,Using rc under FreeBSD] for more information about using rc scripts.

Samba actually consists of three separate daemons. The [.filename]#samba# script starts the nmbd and smbd daemons. If the winbind name resolution services are enabled in [.filename]#smb.conf#, the winbindd daemon is started as well. Samba can be stopped at any time with:

[source,shell]
....
# service samba stop
....

Samba is a complex software suite with functionality that allows broad integration with Microsoft(R) Windows(R) networks. For information about functionality beyond the basic installation described here, see http://www.samba.org[http://www.samba.org].

[[network-ntp]]
== Clock synchronization with NTP

=== Overview

Over time, a computer's clock tends to drift. The Network Time Protocol (NTP) can be used to ensure the clock stays accurate.

Many Internet services rely on, or greatly benefit from, the clock being reliable. For example, a web server may receive many requests to send a file only if it has been modified since a certain time. In a LAN environment, it is essential that computers sharing files from the same server have synchronized clocks so that timestamps stay consistent.
Services such as man:cron[8] also depend on a reliable system clock to run commands at the specified times. FreeBSD ships with the man:ntpd[8] NTP server, which can be used to query other NTP servers in order to set the machine's own clock, or to provide the correct time to other devices.

=== Choosing appropriate NTP servers

In order to synchronize the clock, one or more NTP servers must be available. A local system administrator or ISP may have set up an NTP server for this purpose; it is worth consulting their documentation to see if that is the case. There is an http://support.ntp.org/bin/view/Servers/WebHome[online list of publicly accessible NTP servers] which can be used to find an NTP server geographically close to the machine to be synchronized. It is important to comply with the policy of the server in question and to ask for permission if its terms require it.

It is advisable to choose several NTP servers that are not connected to each other, in case one of the servers becomes unreliable or unreachable. man:ntpd[8] uses the responses it receives from other servers intelligently: reliable servers are favored over unreliable ones.

=== Configuring the machine

==== Basic configuration

If the clock only needs to be synchronized when the machine boots up, man:ntpdate[8] can be used. This may be appropriate for desktops which are rebooted frequently and do not really need regular synchronization. Most machines, however, should run man:ntpd[8]. Using man:ntpdate[8] at boot time is also a good idea for machines that run man:ntpd[8]: man:ntpd[8] changes the clock gradually, whereas man:ntpdate[8] simply sets the clock, no matter how great the difference between the machine's current time and the correct time.
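The clock can also be set manually once with man:ntpdate[8], for example (a sketch; the server name used here is an assumption):

[source,shell]
....
# ntpdate -b ntplocal.example.com
....

With `-b`, the clock is stepped to the correct time immediately instead of being adjusted gradually.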
To enable man:ntpdate[8] at boot time, add `ntpdate_enable="YES"` to [.filename]#/etc/rc.conf#. All servers to be used for synchronization must then be specified in `ntpdate_flags`, together with any options to be passed to man:ntpdate[8].

==== General configuration

NTP is configured by the [.filename]#/etc/ntp.conf# file in the format described in man:ntp.conf[5]. Here is a simple example:

[.programlisting]
....
server ntplocal.example.com prefer
server timeserver.example.org
server ntp2a.example.net

driftfile /var/db/ntp.drift
....

The `server` option specifies which servers are to be used, with one server listed on each line. If a server is specified with the `prefer` argument, as with `ntplocal.example.com`, that server is preferred over the other servers. A response from a preferred server is discarded if it differs significantly from the responses of the other servers; otherwise it is used without any consideration of the other responses. The `prefer` argument is normally used for NTP servers that are known to be highly reliable, such as those with special time monitoring hardware.

The `driftfile` option specifies which file is used to store the system clock's frequency offset. man:ntpd[8] uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably accurate setting even when it is cut off from external time sources. The `driftfile` option also specifies which file is used to store information about previous responses from the NTP servers in use. This file contains internal information for NTP. It should not be modified by any other process.

==== Controlling access to the server

By default, an NTP server is accessible to all hosts on a network.
The `restrict` option in [.filename]#/etc/ntp.conf# makes it possible to control which machines can access the service. To block access for all other machines, the following line can be added to [.filename]#/etc/ntp.conf#:

[.programlisting]
....
restrict default ignore
....

[NOTE]
====
This will also prevent access from this server to any servers listed in the local configuration. If the NTP server needs to synchronize with an external NTP server, that specific server must be allowed. See the man:ntp.conf[5] manual page for more information.
====

To allow machines on, for example, the local network to synchronize their clocks with the server, while at the same time not allowing them to configure the server or to be used as peers to synchronize against, the following line can be added instead:

[.programlisting]
....
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
....

Here, `192.168.1.0` is an IP address on the LAN and `255.255.255.0` is the corresponding network mask. [.filename]#/etc/ntp.conf# may contain multiple `restrict` lines. For more details, see the `Access Control Support` section of man:ntp.conf[5].

=== Running the NTP server

The NTP server can be started at boot time by adding the line `ntpd_enable="YES"` to [.filename]#/etc/rc.conf#. To pass additional options to man:ntpd[8], the `ntpd_flags` parameter in [.filename]#/etc/rc.conf# can be used.

To start the server without rebooting the machine, run `ntpd`, making sure to include any parameters from `ntpd_flags` in [.filename]#/etc/rc.conf#. For example:

[source,shell]
....
# ntpd -p /var/run/ntpd.pid
....

=== Using ntpd with a temporary Internet connection

man:ntpd[8] does not need a permanent connection to a network in order to work properly.
However, when a dial-up connection is used, it may be wise to prevent outgoing NTP requests from triggering an outgoing connection. With user PPP, `filter` directives can be set up in [.filename]#/etc/ppp/ppp.conf#. For example:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

More details can be found in the `PACKET FILTERING` section of man:ppp[8] and in the examples in [.filename]#/usr/shared/examples/ppp/#.

[NOTE]
====
Some Internet providers block low-numbered ports, preventing NTP from working since a reply can never reach the machine.
====

=== Further information

HTML documentation for the NTP server can be found in [.filename]#/usr/shared/doc/ntp/#.

[[network-syslogd]]
== Remote host logging with `syslogd`

Dealing with system logs is a crucial aspect of both security and system administration. Monitoring the log files of multiple hosts can get quite unwieldy when these hosts are distributed across medium or large networks, or when they are part of several different types of networks. In these cases, remote logging can make the whole process a lot more pleasant.

Centralized logging to a specific log host can reduce some of the administrative burden of log file administration. Log file aggregation, merging, and rotation can be configured in one place, using the native tools of FreeBSD, such as man:syslogd[8] and man:newsyslog[8]. In the following example configuration, host `A`, named `logserv.example.com`, will collect logging information for the local network.
Host `B`, named `logclient.example.com`, will pass logging information to the server system. In live configurations, both hosts require proper forward and reverse DNS, or entries in [.filename]#/etc/hosts#. Otherwise, the data will be rejected by the server.

=== Log Server Configuration

Log servers are machines which have been configured to accept logging information from remote hosts. In most cases this is to ease configuration; in other cases it may simply be an administrative decision. Regardless of the reason, there are a few requirements before continuing.

A properly configured log server meets the following minimum requirements:

* The firewall ruleset allows UDP to pass on port 514 on both the client and the server;
* syslogd is configured to accept remote messages from client machines;
* The syslogd server and all client machines must have valid entries for both forward and reverse DNS, or be properly configured in [.filename]#/etc/hosts#.

To configure the log server, the client must be listed in [.filename]#/etc/syslog.conf#, and the logging facility must be specified:

[.programlisting]
....
+logclient.example.com
*.*     /var/log/logclient.log
....

[NOTE]
====
More information on the various supported and available _facilities_ can be found in the man:syslog.conf[5] manual page.
====

Once added, all `facility` messages will be logged to the file specified earlier, [.filename]#/var/log/logclient.log#.

The server machine must also have the following listed in [.filename]#/etc/rc.conf#:

[.programlisting]
....
syslogd_enable="YES"
syslogd_flags="-a logclient.example.com -v -v"
....

The first option enables the `syslogd` daemon at boot, and the second allows data from the client to be accepted on this server. The last part, `-v -v`, increases the verbosity of logged messages.
This is extremely useful for tuning facilities, as administrators are able to see which type of messages are being logged under which facility.

Multiple `-a` options may be specified to allow logging from multiple clients. IP addresses and whole netblocks may also be specified; refer to the man:syslog[3] manual page for a full list of possible options.

Finally, the log file needs to be created. The method used does not matter, but man:touch[1] works fine for this kind of situation:

[source,shell]
....
# touch /var/log/logclient.log
....

Now the `syslogd` daemon must be restarted and verified:

[source,shell]
....
# service syslogd restart
# pgrep syslog
....

If a PID is returned, the server has been restarted successfully, and client configuration may begin. If the server did not restart, consult the [.filename]#/var/log/messages# log for output.

=== Log Client Configuration

A log client is a machine which sends log information to a log server in addition to keeping local copies.

Like log servers, log clients must also meet a few minimum requirements:

* man:syslogd[8] must be configured to send messages of specific types to a log server, which must accept them;
* The firewall must allow UDP packets through on port 514;
* Both forward and reverse DNS must be configured, or have proper entries in [.filename]#/etc/hosts#.

Client configuration is a bit more relaxed than that of servers. The client machine must have the following listed in [.filename]#/etc/rc.conf#:

[.programlisting]
....
syslogd_enable="YES"
syslogd_flags="-s -v -v"
....

As before, these entries will enable the `syslogd` daemon at boot and increase the verbosity of logged messages. The `-s` option prevents this client from accepting logs from other hosts.
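Putting the file-creation step together with the permissions advice given later in this chapter, the log file can be created with owner-only access from the start. A minimal sketch, using a temporary path as a stand-in for [.filename]#/var/log/logclient.log# for illustration:

```shell
# Create the log file ahead of time and restrict it to the owner
# (mode 600) so local users cannot read forwarded log data.
# /tmp/logclient.log is a stand-in for /var/log/logclient.log.
touch /tmp/logclient.log
chmod 600 /tmp/logclient.log
ls -l /tmp/logclient.log
```

On a real system the file would be created as `root` under [.filename]#/var/log#, and man:newsyslog[8] can preserve the mode across rotations.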
Facilities describe the part of the system for which a message is generated. For example, ftp and ipfw are both facilities. When log messages are generated for those two services, they will normally include those two utilities in any log messages. Facilities are accompanied by a priority or level, which is used to mark how important a log message is. The most common will be `warning` and `info`. Refer to the man:syslog[3] manual page for a full list of available facilities and priorities.

The log server must be defined in the client's [.filename]#/etc/syslog.conf#. In this instance, the `@` symbol is used to send logging data to a remote server, and the entry would look like the following:

[.programlisting]
....
*.*     @logserv.example.com
....

Once added, `syslogd` must be restarted for the changes to take effect:

[source,shell]
....
# service syslogd restart
....

To test that log messages are being sent across the network, use man:logger[1] on the client to send a message to `syslogd`:

[source,shell]
....
# logger "Test message from logclient"
....

This message should now exist both in [.filename]#/var/log/messages# on the client and in [.filename]#/var/log/logclient.log# on the log server.

=== Debugging Log Servers

In certain cases, debugging may be required if messages are not being received by the log server. There are several reasons this may occur; the two most common are network connection issues and DNS issues. To test these cases, verify that both hosts can reach one another using the hostname specified in [.filename]#/etc/rc.conf#. If this appears to be working properly, the `syslogd_flags` option in [.filename]#/etc/rc.conf# should be changed.
In the following example, [.filename]#/var/log/logclient.log# is empty, and [.filename]#/var/log/messages# gives no indication of why it fails. To increase debugging output, change the `syslogd_flags` option as in the following example and restart the service:

[.programlisting]
....
syslogd_flags="-d -a logclien.example.com -v -v"
....

[source,shell]
....
# service syslogd restart
....

Debugging data similar to the following will flash across the screen immediately after the restart:

[source,shell]
....
logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart
syslogd: restarted
logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel
Logging to FILE /var/log/messages
syslogd: kernel boot file is /boot/kernel/kernel
cvthname(192.168.1.10)
validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com;
rejected in rule 0 due to name mismatch.
....

It is apparent that the messages are being rejected due to a name mismatch. After reviewing the configuration thoroughly, it appears that a typo in the following line of [.filename]#/etc/rc.conf# is the problem:

[.programlisting]
....
syslogd_flags="-d -a logclien.example.com -v -v"
....

The line should contain `logclient`, not `logclien`. After the proper modifications are made, a restart delivers the expected results:

[source,shell]
....
# service syslogd restart
logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart
syslogd: restarted
logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel
syslogd: kernel boot file is /boot/kernel/kernel
logmsg: pri 166, flags 17, from logserv.example.com,
msg Dec 10 20:55:02 logserv.example.com syslogd: exiting on signal 2
cvthname(192.168.1.10)
validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com;
accepted in rule 0.
logmsg: pri 15, flags 0, from logclient.example.com,
msg Dec 11 02:01:28 trhodes: Test message 2
Logging to FILE /var/log/logclient.log
Logging to FILE /var/log/messages
....

Now the messages are received properly and placed in the correct file.

=== Security Considerations

As with any network service, security requirements should be considered before implementing this configuration. At times, log files may contain sensitive data about services enabled on the local host, user accounts, and configuration data. Network data sent from the client to the server will be neither encrypted nor password protected. If a need for encryption exists, package:security/stunnel[] can be used, which will transmit the logging data over an encrypted tunnel.

Local security is also an issue. Log files are not encrypted during use or after log rotation. Local users may access log files to gain additional insight into the system configuration. In those cases, setting proper permissions on these files is critical. The man:newsyslog[8] tool supports setting permissions on newly created and rotated log files. Setting log files to mode `600` should prevent unwanted snooping by local users. diff --git a/documentation/content/pl/books/handbook/mac/_index.adoc b/documentation/content/pl/books/handbook/mac/_index.adoc index d3f74c11bc..9f1f9dd20b 100644 --- a/documentation/content/pl/books/handbook/mac/_index.adoc +++ b/documentation/content/pl/books/handbook/mac/_index.adoc @@ -1,810 +1,808 @@ --- title: Rozdział 16. Mandatory Access Control part: Część III.
Administracja systemem prev: books/handbook/jails next: books/handbook/audit showBookMenu: true weight: 20 params: path: "/books/handbook/mac/" --- [[mac]] = Mandatory Access Control :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 16 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/mac/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[mac-synopsis]] == Synopsis FreeBSD supports security extensions based on the POSIX(R).1e draft. These security mechanisms include file system Access Control Lists (crossref:security[fs-acl,“Access Control Lists”]) and Mandatory Access Control (MAC). MAC allows access control modules to be loaded in order to implement security policies. Some modules provide protections for a narrow subset of the system, hardening a particular service. Others provide comprehensive labeled security across all subjects and objects. The mandatory part of the definition indicates that enforcement of controls is performed by administrators and the operating system. This is in contrast to the default security mechanism of Discretionary Access Control (DAC) where enforcement is left to the discretion of users. This chapter focuses on the MAC framework and the set of pluggable security policy modules FreeBSD provides for enabling various security mechanisms. 
After reading this chapter, you will know:

* The terminology associated with the MAC framework.
* The capabilities of MAC security policy modules as well as the difference between a labeled and non-labeled policy.
* The considerations to take into account before configuring a system to use the MAC framework.
* Which MAC security policy modules are included in FreeBSD and how to configure them.
* How to implement a more secure environment using the MAC framework.
* How to test the MAC configuration to ensure the framework has been properly implemented.

Before reading this chapter, you should:

* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD Basics]).
* Have some familiarity with security and how it pertains to FreeBSD (crossref:security[security,Security]).

[WARNING]
====
Improper MAC configuration may cause loss of system access, aggravation of users, or inability to access the features provided by Xorg. More importantly, MAC should not be relied upon to completely secure a system. The MAC framework only augments an existing security policy. Without sound security practices and regular security checks, the system will never be completely secure.

The examples contained within this chapter are for demonstration purposes and the example settings should _not_ be implemented on a production system. Implementing any security policy takes a good deal of understanding, proper design, and thorough testing.
====

While this chapter covers a broad range of security issues relating to the MAC framework, the development of new MAC security policy modules will not be covered. A number of security policy modules included with the MAC framework have specific characteristics which are provided for both testing and new module development. Refer to man:mac_test[4], man:mac_stub[4] and man:mac_none[4] for more information on these security policy modules and the various mechanisms they provide.
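As a sketch of how one of these no-op modules might be loaded for experimentation (the module name is taken from the manual pages referenced above; the transcript is illustrative):

[source,shell]
....
# kldload mac_none
# kldstat | grep mac_none
....

Because man:mac_none[4] enforces nothing, loading and unloading it is a low-risk way to become familiar with module handling before enabling a policy that actually restricts access.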
[[mac-inline-glossary]]
== Key Terms

The following key terms are used when referring to the MAC framework:

* _compartment_: a set of programs and data to be partitioned or separated, where users are given explicit access to specific components of a system. A compartment represents a grouping, such as a work group, department, project, or topic. Compartments make it possible to implement a need-to-know security policy.
* _integrity_: the level of trust which can be placed on data. As the integrity of the data is elevated, so is the ability to trust that data.
* _level_: the increased or decreased setting of a security attribute. As the level increases, its security is considered to elevate as well.
* _label_: a security attribute which can be applied to files, directories, or other items in the system. It could be considered a confidentiality stamp. When a label is placed on a file, it describes the security properties of that file and will only permit access by files, users, and resources with a similar security setting. The meaning and interpretation of label values depends on the policy configuration. Some policies treat a label as representing the integrity or secrecy of an object while other policies might use labels to hold rules for access.
* _multilabel_: this property is a file system option which can be set in single-user mode using man:tunefs[8], during boot using man:fstab[5], or during the creation of a new file system. This option permits an administrator to apply different MAC labels on different objects. This option only applies to security policy modules which support labeling.
* _single label_: a policy where the entire file system uses one label to enforce access control over the flow of data. Whenever `multilabel` is not set, all files will conform to the same label setting.
* _object_: an entity through which information flows under the direction of a _subject_.
This includes directories, files, fields, screens, keyboards, memory, magnetic storage, printers or any other data storage or moving device. An object is a data container or a system resource. Access to an object effectively means access to its data.
* _subject_: any active entity that causes information to flow between _objects_ such as a user, user process, or system process. On FreeBSD, this is almost always a thread acting in a process on behalf of a user.
* _policy_: a collection of rules which defines how objectives are to be achieved. A policy usually documents how certain items are to be handled. This chapter considers a policy to be a collection of rules which controls the flow of data and information and defines who has access to that data and information.
* _high-watermark_: this type of policy permits the raising of security levels for the purpose of accessing higher level information. In most cases, the original level is restored after the process is complete. Currently, the FreeBSD MAC framework does not include this type of policy.
* _low-watermark_: this type of policy permits lowering security levels for the purpose of accessing information which is less secure. In most cases, the original security level of the user is restored after the process is complete. The only security policy module in FreeBSD to use this is man:mac_lomac[4].
* _sensitivity_: usually used when discussing Multilevel Security (MLS). A sensitivity level describes how important or secret the data should be. As the sensitivity level increases, so does the importance of the secrecy, or confidentiality, of the data.

[[mac-understandlabel]]
== Understanding MAC Labels

A MAC label is a security attribute which may be applied to subjects and objects throughout the system.

When setting a label, the administrator must understand its implications in order to prevent unexpected or undesired behavior of the system.
The attributes available on an object depend on the loaded policy module, as policy modules interpret their attributes in different ways.

The security label on an object is used as a part of a security access control decision by a policy. With some policies, the label contains all of the information necessary to make a decision. In other policies, the labels may be processed as part of a larger rule set.

There are two types of label policies: single label and multi label. By default, the system will use single label. The administrator should be aware of the pros and cons of each in order to implement policies which meet the requirements of the system's security model.

A single label security policy only permits one label to be used for every subject or object. Since a single label policy enforces one set of access permissions across the entire system, it provides lower administration overhead, but decreases the flexibility of policies which support labeling. However, in many environments, a single label policy may be all that is required. A single label policy is somewhat similar to DAC as `root` configures the policies so that users are placed in the appropriate categories and access levels. A notable difference is that many policy modules can also restrict `root`. Basic control over objects will then be released to the group, but `root` may revoke or modify the settings at any time.

When appropriate, a multi label policy can be set on a UFS file system by passing `multilabel` to man:tunefs[8]. A multi label policy permits each subject or object to have its own independent MAC label. The decision to use a multi label or single label policy is only required for policies which implement the labeling feature, such as `biba`, `lomac`, and `mls`. Some policies, such as `seeotheruids`, `portacl` and `partition`, do not use labels at all.
Using a multi label policy on a partition and establishing a multi label security model can increase administrative overhead as everything in that file system has a label. This includes directories, files, and even device nodes.

The following command will set `multilabel` on the specified UFS file system. This may only be done in single-user mode and is not a requirement for the swap file system:

[source,shell]
....
# tunefs -l enable /
....

[NOTE]
====
Some users have experienced problems with setting the `multilabel` flag on the root partition. If this is the case, please review the Troubleshooting section of this chapter.
====

Since the multi label policy is set on a per-file system basis, a multi label policy may not be needed if the file system layout is well designed. Consider an example security MAC model for a FreeBSD web server. This machine uses the single label, `biba/high`, for everything in the default file systems. If the web server needs to run at `biba/low` to prevent write up capabilities, it could be installed to a separate UFS [.filename]#/usr/local# file system set at `biba/low`.

=== Label Configuration

Virtually all aspects of label policy module configuration will be performed using the base system utilities. These commands provide a simple interface for object or subject configuration or the manipulation and verification of the configuration.

All configuration may be done using `setfmac`, which is used to set MAC labels on system objects, and `setpmac`, which is used to set the labels on system subjects. For example, to set the `biba` MAC label to `high` on [.filename]#test#:

[source,shell]
....
# setfmac biba/high test
....

If the configuration is successful, the prompt will be returned without error. A common error is `Permission denied` which usually occurs when the label is being set or modified on a restricted object. Other conditions may produce different failures.
For instance, the file may not be owned by the user attempting to relabel the object, the object may not exist, or the object may be read-only. A mandatory policy will not allow the process to relabel the file, perhaps because of a property of the file, a property of the process, or a property of the proposed new label value. For example, if a user running at low integrity tries to change the label of a high integrity file, or a user running at low integrity tries to change the label of a low integrity file to a high integrity label, these operations will fail.

The system administrator may use `setpmac` to override the policy module's settings by assigning a different label to the invoked process:

[source,shell]
....
# setfmac biba/high test
Permission denied
# setpmac biba/low setfmac biba/high test
# getfmac test
test: biba/high
....

For currently running processes, such as sendmail, `getpmac` is usually used instead. This command takes a process ID (PID) in place of a command name. If users attempt to manipulate a file outside their access, subject to the rules of the loaded policy modules, the `Operation not permitted` error will be displayed.

=== Predefined Labels

A few FreeBSD policy modules which support the labeling feature offer three predefined labels: `low`, `equal`, and `high`, where:

* `low` is considered the lowest label setting an object or subject may have. Setting this on objects or subjects blocks their access to objects or subjects marked high.
* `equal` sets the subject or object to be disabled or unaffected and should only be placed on objects considered to be exempt from the policy.
* `high` grants an object or subject the highest setting available in the Biba and MLS policy modules.

Such policy modules include man:mac_biba[4], man:mac_mls[4] and man:mac_lomac[4]. Each of the predefined labels establishes a different information flow directive. Refer to the manual page of the module to determine the traits of the generic label configurations.
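As an illustrative transcript (assuming the Biba module is loaded and the file system supports labeling), applying one of the predefined labels and reading it back might look like:

[source,shell]
....
# setfmac biba/low notes.txt
# getfmac notes.txt
notes.txt: biba/low
....

Here [.filename]#notes.txt# is a hypothetical file; the same pattern applies to the `equal` and `high` labels.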
=== Numeric Labels

The Biba and MLS policy modules support a numeric label which may be set to indicate the precise level of hierarchical control. This numeric level is used to partition or sort information into different groups of classification, only permitting access to that group or a higher group level. For example:

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....

may be interpreted as "Biba Policy Label/Grade 10:Compartments 2, 3 and 6: (grade 5 ...)". In this example, the first grade would be considered the effective grade with effective compartments, the second grade is the low grade, and the last one is the high grade. In most configurations, such fine-grained settings are not needed as they are considered to be advanced configurations.

System objects only have a current grade and compartment. System subjects reflect the range of available rights in the system, and network interfaces, where they are used for access control.

The grade and compartments in a subject and object pair are used to construct a relationship known as _dominance_, in which a subject dominates an object, the object dominates the subject, neither dominates the other, or both dominate each other. The "both dominate" case occurs when the two labels are equal. Due to the information flow nature of Biba, a user has rights to a set of compartments that might correspond to projects, but objects also have a set of compartments. Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted.

=== User Labels

Users are required to have labels so that their files and processes properly interact with the security policy defined on the system. This is configured in [.filename]#/etc/login.conf# using login classes. Every policy module that uses labels will implement the user class setting.

To set the user class default label which will be enforced by MAC, add a `label` entry.
An example `label` entry containing every policy module is displayed below. Note that in a real configuration, the administrator would never enable every policy module. It is recommended that the rest of this chapter be reviewed before any configuration is implemented.

[.programlisting]
....
default:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/share/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:
....

While users cannot modify the default value, they may change their label after they log in, subject to the constraints of the policy. The example above tells the Biba policy that a process's minimum integrity is `5`, its maximum is `15`, and the default effective label is `10`. The process will run at `10` until it chooses to change label, perhaps due to the user using `setpmac`, which will be constrained by Biba to the configured range.

After any change to [.filename]#login.conf#, the login class capability database must be rebuilt using `cap_mkdb`.

Many sites have a large number of users requiring several different user classes. In depth planning is required as this can become difficult to manage.

=== Network Interface Labels

Labels may be set on network interfaces to help control the flow of data across the network. Policies using network interface labels function in the same way that policies function with respect to objects. Users at high settings in Biba, for example, will not be permitted to access network interfaces with a label of `low`.
When setting the MAC label on network interfaces, `maclabel` may be passed to `ifconfig`:

[source,shell]
....
# ifconfig bge0 maclabel biba/equal
....

This example will set the MAC label of `biba/equal` on the `bge0` interface. When using a setting similar to `biba/high(low-high)`, the entire label should be quoted to prevent an error from being returned.

Each policy module which supports labeling has a tunable which may be used to disable the MAC label on network interfaces. Setting the label to `equal` will have a similar effect. Review the output of `sysctl`, the policy manual pages, and the information in the rest of this chapter for more information on those tunables.

[[mac-planning]]
== Planning the Security Configuration

Before implementing any MAC policies, a planning phase is recommended. During the planning stages, an administrator should consider the implementation requirements and goals, such as:

* How to classify information and resources available on the target systems.
* Which information or resources to restrict access to along with the type of restrictions that should be applied.
* Which MAC modules will be required to achieve this goal.

A trial run of the trusted system and its configuration should occur _before_ a MAC implementation is used on production systems. Since different environments have different needs and requirements, establishing a complete security profile will decrease the need of changes once the system goes live.

Consider how the MAC framework augments the security of the system as a whole. The various security policy modules provided by the MAC framework could be used to protect the network and file systems or to block users from accessing certain ports and sockets. Perhaps the best use of the policy modules is to load several security policy modules at a time in order to provide a MLS environment.
This approach differs from a hardening policy, which typically hardens elements of a system which are used only for specific purposes. The downside to MLS is increased administrative overhead.

The overhead is minimal when compared to the lasting effect of a framework which provides the ability to pick and choose which policies are required for a specific configuration and which keeps performance overhead down. The reduction of support for unneeded policies can increase the overall performance of the system as well as offer flexibility of choice. A good implementation would consider the overall security requirements and effectively implement the various security policy modules offered by the framework.

A system utilizing MAC guarantees that a user will not be permitted to change security attributes at will. All user utilities, programs, and scripts must work within the constraints of the access rules provided by the selected security policy modules, and control of the MAC access rules is in the hands of the system administrator.

It is the duty of the system administrator to carefully select the correct security policy modules. For an environment that needs to limit access control over the network, the man:mac_portacl[4], man:mac_ifoff[4], and man:mac_biba[4] policy modules make good starting points. For an environment where strict confidentiality of file system objects is required, consider the man:mac_bsdextended[4] and man:mac_mls[4] policy modules.

Policy decisions could be made based on network configuration. If only certain users should be permitted access to man:ssh[1], the man:mac_portacl[4] policy module is a good choice. In the case of file systems, access to objects might be considered confidential to some users, but not to others. As an example, a large development team might be broken off into smaller projects where developers in project A might not be permitted to access objects written by developers in project B.
Yet both projects might need to access objects created by developers in project C. Using the different security policy modules provided by the MAC framework, users could be divided into these groups and then given access to the appropriate objects.

Each security policy module has a unique way of dealing with the overall security of a system. Module selection should be based on a well thought out security policy which may require revision and reimplementation. Understanding the different security policy modules offered by the MAC framework will help administrators choose the best policies for their situations. The rest of this chapter covers the available modules, describes their use and configuration, and in some cases, provides insight on applicable situations.

[CAUTION]
====
Implementing MAC is much like implementing a firewall since care must be taken to prevent being completely locked out of the system. The ability to revert back to a previous configuration should be considered and the implementation of MAC over a remote connection should be done with extreme caution.
====

[[mac-policies]]
== Available MAC Policies

The default FreeBSD kernel includes `options MAC`. This means that every module included with the MAC framework can be loaded with `kldload` as a run-time kernel module. After testing the module, add the module name to [.filename]#/boot/loader.conf# so that it will load during boot. Each module also provides a kernel option for those administrators who choose to compile their own custom kernel.

FreeBSD includes a group of policies that will cover most security requirements. Each policy is summarized below. The last three policies support integer settings in place of the three default labels.
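Following the load-then-persist approach described above, a hypothetical [.filename]#/boot/loader.conf# for a site that has settled on two of the modules summarized below might contain:

[.programlisting]
....
# Load only the MAC policy modules required by the site's security model
mac_seeotheruids_load="YES"
mac_bsdextended_load="YES"
....

The boot options use the module names listed in the following sections; only modules that have already been tested with `kldload` should be made persistent in this way.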
[[mac-seeotheruids]]
=== The MAC See Other UIDs Policy

Module name: [.filename]#mac_seeotheruids.ko#

Kernel configuration line: `options MAC_SEEOTHERUIDS`

Boot option: `mac_seeotheruids_load="YES"`

The man:mac_seeotheruids[4] module extends the `security.bsd.see_other_uids` and `security.bsd.see_other_gids` `sysctl` tunables. This option does not require any labels to be set before configuration and can operate transparently with other modules.

After loading the module, the following `sysctl` tunables may be used to control its features:

* `security.mac.seeotheruids.enabled` enables the module and implements the default settings which deny users the ability to view processes and sockets owned by other users.
* `security.mac.seeotheruids.specificgid_enabled` allows specified groups to be exempt from this policy. To exempt specific groups, use the `security.mac.seeotheruids.specificgid=_XXX_` `sysctl` tunable, replacing _XXX_ with the numeric group ID to be exempted.
* `security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific primary groups from this policy. When using this tunable, `security.mac.seeotheruids.specificgid_enabled` may not be set.

[[mac-bsdextended]]
=== The MAC BSD Extended Policy

Module name: [.filename]#mac_bsdextended.ko#

Kernel configuration line: `options MAC_BSDEXTENDED`

Boot option: `mac_bsdextended_load="YES"`

The man:mac_bsdextended[4] module enforces a file system firewall. It provides an extension to the standard file system permissions model, permitting an administrator to create a firewall-like ruleset to protect files, utilities, and directories in the file system hierarchy. When access to a file system object is attempted, the list of rules is iterated until either a matching rule is located or the end is reached. This behavior may be changed using `security.mac.bsdextended.firstmatch_enabled`.
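The difference between first-match and last-match evaluation can be sketched with a toy rule walk in plain `sh`. The rule strings and the default decision below are hypothetical illustrations, not man:ugidfw[8] syntax:

```shell
#!/bin/sh
# Toy illustration of first-match rule evaluation: walk a hypothetical
# rule list and stop at the first rule whose subject matches.
subject="user1"
decision="allow"            # hypothetical default when no rule matches

for rule in "user1:deny" "user1:allow" "any:allow"; do
    who=${rule%%:*}         # text before the first colon
    action=${rule#*:}       # text after the first colon
    if [ "$who" = "$subject" ] || [ "$who" = "any" ]; then
        decision=$action
        break               # first match wins; drop the break for last-match
    fi
done

echo "$decision"            # the first matching rule ("user1:deny") applies
```

With the `break` in place the first matching rule decides the outcome; removing it makes the last matching rule win, which mirrors the effect of toggling `security.mac.bsdextended.firstmatch_enabled`.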
Similar to other firewall modules in FreeBSD, a file containing the access control rules can be created and read by the system at boot time using an man:rc.conf[5] variable. The rule list may be entered using man:ugidfw[8] which has a syntax similar to man:ipfw[8]. More tools can be written by using the functions in the man:libugidfw[3] library.

After the man:mac_bsdextended[4] module has been loaded, the following command may be used to list the current rule configuration:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

By default, no rules are defined and everything is completely accessible. To create a rule which blocks all access by users but leaves `root` unaffected:

[source,shell]
....
# ugidfw add subject not uid root object not uid root mode n
....

While this rule is simple to implement, it is a very bad idea as it blocks all users from issuing any commands. A more realistic example blocks all access by `user1`, including directory listings, to ``_user2_``'s home directory:

[source,shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

Instead of `user1`, `not uid _user2_` could be used in order to enforce the same access restrictions for all users. However, the `root` user is unaffected by these rules.

[NOTE]
====
Extreme caution should be taken when working with this module as incorrect use could block access to certain parts of the file system.
====

[[mac-ifoff]]
=== The MAC Interface Silencing Policy

Module name: [.filename]#mac_ifoff.ko#

Kernel configuration line: `options MAC_IFOFF`

Boot option: `mac_ifoff_load="YES"`

The man:mac_ifoff[4] module is used to disable network interfaces on the fly and to keep network interfaces from being brought up during system boot. It does not use labels and does not depend on any other MAC modules.
Most of this module's control is performed through these `sysctl` tunables:

* `security.mac.ifoff.lo_enabled` enables or disables all traffic on the loopback, man:lo[4], interface.
* `security.mac.ifoff.bpfrecv_enabled` enables or disables all traffic on the Berkeley Packet Filter interface, man:bpf[4].
* `security.mac.ifoff.other_enabled` enables or disables traffic on all other interfaces.

One of the most common uses of man:mac_ifoff[4] is network monitoring in an environment where network traffic should not be permitted during the boot sequence. Another use would be to write a script which uses an application such as package:security/aide[] to automatically block network traffic if it finds new or altered files in protected directories.

[[mac-portacl]]
=== The MAC Port Access Control List Policy

Module name: [.filename]#mac_portacl.ko#

Kernel configuration line: `options MAC_PORTACL`

Boot option: `mac_portacl_load="YES"`

The man:mac_portacl[4] module is used to limit binding to local TCP and UDP ports, making it possible to allow non-`root` users to bind to specified privileged ports below 1024.

Once loaded, this module enables the MAC policy on all sockets. The following tunables are available:

* `security.mac.portacl.enabled` enables or disables the policy completely.
* `security.mac.portacl.port_high` sets the highest port number that man:mac_portacl[4] protects.
* `security.mac.portacl.suser_exempt`, when set to a non-zero value, exempts the `root` user from this policy.
* `security.mac.portacl.rules` specifies the policy as a text string of the form `rule[,rule,...]`, with as many rules as needed, and where each rule is of the form `idtype:id:protocol:port`. The `idtype` is either `uid` or `gid`. The `protocol` parameter can be `tcp` or `udp`. The `port` parameter is the port number to allow the specified user or group to bind to. Only numeric values can be used for the user ID, group ID, and port parameters.
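Because a malformed rules string will not do what was intended, it can help to sanity-check one before handing it to `sysctl`. The following plain-`sh` sketch (the rule list itself is a hypothetical example) splits a rules string into its `idtype:id:protocol:port` fields using only POSIX parameter expansion:

```shell
#!/bin/sh
# Decompose a hypothetical mac_portacl rules string and verify each field.
rules="uid:80:tcp:80,uid:1001:tcp:110,uid:1001:tcp:995"

old_ifs=$IFS; IFS=','
set -- $rules               # one positional parameter per comma-separated rule
IFS=$old_ifs

for rule in "$@"; do
    idtype=${rule%%:*};  rest=${rule#*:}
    id=${rest%%:*};      rest=${rest#*:}
    proto=${rest%%:*};   port=${rest#*:}

    case "$idtype" in uid|gid) ;; *) echo "bad idtype: $idtype" >&2; exit 1 ;; esac
    case "$proto"  in tcp|udp) ;; *) echo "bad protocol: $proto" >&2; exit 1 ;; esac

    echo "$idtype $id may bind $proto port $port"
done
```

Each rule prints on its own line; a malformed `idtype` or `protocol` aborts with a diagnostic, which is easier to debug than a silently ignored sysctl value.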
By default, ports below 1024 can only be used by privileged processes which run as `root`. For man:mac_portacl[4] to allow non-privileged processes to bind to ports below 1024, set the following tunables:

[source,shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0
# sysctl net.inet.ip.portrange.reservedhigh=0
....

To prevent the `root` user from being affected by this policy, set `security.mac.portacl.suser_exempt` to a non-zero value.

[source,shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....

To allow the `www` user with UID 80 to bind to port 80 without ever needing `root` privilege:

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

This next example permits the user with the UID of 1001 to bind to TCP ports 110 (POP3) and 995 (POP3s):

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

[[mac-partition]]
=== The MAC Partition Policy

Module name: [.filename]#mac_partition.ko#

Kernel configuration line: `options MAC_PARTITION`

Boot option: `mac_partition_load="YES"`

The man:mac_partition[4] policy drops processes into specific "partitions" based on their MAC label. Most configuration for this policy is done using man:setpmac[8]. One `sysctl` tunable is available for this policy:

* `security.mac.partition.enabled` enables the enforcement of MAC process partitions.

When this policy is enabled, users will only be permitted to see their processes, and any others within their partition, but will not be permitted to work with utilities outside the scope of this partition. For instance, a user in the `insecure` class will not be permitted to access `top` as well as many other commands that must spawn a process.

This example adds `top` to the label set on users in the `insecure` class. All processes spawned by users in the `insecure` class will stay in the `partition/13` label.

[source,shell]
....
# setpmac partition/13 top
....
This command displays the partition label and the process list: [source,shell] .... # ps Zax .... This command displays another user's process partition label and that user's currently running processes: [source,shell] .... # ps -ZU trhodes .... [NOTE] ==== Users can see processes in ``root``'s label unless the man:mac_seeotheruids[4] policy is loaded. ==== [[mac-mls]] === The MAC Multi-Level Security Module Module name: [.filename]#mac_mls.ko# Kernel configuration line: `options MAC_MLS` Boot option: `mac_mls_load="YES"` The man:mac_mls[4] policy controls access between subjects and objects in the system by enforcing a strict information flow policy. In MLS environments, a "clearance" level is set in the label of each subject or object, along with compartments. Since these clearance levels can reach numbers greater than several thousand, it would be a daunting task to thoroughly configure every subject or object. To ease this administrative overhead, three labels are included in this policy: `mls/low`, `mls/equal`, and `mls/high`, where: * Anything labeled with `mls/low` will have a low clearance level and not be permitted to access information of a higher level. This label also prevents objects of a higher clearance level from writing or passing information to a lower level. * `mls/equal` should be placed on objects which should be exempt from the policy. * `mls/high` is the highest level of clearance possible. Objects assigned this label will hold dominance over all other objects in the system; however, they will not permit the leaking of information to objects of a lower class. MLS provides: * A hierarchical security level with a set of non-hierarchical categories. * Fixed rules of `no read up, no write down`. This means that a subject can have read access to objects on its own level or below, but not above. Similarly, a subject can have write access to objects on its own level or above, but not beneath. 
* Secrecy, or the prevention of inappropriate disclosure of data. * A basis for the design of systems that concurrently handle data at multiple sensitivity levels without leaking information between secret and confidential. The following `sysctl` tunables are available: * `security.mac.mls.enabled` is used to enable or disable the MLS policy. * `security.mac.mls.ptys_equal` labels all man:pty[4] devices as `mls/equal` during creation. * `security.mac.mls.revocation_enabled` revokes access to objects after their label changes to a label of a lower grade. * `security.mac.mls.max_compartments` sets the maximum number of compartment levels allowed on a system. To manipulate MLS labels, use man:setfmac[8]. To assign a label to an object: [source,shell] .... # setfmac mls/5 test .... To get the MLS label for the file [.filename]#test#: [source,shell] .... # getfmac test .... Another approach is to create a master policy file in [.filename]#/etc/# which specifies the MLS policy information and to feed that file to `setfmac`. When using the MLS policy module, an administrator plans to control the flow of sensitive information. The default `block read up block write down` sets everything to a low state. Everything is accessible and an administrator slowly augments the confidentiality of the information. Beyond the three basic label options, an administrator may group users and groups as required to block the information flow between them. It might be easier to look at the information in clearance levels using descriptive words, such as classifications of `Confidential`, `Secret`, and `Top Secret`. Some administrators instead create different groups based on project levels. Regardless of the classification method, a well thought out plan must exist before implementing a restrictive policy. Some example situations for the MLS policy module include an e-commerce web server, a file server holding critical company information, and financial institution environments. 
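The master policy file approach mentioned above might look like the following sketch. The file name, paths, and label values here are illustrative assumptions, following the specification-file format of path regular expressions paired with labels:

[.programlisting]
....
# /etc/mls.contexts (hypothetical file name and entries)
/projects/secret(/.*)?        mls/10
/projects/internal(/.*)?      mls/5
/projects/public(/.*)?        mls/low
....

Such a file would then be applied to a file system with `setfsmac -ef /etc/mls.contexts /`, after which `getfmac` can be used to spot-check the labels on individual files.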
[[mac-biba]] === The MAC Biba Module Module name: [.filename]#mac_biba.ko# Kernel configuration line: `options MAC_BIBA` Boot option: `mac_biba_load="YES"` The man:mac_biba[4] module loads the MAC Biba policy. This policy is similar to the MLS policy with the exception that the rules for information flow are slightly reversed. This is to prevent the downward flow of sensitive information whereas the MLS policy prevents the upward flow of sensitive information. In Biba environments, an "integrity" label is set on each subject or object. These labels are made up of hierarchical grades and non-hierarchical components. As a grade ascends, so does its integrity. Supported labels are `biba/low`, `biba/equal`, and `biba/high`, where: * `biba/low` is considered the lowest integrity an object or subject may have. Setting this on objects or subjects blocks their write access to objects or subjects marked as `biba/high`, but will not prevent read access. * `biba/equal` should only be placed on objects considered to be exempt from the policy. * `biba/high` permits writing to objects set at a lower label, but does not permit reading that object. It is recommended that this label be placed on objects that affect the integrity of the entire system. Biba provides: * Hierarchical integrity levels with a set of non-hierarchical integrity categories. * Fixed rules are `no write up, no read down`, the opposite of MLS. A subject can have write access to objects on its own level or below, but not above. Similarly, a subject can have read access to objects on its own level or above, but not below. * Integrity by preventing inappropriate modification of data. * Integrity levels instead of MLS sensitivity levels. The following tunables can be used to manipulate the Biba policy: * `security.mac.biba.enabled` is used to enable or disable enforcement of the Biba policy on the target machine. * `security.mac.biba.ptys_equal` is used to disable the Biba policy on man:pty[4] devices. 
* `security.mac.biba.revocation_enabled` forces the revocation of access to objects if the label is changed to dominate the subject.

To access the Biba policy setting on system objects, use `setfmac` and `getfmac`:

[source,shell]
....
# setfmac biba/low test
# getfmac test
test: biba/low
....

Integrity, which is different from sensitivity, is used to guarantee that information is not manipulated by untrusted parties. This includes information passed between subjects and objects. It ensures that users will only be able to modify or access information they have been given explicit access to. The man:mac_biba[4] security policy module permits an administrator to configure which files and programs a user may see and invoke while assuring that the programs and files are trusted by the system for that user.

During the initial planning phase, an administrator must be prepared to partition users into grades, levels, and areas. The system will default to a high label once this policy module is enabled, and it is up to the administrator to configure the different grades and levels for users. Instead of using clearance levels, a good planning method could include topics. For instance, only allow developers modification access to the source code repository, source code compiler, and other development utilities. Other users would be grouped into other categories such as testers, designers, or end users and would only be permitted read access.

A lower integrity subject is unable to write to a higher integrity subject and a higher integrity subject cannot list or read a lower integrity object. Setting a label at the lowest possible grade could make it inaccessible to subjects. Some prospective environments for this security policy module would include a constrained web server, a development and test machine, and a source code repository. A less useful implementation would be a personal workstation, a machine used as a router, or a network firewall.
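Partitioning users into grades as described above is typically expressed through login classes. The class names and grade numbers in this sketch are purely illustrative assumptions, not values mandated by the policy:

[.programlisting]
....
# Hypothetical /etc/login.conf entries: developers run at an
# illustrative integrity grade of 8, testers at 4.
developer:\
	:label=biba/8(8-8):\
	:tc=default:

tester:\
	:label=biba/4(4-4):\
	:tc=default:
....

After editing [.filename]#/etc/login.conf#, rebuild the database with `cap_mkdb /etc/login.conf` and assign a class to a user with `pw usermod _user_ -L developer`.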
[[mac-lomac]]
=== The MAC Low-watermark Module

Module name: [.filename]#mac_lomac.ko#

Kernel configuration line: `options MAC_LOMAC`

Boot option: `mac_lomac_load="YES"`

Unlike the MAC Biba policy, the man:mac_lomac[4] policy permits access to lower integrity objects only after decreasing the integrity level so as not to disrupt any integrity rules.

The Low-watermark integrity policy works almost identically to Biba, with the exception of using floating labels to support subject demotion via an auxiliary grade compartment. This secondary compartment takes the form `[auxgrade]`. When assigning a policy with an auxiliary grade, use the syntax `lomac/10[2]`, where `2` is the auxiliary grade.

This policy relies on the ubiquitous labeling of all system objects with integrity labels, permitting subjects to read from low integrity objects and then downgrading the label on the subject to prevent future writes to high integrity objects using `[auxgrade]`. The policy may provide greater compatibility and require less initial configuration than Biba.

Like the Biba and MLS policies, `setfmac` and `setpmac` are used to place labels on system objects:

[source,shell]
....
# setfmac /usr/home/trhodes lomac/high[low]
# getfmac /usr/home/trhodes
/usr/home/trhodes: lomac/high[low]
....

The auxiliary grade `low` is a feature provided only by the MAC LOMAC policy.

[[mac-userlocked]]
== User Lock Down

This example considers a relatively small storage system with fewer than fifty users. Users will have login capabilities and are permitted to store data and access resources. For this scenario, the man:mac_bsdextended[4] and man:mac_seeotheruids[4] policy modules could co-exist and block access to system objects while hiding user processes.

Begin by adding the following line to [.filename]#/boot/loader.conf#:

[.programlisting]
....
mac_seeotheruids_load="YES"
....

The man:mac_bsdextended[4] security policy module may be activated by adding this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
ugidfw_enable="YES"
....

Default rules stored in [.filename]#/etc/rc.bsdextended# will be loaded at system initialization. However, the default entries may need modification. Since this machine is expected only to service users, everything may be left commented out except the last two lines in order to force the loading of user owned system objects by default.

Add the required users to this machine and reboot. For testing purposes, try logging in as a different user across two consoles. Run `ps aux` to see if processes of other users are visible. Verify that running man:ls[1] on another user's home directory fails.

Do not try to test with the `root` user unless the specific ``sysctl``s have been modified to block super user access.

[NOTE]
====
When a new user is added, their man:mac_bsdextended[4] rule will not be in the ruleset list. To update the ruleset quickly, unload the security policy module and reload it again using man:kldunload[8] and man:kldload[8].
====

[[mac-implementing]]
== Nagios in a MAC Jail

This section demonstrates the steps that are needed to implement the Nagios network monitoring system in a MAC environment. This is meant as an example which still requires the administrator to test that the implemented policy meets the security requirements of the network before using in a production environment.

This example requires `multilabel` to be set on each file system. It also assumes that package:net-mgmt/nagios-plugins[], package:net-mgmt/nagios[], and package:www/apache22[] are all installed, configured, and working correctly before attempting the integration into the MAC framework.

=== Create an Insecure User Class

Begin the procedure by adding the following user class to [.filename]#/etc/login.conf#:

[.programlisting]
....
insecure:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/share/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=biba/10(10-10):
....

Then, add the following line to the default user class section:

[.programlisting]
....
:label=biba/high:
....

Save the edits and issue the following command to rebuild the database:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

=== Configure Users

Set the `root` user to the default class using:

[source,shell]
....
# pw usermod root -L default
....

All user accounts that are not `root` will now require a login class. The login class is required, otherwise users will be refused access to common commands. The following `sh` script should do the trick:

[source,shell]
....
# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \
	/etc/passwd`; do pw usermod $x -L default; done;
....

Next, drop the `nagios` and `www` accounts into the insecure class:

[source,shell]
....
# pw usermod nagios -L insecure
# pw usermod www -L insecure
....

=== Create the Contexts File

A contexts file should now be created as [.filename]#/etc/policy.contexts#:

[.programlisting]
....
# This is the default BIBA policy for this system.

# System:
/var/run(/.*)?			biba/equal
/dev/(/.*)?			biba/equal
/var				biba/equal
/var/spool(/.*)?		biba/equal
/var/log(/.*)?			biba/equal
/tmp(/.*)?			biba/equal
/var/tmp(/.*)?			biba/equal
/var/spool/mqueue		biba/equal
/var/spool/clientmqueue		biba/equal

# For Nagios:
/usr/local/etc/nagios(/.*)?	biba/10
/var/spool/nagios(/.*)?		biba/10

# For apache
/usr/local/etc/apache(/.*)?	biba/10
....
This policy enforces security by setting restrictions on the flow of information. In this specific configuration, users, including `root`, should never be allowed to access Nagios. Configuration files and processes that are a part of Nagios will be completely self contained or jailed.

This file will be read after running `setfsmac` on every file system. This example sets the policy on the root file system:

[source,shell]
....
# setfsmac -ef /etc/policy.contexts /
....

Next, add these edits to the main section of [.filename]#/etc/mac.conf#:

[.programlisting]
....
default_labels file ?biba
default_labels ifnet ?biba
default_labels process ?biba
default_labels socket ?biba
....

=== Loader Configuration

To finish the configuration, add the following lines to [.filename]#/boot/loader.conf#:

[.programlisting]
....
mac_biba_load="YES"
mac_seeotheruids_load="YES"
security.mac.biba.trust_all_interfaces=1
....

And the following line to the network card configuration stored in [.filename]#/etc/rc.conf#. If the primary network configuration is done via DHCP, this may need to be configured manually after every system boot:

[.programlisting]
....
maclabel biba/equal
....

=== Testing the Configuration

First, ensure that the web server and Nagios will not be started on system initialization and reboot. Ensure that `root` cannot access any of the files in the Nagios configuration directory. If `root` can list the contents of [.filename]#/var/spool/nagios#, something is wrong. Instead, a "permission denied" error should be returned.

If all seems well, Nagios, Apache, and Sendmail can now be started:

[source,shell]
....
# cd /etc/mail && make stop && \
setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \
setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart
....

Double check to ensure that everything is working properly. If not, check the log files for error messages.
If needed, use man:sysctl[8] to disable the man:mac_biba[4] security policy module and try starting everything again as usual.

[NOTE]
====
The `root` user can still change the security enforcement and edit its configuration files. The following command will permit the degradation of the security policy to a lower grade for a newly spawned shell:

[source,shell]
....
# setpmac biba/10 csh
....

To block this from happening, force the user into a range using man:login.conf[5]. If man:setpmac[8] attempts to run a command outside of the compartment's range, an error will be returned and the command will not be executed. In this case, set root to `biba/high(high-high)`.
====

[[mac-troubleshoot]]
== Troubleshooting the MAC Framework

This section discusses common configuration errors and how to resolve them.

The `multilabel` flag does not stay enabled on the root ([.filename]#/#) partition:::
The following steps may resolve this transient error:
+
[.procedure]
====
. Edit [.filename]#/etc/fstab# and set the root partition to `ro` for read-only.
. Reboot into single user mode.
. Run `tunefs -l enable` on [.filename]#/#.
. Reboot the system.
. Run `mount -urw` [.filename]#/# and change the `ro` back to `rw` in [.filename]#/etc/fstab# and reboot the system again.
. Double-check the output from `mount` to ensure that `multilabel` has been properly set on the root file system.
====

After establishing a secure environment with MAC, Xorg no longer starts:::
This could be caused by the MAC `partition` policy or by a mislabeling in one of the MAC labeling policies. To debug, try the following:
+
[.procedure]
====
. Check the error message. If the user is in the `insecure` class, the `partition` policy may be the culprit. Try setting the user's class back to the `default` class and rebuild the database with `cap_mkdb`. If this does not alleviate the problem, go to step two.
. Double-check that the label policies are set correctly for the user, Xorg, and the [.filename]#/dev# entries.
. If neither of these resolve the problem, send the error message and a description of the environment to the {freebsd-questions}.
====

The `_secure_path: unable to stat .login_conf` error appears:::
This error can appear when a user attempts to switch from the `root` user to another user in the system. This message usually occurs when the user has a higher label setting than that of the user they are attempting to become. For instance, if `joe` has a default label of `biba/low` and `root` has a label of `biba/high`, `root` cannot view ``joe``'s home directory. This will happen whether or not `root` has used `su` to become `joe`, as the Biba integrity model will not permit `root` to view objects set at a lower integrity level.

The system no longer recognizes `root`:::
When this occurs, `whoami` returns `0` and `su` returns `who are you?`.
+
This can happen if a labeling policy has been disabled by man:sysctl[8] or the policy module was unloaded. If the policy is disabled, the login capabilities database needs to be reconfigured. Double check [.filename]#/etc/login.conf# to ensure that all `label` options have been removed and rebuild the database with `cap_mkdb`.
+
This may also happen if a policy restricts access to [.filename]#master.passwd#. This is usually caused by an administrator altering the file under a label which conflicts with the general policy being used by the system. In these cases, the user information would be read by the system and access would be blocked as the file has inherited the new label. Disable the policy using man:sysctl[8] and everything should return to normal.

diff --git a/documentation/content/pl/books/handbook/network-servers/_index.adoc b/documentation/content/pl/books/handbook/network-servers/_index.adoc index 38f94cbc85..2f1acdf7b1 100644 --- a/documentation/content/pl/books/handbook/network-servers/_index.adoc +++ b/documentation/content/pl/books/handbook/network-servers/_index.adoc @@ -1,2403 +1,2402 @@ --- title: Rozdział 30.
Network Servers part: Część IV. Komunikacja sieciowa prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 35 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Network Servers :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 30 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Synopsis This chapter covers some of the more frequently used network services on UNIX(R) systems. This includes installing, configuring, testing, and maintaining many different types of network services. Example configuration files are included throughout this chapter for reference. By the end of this chapter, readers will know: * How to manage the inetd daemon. * How to set up the Network File System (NFS). * How to set up the Network Information Server (NIS) for centralizing and sharing user accounts. * How to set FreeBSD up to act as an LDAP server or client. * How to set up automatic network settings using DHCP. * How to set up a Domain Name Server (DNS). * How to set up the Apache HTTP Server. * How to set up a File Transfer Protocol (FTP) server. * How to set up a file and print server for Windows(R) clients using Samba.
* How to synchronize the time and date, and set up a time server using the Network Time Protocol (NTP). * How to set up iSCSI. This chapter assumes a basic knowledge of: * [.filename]#/etc/rc# scripts. * Network terminology. * Installation of additional third-party software (crossref:ports[ports,Installing Applications: Packages and Ports]). [[network-inetd]] == The inetd Super-Server The man:inetd[8] daemon is sometimes referred to as a Super-Server because it manages connections for many services. Instead of starting multiple applications, only the inetd service needs to be started. When a connection is received for a service that is managed by inetd, it determines which program the connection is destined for, spawns a process for that program, and delegates the program a socket. Using inetd for services that are not heavily used can reduce system load, when compared to running each daemon individually in stand-alone mode. Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled internally, such as chargen, auth, time, echo, discard, and daytime. This section covers the basics of configuring inetd. [[network-inetd-conf]] === Configuration File Configuration of inetd is done by editing [.filename]#/etc/inetd.conf#. Each line of this configuration file represents an application which can be started by inetd. By default, every line starts with a comment (`#`), meaning that inetd is not listening for any applications. To configure inetd to listen for an application's connections, remove the `#` at the beginning of the line for that application. After saving your edits, configure inetd to start at system boot by editing [.filename]#/etc/rc.conf#: [.programlisting] .... inetd_enable="YES" .... To start inetd now, so that it listens for the service you configured, type: [source,shell] .... # service inetd start .... 
Once inetd is started, it needs to be notified whenever a modification is made to [.filename]#/etc/inetd.conf#:

[[network-inetd-reread]]
.Reloading the inetd Configuration File
[example]
====
[source,shell]
....
# service inetd reload
....
====

Typically, the default entry for an application does not need to be edited beyond removing the `#`. In some situations, it may be appropriate to edit the default entry. As an example, this is the default entry for man:ftpd[8] over IPv4:

[.programlisting]
....
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l
....

The seven columns in an entry are as follows:

[.programlisting]
....
service-name
socket-type
protocol
{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]
user[:group][/login-class]
server-program
server-program-arguments
....

where:

service-name::
The service name of the daemon to start. It must correspond to a service listed in [.filename]#/etc/services#. This determines which port inetd listens on for incoming connections to that service. When using a custom service, it must first be added to [.filename]#/etc/services#.

socket-type::
Either `stream`, `dgram`, `raw`, or `seqpacket`. Use `stream` for TCP connections and `dgram` for UDP services.

protocol::
Use one of the following protocol names:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Protocol Name
| Explanation

|tcp or tcp4
|TCP IPv4

|udp or udp4
|UDP IPv4

|tcp6
|TCP IPv6

|udp6
|UDP IPv6

|tcp46
|Both TCP IPv4 and IPv6

|udp46
|Both UDP IPv4 and IPv6
|===

{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]::
In this field, `wait` or `nowait` must be specified. `max-child`, `max-connections-per-ip-per-minute` and `max-child-per-ip` are optional.
+
`wait|nowait` indicates whether or not the service is able to handle its own socket. `dgram` socket types must use `wait` while `stream` daemons, which are usually multi-threaded, should use `nowait`.
`wait` usually hands off multiple sockets to a single daemon, while `nowait` spawns a child daemon for each new socket. + The maximum number of child daemons inetd may spawn is set by `max-child`. For example, to limit the daemon to ten instances, place a `/10` after `nowait`. Specifying `/0` allows an unlimited number of children. + `max-connections-per-ip-per-minute` limits the number of connections from any particular IP address per minute. Once the limit is reached, further connections from this IP address will be dropped until the end of the minute. For example, a value of `/10` would limit any particular IP address to ten connection attempts per minute. `max-child-per-ip` limits the number of child processes that can be started on behalf of any single IP address at any moment. These options can limit excessive resource consumption and help to prevent Denial of Service attacks. + An example can be seen in the default settings for man:fingerd[8]: + [.programlisting] .... finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s .... user:: The username the daemon will run as. Daemons typically run as `root`, `daemon`, or `nobody`. server-program:: The full path to the daemon. If the daemon is a service provided by inetd internally, use `internal`. server-program-arguments:: Used to specify any command arguments to be passed to the daemon on invocation. If the daemon is an internal service, use `internal`. [[network-inetd-cmdline]] === Command-Line Options Like most server daemons, inetd has a number of options that can be used to modify its behavior. By default, inetd is started with `-wW -C 60`. These options enable TCP wrappers for all services, including internal services, and prevent any IP address from requesting any service more than 60 times per minute. To change the default options which are passed to inetd, add an entry for `inetd_flags` in [.filename]#/etc/rc.conf#. If inetd is already running, restart it with `service inetd restart`.
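For example, to keep TCP wrappers enabled while relaxing the per-address rate limit to 120 requests per minute, a [.filename]#/etc/rc.conf# entry might read as follows. The value `120` is purely illustrative, not a recommendation:

[.programlisting]
....
inetd_flags="-wW -C 120"
....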
The available rate limiting options are: -c maximum:: Specify the default maximum number of simultaneous invocations of each service, where the default is unlimited. May be overridden on a per-service basis by using `max-child` in [.filename]#/etc/inetd.conf#. -C rate:: Specify the default maximum number of times a service can be invoked from a single IP address per minute. May be overridden on a per-service basis by using `max-connections-per-ip-per-minute` in [.filename]#/etc/inetd.conf#. -R rate:: Specify the maximum number of times a service can be invoked in one minute, where the default is `256`. A rate of `0` allows an unlimited number. -s maximum:: Specify the maximum number of times a service can be invoked from a single IP address at any one time, where the default is unlimited. May be overridden on a per-service basis by using `max-child-per-ip` in [.filename]#/etc/inetd.conf#. Additional options are available. Refer to man:inetd[8] for the full list of options. [[network-inetd-security]] === Security Considerations Many of the daemons which can be managed by inetd are not security-conscious. Some daemons, such as fingerd, can provide information that may be useful to an attacker. Only enable the services which are needed and monitor the system for excessive connection attempts. `max-connections-per-ip-per-minute`, `max-child` and `max-child-per-ip` can be used to limit such attacks. By default, TCP wrappers is enabled. Consult man:hosts_access[5] for more information on placing TCP restrictions on various inetd invoked daemons. [[network-nfs]] == Network File System (NFS) FreeBSD supports the Network File System (NFS), which allows a server to share directories and files with clients over a network. With NFS, users and programs can access files on remote systems as if they were stored locally. NFS has many practical uses. 
Some of the more common uses include: * Data that would otherwise be duplicated on each client can be kept in a single location and accessed by clients on the network. * Several clients may need access to the [.filename]#/usr/ports/distfiles# directory. Sharing that directory allows for quick access to the source files without having to download them to each client. * On large networks, it is often more convenient to configure a central NFS server on which all user home directories are stored. Users can log into a client anywhere on the network and have access to their home directories. * Administration of NFS exports is simplified. For example, there is only one file system where security or backup policies must be set. * Removable media storage devices can be used by other machines on the network. This reduces the number of devices throughout the network and provides a centralized location to manage their security. It is often more convenient to install software on multiple machines from a centralized installation media. NFS consists of a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running. These daemons must be running on the server: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Daemon | Description |nfsd |The NFS daemon which services requests from NFS clients. |mountd |The NFS mount daemon which carries out requests received from nfsd. |rpcbind | This daemon allows NFS clients to discover which port the NFS server is using. |=== Running man:nfsiod[8] on the client can improve performance, but is not required. [[network-configuring-nfs]] === Configuring the Server The file systems which the NFS server will share are specified in [.filename]#/etc/exports#. Each line in this file specifies a file system to be exported, which clients have access to that file system, and any access options. 
When adding entries to this file, each exported file system, its properties, and allowed hosts must occur on a single line. If no clients are listed in the entry, then any client on the network can mount that file system. The following [.filename]#/etc/exports# entries demonstrate how to export file systems. The examples can be modified to match the file systems and client names on the reader's network. There are many options that can be used in this file, but only a few will be mentioned here. See man:exports[5] for the full list of options. This example shows how to export [.filename]#/cdrom# to three hosts named _alpha_, _bravo_, and _charlie_: [.programlisting] .... /cdrom -ro alpha bravo charlie .... The `-ro` flag makes the file system read-only, preventing clients from making any changes to the exported file system. This example assumes that the host names are either in DNS or in [.filename]#/etc/hosts#. Refer to man:hosts[5] if the network does not have a DNS server. The next example exports [.filename]#/home# to three clients by IP address. This can be useful for networks without DNS or [.filename]#/etc/hosts# entries. The `-alldirs` flag allows subdirectories to be mount points. In other words, it will not automatically mount the subdirectories, but will permit the client to mount the directories that are required as needed. [.programlisting] .... /usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... This next example exports [.filename]#/a# so that two clients from different domains may access that file system. The `-maproot=root` allows `root` on the remote system to write data on the exported file system as `root`. If `-maproot=root` is not specified, the client's `root` user will be mapped to the server's `nobody` account and will be subject to the access limitations defined for `nobody`. [.programlisting] .... /a -maproot=root host.example.com box.example.org .... A client can only be specified once per file system. 
For example, if [.filename]#/usr# is a single file system, these entries would be invalid as both entries specify the same host: [.programlisting] .... # Invalid when /usr is one file system /usr/src client /usr/ports client .... The correct format for this situation is to use one entry: [.programlisting] .... /usr/src /usr/ports client .... The following is an example of a valid export list, where [.filename]#/usr# and [.filename]#/exports# are local file systems: [.programlisting] .... # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... To enable the processes required by the NFS server at boot time, add these options to [.filename]#/etc/rc.conf#: [.programlisting] .... rpcbind_enable="YES" nfs_server_enable="YES" mountd_enable="YES" .... The server can be started now by running this command: [source,shell] .... # service nfsd start .... Whenever the NFS server is started, mountd also starts automatically. However, mountd only reads [.filename]#/etc/exports# when it is started. To make subsequent [.filename]#/etc/exports# edits take effect immediately, force mountd to reread it: [source,shell] .... # service mountd reload .... === Configuring the Client To enable NFS clients, set this option in each client's [.filename]#/etc/rc.conf#: [.programlisting] .... nfs_client_enable="YES" .... Then, run this command on each NFS client: [source,shell] .... # service nfsclient start .... The client now has everything it needs to mount a remote file system. In these examples, the server's name is `server` and the client's name is `client`. To mount [.filename]#/home# on `server` to the [.filename]#/mnt# mount point on `client`: [source,shell] .... 
# mount server:/home /mnt .... The files and directories in [.filename]#/home# will now be available on `client`, in the [.filename]#/mnt# directory. To mount a remote file system each time the client boots, add it to [.filename]#/etc/fstab#: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... Refer to man:fstab[5] for a description of all available options. === Locking Some applications require file locking to operate correctly. To enable locking, add these lines to [.filename]#/etc/rc.conf# on both the client and server: [.programlisting] .... rpc_lockd_enable="YES" rpc_statd_enable="YES" .... Then start the applications: [source,shell] .... # service lockd start # service statd start .... If locking is not required on the server, the NFS client can be configured to lock locally by including `-L` when running mount. Refer to man:mount_nfs[8] for further details. [[network-autofs]] === Automating Mounts with man:autofs[5] [NOTE] ==== The man:autofs[5] automount facility is supported starting with FreeBSD 10.1-RELEASE. To use the automounter functionality in older versions of FreeBSD, use man:amd[8] instead. This chapter only describes the man:autofs[5] automounter. ==== The man:autofs[5] facility is a common name for several components that, together, allow for automatic mounting of remote and local filesystems whenever a file or directory within that file system is accessed. It consists of the kernel component, man:autofs[5], and several userspace applications: man:automount[8], man:automountd[8] and man:autounmountd[8]. It serves as an alternative for man:amd[8] from previous FreeBSD releases. Amd is still provided for backward compatibility purposes, as the two use different map formats; the one used by autofs is the same as with other SVR4 automounters, such as the ones in Solaris, Mac OS X, and Linux. The man:autofs[5] virtual filesystem is mounted on specified mountpoints by man:automount[8], usually invoked during boot.
Whenever a process attempts to access a file within the man:autofs[5] mountpoint, the kernel will notify the man:automountd[8] daemon and pause the triggering process. The man:automountd[8] daemon will handle kernel requests by finding the proper map and mounting the filesystem according to it, then signal the kernel to release the blocked process. The man:autounmountd[8] daemon automatically unmounts automounted filesystems after some time, unless they are still being used. The primary autofs configuration file is [.filename]#/etc/auto_master#. It assigns individual maps to top-level mounts. For an explanation of [.filename]#auto_master# and the map syntax, refer to man:auto_master[5]. There is a special automounter map mounted on [.filename]#/net#. When a file is accessed within this directory, man:autofs[5] looks up the corresponding remote mount and automatically mounts it. For instance, an attempt to access a file within [.filename]#/net/foobar/usr# would tell man:automountd[8] to mount the [.filename]#/usr# export from the host `foobar`. .Mounting an Export with man:autofs[5] [example] ==== In this example, `showmount -e` shows the exported file systems that can be mounted from the NFS server, `foobar`: [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /net/foobar/usr .... ==== The output from `showmount` shows [.filename]#/usr# as an export. When changing directories to [.filename]#/net/foobar/usr#, man:automountd[8] intercepts the request and attempts to resolve the hostname `foobar`. If successful, man:automountd[8] automatically mounts the source export. To enable man:autofs[5] at boot time, add this line to [.filename]#/etc/rc.conf#: [.programlisting] .... autofs_enable="YES" .... Then man:autofs[5] can be started by running: [source,shell] .... # service automount start # service automountd start # service autounmountd start .... The man:autofs[5] map format is the same as in other operating systems.
Information about this format from other sources can be useful, like the http://web.archive.org/web/20160813071113/http://images.apple.com/business/docs/Autofs.pdf[Mac OS X document]. Consult the man:automount[8], man:automountd[8], man:autounmountd[8], and man:auto_master[5] manual pages for more information. [[network-nis]] == Network Information System (NIS) Network Information System (NIS) is designed to centralize administration of UNIX(R)-like systems such as Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, and FreeBSD. NIS was originally known as Yellow Pages but the name was changed due to trademark issues. This is the reason why NIS commands begin with `yp`. NIS is a Remote Procedure Call (RPC)-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location. FreeBSD uses version 2 of the NIS protocol. === NIS Terms and Processes Table 28.1 summarizes the terms and important processes used by NIS: .NIS Terminology [cols="1,1", frame="none", options="header"] |=== | Term | Description |NIS domain name |NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS. |man:rpcbind[8] |This service enables RPC and must be running in order to run an NIS server or act as an NIS client. |man:ypbind[8] |This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server. It is the core of client/server communication in an NIS environment. If this service is not running on a client machine, it will not be able to access the NIS server. |man:ypserv[8] |This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so hopefully, there is a slave server to take over. 
Some non-FreeBSD clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients. |man:rpc.yppasswdd[8] |This process only runs on NIS master servers. This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to login to the NIS master server and change their passwords there. |=== === Machine Types There are three types of hosts in an NIS environment: * NIS master server + This server acts as a central repository for host configuration information and maintains the authoritative copy of the files used by all of the NIS clients. The [.filename]#passwd#, [.filename]#group#, and other various files used by NIS clients are stored on the master server. While it is possible for one machine to be an NIS master server for more than one NIS domain, this type of configuration will not be covered in this chapter as it assumes a relatively small-scale NIS environment. * NIS slave servers + NIS slave servers maintain copies of the NIS master's data files in order to provide redundancy. Slave servers also help to balance the load of the master server as NIS clients always attach to the NIS server which responds first. * NIS clients + NIS clients authenticate against the NIS server during log on. Information in many files can be shared using NIS. The [.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead. === Planning Considerations This section describes a sample NIS environment which consists of 15 FreeBSD machines with no centralized point of administration. Each machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These files are kept in sync with each other only through manual intervention. 
Currently, when a user is added to the lab, the process must be repeated on all 15 machines. The configuration of the lab will be as follows: [.informaltable] [cols="1,1,1", frame="none", options="header"] |=== | Machine name | IP address | Machine role |`ellington` |`10.0.0.2` |NIS master |`coltrane` |`10.0.0.3` |NIS slave |`basie` |`10.0.0.4` |Faculty workstation |`bird` |`10.0.0.5` |Client machine |`cli[1-11]` |`10.0.0.[6-17]` |Other client machines |=== If this is the first time an NIS scheme is being developed, it should be thoroughly planned ahead of time. Regardless of network size, several decisions need to be made as part of the planning process. ==== Choosing a NIS Domain Name When a client broadcasts its requests for info, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts. Some organizations choose to use their Internet domain name for their NIS domain name. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within the network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the "acme-art" NIS domain. This example will use the domain name `test-domain`. However, some non-FreeBSD operating systems require the NIS domain name to be the same as the Internet domain name. If one or more machines on the network have this restriction, the Internet domain name _must_ be used as the NIS domain name. ==== Physical Server Requirements There are several things to keep in mind when choosing a machine to use as a NIS server. Since NIS clients depend upon the availability of the server, choose a machine that is not rebooted frequently. The NIS server should ideally be a stand-alone machine whose sole purpose is to be an NIS server.
If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services. However, if the NIS server becomes unavailable, it will adversely affect all NIS clients. === Configuring the NIS Master Server The canonical copies of all NIS files are stored on the master server. The databases used to store the information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/[domainname]# where [.filename]#[domainname]# is the name of the NIS domain. Since multiple domains are supported, it is possible to have several directories, one for each domain. Each domain will have its own independent set of maps. NIS master and slave servers handle all NIS requests through man:ypserv[8]. This daemon is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client. Setting up a master NIS server can be relatively straight forward, depending on environmental needs. Since FreeBSD provides built-in NIS support, it only needs to be enabled by adding the following lines to [.filename]#/etc/rc.conf#: [.programlisting] .... nisdomainname="test-domain" <.> nis_server_enable="YES" <.> nis_yppasswdd_enable="YES" <.> .... <.> This line sets the NIS domain name to `test-domain`. <.> This automates the start up of the NIS server processes when the system boots. <.> This enables the man:rpc.yppasswdd[8] daemon so that users can change their NIS password from a client machine. Care must be taken in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. 
Eventually, all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again. A server that is also a client can be forced to bind to a particular server by adding these additional lines to [.filename]#/etc/rc.conf#: [.programlisting] .... nis_client_enable="YES" <.> nis_client_flags="-S test-domain,server" <.> .... <.> This enables the NIS client as well. <.> This line sets the NIS domain name to `test-domain` and forces man:ypbind[8] to bind to the specified server. After saving the edits, type `/etc/netstart` to restart the network and apply the values defined in [.filename]#/etc/rc.conf#. Before initializing the NIS maps, start man:ypserv[8]: [source,shell] .... # service ypserv start .... ==== Initializing the NIS Maps NIS maps are generated from the configuration files in [.filename]#/etc# on the NIS master, with one exception: [.filename]#/etc/master.passwd#. This is to prevent the propagation of passwords to all the servers in the NIS domain. Therefore, before the NIS maps are initialized, configure the primary password files: [source,shell] .... # cp /etc/master.passwd /var/yp/master.passwd # cd /var/yp # vi master.passwd .... It is advisable to remove all entries for system accounts as well as any user accounts that do not need to be propagated to the NIS clients, such as the `root` and any other administrative accounts. [NOTE] ==== Ensure that the [.filename]#/var/yp/master.passwd# is neither group nor world readable by setting its permissions to `600`. ==== After completing this task, initialize the NIS maps. FreeBSD includes the man:ypinit[8] script to do this. When generating maps for the master server, include `-m` and specify the NIS domain name: [source,shell] .... ellington# ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. At this point, we have to construct a list of this domains YP servers. ellington is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a . master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? [y/n: y] y [..output from map generation..] NIS Map update completed. ellington has been setup as an YP master server without any errors. .... This will create [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. By default, this file assumes that the environment has a single NIS server with only FreeBSD clients. Since `test-domain` has a slave server, edit this line in [.filename]#/var/yp/Makefile# so that it begins with a comment (`#`): [.programlisting] .... NOPUSH = "True" .... ==== Adding New Users Every time a new user is created, the user account must be added to the master NIS server and the NIS maps rebuilt. Until this occurs, the new user will not be able to login anywhere except on the NIS master. For example, to add the new user `jsmith` to the `test-domain` domain, run these commands on the master server: [source,shell] .... # pw useradd jsmith # cd /var/yp # make test-domain .... The user could also be added using `adduser jsmith` instead of `pw useradd jsmith`. === Setting up a NIS Slave Server To set up an NIS slave server, log on to the slave server and edit [.filename]#/etc/rc.conf# as for the master server. Do not generate any NIS maps, as these already exist on the master server. When running `ypinit` on the slave server, use `-s` (for slave) instead of `-m` (for master).
This option requires the name of the NIS master in addition to the domain name, as seen in this example: [source,shell] .... coltrane# ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... 
ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Remember to update map ypservers on ellington. .... This will generate a directory on the slave server called [.filename]#/var/yp/test-domain# which contains copies of the NIS master server's maps. Adding these [.filename]#/etc/crontab# entries on each slave server will force the slaves to sync their maps with the maps on the master server: [.programlisting] .... 20 * * * * root /usr/libexec/ypxfr passwd.byname 21 * * * * root /usr/libexec/ypxfr passwd.byuid .... These entries are not mandatory because the master server automatically attempts to push any map changes to its slaves. However, since clients may depend upon the slave server to provide correct password information, it is recommended to force frequent password map updates. This is especially important on busy networks where map updates might not always complete. To finish the configuration, run `/etc/netstart` on the slave server in order to start the NIS services. === Setting Up an NIS Client An NIS client binds to an NIS server using man:ypbind[8]. This daemon broadcasts RPC requests on the local network. These requests specify the domain name configured on the client. If an NIS server in the same domain receives one of the broadcasts, it will respond to ypbind, which will record the server's address. If there are several servers available, the client will use the address of the first server to respond and will direct all of its NIS requests to that server. The client will automatically ping the server on a regular basis to make sure it is still available. If it fails to receive a reply within a reasonable amount of time, ypbind will mark the domain as unbound and begin broadcasting again in the hopes of locating another server. 
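The binding behavior just described can be inspected with man:ypwhich[1], which prints the name of the NIS server a client is currently bound to. In the sample environment used in this chapter, a client bound to the master server would report something like the following; the output shown is illustrative and depends on which server answered the broadcast:

[source,shell]
....
% ypwhich
ellington
....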
To configure a FreeBSD machine to be an NIS client: [.procedure] . Edit [.filename]#/etc/rc.conf# and add the following lines in order to set the NIS domain name and start man:ypbind[8] during network startup: + [.programlisting] .... nisdomainname="test-domain" nis_client_enable="YES" .... . To import all possible password entries from the NIS server, use `vipw` to remove all user accounts except one from [.filename]#/etc/master.passwd#. When removing the accounts, keep in mind that at least one local account should remain and this account should be a member of `wheel`. If there is a problem with NIS, this local account can be used to log in remotely, become the superuser, and fix the problem. Before saving the edits, add the following line to the end of the file: + [.programlisting] .... +::::::::: .... + This line configures the client to provide anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure the NIS client by modifying this line. One method is described in <<network-netgroups,Using Netgroups>>. For more detailed reading, refer to the book `Managing NFS and NIS`, published by O'Reilly Media. . To import all possible group entries from the NIS server, add this line to [.filename]#/etc/group#: + [.programlisting] .... +:*:: .... To start the NIS client immediately, execute the following commands as the superuser: [source,shell] .... # /etc/netstart # service ypbind start .... After completing these steps, running `ypcat passwd` on the client should show the server's [.filename]#passwd# map. === NIS Security Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. By default, this information is stored in [.filename]#/var/yp/securenets#, unless man:ypserv[8] is started with `-p` and an alternate path.
This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with `#` are considered to be comments. A sample [.filename]#securenets# might look like this: [.programlisting] .... # allow connections from local host -- mandatory 127.0.0.1 255.255.255.255 # allow connections from any host # on the 192.168.128.0 network 192.168.128.0 255.255.255.0 # allow connections from any host # between 10.0.0.0 to 10.0.15.255 # this includes the machines in the testlab 10.0.0.0 255.255.240.0 .... If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the [.filename]#securenets# file does not exist, `ypserv` will allow connections from any host. crossref:security[tcpwrappers,"TCP Wrapper"] is an alternate mechanism for providing access control instead of [.filename]#securenets#. While either access control mechanism adds some security, they are both vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at the firewall. Servers using [.filename]#securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of [.filename]#securenets#. The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves.
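As a sketch of the TCP Wrapper alternative mentioned above, access to man:ypserv[8] can be restricted in [.filename]#/etc/hosts.allow#. The network shown is the sample one used earlier in this section; adapt the rules to the local environment and consult man:hosts_access[5] for the full rule syntax:

[.programlisting]
....
# Allow NIS requests from the local network only
ypserv : 192.168.128.0/255.255.255.0 : allow
# Deny everything else
ypserv : ALL : deny
....

As with [.filename]#securenets#, this mechanism is still vulnerable to IP spoofing, so blocking NIS traffic at the firewall remains advisable.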
==== Barring Some Users

In this example, the `basie` system is a faculty workstation within the NIS domain. The [.filename]#passwd# map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins.

To prevent specified users from logging on to a system, even if they are present in the NIS database, use `vipw` to add `-_username_` with the correct number of colons towards the end of [.filename]#/etc/master.passwd# on the client, where _username_ is the username of a user to bar from logging in. The line with the blocked user must be before the `+` line that allows NIS users. In this example, `bill` is barred from logging on to `basie`:

[source,shell]
....
basie# cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/usr/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin
-bill:::::::::
+:::::::::

basie#
....

[[network-netgroups]]
=== Using Netgroups

Barring specified users from logging on to individual systems becomes unscalable on larger networks and quickly loses the main benefit of NIS: _centralized_ administration.
Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to UNIX(R) groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups. To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in Tables 28.2 and 28.3: .Additional Users [cols="1,1", frame="none", options="header"] |=== | User Name(s) | Description |`alpha`, `beta` |IT department employees |`charlie`, `delta` |IT department apprentices |`echo`, `foxtrott`, `golf`, ... |employees |`able`, `baker`, ... |interns |=== .Additional Systems [cols="1,1", frame="none", options="header"] |=== | Machine Name(s) | Description |`war`, `death`, `famine`, `pollution` |Only IT employees are allowed to log onto these servers. |`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth` |All members of the IT department are allowed to login onto these servers. |`one`, `two`, `three`, `four`, ... |Ordinary workstations used by employees. |`trashcan` |A very old machine without any critical data. Even interns are allowed to use this system. |=== When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines. The first step is the initialization of the NIS `netgroup` map. In FreeBSD, this map is not created by default. On the NIS master server, use an editor to create a map named [.filename]#/var/yp/netgroup#. This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns: [.programlisting] .... 
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
USERS     (,echo,test-domain)     (,foxtrott,test-domain) \
          (,golf,test-domain)
INTERNS   (,able,test-domain)     (,baker,test-domain)
....

Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of brackets represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each group represent:

. The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup.

If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See man:netgroup[5] for details.

Netgroup names longer than 8 characters should not be used. The names are case sensitive and using capital letters for netgroup names is an easy way to distinguish between user, machine and netgroup names.

Some non-FreeBSD NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example:

[.programlisting]
....
BIGGRP1  (,joe1,domain)   (,joe2,domain)   (,joe3,domain) [...]
BIGGRP2  (,joe16,domain)  (,joe17,domain) [...]
BIGGRP3  (,joe31,domain)  (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3
....

Repeat this process if more than 225 (15 times 15) users exist within a single netgroup.

To activate and distribute the new NIS map:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost# and [.filename]#netgroup.byuser#.
Use the map key option of man:ypcat[1] to check if the new NIS maps are available: [source,shell] .... ellington% ypcat -k netgroup ellington% ypcat -k netgroup.byhost ellington% ypcat -k netgroup.byuser .... The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command only produces output if host-specific netgroups were created. The third command is used to get the list of netgroups for a user. To configure a client, use man:vipw[8] to specify the name of the netgroup. For example, on the server named `war`, replace this line: [.programlisting] .... +::::::::: .... with [.programlisting] .... +@IT_EMP::::::::: .... This specifies that only the users defined in the netgroup `IT_EMP` will be imported into this system's password database and only those users are allowed to login to this system. This configuration also applies to the `~` function of the shell and all routines which convert between user names and numerical user IDs. In other words, `cd ~_user_` will not work, `ls -l` will show the numerical ID instead of the username, and `find . -user joe -print` will fail with the message `No such user`. To fix this, import all user entries without allowing them to login into the servers. This can be achieved by adding an extra line: [.programlisting] .... +:::::::::/usr/sbin/nologin .... This line configures the client to import all entries but to replace the shell in those entries with [.filename]#/usr/sbin/nologin#. Make sure that extra line is placed _after_ `+@IT_EMP:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/usr/sbin/nologin# as their login shell and no one will be able to login to the system. To configure the less important servers, replace the old `+:::::::::` on the servers with these lines: [.programlisting] .... +@IT_EMP::::::::: +@IT_APP::::::::: +:::::::::/usr/sbin/nologin .... The corresponding lines for the workstations would be: [.programlisting] .... 
+@IT_EMP::::::::: +@USERS::::::::: +:::::::::/usr/sbin/nologin .... NIS supports the creation of netgroups from other netgroups which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups. For example, one might create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers, and a third netgroup called `USERBOX` for the workstations. Each of these netgroups contains the netgroups that are allowed to login onto these machines. The new entries for the NIS `netgroup` map would look like this: [.programlisting] .... BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS .... This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required. Machine-specific netgroup definitions are another possibility to deal with the policy changes. In this scenario, the [.filename]#/etc/master.passwd# of each system contains two lines starting with "+". The first line adds a netgroup with the accounts allowed to login onto this machine and the second line adds all other accounts with [.filename]#/usr/sbin/nologin# as shell. It is recommended to use the "ALL-CAPS" version of the hostname as the name of the netgroup: [.programlisting] .... +@BOXNAME::::::::: +:::::::::/usr/sbin/nologin .... Once this task is completed on all the machines, there is no longer a need to modify the local versions of [.filename]#/etc/master.passwd# ever again. All further changes can be handled by modifying the NIS map. Here is an example of a possible `netgroup` map for this scenario: [.programlisting] .... 
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1    DEPT2    DEPT3
BIGSRV    IT_EMP   IT_APP
SMALLSRV  IT_EMP   IT_APP   ITINTERN
USERBOX   IT_EMP   ITINTERN USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus-machine
SECURITY  IT_EMP   (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV   (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus-machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

It may not always be advisable to use machine-based netgroups. When deploying a couple of dozen or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.

=== Password Formats

NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard. To check which format a server or client is using, look at this section of [.filename]#/etc/login.conf#:

[.programlisting]
....
default:\
        :passwd_format=des:\
        :copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

In this example, the system is using the DES format.
Other possible values are `blf` for Blowfish and `md5` for MD5 encrypted passwords. If the format on a host needs to be edited to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords for existing user accounts will not be updated until each user changes their password _after_ the login capability database is rebuilt.
====

[[network-ldap]]
== Lightweight Directory Access Protocol (LDAP)

The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users to access several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base.

This section provides a quick start guide for configuring an LDAP server on a FreeBSD system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access.

=== LDAP Terminology and Structure

LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of _attributes_. Each of these attribute sets contains a unique identifier known as a _Distinguished Name_ (DN) which is normally built from several other attributes such as the common or _Relative Distinguished Name_ (RDN).
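A DN is a comma-separated sequence of such components. Splitting a sample DN apart with standard tools makes this structure visible (the value here is purely illustrative):

[source,shell]
....
% echo "uid=trhodes,ou=users,o=example.com" | tr ',' '\n'
uid=trhodes
ou=users
o=example.com
....

Each component names one level of the directory hierarchy, with the most specific level first.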
Similar to how directories have absolute and relative paths, consider a DN as an absolute path and the RDN as the relative path.

An example LDAP entry looks like the following. This example searches for the entry for the specified user account (`uid`), organizational unit (`ou`), and organization (`o`):

[source,shell]
....
% ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base <uid=trhodes,ou=users,o=example.com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

This example entry shows the values for the `dn`, `mail`, `cn`, `uid`, and `telephoneNumber` attributes. The RDN of this entry is `uid=trhodes`, the leading component of its DN.

More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/intro.html].

[[ldap-config]]
=== Configuring an LDAP Server

FreeBSD does not provide a built-in LDAP server. Begin the configuration by installing the package:net/openldap-server[] package or port:

[source,shell]
....
# pkg install openldap-server
....

There is a large set of default options enabled in the package. Review them by running `pkg info openldap-server`. If they are not sufficient (for example if SQL support is needed), please consider recompiling the port using the appropriate crossref:ports[ports-using,framework].

The installation creates the directory [.filename]#/var/db/openldap-data# to hold the data. The directory to store the certificates must be created:

[source,shell]
....
# mkdir /usr/local/etc/openldap/private
....

The next phase is to configure the Certificate Authority. The following commands must be executed from [.filename]#/usr/local/etc/openldap/private#.
This is important as the file permissions need to be restrictive and users should not have access to these files. More detailed information about certificates and their parameters can be found in crossref:security[openssl,"OpenSSL"]. To create the Certificate Authority, start with this command and follow the prompts:

[source,shell]
....
# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt
....

The entries for the prompts may be generic _except_ for the `Common Name`. This entry must be _different_ than the system hostname. If this will be a self-signed certificate, prefix the hostname with `CA` for Certificate Authority.

The next task is to create a certificate signing request and a private key. Input this command and follow the prompts:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout server.key -out server.csr
....

During the certificate generation process, be sure to correctly set the `Common Name` attribute. The Certificate Signing Request must be signed with the Certificate Authority in order to be used as a valid certificate:

[source,shell]
....
# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial
....

The final part of the certificate generation process is to generate and sign the client certificates:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout client.key -out client.csr
# openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key
....

Remember to use the same `Common Name` attribute when prompted. When finished, ensure that a total of eight (8) new files have been generated through the preceding commands.

The daemon running the OpenLDAP server is [.filename]#slapd#. Its configuration is performed through [.filename]#slapd.ldif#: the old [.filename]#slapd.conf# has been deprecated by OpenLDAP.
http://www.openldap.org/doc/admin24/slapdconf2.html[Configuration examples] for [.filename]#slapd.ldif# are available and can also be found in [.filename]#/usr/local/etc/openldap/slapd.ldif.sample#. Options are documented in slapd-config(5). Each section of [.filename]#slapd.ldif#, like all the other LDAP attribute sets, is uniquely identified through a DN. Be sure that no blank lines are left between the `dn:` statement and the desired end of the section. In the following example, TLS will be used to implement a secure channel. The first section represents the global configuration: [.programlisting] .... # # See slapd-config(5) for details on configuration options. # This file should NOT be world readable. # dn: cn=config objectClass: olcGlobal cn: config # # # Define global ACLs to disable default read access. # olcArgsFile: /var/run/openldap/slapd.args olcPidFile: /var/run/openldap/slapd.pid olcTLSCertificateFile: /usr/local/etc/openldap/server.crt olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt #olcTLSCipherSuite: HIGH olcTLSProtocolMin: 3.1 olcTLSVerifyClient: never .... The Certificate Authority, server certificate and server private key files must be specified here. It is recommended to let the clients choose the security cipher and omit option `olcTLSCipherSuite` (incompatible with TLS clients other than [.filename]#openssl#). Option `olcTLSProtocolMin` lets the server require a minimum security level: it is recommended. While verification is mandatory for the server, it is not for the client: `olcTLSVerifyClient: never`. The second section is about the backend modules and can be configured as follows: [.programlisting] .... 
#
# Load dynamic backend modules:
#
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/local/libexec/openldap
olcModuleload: back_mdb.la
#olcModuleload: back_bdb.la
#olcModuleload: back_hdb.la
#olcModuleload: back_ldap.la
#olcModuleload: back_passwd.la
#olcModuleload: back_shell.la
....

The third section is devoted to loading the needed `ldif` schemas to be used by the databases: they are essential.

[.programlisting]
....
dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
include: file:///usr/local/etc/openldap/schema/core.ldif
include: file:///usr/local/etc/openldap/schema/cosine.ldif
include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif
include: file:///usr/local/etc/openldap/schema/nis.ldif
....

Next, the frontend configuration section:

[.programlisting]
....
# Frontend settings
#
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
olcAccess: to * by * read
#
# Sample global access control policy:
#   Root DSE: allow anyone to read it
#   Subschema (sub)entry DSE: allow anyone to read it
#   Other DSEs:
#       Allow self write access
#       Allow authenticated users read access
#       Allow anonymous users to authenticate
#
#olcAccess: to dn.base="" by * read
#olcAccess: to dn.base="cn=Subschema" by * read
#olcAccess: to *
#   by self write
#   by users read
#   by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
#
olcPasswordHash: {SSHA}
# {SSHA} is already the default for olcPasswordHash
....

Another section is devoted to the _configuration backend_. The only way to later access the OpenLDAP server configuration is as a global super-user.

[.programlisting]
....
dn: olcDatabase={0}config,cn=config objectClass: olcDatabaseConfig olcDatabase: {0}config olcAccess: to * by * none olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U .... The default administrator username is `cn=config`. Type [.filename]#slappasswd# in a shell, choose a password and use its hash in `olcRootPW`. If this option is not specified now, before [.filename]#slapd.ldif# is imported, no one will be later able to modify the _global configuration_ section. The last section is about the database backend: [.programlisting] .... ####################################################################### # LMDB database definitions ####################################################################### # dn: olcDatabase=mdb,cn=config objectClass: olcDatabaseConfig objectClass: olcMdbConfig olcDatabase: mdb olcDbMaxSize: 1073741824 olcSuffix: dc=domain,dc=example olcRootDN: cn=mdbadmin,dc=domain,dc=example # Cleartext passwords, especially for the rootdn, should # be avoided. See slappasswd(8) and slapd-config(5) for details. # Use of strong authentication encouraged. olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+ # The database directory MUST exist prior to running slapd AND # should only be accessible by the slapd and slap tools. # Mode 700 recommended. olcDbDirectory: /var/db/openldap-data # Indices to maintain olcDbIndex: objectClass eq .... This database hosts the _actual contents_ of the LDAP directory. Types other than `mdb` are available. Its super-user, not to be confused with the global one, is configured here: a (possibly custom) username in `olcRootDN` and the password hash in `olcRootPW`; [.filename]#slappasswd# can be used as before. This http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;hb=HEAD[repository] contains four examples of [.filename]#slapd.ldif#. 
To convert an existing [.filename]#slapd.conf# into [.filename]#slapd.ldif#, refer to http://www.openldap.org/doc/admin24/slapdconf2.html[this page] (please note that this may introduce some unnecessary options).

When the configuration is completed, [.filename]#slapd.ldif# must be placed in an empty directory. It is recommended to create it as:

[source,shell]
....
# mkdir /usr/local/etc/openldap/slapd.d/
....

Import the configuration database:

[source,shell]
....
# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif
....

Start the [.filename]#slapd# daemon:

[source,shell]
....
# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/
....

Option `-d` can be used for debugging, as specified in slapd(8). To verify that the server is running and working:

[source,shell]
....
# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
# extended LDIF
#
# LDAPv3
# base <> with scope baseObject
# filter: (objectclass=*)
# requesting: namingContexts
#

#
dn:
namingContexts: dc=domain,dc=example

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

The server must still be trusted. If that has never been done before, follow these instructions. Install the OpenSSL package or port:

[source,shell]
....
# pkg install openssl
....

From the directory where [.filename]#ca.crt# is stored (in this example, [.filename]#/usr/local/etc/openldap#), run:

[source,shell]
....
# c_rehash .
....

Both the CA and the server certificate are now correctly recognized in their respective roles. To verify this, run this command from the [.filename]#server.crt# directory:

[source,shell]
....
# openssl verify -verbose -CApath . server.crt
....

If [.filename]#slapd# was running, restart it. As stated in [.filename]#/usr/local/etc/rc.d/slapd#, to properly run [.filename]#slapd# at boot the following lines must be added to [.filename]#/etc/rc.conf#:

[.programlisting]
....
slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"
slapd_cn_config="YES"
....

[.filename]#slapd# does not provide debugging at boot. Check [.filename]#/var/log/debug.log#, `dmesg -a` and [.filename]#/var/log/messages# for this purpose.

The following example adds the group `team` and the user `john` to the `domain.example` LDAP database, which is still empty. First, create the file [.filename]#domain.ldif#:

[source,shell]
....
# cat domain.ldif
dn: dc=domain,dc=example
objectClass: dcObject
objectClass: organization
o: domain.example
dc: domain

dn: ou=groups,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: groups

dn: ou=users,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: users

dn: cn=team,ou=groups,dc=domain,dc=example
objectClass: top
objectClass: posixGroup
cn: team
gidNumber: 10001

dn: uid=john,ou=users,dc=domain,dc=example
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: John McUser
uid: john
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/john/
loginShell: /usr/bin/bash
userPassword: secret
....

See the OpenLDAP documentation for more details. Use [.filename]#slappasswd# to replace the plain text password `secret` with a hash in `userPassword`. The path specified as `loginShell` must exist in all the systems where `john` is allowed to log in. Finally, use the `mdb` administrator to modify the database:

[source,shell]
....
# ldapadd -W -D "cn=mdbadmin,dc=domain,dc=example" -f domain.ldif
....

Modifications to the _global configuration_ section can only be performed by the global super-user. For example, assume that the option `olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` was initially specified and must now be deleted. First, create a file that contains the following:

[source,shell]
....
# cat global_mod
dn: cn=config
changetype: modify
delete: olcTLSCipherSuite
....
Then, apply the modifications:

[source,shell]
....
# ldapmodify -f global_mod -x -D "cn=config" -W
....

When asked, provide the password chosen in the _configuration backend_ section. The username is not required: here, `cn=config` represents the DN of the database section to be modified. Alternatively, use `ldapmodify` to delete a single line of the database, or `ldapdelete` to delete a whole entry.

If something goes wrong, or if the global super-user cannot access the configuration backend, it is possible to delete and re-write the whole configuration:

[source,shell]
....
# rm -rf /usr/local/etc/openldap/slapd.d/
....

[.filename]#slapd.ldif# can then be edited and imported again. Please follow this procedure only when no other solution is available.

This is the configuration of the server only. The same machine can also host an LDAP client, with its own separate configuration.

[[network-dhcp]]
== Dynamic Host Configuration Protocol (DHCP)

The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to a network in order to be assigned the necessary addressing information for communication on that network. FreeBSD includes the OpenBSD version of `dhclient` which is used by the client to obtain the addressing information. FreeBSD does not install a DHCP server, but several servers are available in the FreeBSD Ports Collection. The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Informational resources are also available at http://www.isc.org/downloads/dhcp/[isc.org/downloads/dhcp/].

This section describes how to use the built-in DHCP client. It then describes how to install and configure a DHCP server.

[NOTE]
====
In FreeBSD, the man:bpf[4] device is needed by both the DHCP server and DHCP client. This device is included in the [.filename]#GENERIC# kernel that is installed with FreeBSD. Users who prefer to create a custom kernel need to keep this device if DHCP is used.
It should be noted that [.filename]#bpf# also allows privileged users to run network packet sniffers on that system.
====

=== Configuring a DHCP Client

DHCP client support is included in the FreeBSD installer, making it easy to configure a newly installed system to automatically receive its networking addressing information from an existing DHCP server. Refer to crossref:bsdinstall[bsdinstall-post,"Accounts, Time Zone, Services and Hardening"] for examples of network configuration.

When `dhclient` is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests use UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as a subnet mask, default gateway, and DNS server addresses. This information is in the form of a DHCP "lease" and is valid for a configurable time. This allows stale IP addresses for clients no longer connected to the network to automatically be reused. DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in man:dhcp-options[5].

By default, when a FreeBSD system boots, its DHCP client runs in the background, or _asynchronously_. Other startup scripts continue to run while the DHCP process completes, which speeds up system startup.

Background DHCP works well when the DHCP server responds quickly to the client's requests. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has assigned the network addressing information, they will fail. Using DHCP in _synchronous_ mode prevents this problem as it pauses startup until the DHCP configuration has completed.

This line in [.filename]#/etc/rc.conf# is used to configure background or asynchronous mode:

[.programlisting]
....
ifconfig_fxp0="DHCP"
....

This line may already exist if the system was configured to use DHCP during installation.
Replace the _fxp0_ shown in these examples with the name of the interface to be dynamically configured, as described in crossref:config[config-network-setup,“Setting Up Network Interface Cards”]. To instead configure the system to use synchronous mode, and to pause during startup while DHCP completes, use "`SYNCDHCP`": [.programlisting] .... ifconfig_fxp0="SYNCDHCP" .... Additional client options are available. Search for `dhclient` in man:rc.conf[5] for details. The DHCP client uses the following files: * [.filename]#/etc/dhclient.conf# + The configuration file used by `dhclient`. Typically, this file contains only comments as the defaults are suitable for most clients. This configuration file is described in man:dhclient.conf[5]. * [.filename]#/sbin/dhclient# + More information about the command itself can be found in man:dhclient[8]. * [.filename]#/sbin/dhclient-script# + The FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any user modification to function properly. * [.filename]#/var/db/dhclient.leases.interface# + The DHCP client keeps a database of valid leases in this file, which is written as a log and is described in man:dhclient.leases[5]. [[network-dhcp-server]] === Installing and Configuring a DHCP Server This section demonstrates how to configure a FreeBSD system to act as a DHCP server using the Internet Systems Consortium (ISC) implementation of the DHCP server. This implementation and its documentation can be installed using the package:net/isc-dhcp44-server[] package or port. The installation of package:net/isc-dhcp44-server[] installs a sample configuration file. Copy [.filename]#/usr/local/etc/dhcpd.conf.example# to [.filename]#/usr/local/etc/dhcpd.conf# and make any edits to this new file. The configuration file is comprised of declarations for subnets and hosts which define the information that is provided to DHCP clients. 
For example, these lines configure the following:

[.programlisting]
....
option domain-name "example.org";<.>
option domain-name-servers ns1.example.org;<.>
option subnet-mask 255.255.255.0;<.>

default-lease-time 600;<.>
max-lease-time 72400;<.>
ddns-update-style none;<.>

subnet 10.254.239.0 netmask 255.255.255.224 {
  range 10.254.239.10 10.254.239.20;<.>
  option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.>
}

host fantasia {
  hardware ethernet 08:00:07:26:c0:a5;<.>
  fixed-address fantasia.fugue.com;<.>
}
....

<.> This option specifies the default search domain that will be provided to clients.
Refer to man:resolv.conf[5] for more information.
<.> This option specifies a comma separated list of DNS servers that the client should use.
They can be listed by their Fully Qualified Domain Names (FQDN), as seen in the example, or by their IP addresses.
<.> The subnet mask that will be provided to clients.
<.> The default lease expiry time in seconds.
A client can be configured to override this value.
<.> The maximum allowed length of time, in seconds, for a lease.
Should a client request a longer lease, a lease will still be issued, but it will only be valid for `max-lease-time`.
<.> The default of `none` disables dynamic DNS updates.
Changing this to `interim` configures the DHCP server to update a DNS server whenever it hands out a lease so that the DNS server knows which IP addresses are associated with which computers in the network.
Do not change the default setting unless the DNS server has been configured to support dynamic DNS.
<.> This line creates a pool of available IP addresses which are reserved for allocation to DHCP clients.
The range of addresses must be valid for the network or subnet specified in the previous line.
<.> Declares the default gateway that is valid for the network or subnet specified before the opening `{` bracket.
<.> Specifies the hardware MAC address of a client so that the DHCP server can recognize the client when it makes a request.
<.> Specifies that this host should always be given the same IP address.
Using the hostname is correct, since the DHCP server will resolve the hostname before returning the lease information.

This configuration file supports many more options.
Refer to dhcpd.conf(5), installed with the server, for details and examples.

Once the configuration of [.filename]#dhcpd.conf# is complete, enable the DHCP server in [.filename]#/etc/rc.conf#:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Replace `dc0` with the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests.

Start the server by issuing the following command:

[source,shell]
....
# service isc-dhcpd start
....

Any future changes to the configuration of the server will require the dhcpd service to be stopped and then started using man:service[8].

The DHCP server uses the following files.
Note that the manual pages are installed with the server software.

* [.filename]#/usr/local/sbin/dhcpd#
+
More information about the dhcpd server can be found in dhcpd(8).
* [.filename]#/usr/local/etc/dhcpd.conf#
+
The server configuration file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server.
This configuration file is described in dhcpd.conf(5).
* [.filename]#/var/db/dhcpd.leases#
+
The DHCP server keeps a database of leases it has issued in this file, which is written as a log.
Refer to dhcpd.leases(5), which gives a slightly longer description.
* [.filename]#/usr/local/sbin/dhcrelay#
+
This daemon is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network.
If this functionality is required, install the package:net/isc-dhcp44-relay[] package or port.
The installation includes dhcrelay(8) which provides more detail.

[[network-dns]]
== Domain Name System (DNS)

Domain Name System (DNS) is the protocol through which domain names are mapped to IP addresses, and vice versa.
DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information.
It is not necessary to run a name server to perform DNS lookups on a system.

The following table describes some of the terms associated with DNS:

.DNS Terminology
[cols="1,1", frame="none", options="header"]
|===
| Term
| Definition

|Forward DNS
|Mapping of hostnames to IP addresses.

|Origin
|Refers to the domain covered in a particular zone file.

|Resolver
|A system process through which a machine queries a name server for zone information.

|Reverse DNS
|Mapping of IP addresses to hostnames.

|Root zone
|The beginning of the Internet zone hierarchy.
All zones fall under the root zone, similar to how all files in a file system fall under the root directory.

|Zone
|An individual domain, subdomain, or portion of the DNS administered by the same authority.
|===

Examples of zones:

* `.` is how the root zone is usually referred to in documentation.
* `org.` is a Top Level Domain (TLD) under the root zone.
* `example.org.` is a zone under the `org.` TLD.
* `1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall under the `192.168.1.*` IP address space.

As one can see, the more specific part of a hostname appears to its left.
For example, `example.org.` is more specific than `org.`, just as `org.` is more specific than the root zone.
The layout of each part of a hostname is much like a file system: the [.filename]#/dev# directory falls within the root, and so on.

=== Reasons to Run a Name Server

Name servers generally come in two forms: authoritative name servers, and caching (also known as resolving) name servers.
An authoritative name server is needed when:

* One wants to serve DNS information to the world, replying authoritatively to queries.
* A domain, such as `example.org`, is registered and IP addresses need to be assigned to hostnames under it.
* An IP address block requires reverse DNS entries (IP to hostname).
* A backup or second name server, called a slave, will reply to queries.

A caching name server is needed when:

* A local DNS server may cache and respond more quickly than querying an outside name server.

When one queries for `www.FreeBSD.org`, the resolver usually queries the uplink ISP's name server, and retrieves the reply.
With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server.
Additional queries will not have to go outside the local network, since the information is cached locally.

=== DNS Server Configuration

Unbound is provided in the FreeBSD base system.
By default, it will provide DNS resolution to the local machine only.
While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the FreeBSD Ports Collection.

To enable Unbound, add the following to [.filename]#/etc/rc.conf#:

[.programlisting]
....
local_unbound_enable="YES"
....

Any existing nameservers in [.filename]#/etc/resolv.conf# will be configured as forwarders in the new Unbound configuration.

[NOTE]
====
If any of the listed nameservers do not support DNSSEC, local DNS resolution will fail.
Be sure to test each nameserver and remove any that fail the test.
The following command will show the trust tree or a failure for a nameserver running on `192.168.1.1`:

[source,shell]
....
% drill -S FreeBSD.org @192.168.1.1
....
====

Once each nameserver is confirmed to support DNSSEC, start Unbound:

[source,shell]
....
# service local_unbound onestart
....
This will take care of updating [.filename]#/etc/resolv.conf# so that queries for DNSSEC secured domains will now work.
For example, run the following to validate the FreeBSD.org DNSSEC trust tree:

[source,shell]
....
% drill -S FreeBSD.org
;; Number of trusted keys: 1
;; Chasing: freebsd.org. A

DNSSEC Trust tree:
freebsd.org. (A)
|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)
|---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)
|---freebsd.org. (DS keytag: 32659 digest type: 2)
|---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)
|---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)
|---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)
|---org. (DS keytag: 21366 digest type: 1)
|   |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
|   |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
|---org. (DS keytag: 21366 digest type: 2)
|---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
|---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
;; Chase successful
....

[[network-apache]]
== Apache HTTP Server

The open source Apache HTTP Server is the most widely used web server.
FreeBSD does not install this web server by default, but it can be installed from the package:www/apache24[] package or port.

This section summarizes how to configure and start version 2._x_ of the Apache HTTP Server on FreeBSD.
For more detailed information about Apache 2.X and its configuration directives, refer to http://httpd.apache.org/[httpd.apache.org].

=== Configuring and Starting Apache

In FreeBSD, the main Apache HTTP Server configuration file is installed as [.filename]#/usr/local/etc/apache2x/httpd.conf#, where _x_ represents the version number.
This ASCII text file begins comment lines with a `#`.
The most frequently modified directives are:

`ServerRoot "/usr/local"`::
Specifies the default directory hierarchy for the Apache installation.
Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root and configuration files are stored in the [.filename]#etc/apache2x# subdirectory.

`ServerAdmin you@example.com`::
Change this to the email address that should receive reports of problems with the server.
This address also appears on some server-generated pages, such as error documents.

`ServerName www.example.com:80`::
Allows an administrator to set a hostname which is sent back to clients for the server.
For example, `www` can be used instead of the actual hostname.
If the system does not have a registered DNS name, enter its IP address instead.
If the server will listen on an alternate port, change `80` to the alternate port number.

`DocumentRoot "/usr/local/www/apache2__x__/data"`::
The directory where documents will be served from.
By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.

It is always a good idea to make a backup copy of the default Apache configuration file before making changes.
When the configuration of Apache is complete, save the file and verify the configuration using `apachectl`.
Running `apachectl configtest` should return `Syntax OK`.

To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache24_enable="YES"
....

If Apache should be started with non-default options, the following line may be added to [.filename]#/etc/rc.conf# to specify the needed flags:

[.programlisting]
....
apache24_flags=""
....

If apachectl does not report configuration errors, start `httpd` now:

[source,shell]
....
# service apache24 start
....

The `httpd` service can be tested by entering `http://_localhost_` in a web browser, replacing _localhost_ with the fully-qualified domain name of the machine running `httpd`.
The default web page that is displayed is [.filename]#/usr/local/www/apache24/data/index.html#.
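Taken together, the directives described above might look like this in a lightly customized [.filename]#httpd.conf# (the server name and email address are placeholder values):

[.programlisting]
....
ServerRoot "/usr/local"
ServerAdmin admin@example.com
ServerName www.example.com:80
DocumentRoot "/usr/local/www/apache24/data"
....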
The Apache configuration can be tested for errors after making subsequent configuration changes while `httpd` is running using the following command:

[source,shell]
....
# service apache24 configtest
....

[NOTE]
====
It is important to note that `configtest` is not an man:rc[8] standard, and should not be expected to work for all startup scripts.
====

=== Virtual Hosting

Virtual hosting allows multiple websites to run on one Apache server.
The virtual hosts can be _IP-based_ or _name-based_.
IP-based virtual hosting uses a different IP address for each website.
Name-based virtual hosting uses the client's HTTP/1.1 headers to figure out the hostname, which allows the websites to share the same IP address.

To set up Apache to use name-based virtual hosting, add a `VirtualHost` block for each website.
For example, for the webserver named `www.domain.tld` with a virtual domain of `www.someotherdomain.tld`, add the following entries to [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
    ServerName www.domain.tld
    DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
    ServerName www.someotherdomain.tld
    DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

For each virtual host, replace the values for `ServerName` and `DocumentRoot` with the values to be used.

For more information about setting up virtual hosts, consult the official Apache documentation at: http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache Modules

Apache uses modules to augment the functionality provided by the basic server.
Refer to http://httpd.apache.org/docs/current/mod/[http://httpd.apache.org/docs/current/mod/] for a complete list of the available modules and their configuration details.

In FreeBSD, some modules can be compiled with the package:www/apache24[] port.
Type `make config` within [.filename]#/usr/ports/www/apache24# to see which modules are available and which are enabled by default.
If the module is not compiled with the port, the FreeBSD Ports Collection provides an easy way to install many modules.
This section describes three of the most commonly used modules.

==== [.filename]#mod_ssl#

The [.filename]#mod_ssl# module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSLv3) and Transport Layer Security (TLSv1) protocols.
This module provides everything necessary to request a signed certificate from a trusted certificate signing authority to run a secure web server on FreeBSD.

In FreeBSD, the [.filename]#mod_ssl# module is enabled by default in both the package and the port.
The available configuration directives are explained at http://httpd.apache.org/docs/current/mod/mod_ssl.html[http://httpd.apache.org/docs/current/mod/mod_ssl.html].

==== [.filename]#mod_perl#

The [.filename]#mod_perl# module makes it possible to write Apache modules in Perl.
In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time.

[.filename]#mod_perl# can be installed using the package:www/mod_perl2[] package or port.
Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/index.html].

==== [.filename]#mod_php#

_PHP: Hypertext Preprocessor_ (PHP) is a general-purpose scripting language that is especially suited for web development.
Capable of being embedded into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of allowing web developers to write dynamically generated webpages quickly.

To gain support for PHP5 for the Apache web server, install the package:www/mod_php56[] package or port.
This will install and configure the modules required to support dynamic PHP applications.
The installation will automatically add this line to [.filename]#/usr/local/etc/apache24/httpd.conf#:

[.programlisting]
....
LoadModule php5_module libexec/apache24/libphp5.so
....

Then, perform a graceful restart to load the PHP module:

[source,shell]
....
# apachectl graceful
....

The PHP support provided by package:www/mod_php56[] is limited.
Additional support can be installed using the package:lang/php56-extensions[] port which provides a menu driven interface to the available PHP extensions.

Alternatively, individual extensions can be installed using the appropriate port.
For instance, to add PHP support for the MySQL database server, install package:databases/php56-mysql[].

After installing an extension, the Apache server must be reloaded to pick up the new configuration changes:

[source,shell]
....
# apachectl graceful
....

=== Dynamic Websites

In addition to mod_perl and mod_php, other languages are available for creating dynamic web content.
These include Django and Ruby on Rails.

==== Django

Django is a BSD-licensed framework designed to allow developers to write high performance, elegant web applications quickly.
It provides an object-relational mapper so that data types are developed as Python objects.
A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL.
It also provides an extensible template system so that the logic of the application is separated from the HTML presentation.

Django depends on [.filename]#mod_python#, and an SQL database engine.
In FreeBSD, the package:www/py-django[] port automatically installs [.filename]#mod_python# and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite.
To change the database engine, type `make config` within [.filename]#/usr/ports/www/py-django#, then install the port.

Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter.
This interpreter is used to call the application for specific URLs on the site.
To configure Apache to pass requests for certain URLs to the web application, add the following to [.filename]#httpd.conf#, specifying the full path to the project directory:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/dir/to/the/django/packages/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonAutoReload On
    PythonDebug On
</Location>
....

Refer to https://docs.djangoproject.com[https://docs.djangoproject.com] for more information on how to use Django.

==== Ruby on Rails

Ruby on Rails is another open source web framework that provides a full development stack.
It is optimized to make web developers more productive and capable of writing powerful applications quickly.
On FreeBSD, it can be installed using the package:www/rubygem-rails[] package or port.

Refer to http://guides.rubyonrails.org[http://guides.rubyonrails.org] for more information on how to use Ruby on Rails.

[[network-ftp]]
== File Transfer Protocol (FTP)

The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server.
FreeBSD includes FTP server software, ftpd, in the base system.

FreeBSD provides several configuration files for controlling access to the FTP server.
This section summarizes these files.
Refer to man:ftpd[8] for more details about the built-in FTP server.

=== Configuration

The most important configuration step is deciding which accounts will be allowed access to the FTP server.
A FreeBSD system has a number of system accounts which should not be allowed FTP access.
The list of users disallowed any FTP access can be found in [.filename]#/etc/ftpusers#.
By default, it includes system accounts.
Additional users that should not be allowed access to FTP can be added.

In some cases it may be desirable to restrict the access of some users without preventing them completely from using FTP.
This can be accomplished by creating [.filename]#/etc/ftpchroot# as described in man:ftpchroot[5].
This file lists users and groups subject to FTP access restrictions.

To enable anonymous FTP access to the server, create a user named `ftp` on the FreeBSD system.
Users will then be able to log on to the FTP server with a username of `ftp` or `anonymous`.
When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password.
The FTP server will call man:chroot[2] when an anonymous user logs in, to restrict access to only the home directory of the `ftp` user.

There are two text files that can be created to specify welcome messages to be displayed to FTP clients.
The contents of [.filename]#/etc/ftpwelcome# will be displayed to users before they reach the login prompt.
After a successful login, the contents of [.filename]#/etc/ftpmotd# will be displayed.
Note that the path to this file is relative to the login environment, so the contents of [.filename]#~ftp/etc/ftpmotd# would be displayed for anonymous users.

Once the FTP server has been configured, set the appropriate variable in [.filename]#/etc/rc.conf# to start the service during boot:

[.programlisting]
....
ftpd_enable="YES"
....

To start the service now:

[source,shell]
....
# service ftpd start
....

Test the connection to the FTP server by typing:

[source,shell]
....
% ftp localhost
....

The ftpd daemon uses man:syslog[3] to log messages.
By default, the system log daemon will write messages related to FTP in [.filename]#/var/log/xferlog#.
The location of the FTP log can be modified by changing the following line in [.filename]#/etc/syslog.conf#:

[.programlisting]
....
ftp.info /var/log/xferlog
....

[NOTE]
====
Be aware of the potential problems involved with running an anonymous FTP server.
In particular, think twice about allowing anonymous users to upload files.
It may turn out that the FTP site becomes a forum for the trade of unlicensed commercial software or worse.
If anonymous FTP uploads are required, then verify the permissions so that these files cannot be read by other anonymous users until they have been reviewed by an administrator.
====

[[network-samba]]
== File and Print Services for Microsoft(R) Windows(R) Clients (Samba)

Samba is a popular open source software package that provides file and print services using the SMB/CIFS protocol.
This protocol is built into Microsoft(R) Windows(R) systems.
It can be added to non-Microsoft(R) Windows(R) systems by installing the Samba client libraries.
The protocol allows clients to access shared data and printers.
These shares can be mapped as a local disk drive and shared printers can be used as if they were local printers.

On FreeBSD, the Samba client libraries can be installed using the package:net/samba410[] port or package.
The client provides the ability for a FreeBSD system to access SMB/CIFS shares in a Microsoft(R) Windows(R) network.

A FreeBSD system can also be configured to act as a Samba server by installing the same package:net/samba410[] port or package.
This allows the administrator to create SMB/CIFS shares on the FreeBSD system which can be accessed by clients running Microsoft(R) Windows(R) or the Samba client libraries.

=== Server Configuration

Samba is configured in [.filename]#/usr/local/etc/smb4.conf#.
This file must be created before Samba can be used.

A simple [.filename]#smb4.conf# to share directories and printers with Windows(R) clients in a workgroup is shown here.
For more complex setups involving LDAP or Active Directory, it is easier to use man:samba-tool[8] to create the initial [.filename]#smb4.conf#.

[.programlisting]
....
[global]
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = ExampleMachine
wins support = Yes
security = user
passdb backend = tdbsam

# Example: share /usr/src accessible only to 'developer' user
[src]
path = /usr/src
valid users = developer
writable = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755
....

==== Global Settings

Settings that describe the network are added in [.filename]#/usr/local/etc/smb4.conf#:

`workgroup`::
The name of the workgroup to be served.

`netbios name`::
The NetBIOS name by which a Samba server is known.
By default, it is the same as the first component of the host's DNS name.

`server string`::
The string that will be displayed in the output of `net view` and some other networking tools that seek to display descriptive text about the server.

`wins support`::
Whether Samba will act as a WINS server.
Do not enable support for WINS on more than one server on the network.

==== Security Settings

The most important settings in [.filename]#/usr/local/etc/smb4.conf# are the security model and the backend password format.
These directives control the options:

`security`::
The most common settings are `security = share` and `security = user`.
If the clients use usernames that are the same as their usernames on the FreeBSD machine, user level security should be used.
This is the default security policy and it requires clients to first log on before they can access shared resources.
+
In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource.
This was the default security model for older versions of Samba.

`passdb backend`::
Samba has several different backend authentication models.
Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file.
The recommended authentication method, `tdbsam`, is ideal for simple networks and is covered here.
For larger or more complex networks, `ldapsam` is recommended.
`smbpasswd` was the former default and is now obsolete.

==== Samba Users

FreeBSD user accounts must be mapped to the `SambaSAMAccount` database for Windows(R) clients to access the share.
Map existing FreeBSD user accounts using man:pdbedit[8]:

[source,shell]
....
# pdbedit -a username
....

This section has only mentioned the most commonly used settings.
Refer to the https://wiki.samba.org[Official Samba Wiki] for additional information about the available configuration options.

=== Starting Samba

To enable Samba at boot time, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
samba_server_enable="YES"
....

To start Samba now:

[source,shell]
....
# service samba_server start
Performing sanity check on Samba configuration: OK
Starting nmbd.
Starting smbd.
....

Samba consists of three separate daemons.
Both the nmbd and smbd daemons are started by `samba_server_enable`.
If winbind name resolution is also required, set:

[.programlisting]
....
winbindd_enable="YES"
....

Samba can be stopped at any time by typing:

[source,shell]
....
# service samba_server stop
....

Samba is a complex software suite with functionality that allows broad integration with Microsoft(R) Windows(R) networks.
For more information about functionality beyond the basic configuration described here, refer to https://www.samba.org[https://www.samba.org].

[[network-ntp]]
== Clock Synchronization with NTP

Over time, a computer's clock is prone to drift.
This is problematic as many network services require the computers on a network to share the same accurate time.
Accurate time is also needed to ensure that file timestamps stay consistent.
The Network Time Protocol (NTP) is one way to provide clock accuracy in a network.

FreeBSD includes man:ntpd[8] which can be configured to query other NTP servers to synchronize the clock on that machine or to provide time services to other computers in the network.
This section describes how to configure ntpd on FreeBSD.
Further documentation can be found in [.filename]#/usr/share/doc/ntp/# in HTML format.

=== NTP Configuration

On FreeBSD, the built-in ntpd can be used to synchronize a system's clock.
Ntpd is configured using man:rc.conf[5] variables and [.filename]#/etc/ntp.conf#, as detailed in the following sections.

Ntpd communicates with its network peers using UDP packets.
Any firewalls between your machine and its NTP peers must be configured to allow UDP packets in and out on port 123.

==== The [.filename]#/etc/ntp.conf# file

Ntpd reads [.filename]#/etc/ntp.conf# to determine which NTP servers to query.
Choosing several NTP servers is recommended in case one of the servers becomes unreachable or its clock proves unreliable.
As ntpd receives responses, it favors reliable servers over the less reliable ones.
The servers which are queried can be local to the network, provided by an ISP, or selected from an http://support.ntp.org/bin/view/Servers/WebHome[online list of publicly accessible NTP servers].
When choosing a public NTP server, select one that is geographically close and review its usage policy.

The `pool` configuration keyword selects one or more servers from a pool of servers.
An http://support.ntp.org/bin/view/Servers/NTPPoolServers[online list of publicly accessible NTP pools] is available, organized by geographic area.
In addition, FreeBSD provides a project-sponsored pool, `0.freebsd.pool.ntp.org`.

.Sample [.filename]#/etc/ntp.conf#
[example]
====
This is a simple example of an [.filename]#ntp.conf# file.
It can safely be used as-is; it contains the recommended `restrict` options for operation on a publicly-accessible network connection.

[.programlisting]
....
# Disallow ntpq control/query access. Allow peers to be added only
# based on pool and server statements in this file.
restrict default limited kod nomodify notrap noquery nopeer
restrict source limited kod nomodify notrap noquery

# Allow unrestricted access from localhost for queries and control.
restrict 127.0.0.1
restrict ::1

# Add a specific server.
server ntplocal.example.com iburst

# Add FreeBSD pool servers until 3-6 good servers are available.
tos minclock 3 maxclock 6
pool 0.freebsd.pool.ntp.org iburst

# Use a local leap-seconds file.
leapfile "/var/db/ntpd.leap-seconds.list"
....
====

The format of this file is described in man:ntp.conf[5].
The descriptions below provide a quick overview of just the keywords used in the sample file above.

By default, an NTP server is accessible to any network host.
The `restrict` keyword controls which systems can access the server.
Multiple `restrict` entries are supported, each one refining the restrictions given in previous statements.
The values shown in the example grant the local system full query and control access, while allowing remote systems only the ability to query the time.
For more details, refer to the `Access Control Support` subsection of man:ntp.conf[5].

The `server` keyword specifies a single server to query.
The file can contain multiple server keywords, with one server listed on each line.

The `pool` keyword specifies a pool of servers.
Ntpd will add one or more servers from this pool as needed to reach the number of peers specified using the `tos minclock` value.
The `iburst` keyword directs ntpd to perform a burst of eight quick packet exchanges with a server when contact is first established, to help quickly synchronize system time.

The `leapfile` keyword specifies the location of a file containing information about leap seconds.
The file is updated automatically by man:periodic[8].
The file location specified by this keyword must match the location set in the `ntp_db_leapfile` variable in [.filename]#/etc/rc.conf#.

==== NTP entries in [.filename]#/etc/rc.conf#

Set `ntpd_enable=YES` to start ntpd at boot time.
Once `ntpd_enable=YES` has been added to [.filename]#/etc/rc.conf#, ntpd can be started immediately without rebooting the system by typing:

[source,shell]
....
# service ntpd start
....

Only `ntpd_enable` must be set to use ntpd.
The [.filename]#rc.conf# variables listed below may also be set as needed.

Set `ntpd_sync_on_start=YES` to allow ntpd to step the clock any amount, one time at startup.
Normally ntpd will log an error message and exit if the clock is off by more than 1000 seconds.
This option is especially useful on systems without a battery-backed realtime clock.

Set `ntpd_oomprotect=YES` to protect the ntpd daemon from being killed by the system attempting to recover from an Out Of Memory (OOM) condition.

Set `ntpd_config=` to the location of an alternate [.filename]#ntp.conf# file.

Set `ntpd_flags=` to contain any other ntpd flags as needed, but avoid using these flags which are managed internally by [.filename]#/etc/rc.d/ntpd#:

* `-p` (pid file location)
* `-c` (set `ntpd_config=` instead)

==== Ntpd and the unprivileged `ntpd` user

Ntpd on FreeBSD can start and run as an unprivileged user.
Doing so requires the man:mac_ntpd[4] policy module.
The [.filename]#/etc/rc.d/ntpd# startup script first examines the NTP configuration.
If possible, it loads the `mac_ntpd` module, then starts ntpd as unprivileged user `ntpd` (user id 123).
To avoid problems with file and directory access, the startup script will not automatically start ntpd as `ntpd` when the configuration contains any file-related options.
The presence of any of the following in `ntpd_flags` requires manual configuration as described below to run as the `ntpd` user:

* -f or --driftfile
* -i or --jaildir
* -k or --keyfile
* -l or --logfile
* -s or --statsdir

The presence of any of the following keywords in [.filename]#ntp.conf# requires manual configuration as described below to run as the `ntpd` user:

* crypto
* driftfile
* key
* logdir
* statsdir

To manually configure ntpd to run as user `ntpd` you must:

* Ensure that the `ntpd` user has access to all the files and directories specified in the configuration.
* Arrange for the `mac_ntpd` module to be loaded or compiled into the kernel. See man:mac_ntpd[4] for details.
* Set `ntpd_user="ntpd"` in [.filename]#/etc/rc.conf#

=== Using NTP with a PPP Connection

ntpd does not need a permanent connection to the Internet to function properly. However, if a PPP connection is configured to dial out on demand, NTP traffic should be prevented from triggering a dial out or keeping the connection alive. This can be configured with `filter` directives in [.filename]#/etc/ppp/ppp.conf#. For example:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

For more details, refer to the `PACKET FILTERING` section in man:ppp[8] and the examples in [.filename]#/usr/share/examples/ppp/#.

[NOTE]
====
Some Internet access providers block low-numbered ports, preventing NTP from functioning since replies never reach the machine.
====

[[network-iscsi]]
== iSCSI Initiator and Target Configuration

iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level.
In iSCSI terminology, the system that shares the storage is known as the _target_. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disk(s) are formatted with ZFS, a zvol can be created to use as the iSCSI storage.

The clients which access the iSCSI storage are called _initiators_. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in [.filename]#/dev/# and the device must be separately formatted and mounted.

FreeBSD provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a FreeBSD system as a target or an initiator.

[[network-iscsi-target]]
=== Configuring an iSCSI Target

To configure an iSCSI target, create the [.filename]#/etc/ctl.conf# configuration file, add a line to [.filename]#/etc/rc.conf# to make sure the man:ctld[8] daemon is automatically started at boot, and then start the daemon.

The following is an example of a simple [.filename]#/etc/ctl.conf# configuration file. Refer to man:ctl.conf[5] for a more complete description of this file's available options.

[.programlisting]
....
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
	listen [::]
}

target iqn.2012-06.com.example:target0 {
	auth-group no-authentication
	portal-group pg0

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

The first entry defines the `pg0` portal group. Portal groups define which network addresses the man:ctld[8] daemon will listen on. The `discovery-auth-group no-authentication` entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure man:ctld[8] to listen on all IPv4 (`listen 0.0.0.0`) and IPv6 (`listen [::]`) addresses on the default port of 3260.

It is not necessary to define a portal group as there is a built-in portal group called `default`.
In this case, the difference between `default` and `pg0` is that with `default`, target discovery is always denied, while with `pg0`, it is always allowed.

The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI or a named group of LUNs. This example uses the latter meaning, where `iqn.2012-06.com.example:target0` is the target name. This target name is suitable for testing purposes. For actual use, change `com.example` to the real domain name, reversed. The `2012-06` represents the year and month of acquiring control of that domain name, and `target0` can be any value. Any number of targets can be defined in this configuration file.

The `auth-group no-authentication` line allows all initiators to connect to the specified target and `portal-group pg0` makes the target reachable through the `pg0` portal group.

The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. The `path /data/target0-0` line defines the full path to a file or zvol backing the LUN. That path must exist before starting man:ctld[8]. The second line is optional and specifies the size of the LUN.

Next, to make sure the man:ctld[8] daemon is started at boot, add this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
ctld_enable="YES"
....

To start man:ctld[8] now, run this command:

[source,shell]
....
# service ctld start
....

As the man:ctld[8] daemon is started, it reads [.filename]#/etc/ctl.conf#. If this file is edited after the daemon starts, use this command so that the changes take effect immediately:

[source,shell]
....
# service ctld reload
....

==== Authentication

The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets.
To require a username and password to access targets, modify the configuration as follows:

[.programlisting]
....
auth-group ag0 {
	chap username1 secretsecret
	chap username2 anothersecret
}

portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
	listen [::]
}

target iqn.2012-06.com.example:target0 {
	auth-group ag0
	portal-group pg0

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

The `auth-group` section defines username and password pairs. An initiator trying to connect to `iqn.2012-06.com.example:target0` must first specify a defined username and secret. However, target discovery is still permitted without authentication. To require target discovery authentication, set `discovery-auth-group` to a defined `auth-group` name instead of `no-authentication`.

It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry:

[.programlisting]
....
target iqn.2012-06.com.example:target0 {
	portal-group pg0
	chap username1 secretsecret

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

[[network-iscsi-initiator]]
=== Configuring an iSCSI Initiator

[NOTE]
====
The iSCSI initiator described in this section is supported starting with FreeBSD 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to man:iscontrol[8].
====

The iSCSI initiator requires that the man:iscsid[8] daemon is running. This daemon does not use a configuration file. To start it automatically at boot, add this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
iscsid_enable="YES"
....

To start man:iscsid[8] now, run this command:

[source,shell]
....
# service iscsid start
....

Connecting to a target can be done with or without an [.filename]#/etc/iscsi.conf# configuration file. This section demonstrates both types of connections.
==== Connecting to a Target Without a Configuration File

To connect an initiator to a single target, specify the IP address of the portal and the name of the target:

[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0
....

To verify if the connection succeeded, run `iscsictl` without any arguments. The output should look similar to this:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Connected: da0
....

In this example, the iSCSI session was successfully established, with [.filename]#/dev/da0# representing the attached LUN. If the `iqn.2012-06.com.example:target0` target exports more than one LUN, multiple device nodes will be shown in that section of the output:

[source,shell]
....
Connected: da0 da1 da2
....

Any errors will be reported in the output, as well as the system logs. For example, this message usually means that the man:iscsid[8] daemon is not running:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Waiting for iscsid(8)
....

The following message suggests a networking problem, such as a wrong IP address or port:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.11     Connection refused
....

This message means that the specified target name is wrong:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Not found
....

This message means that the target requires authentication:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Authentication failed
....

To specify a CHAP username and secret, use this syntax:

[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret
....
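A session established this way can also be torn down again. As a sketch, using the session-removal mode documented in man:iscsictl[8]:

[source,shell]
....
# iscsictl -R -t iqn.2012-06.com.example:target0
....

To remove all sessions at once, `iscsictl -Ra` can be used instead.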
==== Connecting to a Target with a Configuration File

To connect using a configuration file, create [.filename]#/etc/iscsi.conf# with contents like this:

[.programlisting]
....
t0 {
	TargetAddress   = 10.10.10.10
	TargetName      = iqn.2012-06.com.example:target0
	AuthMethod      = CHAP
	chapIName       = user
	chapSecret      = secretsecret
}
....

The `t0` specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The `TargetAddress` and `TargetName` are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown.

To connect to the defined target, specify the nickname:

[source,shell]
....
# iscsictl -An t0
....

Alternately, to connect to all targets defined in the configuration file, use:

[source,shell]
....
# iscsictl -Aa
....

To make the initiator automatically connect to all targets in [.filename]#/etc/iscsi.conf#, add the following to [.filename]#/etc/rc.conf#:

[.programlisting]
....
iscsictl_enable="YES"
iscsictl_flags="-Aa"
....

diff --git a/documentation/content/pt-br/books/handbook/mac/_index.adoc b/documentation/content/pt-br/books/handbook/mac/_index.adoc
index 2390ed52a1..f89e12e7d3 100644
--- a/documentation/content/pt-br/books/handbook/mac/_index.adoc
+++ b/documentation/content/pt-br/books/handbook/mac/_index.adoc
@@ -1,810 +1,808 @@
---
title: Chapter 15. Mandatory Access Control
part: Part III.
System Administration
prev: books/handbook/jails
next: books/handbook/audit
showBookMenu: true
weight: 19
params:
  path: "/books/handbook/mac/"
---

[[mac]]
= Mandatory Access Control
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 15
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/mac/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]

ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[mac-synopsis]]
== Synopsis

FreeBSD supports security extensions based on the POSIX(R).1e draft. These security mechanisms include file system Access Control Lists (crossref:security[fs-acl,Access Control Lists]) and Mandatory Access Control (MAC). MAC allows access control modules to be loaded in order to implement security policies. Some modules provide protections for a narrow subset of the system, hardening a particular service. Others provide comprehensive labeled security across all subjects and objects. The mandatory part of the definition indicates that enforcement of the controls is performed by administrators and the operating system. This is in contrast to the default security mechanism of Discretionary Access Control (DAC), where enforcement is left to the discretion of users.
This chapter focuses on the MAC framework and the set of pluggable security policy modules FreeBSD provides for enabling various security mechanisms.

After reading this chapter, you will know:

* The terminology associated with the MAC framework.
* The capabilities of MAC security policy modules, as well as the difference between a labeled and non-labeled policy.
* The considerations to take into account before configuring a system to use the MAC framework.
* Which MAC security policy modules are included in FreeBSD and how to configure them.
* How to implement a more secure environment using the MAC framework.
* How to test the configuration to ensure the MAC framework has been properly implemented.

Before reading this chapter, you should:

* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD Basics]).
* Have some familiarity with security and how it pertains to FreeBSD (crossref:security[security,Security]).

[WARNING]
====
Improper MAC configuration may cause loss of system access, aggravation of users, or inability to access the features provided by Xorg. More importantly, MAC should not be relied upon to completely secure a system. The MAC framework only augments an existing security policy. Without sound security practices and regular security checks, the system will never be completely secure.

The examples contained within this chapter are for demonstration purposes and the example settings should _not_ be implemented on a production system. Implementing any security policy takes a good deal of understanding, proper design, and thorough testing.
====

While this chapter covers a broad range of security issues relating to the MAC framework, the development of new MAC security policy modules will not be covered.
A number of security policy modules included with the MAC framework have specific characteristics which are provided for both testing and new module development. Refer to man:mac_test[4], man:mac_stub[4], and man:mac_none[4] for more information on these security policy modules and the various mechanisms they provide.

[[mac-inline-glossary]]
== Key Terms

The following key terms are used when referring to the MAC framework:

* _compartment_: a set of programs and data to be partitioned or separated, where users are given explicit access to a specific component of a system. A compartment represents a grouping, such as a work group, department, project, or topic. Compartments make it possible to implement a need-to-know-basis security policy.
* _integrity_: the level of trust which can be placed on data. As the integrity of the data is elevated, so does the ability to trust that data.
* _level_: the increased or decreased setting of a security attribute. As the level increases, its security is considered to elevate as well.
* _label_: a security attribute which can be applied to files, directories, or other items in the system. It could be considered a confidentiality stamp. When a label is placed on a file, it describes the security properties of that file and will only permit access by files, users, and resources with a similar security setting. The meaning and interpretation of label values depends on the policy configuration. Some policies treat a label as representing the integrity or secrecy of an object, while other policies might use labels to hold rules for access.
* _multilabel_: this property is a file system option which can be set in single-user mode using man:tunefs[8], during boot using man:fstab[5], or during the creation of a new file system. This option permits an administrator to apply different MAC labels to different objects. This option only applies to security policy modules which support labeling.
* _single label_: a policy where the entire file system uses one label to enforce access control over the flow of data. Whenever `multilabel` is not set, all files will conform to the same label setting.
* _object_: an entity through which information flows under the direction of a _subject_. This includes directories, files, fields, screens, keyboards, memory, magnetic storage, printers, or any other data storage or moving device. An object is a data container or a system resource. Access to an object effectively means access to its data.
* _subject_: any active entity that causes information to flow between _objects_, such as a user, user process, or system process. On FreeBSD, this is almost always a thread acting in a process on behalf of a user.
* _policy_: a collection of rules which defines how objectives are to be achieved. A policy usually documents how certain items are to be handled. This chapter considers a policy to be a collection of rules which controls the flow of data and information and defines who has access to that data and information.
* _high-watermark_: this type of policy permits the raising of security levels for the purpose of accessing higher level information. In most cases, the original level is restored after the process is complete. Currently, the FreeBSD MAC framework does not include this type of policy.
* _low-watermark_: this type of policy permits lowering security levels for the purpose of accessing information which is less secure. In most cases, the original security level of the user is restored after the process is complete. The only security policy module in FreeBSD to use this is man:mac_lomac[4].
* _sensitivity_: usually used when discussing Multilevel Security (MLS). A sensitivity level describes how important or secret the data should be. As the sensitivity level increases, so does the importance of the secrecy, or confidentiality, of the data.

[[mac-understandlabel]]
== Understanding MAC Labels

A MAC label is a security attribute which may be applied to subjects and objects throughout the system. When setting a label, the administrator must understand its implications in order to prevent unexpected or undesired behavior of the system. The attributes available on an object depend on the loaded policy module, as policy modules interpret their attributes in different ways.

The security label on an object is used as a part of a security access control decision by a policy. With some policies, the label contains all of the information necessary to make a decision. In other policies, the labels may be processed as part of a larger rule set.

There are two types of label policies: single label and multi label. By default, the system will use single label. The administrator should be aware of the pros and cons of each in order to implement policies which meet the requirements of the system's security model.

A single label security policy only permits one label to be used for every subject or object. Since a single label policy enforces one set of access permissions across the entire system, it provides lower administration overhead, but decreases the flexibility of policies which support labeling.
However, in many environments, a single label policy may be all that is required. A single label security policy is somewhat similar to DAC, as `root` configures the policies so that users are placed in the appropriate categories and access levels. A notable difference is that many policy modules can also restrict `root`. Basic control over objects will then be released to the group, but `root` may revoke or modify the settings at any time.

When appropriate, a multi label policy can be set on a UFS file system by passing `multilabel` to man:tunefs[8]. A multi label policy permits each subject or object to have its own independent MAC label. The decision to use a multi label or single label policy is only required for policies which implement the labeling feature, such as `biba`, `lomac`, and `mls`. Some policies, such as `seeotheruids`, `portacl`, and `partition`, do not use labels at all.

Using a multi label policy on a partition and establishing a multi label security model can increase administrative overhead, as everything in that file system has a label. This includes directories, files, and even device nodes.

The following command will set the `multilabel` flag on the specified UFS file system. This may only be done in single-user mode and is not a requirement for the swap file system:

[source,shell]
....
# tunefs -l enable /
....

[NOTE]
====
Some users have experienced problems with setting the `multilabel` flag on the root partition. If this is the case, please review <>.
====

Since the multi label policy is set on a per-file system basis, a multi label policy may not be needed if the file system layout is well designed. Consider an example MAC security model for a FreeBSD web server.
This machine uses the single label, `biba/high`, for everything in the default file systems. If the web server needs to run at `biba/low` to prevent write-up capabilities, it could be installed to a separate UFS file system, [.filename]#/usr/local#, set at `biba/low`.

=== Label Configuration

Virtually all aspects of label policy module configuration will be performed using the base system utilities. These commands provide a simple interface for object or subject configuration, or the manipulation and verification of the configuration.

All configuration may be done using `setfmac`, which is used to set MAC labels on system objects, and `setpmac`, which is used to set the labels on system subjects. For example, to set the `biba` MAC label to `high` on [.filename]#test#:

[source,shell]
....
# setfmac biba/high test
....

If the configuration is successful, the prompt will be returned without error. A common error is `Permission denied`, which usually occurs when the label is being set or modified on a restricted object. Other conditions may produce different failures. For instance, the file may not be owned by the user attempting to relabel the object, the object may not exist, or the object may be read-only. A mandatory policy will not allow the process to relabel the file, maybe because of a property of the file, a property of the process, or a property of the proposed new label value. For example, if a user running at low integrity tries to change the label of a high integrity file, or a user running at low integrity tries to change the label of a low integrity file to a high integrity label, these operations will fail.
The system administrator may use `setpmac` to override the policy module's settings by assigning a different label to the invoked process:

[source,shell]
....
# setfmac biba/high test
Permission denied
# setpmac biba/low setfmac biba/high test
# getfmac test
test: biba/high
....

For currently running processes, such as sendmail, `getpmac` is usually used instead. This command takes a process ID (PID) in place of a command name. If users attempt to manipulate a file not in their access, subject to the rules of the loaded policy modules, the `Operation not permitted` error will be displayed.

=== Predefined Labels

A few FreeBSD policy modules which support the labeling feature offer three predefined labels: `low`, `equal`, and `high`, where:

* `low` is considered the lowest label setting an object or subject may have. Setting this on objects or subjects blocks their access to objects or subjects marked high.
* `equal` sets the subject or object to be disabled or unaffected and should only be placed on objects considered to be exempt from the policy.
* `high` grants an object or subject the highest setting available in the Biba and MLS policy modules.

Such policy modules include man:mac_biba[4], man:mac_mls[4], and man:mac_lomac[4]. Each of the predefined labels establishes a different information flow directive. Refer to the manual page of the module to determine the traits of the generic label configurations.

=== Numeric Labels

The Biba and MLS policy modules support a numeric label which may be set to indicate the precise level of hierarchical control. This numeric level is used to partition or sort information into different groups of classification, only permitting access to that group or a higher group level. For example:

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....
may be interpreted as "Biba Policy Label/Grade 10:Compartments 2, 3 and 6: (grade 5 ...")

In this example, the first grade would be considered the effective grade with effective compartments, the second grade is the low grade, and the last one is the high grade. In most configurations, such fine-grained settings are not needed, as they are considered to be advanced configurations.

System objects only have a current grade and compartment. System subjects reflect the range of available rights in the system, and network interfaces, where they are used for access control.

The grade and compartments in a subject and object pair are used to construct a relationship known as _dominance_, in which a subject dominates an object, the object dominates the subject, neither dominates the other, or both dominate each other. The "both dominate" case occurs when the two labels are equal. Due to the information flow nature of Biba, a user has rights to a set of compartments that might correspond to projects, but objects also have a set of compartments. Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted.

=== User Labels

Users are required to have labels so that their files and processes properly interact with the security policy defined on the system. This is configured in [.filename]#/etc/login.conf# using login classes. Every policy module that uses labels will implement the user class setting.

To set the user class default label which will be enforced by MAC, add a `label` entry. An example `label` entry containing every policy module is displayed below. Note that in a real configuration, the administrator would never enable every policy module. It is recommended that the rest of this chapter be reviewed before any configuration is implemented.
[.programlisting]
....
default:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/share/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:
....

While users can not modify the default value, they may change their label after they log in, subject to the constraints of the policy. The example above tells the Biba policy that a process's minimum integrity is `5`, its maximum is `15`, and the default effective label is `10`. The process will run at `10` until it chooses to change label, perhaps due to the user using `setpmac`, which will be constrained by Biba to the configured range.

After any change to [.filename]#login.conf#, the login class capability database must be rebuilt using `cap_mkdb`.

Many sites have a large number of users requiring several different user classes. In depth planning is required, as this can become difficult to manage.

=== Network Interface Labels

Labels may be set on network interfaces to help control the flow of data across the network. Policies using network interface labels function in the same way that policies function with respect to objects. Users at high settings in Biba, for example, will not be permitted to access network interfaces with a label of `low`.

When setting the MAC label on network interfaces, `maclabel` may be passed to `ifconfig`:

[source,shell]
....
# ifconfig bge0 maclabel biba/equal
....

This example will set the MAC label of `biba/equal` on the `bge0` interface.
When using a setting similar to `biba/high(low-high)`, the entire label should be quoted to prevent an error from being returned.

Each policy module which supports labeling has a tunable which may be used to disable the MAC label on network interfaces. Setting the label to `equal` will have a similar effect. Review the output of `sysctl`, the policy manual pages, and the information in the rest of this chapter for more information on those tunables.

[[mac-planning]]
== Planning the Security Configuration

Before implementing any MAC policies, a planning phase is recommended. During the planning stages, an administrator should consider the implementation requirements and goals, such as:

* How to classify information and resources available on the target systems.
* Which information or resources to restrict access to, along with the type of restrictions that should be applied.
* Which MAC modules will be required to achieve this goal.

A trial run of the trusted system and its configuration should occur _before_ a MAC implementation is used on production systems. Since different environments have different needs and requirements, establishing a complete security profile will decrease the need for changes once the system goes live.

Consider how the MAC framework augments the security of the system as a whole. The various security policy modules provided by the MAC framework could be used to protect the network and file systems or to block users from accessing certain ports and sockets. Perhaps the best use of the policy modules is to load several security policy modules at a time in order to provide an MLS environment. This approach differs from a hardening policy, which typically hardens elements of a system which are used only for specific purposes. The downside to MLS is increased administrative overhead.
The overhead is minimal when compared to the lasting effect of a framework which provides the ability to pick and choose which policies are required for a specific configuration and which keeps performance overhead down. The reduction of support for unneeded policies can increase the overall performance of the system as well as offer flexibility of choice. A good implementation would consider the overall security requirements and effectively implement the various security policy modules offered by the framework.

A system utilizing MAC guarantees that a user will not be permitted to change security attributes at will. All user utilities, programs, and scripts must work within the constraints of the access rules provided by the selected security policy modules, and control of the MAC access rules is in the hands of the system administrator.

It is the duty of the system administrator to carefully select the correct security policy modules. For an environment that needs to limit access control over the network, the man:mac_portacl[4], man:mac_ifoff[4], and man:mac_biba[4] policy modules make good starting points. For an environment where strict confidentiality of file system objects is required, consider the man:mac_bsdextended[4] and man:mac_mls[4] policy modules.

Policy decisions could be made based on network configuration. If only certain users should be permitted access to man:ssh[1], the man:mac_portacl[4] policy module is a good choice. In the case of file systems, access to objects might be considered confidential to some users, but not to others. As an example, a large development team might be broken off into smaller projects where developers in project A might not be permitted to access objects written by developers in project B.
Yet both projects might need to access objects created by developers in project C. Using the different security policy modules provided by the MAC framework, users could be divided into these groups and then given access to the appropriate objects.

Each security policy module has a unique way of dealing with the overall security of a system. Module selection should be based on a well thought out security policy, which may require revision and reimplementation. Understanding the different security policy modules offered by the MAC framework will help administrators choose the best policies for their situations. The rest of this chapter covers the available modules, describes their use and configuration, and in some cases, provides insight on applicable situations.

[CAUTION]
====
Implementing MAC is much like implementing a firewall since care must be taken to prevent being completely locked out of the system. The ability to revert back to a previous configuration should be considered, and the implementation of MAC over a remote connection should be done with extreme caution.
====

[[mac-policies]]
== Available MAC Policies

The default FreeBSD kernel includes `options MAC`. This means that every module included with the MAC framework can be loaded with `kldload` as a run-time kernel module. After testing the module, add the module name to [.filename]#/boot/loader.conf# so that it will load during boot. Each module also provides a kernel option for those administrators who choose to compile their own custom kernel.

FreeBSD includes a group of policies that will cover most security requirements. Each policy is summarized below. The last three policies support integer settings in place of the three default labels.
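As a sketch, using man:mac_seeotheruids[4] as the example module, a policy can be loaded at run time and then made persistent across reboots (appending to [.filename]#/boot/loader.conf# with `echo` is just one way to edit that file):

[source,shell]
....
# kldload mac_seeotheruids
# echo 'mac_seeotheruids_load="YES"' >> /boot/loader.conf
....

The same pattern applies to any of the modules described below; substitute the module name and its matching `_load` variable.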
[[mac-seeotheruids]]
=== The MAC See Other UIDs Policy

Module name: [.filename]#mac_seeotheruids.ko#

Kernel configuration line: `options MAC_SEEOTHERUIDS`

Boot option: `mac_seeotheruids_load="YES"`

The man:mac_seeotheruids[4] module extends the `security.bsd.see_other_uids` and `security.bsd.see_other_gids` `sysctl` tunables. This option does not require any labels to be set before configuration and it can operate transparently with other modules.

After loading the module, the following `sysctl` tunables may be used to control its features:

* `security.mac.seeotheruids.enabled` enables the module and implements the default settings which deny users the ability to view processes and sockets owned by other users.
* `security.mac.seeotheruids.specificgid_enabled` allows specified groups to be exempt from this policy. To exempt specific groups, use the `security.mac.seeotheruids.specificgid=_XXX_` `sysctl` tunable, replacing _XXX_ with the numeric group ID to be exempted.
* `security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific primary groups from this policy. When using this tunable, `security.mac.seeotheruids.specificgid_enabled` may not be set.

[[mac-bsdextended]]
=== The MAC BSD Extended Policy

Module name: [.filename]#mac_bsdextended.ko#

Kernel configuration line: `options MAC_BSDEXTENDED`

Boot option: `mac_bsdextended_load="YES"`

The man:mac_bsdextended[4] module enforces a file system firewall. It provides an extension to the standard file system permissions model, permitting an administrator to create a firewall-like ruleset to protect files, utilities, and directories in the file system hierarchy. When access to a file system object is attempted, the list of rules is iterated until either a matching rule is located or the end is reached.
This behavior may be changed using `security.mac.bsdextended.firstmatch_enabled`.

Similar to other firewall modules in FreeBSD, a file containing the access control rules can be created and read by the system at boot time using a man:rc.conf[5] variable.

The ruleset may be entered using man:ugidfw[8], which has a syntax similar to man:ipfw[8]. More tools can be written by using the functions in the man:libugidfw[3] library.

After the man:mac_bsdextended[4] module has been loaded, the following command may be used to list the current rule configuration:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

By default, no rules are defined and everything is completely accessible. To create a rule which blocks all access by users but leaves `root` unaffected:

[source,shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

While this rule is simple to implement, it is a very bad idea as it blocks all users from issuing any commands. A more realistic example blocks `user1` all access, including directory listings, to `_user2_`'s home directory:

[source,shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

Instead of `user1`, `not uid _user2_` could be used in order to enforce the same access restrictions for all users. However, the `root` user is unaffected by these rules.

[NOTE]
====
Extreme caution should be taken when working with this module as incorrect use could block access to certain parts of the file system.
====

[[mac-ifoff]]
=== The MAC Interface Silencing Policy

Module name: [.filename]#mac_ifoff.ko#

Kernel configuration line: `options MAC_IFOFF`

Boot option: `mac_ifoff_load="YES"`

The man:mac_ifoff[4] module is used to disable network interfaces on the fly and to keep network interfaces from being brought up during system boot. It does not use labels and it does not depend on any other MAC modules.

Most of this module's control is performed through these `sysctl` tunables:

* `security.mac.ifoff.lo_enabled` enables or disables all traffic on the loopback, man:lo[4], interface.
* `security.mac.ifoff.bpfrecv_enabled` enables or disables all traffic on the Berkeley Packet Filter interface, man:bpf[4].
* `security.mac.ifoff.other_enabled` enables or disables traffic on all other interfaces.

One of the most common uses of man:mac_ifoff[4] is network monitoring in an environment where network traffic should not be permitted during the boot sequence. Another use would be to write a script which uses an application such as package:security/aide[] to automatically block network traffic if it finds new or altered files in protected directories.

[[mac-portacl]]
=== The MAC Port Access Control List Policy

Module name: [.filename]#mac_portacl.ko#

Kernel configuration line: `options MAC_PORTACL`

Boot option: `mac_portacl_load="YES"`

The man:mac_portacl[4] module is used to limit binding to local TCP and UDP ports, making it possible to allow non-`root` users to bind to specified privileged ports below 1024.

Once loaded, this module enables the MAC policy on all sockets. The following tunables are available:

* `security.mac.portacl.enabled` enables or disables the policy completely.
* `security.mac.portacl.port_high` sets the highest port number that man:mac_portacl[4] protects.
* `security.mac.portacl.suser_exempt`, when set to a non-zero value, exempts the `root` user from this policy.
* `security.mac.portacl.rules` specifies the policy as a text string of the form `rule[,rule,...]`, with as many rules as needed, where each rule is of the form `idtype:id:protocol:port`. The [parameter]#idtype# is either `uid` or `gid`. The [parameter]#protocol# parameter can be `tcp` or `udp`. The [parameter]#port# parameter is the port number to allow the specified user or group to bind to. Only numeric values can be used for the user ID, group ID, and port parameters.

By default, ports below 1024 can only be used by privileged processes which run as `root`. For man:mac_portacl[4] to allow non-privileged processes to bind to ports below 1024, set the following tunables as follows:

[source,shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0
# sysctl net.inet.ip.portrange.reservedhigh=0
....

To prevent the `root` user from being affected by this policy, set `security.mac.portacl.suser_exempt` to a non-zero value:

[source,shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....

To allow the `www` user with UID 80 to bind to port 80 without ever needing `root` privilege:

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

This next example permits the user with the UID of 1001 to bind to TCP ports 110 (POP3) and 995 (POP3s):

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

[[mac-partition]]
=== The MAC Partition Policy

Module name: [.filename]#mac_partition.ko#

Kernel configuration line: `options MAC_PARTITION`

Boot option: `mac_partition_load="YES"`

The man:mac_partition[4] policy drops processes into specific "partitions" based on their MAC label.
Most configuration for this policy is done using man:setpmac[8]. One `sysctl` tunable is available for this policy:

* `security.mac.partition.enabled` enables the enforcement of MAC process partitions.

When this policy is enabled, users will only be permitted to see their processes, and any others within their partition, but will not be permitted to work with utilities outside the scope of this partition. For instance, a user in the `insecure` class will not be permitted to access `top` as well as many other commands that must spawn a process.

This example adds `top` to the label set of users in the `insecure` class. All processes spawned by users in the `insecure` class will stay in the `partition/13` label.

[source,shell]
....
# setpmac partition/13 top
....

This command displays the partition label and the process list:

[source,shell]
....
# ps Zax
....

This command displays another user's process partition label and that user's currently running processes:

[source,shell]
....
# ps -ZU trhodes
....

[NOTE]
====
Users can see processes in `root`'s label unless the man:mac_seeotheruids[4] policy is loaded.
====

[[mac-mls]]
=== The MAC Multi-Level Security Module

Module name: [.filename]#mac_mls.ko#

Kernel configuration line: `options MAC_MLS`

Boot option: `mac_mls_load="YES"`

The man:mac_mls[4] policy controls access between subjects and objects in the system by enforcing a strict information flow policy.

In MLS environments, a "clearance" level is set in the label of each subject or object, along with compartments. Since these clearance levels can reach numbers greater than several thousand, it would be a daunting task to thoroughly configure every subject or object.
To ease this administrative overhead, three labels are included in this policy: `mls/low`, `mls/equal`, and `mls/high`, where:

* Anything labeled with `mls/low` will have a low clearance level and will not be permitted to access information of a higher level. This label also prevents objects of a higher clearance level from writing or passing information to a lower level.
* `mls/equal` should be placed on objects which should be exempt from the policy.
* `mls/high` is the highest level of clearance possible. Objects assigned this label will hold dominance over all other objects in the system; however, they will not permit the leaking of information to objects of a lower class.

MLS provides:

* A hierarchical security level with a set of non-hierarchical categories.
* Fixed rules of `no read up, no write down`. This means that a subject can have read access to objects on its own level or below, but not above. Similarly, a subject can have write access to objects on its own level or above, but not beneath.
* Secrecy, or the prevention of inappropriate disclosure of data.
* A basis for the design of systems that concurrently handle data at multiple sensitivity levels without leaking information between secret and confidential.

The following `sysctl` tunables are available:

* `security.mac.mls.enabled` is used to enable or disable the MLS policy.
* `security.mac.mls.ptys_equal` labels all man:pty[4] devices as `mls/equal` during creation.
* `security.mac.mls.revocation_enabled` revokes access to objects after their label changes to a label of a lower grade.
* `security.mac.mls.max_compartments` sets the maximum number of compartment levels allowed on a system.

To manipulate MLS labels, use man:setfmac[8]. To assign a label to an object:

[source,shell]
....
# setfmac mls/5 test
....

To get the MLS label for the file [.filename]#test#:

[source,shell]
....
# getfmac test
....

Another approach is to create a master policy file in [.filename]#/etc/# which specifies the MLS policy information and to feed that file to `setfmac`.

When using the MLS policy module, an administrator plans to control the flow of sensitive information. The default `block read up block write down` sets everything to a low state. Everything is accessible and an administrator slowly augments the confidentiality of the information.

Beyond the three basic label options, an administrator may group users and groups as required to block the information flow between them. It might be easier to look at the information in clearance levels using descriptive words, such as classifications of `Confidential`, `Secret`, and `Top Secret`. Some administrators instead create different groups based upon project levels. Regardless of the classification method, a well thought out plan must exist before implementing a restrictive policy.

Some example situations for the MLS policy module include an e-commerce web server, a file server holding critical company information, and financial institution environments.

[[mac-biba]]
=== The MAC Biba Module

Module name: [.filename]#mac_biba.ko#

Kernel configuration line: `options MAC_BIBA`

Boot option: `mac_biba_load="YES"`

The man:mac_biba[4] module loads the MAC Biba policy. This policy is similar to the MLS policy with the exception that the rules for information flow are slightly reversed. This is to prevent the downward flow of sensitive information whereas the MLS policy prevents the upward flow of sensitive information.

In Biba environments, an "integrity" label is set on each subject or object. These labels are made up of hierarchical grades and non-hierarchical components. As a grade ascends, so does its integrity.
Supported labels are `biba/low`, `biba/equal`, and `biba/high`, where:

* `biba/low` is considered the lowest integrity a subject or object may have. Setting this on subjects or objects blocks their write access to subjects or objects marked as `biba/high`, but does not prevent read access.
* `biba/equal` should only be placed on objects considered to be exempt from the policy.
* `biba/high` permits writing to objects set at a lower label, but does not permit reading that object. It is recommended that this label be placed on objects that affect the integrity of the entire system.

Biba provides:

* Hierarchical integrity levels with a set of non-hierarchical integrity categories.
* Fixed rules of `no write up, no read down`, the opposite of MLS. A subject can have write access to objects on its own level or below, but not above. Similarly, a subject can have read access to objects on its own level or above, but not below.
* Integrity, by preventing inappropriate modification of data.
* Integrity levels instead of MLS sensitivity levels.

The following tunables can be used to manipulate the Biba policy:

* `security.mac.biba.enabled` is used to enable or disable enforcement of the Biba policy on the target machine.
* `security.mac.biba.ptys_equal` is used to disable the Biba policy on man:pty[4] devices.
* `security.mac.biba.revocation_enabled` forces the revocation of access to objects if the label is changed to dominate the subject.

To access the Biba policy setting on system objects, use `setfmac` and `getfmac`:

[source,shell]
....
# setfmac biba/low test
# getfmac test
test: biba/low
....

Integrity, which is different from sensitivity, is used to guarantee that information is not manipulated by untrusted parties. This includes information passed between subjects and objects.
It guarantees that users will only be able to modify or access information they have been given explicit access to. The man:mac_biba[4] security policy module permits an administrator to configure which files and programs a user may see and invoke while assuring that the programs and files are trusted by the system for that user.

During the initial planning phase, an administrator must be prepared to partition users into grades, levels, and areas. The system will default to a high label once this policy module is enabled, and it is up to the administrator to configure the different grades and levels for users. Instead of using clearance levels, a good planning method could include topics. For instance, only allow developers modification access to the source code repository, source code compiler, and other development utilities. Other users would be grouped into other categories such as testers, designers, or end users and would only be permitted read access.

A lower integrity subject is unable to write to a higher integrity subject, and a higher integrity subject cannot list or read a lower integrity object. Setting a label at the lowest possible grade could make it inaccessible to subjects. Some prospective environments for this security policy module would include a constrained web server, a development and test machine, and a source code repository. A less useful implementation would be a personal workstation, a machine used as a router, or a network firewall.
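For instance, a source tree meant to be modified only by high-integrity subjects could be labeled recursively (the path here is hypothetical; `-R` makes man:setfmac[8] descend into directory arguments):

[source,shell]
....
# setfmac -R biba/high /usr/local/src-repo
....

Lower-integrity subjects would then be unable to write into that tree, in keeping with the `no write up` rule described above.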
[[mac-lomac]]
=== The MAC Low-watermark Module

Module name: [.filename]#mac_lomac.ko#

Kernel configuration line: `options MAC_LOMAC`

Boot option: `mac_lomac_load="YES"`

Unlike the MAC Biba policy, the man:mac_lomac[4] policy permits access to lower integrity objects only after decreasing the integrity level so as to not disrupt any integrity rules.

The Low-watermark integrity policy works almost identically to Biba, with the exception of using floating labels to support subject demotion via an auxiliary grade compartment. This secondary compartment takes the form `[auxgrade]`. When assigning a policy with an auxiliary grade, use the syntax `lomac/10[2]`, where `2` is the auxiliary grade.

This policy relies on the ubiquitous labeling of all system objects with integrity labels, permitting subjects to read from low integrity objects and then downgrading the label on the subject to prevent future writes to high integrity objects using `[auxgrade]`. The policy may provide greater compatibility and require less initial configuration than Biba.

Like the Biba and MLS policies, `setfmac` and `setpmac` are used to place labels on system objects:

[source,shell]
....
# setfmac /usr/home/trhodes lomac/high[low]
# getfmac /usr/home/trhodes lomac/high[low]
....

Note that the auxiliary grade `low` is a feature provided only by the MAC LOMAC policy.

[[mac-userlocked]]
== User Lock Down

This example considers a fairly small storage system with fewer than fifty users. Users will have login capabilities and are permitted to store data and access resources.

For this scenario, the man:mac_bsdextended[4] and man:mac_seeotheruids[4] policy modules could co-exist and block access to system objects while hiding user processes.
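Once man:mac_seeotheruids[4] is loaded, its enforcement for this scenario can be toggled with `sysctl`, for example:

[source,shell]
....
# sysctl security.mac.seeotheruids.enabled=1
....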
Begin by adding the following line to [.filename]#/boot/loader.conf#:

[.programlisting]
....
mac_seeotheruids_load="YES"
....

The man:mac_bsdextended[4] security policy module may be activated by adding this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
ugidfw_enable="YES"
....

Default rules stored in [.filename]#/etc/rc.bsdextended# will be loaded at system initialization. However, the default entries may need modification. Since this machine is expected only to service users, everything may be left commented out except the last two lines in order to force the loading of user owned system objects by default.

Add the required users to this machine and reboot. For testing purposes, try logging in as a different user across two consoles. Run `ps aux` to see if processes of other users are visible. Verify that running man:ls[1] on the home directory of another user fails.

Do not try to test with the `root` user unless the specific `sysctl`s have been modified to block superuser access.

[NOTE]
====
When a new user is added, their man:mac_bsdextended[4] rule will not be in the ruleset list. To update the ruleset quickly, unload the security policy module and reload it again using man:kldunload[8] and man:kldload[8].
====

[[mac-implementing]]
== Nagios in a MAC Jail

This section demonstrates the steps that are needed to implement the Nagios network monitoring system in a MAC environment. This is meant as an example which still requires the administrator to test that the implemented policy meets the security requirements of the network before using it in a production environment.

This example requires `multilabel` to be set on each file system.
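As a minimal sketch, `multilabel` can be enabled on the root file system from single-user mode (the troubleshooting section at the end of this chapter covers this procedure in detail):

[source,shell]
....
# tunefs -l enable /
....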
It also assumes that package:net-mgmt/nagios-plugins[], package:net-mgmt/nagios[], and package:www/apache22[] are all installed, configured, and working correctly before attempting the integration into the MAC framework.

=== Create an Insecure User Class

Begin the procedure by adding the following user class to [.filename]#/etc/login.conf#:

[.programlisting]
....
insecure:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/share/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=biba/10(10-10):
....

Then add the following line to the default user class section:

[.programlisting]
....
	:label=biba/high:
....

Save the edits and run the following command to rebuild the database:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

=== Configure Users

Set the `root` user to the default class using:

[source,shell]
....
# pw usermod root -L default
....

All user accounts that are not `root` will now require a login class. The login class is required, otherwise users will be refused access to common commands. The following `sh` script should do the trick:

[source,shell]
....
# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \
	/etc/passwd`; do pw usermod $x -L default; done;
....

Next, drop the `nagios` and `www` accounts into the insecure class:

[source,shell]
....
# pw usermod nagios -L insecure
# pw usermod www -L insecure
....

=== Create the Contexts File

A contexts file should now be created as [.filename]#/etc/policy.contexts#:

[.programlisting]
....
# This is the default BIBA policy for this system.
# System:
/var/run(/.*)?			biba/equal
/dev/(/.*)?			biba/equal
/var				biba/equal
/var/spool(/.*)?		biba/equal
/var/log(/.*)?			biba/equal
/tmp(/.*)?			biba/equal
/var/tmp(/.*)?			biba/equal
/var/spool/mqueue		biba/equal
/var/spool/clientmqueue		biba/equal

# For Nagios:
/usr/local/etc/nagios(/.*)?	biba/10
/var/spool/nagios(/.*)?		biba/10

# For apache
/usr/local/etc/apache(/.*)?	biba/10
....

This policy enforces security by setting restrictions on the flow of information. In this specific configuration, users, including `root`, should never be allowed to access Nagios. Configuration files and processes that are a part of Nagios will be completely self contained or jailed.

This file will be read after running `setfsmac` on every file system. This example sets the policy on the root file system:

[source,shell]
....
# setfsmac -ef /etc/policy.contexts /
....

Next, add these edits to the main section of [.filename]#/etc/mac.conf#:

[.programlisting]
....
default_labels file ?biba
default_labels ifnet ?biba
default_labels process ?biba
default_labels socket ?biba
....

=== Loader Configuration

To finish the configuration, add the following lines to [.filename]#/boot/loader.conf#:

[.programlisting]
....
mac_biba_load="YES"
mac_seeotheruids_load="YES"
security.mac.biba.trust_all_interfaces=1
....

And add the following line to the network card configuration stored in [.filename]#/etc/rc.conf#. If the primary network configuration is done via DHCP, this may need to be configured manually after every system boot:

[.programlisting]
....
maclabel biba/equal
....

=== Testing the Configuration

First, ensure that the web server and Nagios will not be started on system initialization and reboot. Ensure that `root` cannot access any of the files in the Nagios configuration directory. If `root` can list the contents of [.filename]#/var/spool/nagios#, something is wrong.
Instead, a "permission denied" error should be returned.

If all seems well, Nagios, Apache, and Sendmail can now be started:

[source,shell]
....
# cd /etc/mail && make stop && \
setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \
setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart
....

Double-check to ensure that everything is working properly. If not, check the log files for error messages. If needed, use man:sysctl[8] to disable the man:mac_biba[4] security policy module and try starting everything again as usual.

[NOTE]
====
The `root` user can still change the security enforcement and edit its configuration files. The following command will permit the degradation of the security policy to a lower grade for a newly spawned shell:

[source,shell]
....
# setpmac biba/10 csh
....

To block this from happening, force the user into a range using man:login.conf[5]. If man:setpmac[8] attempts to run a command outside of the compartment's range, an error will be returned and the command will not be executed. In this case, set root to `biba/high(high-high)`.
====

[[mac-troubleshoot]]
== Troubleshooting the MAC Framework

This section discusses common configuration errors and how to resolve them.

The `multilabel` flag does not stay enabled on the root ([.filename]#/#) partition:::
The following steps may resolve this transient error:
+
[.procedure]
====
.. Edit [.filename]#/etc/fstab# and set the root partition to `ro` for read-only.
.. Reboot into single user mode.
.. Run `tunefs -l enable` on [.filename]#/#.
.. Reboot the system.
.. Run `mount -urw` [.filename]#/#, change the `ro` back to `rw` in [.filename]#/etc/fstab#, and reboot the system again.
.. Double-check the output from `mount` to ensure that `multilabel` has been properly set on the root file system.
====

After establishing a secure environment with MAC, Xorg no longer starts:::
This could be caused by the MAC `partition` policy or by a mislabeling in one of the MAC labeling policies. To debug, try the following:
+
[.procedure]
====
.. Check the error message. If the user is in the `insecure` class, the `partition` policy may be the culprit. Try setting the user's class back to the `default` class and rebuild the database with `cap_mkdb`. If this does not alleviate the problem, go to step two.
.. Double-check that the label policies are set correctly for the user, Xorg, and the entries in [.filename]#/dev#.
.. If neither of these resolves the problem, send the error message and a description of the environment to the http://lists.FreeBSD.org/mailman/listinfo/freebsd-questions[FreeBSD general questions mailing list].
====

The `_secure_path: unable to stat .login_conf` error appears:::
This error can appear when a user attempts to switch from the `root` user to another user in the system. This message usually occurs when the user has a higher label setting than that of the user they are attempting to become. For instance, if `joe` has a default label of `biba/low` and `root` has a label of `biba/high`, `root` cannot view `joe`'s home directory. This will happen whether or not `root` has used `su` to become `joe`, as the Biba integrity model will not permit `root` to view objects set at a lower integrity level.

The system no longer recognizes `root`:::
When this occurs, `whoami` returns `0` and `su` returns `who are you?`.
+
This can happen if a labeling policy has been disabled by man:sysctl[8] or the policy module was unloaded. If the policy is disabled, the login capabilities database needs to be reconfigured.
Verifique duas vezes o [.filename]#/etc/login.conf# para garantir que todas as opções de `label` tenham sido removidas e reconstrua o banco de dados com `cap_mkdb`. + Isso também pode acontecer se uma política restringir o acesso ao [.filename]#master.passwd#. Isso geralmente é causado por um administrador que altera o arquivo sob um rótulo que entra em conflito com a política geral que está sendo usada pelo sistema. Nesses casos, as informações do usuário seriam lidas pelo sistema e o acesso seria bloqueado, pois o arquivo herdaria o novo rótulo. Desative a política usando o man:sysctl[8] e tudo deve retornar ao normal. diff --git a/documentation/content/pt-br/books/handbook/network-servers/_index.adoc b/documentation/content/pt-br/books/handbook/network-servers/_index.adoc index e898ec8879..4a27f193f9 100644 --- a/documentation/content/pt-br/books/handbook/network-servers/_index.adoc +++ b/documentation/content/pt-br/books/handbook/network-servers/_index.adoc @@ -1,2549 +1,2548 @@ --- title: Capítulo 29. Servidores de Rede part: Parte IV. 
Comunicação de rede prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 34 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Servidores de Rede :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 29 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Sinopse Este capítulo aborda alguns dos serviços de rede usados com mais frequência em sistemas UNIX(TM). Isso inclui instalar, configurar, testar e manter muitos tipos diferentes de serviços de rede. Exemplos de arquivos de configuração estão incluídos neste capítulo para referência. No final deste capítulo, os leitores saberão: * Como gerenciar o daemon inetd. * Como configurar o Network File System (NFS). * Como configurar o Network Information Server (NIS) para centralizar e compartilhar contas de usuários. * Como configurar o FreeBSD para funcionar como um servidor ou cliente LDAP. * Como configurar a rede automaticamente usando o DHCP. * Como configurar um Domain Name Server (DNS). * Como configurar o servidor Apache HTTP. * Como configurar um servidor de File Transfer Protocol (FTP). * Como configurar um servidor de arquivo e de impressão para clientes Windows(TM) usando o Samba.
* Como sincronizar a hora e a data e configurar um servidor de horário usando o Network Time Protocol (NTP). * Como configurar o iSCSI. Este capítulo pressupõe um conhecimento básico de: * scripts [.filename]#/etc/rc#. * Terminologia de rede. * Instalação de software adicional de terceiros (crossref:ports[ports, Instalando Aplicativos: Pacotes e Ports]). [[network-inetd]] == O super-servidor inetd O daemon man:inetd[8] é algumas vezes chamado de Super-Servidor porque gerencia conexões para muitos serviços. Em vez de iniciar vários aplicativos, apenas o serviço inetd precisa ser iniciado. Quando uma conexão é recebida para um serviço gerenciado pelo inetd, ele determina para qual programa a conexão está destinada, gera um processo para esse programa e delega ao programa um socket. O uso de inetd para serviços que não são muito usados pode reduzir a carga do sistema, quando comparado à execução de cada daemon individualmente no modo independente. Principalmente, o inetd é usado para iniciar outros daemons, mas vários protocolos triviais são tratados internamente, como chargen, auth, time, echo, discard e daytime. Esta seção aborda os conceitos básicos da configuração do inetd. [[network-inetd-conf]] === Arquivo de Configuração A configuração do inetd é feita editando o [.filename]#/etc/inetd.conf#. Cada linha deste arquivo de configuração representa um aplicativo que pode ser iniciado pelo inetd. Por padrão, cada linha começa com um comentário (`#`), o que significa que o inetd não está atendendo a nenhum aplicativo. Para configurar o inetd para escutar as conexões de um aplicativo, remova o `#` no início da linha desse aplicativo. Depois de salvar suas edições, configure o inetd para iniciar na inicialização do sistema editando o arquivo [.filename]#/etc/rc.conf#: [.programlisting] .... inetd_enable="YES" .... Para iniciar o inetd agora, para que ele ouça o serviço que você configurou, digite: [source,shell] .... # service inetd start ....
Uma vez iniciado o inetd, ele precisa ser notificado sempre que uma modificação for feita no arquivo [.filename]#/etc/inetd.conf#: [[network-inetd-reread]] .Recarregando o Arquivo de Configuração do inetd [example] ==== [source,shell] .... # service inetd reload .... ==== Normalmente, a entrada padrão de um aplicativo não precisa ser editada além da remoção do `#`. Em algumas situações, pode ser apropriado editar a entrada padrão. Como exemplo, esta é a entrada padrão para man:ftpd[8] sobre o IPv4: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... As sete colunas em uma entrada são as seguintes: [.programlisting] .... service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments .... Onde: service-name:: O nome do serviço do daemon para iniciar. Deve corresponder a um serviço listado no arquivo [.filename]#/etc/services#. Isso determina qual porta o inetd atende para conexões de entrada para esse serviço. Ao usar um serviço personalizado, ele deve primeiro ser adicionado ao arquivo [.filename]#/etc/services#. socket-type:: Ou `stream`, `dgram`, `raw`, ou `seqpacket`. Use `stream` para conexões TCP e `dgram` para serviços UDP. protocol:: Use um dos seguintes nomes de protocolo: + [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Nome do Protocolo | Explicação |tcp ou tcp4 |TCP IPv4 |udp ou udp4 |UDP IPv4 |tcp6 |TCP IPv6 |udp6 |UDP IPv6 |tcp46 |Ambos TCP IPv4 e IPv6 |udp46 |Ambos UDP IPv4 e IPv6 |=== {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]:: Neste campo, `wait` ou `nowait` deve ser especificado. `max-child`, `max-connections-per-ip-per-minute` e `max-child-per-ip` são opcionais. + `wait|nowait` indica se o serviço pode ou não manipular seu próprio socket.
Os tipos de socket `dgram` devem usar `wait` enquanto os daemons `stream`, que geralmente são multi-threaded, devem usar `nowait`. `wait` geralmente passa vários sockets para um único daemon, enquanto `nowait` gera um daemon filho para cada novo socket. + O número máximo de daemons inetd que podem ser criados é definido por `max-child`. Por exemplo, para limitar a dez instâncias do daemon, coloque um `/10` após o `nowait`. Especificar `/0` permite um número ilimitado de filhos. + `max-connections-per-ip-per-minute` limita o número de conexões de qualquer endereço IP específico por minuto. Quando o limite for atingido, outras conexões desse endereço IP serão descartadas até o final do minuto. Por exemplo, um valor de `/10` limitaria qualquer endereço IP específico a dez tentativas de conexão por minuto. `max-child-per-ip` limita o número de processos-filhos que podem ser iniciados em nome de um único endereço IP a qualquer momento. Essas opções podem limitar o consumo excessivo de recursos e ajudar a impedir ataques de negação de serviço (DoS, Denial of Service). + Um exemplo pode ser visto nas configurações padrão para man:fingerd[8]: + [.programlisting] .... finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s .... usuário:: O nome de usuário com o qual o daemon será executado. Daemons geralmente são executados como `root`, `daemon` ou `nobody`. programa servidor:: O caminho completo para o daemon. Se o daemon for um serviço fornecido internamente pelo inetd, use `internal`. argumentos do programa servidor:: Usado para especificar qualquer argumento de comando a ser transmitido ao daemon na chamada. Se o daemon for um serviço interno, use `internal`.
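As sete colunas de uma entrada são separadas por espaços em branco, de modo que uma linha do [.filename]#/etc/inetd.conf# pode ser decomposta com as ferramentas padrão do shell. Um esboço mínimo, usando a entrada do fingerd mostrada acima (os nomes de variáveis são hipotéticos, apenas para ilustração):

```shell
# Esboço: decompõe uma entrada do inetd.conf em seus sete campos.
# A linha abaixo é a entrada padrão do fingerd citada no texto.
entry='finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s'

set -- $entry                     # divide a linha pelos espaços, coluna a coluna
service=$1 socktype=$2 proto=$3 waitopt=$4 user=$5 server=$6
shift 6
args=$*                           # o que sobra são os argumentos do programa

echo "service=$service proto=$proto wait=$waitopt user=$user"
echo "server=$server args=$args"
```

Observe que `nowait/3/10` chega como um único campo: é o próprio inetd quem interpreta os limites `max-child` e `max-connections-per-ip-per-minute` embutidos nele.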
Essas opções ativam TCP wrappers para todos os serviços, incluindo serviços internos, e impedem que qualquer endereço de IP solicite qualquer serviço mais de 60 vezes por minuto. Para alterar as opções padrão que são passadas para o inetd, adicione uma entrada para `inetd_flags` no arquivo [.filename]#/etc/rc.conf#. Se o inetd já estiver em execução, reinicie-o com `service inetd restart`. As opções disponíveis de limitação de taxa são: -c máximo:: Especifique o número máximo padrão de chamadas simultâneas de cada serviço, em que o padrão é ilimitado. Pode ser sobrescrito com base no serviço usando `max-child` em [.filename]#/etc/inetd.conf#. -C taxa:: Especifique o número máximo padrão de vezes por minuto que um serviço pode ser chamado a partir de um único endereço de IP. Pode ser substituído com base no serviço usando `max-connections-per-ip-per-minute` em [.filename]#/etc/inetd.conf#. -R taxa:: Especifique o número máximo de vezes que um serviço pode ser chamado em um minuto, em que o padrão é `256`. Uma taxa de `0` permite um número ilimitado. -s máximo:: Especifique o número máximo de vezes que um serviço pode ser chamado a partir de um único endereço IP a qualquer momento, em que o padrão é ilimitado. Pode ser sobrescrito com base no serviço usando `max-child-per-ip` no arquivo [.filename]#/etc/inetd.conf#. Opções adicionais estão disponíveis. Consulte man:inetd[8] para a lista completa de opções. [[network-inetd-security]] === Considerações de segurança Muitos dos daemons que podem ser gerenciados pelo inetd não foram projetados com a segurança em mente. Alguns daemons, como o fingerd, podem fornecer informações que podem ser úteis para um invasor. Ative apenas os serviços necessários e monitore o sistema para tentativas excessivas de conexão. `max-connections-per-ip-per-minute`, `max-child` e `max-child-per-ip` podem ser usados para limitar tais ataques. Por padrão, TCP wrappers estão ativados.
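Os limites acima aparecem embutidos na string `inetd_flags`. Um esboço portátil, usando o valor padrão `-wW -C 60` citado no texto (os demais nomes são hipotéticos), de como extrair o valor de `-C` dessa string:

```shell
# Esboço: extrai o limite -C (conexões por IP por minuto) de inetd_flags.
inetd_flags='-wW -C 60'

rate=$(printf '%s\n' "$inetd_flags" |
    sed -n 's/.*-C[[:space:]]*\([0-9][0-9]*\).*/\1/p')

echo "limite por IP por minuto: $rate"
```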
Consulte man:hosts_access[5] para obter mais informações sobre como colocar restrições TCP em vários daemons chamados pelo inetd. [[network-nfs]] == Network File System (NFS) O FreeBSD suporta o Network File System (NFS), que permite que um servidor compartilhe diretórios e arquivos com clientes através de uma rede. Com o NFS, os usuários e programas podem acessar arquivos em sistemas remotos como se estivessem armazenados localmente. NFS tem muitos usos práticos. Alguns dos usos mais comuns incluem: * Os dados que seriam duplicados em cada cliente podem ser mantidos em um único local e acessados por clientes na rede. * Vários clientes podem precisar de acesso ao diretório [.filename]#/usr/ports/distfiles#. Compartilhar esse diretório permite acesso rápido aos arquivos fonte sem precisar baixá-los para cada cliente. * Em grandes redes, geralmente é mais conveniente configurar um servidor central NFS no qual todos os diretórios home dos usuários são armazenados. Os usuários podem logar em um cliente em qualquer lugar da rede e ter acesso aos seus diretórios home. * A administração de exports do NFS é simplificada. Por exemplo, há apenas um sistema de arquivos no qual as políticas de segurança ou de backup devem ser definidas. * Dispositivos removíveis de armazenamento de mídia podem ser usados por outras máquinas na rede. Isso reduz o número de dispositivos em toda a rede e fornece um local centralizado para gerenciar sua segurança. Geralmente, é mais conveniente instalar software em várias máquinas a partir de uma mídia de instalação centralizada. O NFS consiste em um servidor e um ou mais clientes. O cliente acessa remotamente os dados armazenados na máquina do servidor. Para que isso funcione corretamente, alguns processos precisam ser configurados e executados. Esses daemons devem estar em execução no servidor: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Daemon | Descrição |nfsd |O daemon NFS que atende a solicitações de clientes NFS. 
|mountd |O daemon de montagem do NFS que realiza solicitações recebidas do nfsd. |rpcbind |Este daemon permite que clientes NFS descubram qual porta o servidor NFS está usando. |=== A execução de man:nfsiod[8] no cliente pode melhorar o desempenho, mas não é necessária. [[network-configuring-nfs]] === Configurando o Servidor Os sistemas de arquivos que o servidor NFS irá compartilhar são especificados no arquivo [.filename]#/etc/exports#. Cada linha neste arquivo especifica um sistema de arquivos a ser exportado, quais clientes têm acesso a esse sistema de arquivos e quaisquer opções de acesso. Ao adicionar entradas a este arquivo, cada sistema de arquivos exportado, suas propriedades e hosts permitidos devem ocorrer em uma única linha. Se nenhum cliente estiver listado na entrada, qualquer cliente na rede poderá montar esse sistema de arquivos. As seguintes entradas no arquivo [.filename]#/etc/exports# demonstram como exportar sistemas de arquivos. Os exemplos podem ser modificados para corresponder aos sistemas de arquivos e nomes de clientes na rede do leitor. Existem muitas opções que podem ser usadas neste arquivo, mas apenas algumas serão mencionadas aqui. Veja man:exports[5] para a lista completa de opções. Este exemplo mostra como exportar [.filename]#/cdrom# para três hosts chamados _alpha_, _bravo_ e _charlie_: [.programlisting] .... /cdrom -ro alpha bravo charlie .... A flag `-ro` torna o sistema de arquivos somente leitura, impedindo que os clientes façam alterações no sistema de arquivos exportado. Este exemplo assume que os nomes de host estão no DNS ou no arquivo [.filename]#/etc/hosts#. Consulte man:hosts[5] se a rede não tiver um servidor de DNS. O próximo exemplo exporta [.filename]#/home# para três clientes pelo endereço IP. Isso pode ser útil para redes sem DNS ou [.filename]#/etc/hosts#. A flag `-alldirs` permite que os subdiretórios sejam pontos de montagem.
Em outras palavras, ele não montará automaticamente os subdiretórios, mas permitirá que o cliente monte os diretórios necessários conforme necessário. [.programlisting] .... /usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... Este próximo exemplo exporta [.filename]#/a# para que dois clientes de domínios diferentes possam acessar esse sistema de arquivos. `-maproot=root` permite que o usuário `root` no sistema remoto grave os dados no sistema de arquivos exportado como `root`. Se `-maproot=root` não for especificado, o usuário `root` do cliente será mapeado para a conta `nobody` do servidor e estará sujeito às limitações de acesso definidas para `nobody`. [.programlisting] .... /a -maproot=root host.example.com box.example.org .... Um cliente só pode ser especificado uma vez por sistema de arquivos. Por exemplo, se [.filename]#/usr# for um único sistema de arquivos, essas entradas serão inválidas, já que ambas as entradas especificam o mesmo host: [.programlisting] .... # Invalid when /usr is one file system /usr/src client /usr/ports client .... O formato correto para essa situação é usar uma entrada: [.programlisting] .... /usr/src /usr/ports client .... A seguir, um exemplo de uma lista de exportação válida, em que [.filename]#/usr# e [.filename]#/exports# são sistemas de arquivos locais: [.programlisting] .... # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... Para habilitar os processos requeridos pelo servidor NFS no momento da inicialização, adicione estas opções ao arquivo [.filename]#/etc/rc.conf#: [.programlisting] .... rpcbind_enable="YES" nfs_server_enable="YES" mountd_enable="YES" ....
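Cada entrada do [.filename]#/etc/exports# segue o formato `caminho [opções] [hosts]`. Um esboço simples, usando a entrada hipotética do exemplo do [.filename]#/cdrom#, que distingue exportações somente leitura:

```shell
# Esboço: verifica se uma entrada de /etc/exports usa a flag -ro.
line='/cdrom -ro alpha bravo charlie'

case " $line " in
    *' -ro '*) mode=somente-leitura ;;
    *)         mode=leitura-escrita ;;
esac

echo "$line -> $mode"
```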
O servidor pode ser iniciado agora executando este comando: [source,shell] .... # service nfsd start .... Sempre que o servidor NFS for iniciado, o mountd também é iniciado automaticamente. No entanto, mountd lê apenas [.filename]#/etc/exports# quando é iniciado. Para fazer as edições subsequentes de [.filename]#/etc/exports# entrarem em vigor imediatamente, force mountd para ler novamente: [source,shell] .... # service mountd reload .... === Configurando o Cliente Para ativar clientes NFS, defina essa opção no arquivo [.filename]#/etc/rc.conf# de cada cliente: [.programlisting] .... nfs_client_enable="YES" .... Em seguida, execute este comando em cada cliente NFS: [source,shell] .... # service nfsclient start .... O cliente agora tem tudo de que precisa para montar um sistema de arquivos remoto. Nestes exemplos, o nome do servidor é `server` e o nome do cliente é `client`. Para montar [.filename]#/home# no `server` para o ponto de montagem [.filename]#/mnt# no `client`: [source,shell] .... # mount server:/home /mnt .... Os arquivos e diretórios em [.filename]#/home# agora estarão disponíveis no `client`, no diretório [.filename]#/mnt#. Para montar um sistema de arquivos remoto toda vez que o cliente for inicializado, adicione-o ao arquivo [.filename]#/etc/fstab#: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... Consulte man:fstab[5] para obter uma descrição de todas as opções disponíveis. === Bloqueando Alguns aplicativos exigem o bloqueio de arquivos para funcionar corretamente. Para ativar o bloqueio, adicione estas linhas ao arquivo [.filename]#/etc/rc.conf# no cliente e no servidor: [.programlisting] .... rpc_lockd_enable="YES" rpc_statd_enable="YES" .... Então inicie as aplicações: [source,shell] .... # service lockd start # service statd start .... Se o bloqueio não for necessário no servidor, o cliente NFS pode ser configurado para bloquear localmente incluindo `-L` ao executar o mount. Consulte man:mount_nfs[8] para mais detalhes. 
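A entrada de [.filename]#/etc/fstab# mostrada anteriormente pode ser construída a partir das mesmas três informações passadas ao `mount`. Um esboço (os nomes de servidor e de caminho são os mesmos hipotéticos do exemplo):

```shell
# Esboço: gera a entrada de fstab equivalente a "mount server:/home /mnt".
nfs_server=server
export_path=/home
mount_point=/mnt

fstab_line="$nfs_server:$export_path $mount_point nfs rw 0 0"
echo "$fstab_line"
```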
[[network-autofs]] === Automatizando Montagens com man:autofs[5] [NOTE] ==== O recurso de montagem automática man:autofs[5] é suportado a partir do FreeBSD 10.1-RELEASE. Para usar a funcionalidade automounter em versões mais antigas do FreeBSD, use man:amd[8]. Este capítulo descreve apenas o montador automático man:autofs[5]. ==== O recurso man:autofs[5] é um nome comum para vários componentes que, juntos, permitem a montagem automática de sistemas de arquivos locais e remotos sempre que um arquivo ou diretório dentro desse sistema de arquivos é acessado. Ele consiste no componente do kernel, man:autofs[5], e vários aplicativos no espaço do usuário: man:automount[8], man:automountd[8] e man:autounmountd[8]. Ele serve como uma alternativa para man:amd[8] de versões anteriores do FreeBSD. O amd ainda é fornecido para fins de compatibilidade com versões anteriores, já que os dois usam formatos de mapeamento diferentes; o usado pelo autofs é o mesmo de outros automontadores do SVR4, como os do Solaris, MacOS X e Linux. O sistema de arquivos virtual man:autofs[5] é montado em pontos de montagem especificados por man:automount[8], geralmente chamado durante a inicialização. Sempre que um processo tentar acessar um arquivo dentro do ponto de montagem man:autofs[5], o kernel notificará o daemon man:automountd[8] e irá pausar o processo que o disparou. O daemon man:automountd[8] processará as solicitações do kernel localizando o mapeamento apropriado e montando o sistema de arquivos de acordo com ele, e então sinalizará ao kernel para liberar o processo bloqueado. O daemon man:autounmountd[8] desmonta automaticamente os sistemas de arquivos montados automaticamente após algum tempo, a menos que eles ainda estejam sendo usados. O arquivo de configuração principal do autofs é o [.filename]#/etc/auto_master#. Ele atribui mapeamentos individuais a montagens de nível superior. Para uma explicação do [.filename]#auto_master# e da sintaxe do mapeamento, consulte man:auto_master[5].
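O [.filename]#auto_master# associa cada ponto de montagem de nível superior a um mapa. A título de ilustração, um exemplo mínimo e hipotético (as opções exatas variam conforme a versão do FreeBSD; consulte man:auto_master[5] para o formato autoritativo):

```
# /etc/auto_master -- exemplo hipotético
# ponto-de-montagem   mapa     opções
/net                  -hosts   -nobrowse,nosuid
/media                -media   -nosuid,noauto
```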
Existe um mapeamento especial montado automaticamente em [.filename]#/net#. Quando um arquivo é acessado dentro desse diretório, o man:autofs[5] procura a montagem remota correspondente e monta-a automaticamente. Por exemplo, uma tentativa de acessar um arquivo dentro de [.filename]#/net/foobar/usr# informaria man:automountd[8] para montar a exportação [.filename]#/usr# do host `foobar`. .Montando uma Exportação com man:autofs[5] [example] ==== Neste exemplo, `showmount -e` mostra os sistemas de arquivos exportados que podem ser montados a partir do servidor NFS, `foobar`: [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /net/foobar/usr .... ==== A saída de `showmount` mostra [.filename]#/usr# como uma exportação. Ao mudar de diretório para [.filename]#/net/foobar/usr#, o man:automountd[8] intercepta o pedido e tenta resolver o nome do host `foobar`. Se for bem-sucedido, man:automountd[8] montará automaticamente a exportação de origem. Para habilitar man:autofs[5] no momento da inicialização, adicione esta linha ao arquivo [.filename]#/etc/rc.conf#: [.programlisting] .... autofs_enable="YES" .... Em seguida, man:autofs[5] pode ser iniciado executando: [source,shell] .... # service automount start # service automountd start # service autounmountd start .... O formato de mapeamento de man:autofs[5] é o mesmo que em outros sistemas operacionais. Informações sobre este formato de outras fontes podem ser úteis, como o http://web.archive.org/web/20160813071113/http://images.apple.com/business/docs/Autofs.pdf[documento do Mac OS X]. Consulte as páginas de manuais man:automount[8], man:automountd[8], man:autounmountd[8] e man:auto_master[5] para maiores informações. [[network-nis]] == Sistema de Informação de Rede (NIS) O Network Information System (NIS) foi projetado para centralizar a administração de sistemas UNIX(TM) como Solaris(TM), HP-UX, AIX(TM), Linux, NetBSD, OpenBSD e FreeBSD.
O NIS era originalmente conhecido como Yellow Pages, mas o nome foi alterado devido a problemas de marca registrada. Esta é a razão pela qual os comandos do NIS começam com `yp`. O NIS é um sistema cliente/servidor baseado em Remote Procedure Call (RPC) que permite que um grupo de máquinas dentro de um domínio NIS compartilhe um conjunto de arquivos de configuração. Isso permite que um administrador do sistema configure sistemas clientes NIS com apenas dados mínimos de configuração e adicione, remova ou modifique dados de configuração de um único local. O FreeBSD usa a versão 2 do protocolo NIS. === Termos do NIS e Processos A tabela a seguir resume os termos e processos importantes usados pelo NIS: .Terminologia do NIS [cols="1,1", frame="none", options="header"] |=== | Termo | Descrição |nome de domínio NIS |Os servidores e clientes do NIS compartilham um nome de domínio NIS. Normalmente, esse nome não tem nada a ver com DNS. |man:rpcbind[8] |Este serviço habilita o RPC e deve estar em execução para rodar um servidor NIS ou atuar como um cliente NIS. |man:ypbind[8] |Este serviço liga um cliente NIS ao seu servidor NIS. Ele levará o nome de domínio NIS e usará RPC para se conectar ao servidor. É o núcleo da comunicação cliente/servidor em um ambiente NIS. Se este serviço não estiver sendo executado em uma máquina cliente, ela não poderá acessar o servidor NIS. |man:ypserv[8] |Este é o processo do servidor NIS. Se este serviço parar de funcionar, o servidor não poderá mais responder aos pedidos do NIS e, portanto, espera-se que exista um servidor slave para assumir o controle. Alguns clientes não-FreeBSD não tentarão se reconectar usando um servidor slave e o processo ypbind pode precisar ser reiniciado nesses clientes. |man:rpc.yppasswdd[8] |Este processo só é executado em servidores master do NIS. Este daemon permite que clientes NIS alterem suas senhas do NIS.
Se este daemon não estiver rodando, os usuários terão que acessar o servidor principal do NIS e alterar suas senhas lá. |=== === Tipos de Máquinas Existem três tipos de hosts em um ambiente NIS: * Servidor NIS master + Esse servidor atua como um repositório central para as informações de configuração do host e mantém a cópia autoritativa dos arquivos usados por todos os clientes do NIS. O [.filename]#passwd#, o [.filename]#group# e outros arquivos usados pelos clientes do NIS são armazenados no servidor master. Embora seja possível que uma máquina seja um servidor NIS master para mais de um domínio NIS, esse tipo de configuração não será abordado neste capítulo, pois pressupõe ambiente NIS de pequena escala. * Servidores NIS slave + Os servidores slaves do NIS mantêm cópias dos arquivos de dados do master do NIS para fornecer redundância. Os servidores slaves também ajudam a balancear a carga do servidor master, pois os clientes do NIS sempre se conectam ao servidor do NIS que responde primeiro. * Clientes NIS + Os clientes do NIS autenticam-se contra o servidor NIS durante o logon. Informações em muitos arquivos podem ser compartilhadas usando o NIS . Os arquivos [.filename]#master.passwd#, [.filename]#group# e [.filename]#hosts# são comumente compartilhados via NIS. Sempre que um processo em um cliente precisa de informações que normalmente seriam encontradas nesses arquivos localmente, ele faz uma consulta ao servidor NIS ao qual está vinculado. === Considerações de Planejamento Esta seção descreve um ambiente NIS de exemplo que consiste em 15 máquinas FreeBSD sem ponto de administração centralizado. Cada máquina tem seu próprio [.filename]#/etc/passwd# e [.filename]#/etc/master.passwd#. Esses arquivos são mantidos em sincronia entre si somente por meio de intervenção manual. Atualmente, quando um usuário é adicionado ao laboratório, o processo deve ser repetido em todas as 15 máquinas. 
A configuração do laboratório será a seguinte: [.informaltable] [cols="1,1,1", frame="none", options="header"] |=== | Nome da máquina | Endereço IP | Função da máquina |`ellington` |`10.0.0.2` |NIS master |`coltrane` |`10.0.0.3` |NIS slave |`basie` |`10.0.0.4` |Estação de trabalho do corpo docente |`bird` |`10.0.0.5` |Máquina Cliente |`cli[1-11]` |`10.0.0.[6-17]` |Outras Máquinas Clientes |=== Se esta é a primeira vez que um esquema de NIS está sendo desenvolvido, ele deve ser cuidadosamente planejado com antecedência. Independentemente do tamanho da rede, várias decisões precisam ser tomadas como parte do processo de planejamento. ==== Escolhendo um Nome de Domínio NIS Quando um cliente transmite suas solicitações de informações, ele inclui o nome do domínio NIS do qual faz parte. É assim que vários servidores em uma rede podem determinar qual servidor deve responder a qual solicitação. Pense no nome de domínio NIS como o nome de um grupo de hosts. Algumas organizações optam por usar o nome de domínio da Internet para o nome de domínio NIS. Isso não é recomendado, pois pode causar confusão ao tentar depurar problemas de rede. O nome de domínio NIS deve ser único dentro da rede e é útil se ele descrever o grupo de máquinas que representa. Por exemplo, o departamento de Arte da Acme Inc. pode estar no domínio NIS "acme-art". Este exemplo usará o nome de domínio `test-domain`. No entanto, alguns sistemas operacionais não-FreeBSD exigem que o nome de domínio NIS seja o mesmo que o nome de domínio da Internet. Se uma ou mais máquinas na rede tiverem essa restrição, o nome de domínio da Internet _deve_ ser usado como o nome de domínio NIS. ==== Requisitos Físicos do Servidor Há várias coisas que você deve ter em mente ao escolher uma máquina para usar como um servidor NIS. Como os clientes do NIS dependem da disponibilidade do servidor, escolha uma máquina que não seja reinicializada com frequência.
O servidor do NIS deve idealmente ser uma máquina autônoma cujo único propósito seja ser um servidor NIS. Se a rede não for muito usada, é aceitável colocar o servidor NIS em uma máquina que executa outros serviços. No entanto, se o servidor NIS ficar indisponível, isso afetará negativamente todos os clientes NIS. === Configurando o Servidor NIS Master As cópias canônicas de todos os arquivos NIS são armazenadas no servidor master. Os bancos de dados usados para armazenar as informações são chamados de mapas de NIS. No FreeBSD, estes mapas são armazenados em [.filename]#/var/yp/[nome_do_domínio]# onde [.filename]#[nome_do_dominio]# é o nome do domínio NIS. Como vários domínios são suportados, é possível ter vários diretórios, um para cada domínio. Cada domínio terá seu próprio conjunto independente de mapas. Os servidores master e slave do NIS lidam com todas as requisições NIS através do man:ypserv[8]. Esse daemon é responsável por receber solicitações de entrada de clientes NIS, traduzindo o domínio e o nome do mapa solicitados para um caminho para o arquivo de banco de dados correspondente e transmitindo dados do banco de dados de volta ao cliente. Configurar um servidor NIS master pode ser relativamente simples, dependendo das necessidades ambientais. Como o FreeBSD oferece suporte a NIS embutido, ele só precisa ser ativado adicionando as seguintes linhas ao arquivo [.filename]#/etc/rc.conf#: [.programlisting] .... nisdomainname="test-domain" <.> nis_server_enable="YES" <.> nis_yppasswdd_enable="YES" <.> .... <.> Esta linha define o nome de domínio NIS para `test-domain`. <.> Isto automatiza o início dos processos do servidor NIS quando o sistema é inicializado. <.> Isso habilita o daemon man:rpc.yppasswdd[8] para que os usuários possam alterar sua senha NIS de uma máquina cliente. É preciso ter cuidado em um domínio com vários servidores, no qual as máquinas do servidor também são clientes NIS. 
Geralmente, é uma boa ideia forçar os servidores a fazerem bind em si mesmos, em vez de permitir que eles transmitam solicitações de bind e, possivelmente, fiquem vinculados um ao outro. Modos de falha estranhos podem ocorrer se um servidor cair e outros dependerem dele. Eventualmente, todos os clientes atingirão o tempo limite e tentarão fazer bind em outros servidores, mas o atraso envolvido poderá ser considerável e o modo de falha ainda estará presente, uma vez que os servidores podem ligar-se entre si novamente. Um servidor que também é um cliente pode ser forçado a fazer bind em um servidor em particular adicionando estas linhas adicionais ao arquivo [.filename]#/etc/rc.conf#: [.programlisting] .... nis_client_enable="YES" <.> nis_client_flags="-S test-domain,server" <.> .... <.> Isso também habilita a funcionalidade de cliente NIS. <.> Esta linha define o nome de domínio NIS para test-domain e faz o bind no próprio servidor. Depois de salvar as edições, digite `/etc/netstart` para reiniciar a rede e aplicar os valores definidos no arquivo [.filename]#/etc/rc.conf#. Antes de inicializar os mapas de NIS, inicie man:ypserv[8]: [source,shell] .... # service ypserv start .... ==== Inicializando os mapas do NIS Os mapeamentos NIS são gerados a partir dos arquivos de configuração no diretório [.filename]#/etc# no NIS master, com uma exceção: [.filename]#/etc/master.passwd#. Isso evita a propagação de senhas para todos os servidores no domínio NIS. Portanto, antes de inicializar os mapas do NIS, configure os arquivos de senha primários: [source,shell] .... # cp /etc/master.passwd /var/yp/master.passwd # cd /var/yp # vi master.passwd .... É aconselhável remover todas as entradas de contas do sistema, bem como quaisquer contas de usuário que não precisem ser propagadas para os clientes do NIS, como o `root` e quaisquer outras contas administrativas.
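A limpeza da cópia do [.filename]#master.passwd# pode ser automatizada filtrando pelo UID, que é o terceiro campo do arquivo. Um esboço portátil, com entradas fictícias e um corte hipotético em UID 1000 (ajuste o limite à sua convenção de contas):

```shell
# Esboço: mantém apenas contas com UID >= 1000 em uma cópia fictícia do master.passwd.
cat > master.passwd.exemplo <<'EOF'
root:*:0:0::0:0:Charlie &:/root:/bin/csh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
jsmith:*:1001:1001::0:0:John Smith:/home/jsmith:/bin/sh
EOF

# O UID é o terceiro campo separado por ":"; o corte em 1000 é apenas ilustrativo.
awk -F: '$3 >= 1000' master.passwd.exemplo > master.passwd.filtrado
cat master.passwd.filtrado
```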
[NOTE]
====
Assegure-se de que o arquivo [.filename]#/var/yp/master.passwd# não seja legível pelo grupo nem pelos demais usuários, definindo suas permissões para `600`.
====

Depois de concluir esta tarefa, inicialize os mapas do NIS.
O FreeBSD inclui o script man:ypinit[8] para fazer isso.
Ao gerar mapas para o servidor master, inclua `-m` e especifique o nome de domínio NIS:

[source,shell]
....
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few
questions.  Questions will all be asked at the beginning of
the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n]  n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line.
When you are done with the list, type a <control D>.
master server   :  ellington
next host to add:  coltrane
next host to add:  ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct?  [y/n: y]  y

[..output from map generation..]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

Isto irá criar [.filename]#/var/yp/Makefile# a partir de [.filename]#/var/yp/Makefile.dist#.
Por padrão, este arquivo assume que o ambiente tem um único servidor NIS com apenas clientes FreeBSD.
Como `test-domain` tem um servidor slave, edite esta linha no arquivo [.filename]#/var/yp/Makefile# para que comece com um comentário (`#`):

[.programlisting]
....
NOPUSH = "True"
....

==== Adicionando novos usuários

Toda vez que um novo usuário é criado, a conta de usuário deve ser adicionada ao servidor NIS master e os mapas do NIS devem ser reconstruídos.
Até que isso ocorra, o novo usuário não poderá efetuar login em nenhum lugar, exceto no NIS master.
Por exemplo, para adicionar o novo usuário `jsmith` ao domínio `test-domain`, execute estes comandos no servidor master:

[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....

O usuário também pode ser adicionado usando `adduser jsmith` em vez de `pw useradd jsmith`.

=== Configurando um Servidor NIS Slave

Para configurar um servidor NIS slave, faça logon no servidor slave e edite o arquivo [.filename]#/etc/rc.conf# da mesma forma que no servidor master.
Não gere nenhum mapa de NIS, pois estes já existem no servidor master.
Ao executar `ypinit` no servidor slave, use `-s` (para slave) ao invés de `-m` (para master).
Esta opção requer o nome do NIS master além do nome do domínio, como visto neste exemplo:

[source,shell]
....
coltrane# ypinit -s ellington test-domain

Server Type: SLAVE Domain: test-domain Master: ellington

Creating an YP server will require that you answer a few
questions.  Questions will all be asked at the beginning of
the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n]  n

Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred

coltrane has been setup as an YP slave server without any errors.
Remember to update map ypservers on ellington.
....

Isto irá gerar um diretório no servidor slave chamado [.filename]#/var/yp/test-domain#, que contém cópias dos mapas do servidor NIS master.
Adicionar estas entradas ao arquivo [.filename]#/etc/crontab# em cada servidor slave forçará os slaves a sincronizarem seus mapas com os mapas do servidor master:

[.programlisting]
....
20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuid
....

Essas entradas não são obrigatórias porque o servidor master tenta enviar automaticamente quaisquer alterações de mapa para seus slaves.
No entanto, como os clientes podem depender do servidor slave para fornecer informações corretas de senha, recomenda-se forçar atualizações frequentes dos mapas de senha.
Isso é especialmente importante em redes ocupadas, nas quais as atualizações de mapas nem sempre são concluídas.

Para finalizar a configuração, execute `/etc/netstart` no servidor slave para iniciar os serviços do NIS.

=== Configurando um cliente NIS

Um cliente NIS é vinculado a um servidor NIS usando man:ypbind[8].
Esse daemon transmite solicitações RPC na rede local.
Essas solicitações especificam o nome do domínio configurado no cliente.
Se um servidor NIS no mesmo domínio receber uma das transmissões, ele responderá ao ypbind, que registrará o endereço do servidor.
Se houver vários servidores disponíveis, o cliente usará o endereço do primeiro servidor a responder e direcionará todas as suas solicitações de NIS para esse servidor.
O cliente irá automaticamente pingar o servidor regularmente para garantir que ele ainda esteja disponível.
Se ele não receber uma resposta dentro de um período de tempo razoável, o ypbind marcará o domínio como não vinculado e começará a transmitir novamente, na esperança de localizar outro servidor.

Para configurar uma máquina FreeBSD para ser um cliente NIS:

[.procedure]
====
. Edite o [.filename]#/etc/rc.conf# e adicione as seguintes linhas para definir o nome de domínio NIS e iniciar o man:ypbind[8] durante a inicialização da rede:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....
+
. Para importar todas as possíveis entradas de senha do servidor NIS, use `vipw` para remover todas as contas de usuário, exceto uma, do arquivo [.filename]#/etc/master.passwd#.
Ao remover as contas, lembre-se de que pelo menos uma conta local deve permanecer e que essa conta deve ser membro do grupo `wheel`.
Se houver um problema com o NIS, essa conta local poderá ser usada para efetuar login remotamente, tornar-se o superusuário e corrigir o problema.
Antes de salvar as edições, adicione a seguinte linha ao final do arquivo:
+
[.programlisting]
....
+:::::::::
....
+
Esta linha configura o cliente para fornecer uma conta no cliente a qualquer pessoa com uma conta válida nos mapas de senha do servidor NIS.
Existem várias maneiras de configurar o cliente NIS modificando essa linha.
Um método é descrito em <<network-netgroups,netgroups>>.
Para uma leitura mais detalhada, consulte o livro `Managing NFS and NIS`, publicado pela O'Reilly Media.
+
. Para importar todas as entradas de grupo possíveis do servidor NIS, adicione esta linha ao [.filename]#/etc/group#:
+
[.programlisting]
....
+:*::
....
====

Para iniciar imediatamente o cliente NIS, execute os seguintes comandos como superusuário:

[source,shell]
....
# /etc/netstart
# service ypbind start
....

Depois de concluir estas etapas, a execução de `ypcat passwd` no cliente deve mostrar o mapa [.filename]#passwd# do servidor.

=== Segurança NIS

Como o RPC é um serviço baseado em broadcast, qualquer sistema executando o ypbind dentro do mesmo domínio pode recuperar o conteúdo dos mapas do NIS.
Para evitar transações não autorizadas, o man:ypserv[8] suporta um recurso chamado "securenets", que pode ser usado para restringir o acesso a um dado conjunto de hosts.
Por padrão, essas informações são armazenadas no arquivo [.filename]#/var/yp/securenets#, a menos que o man:ypserv[8] seja iniciado com `-p` e um caminho alternativo.
Este arquivo contém entradas que consistem em uma especificação de rede e uma máscara de rede separadas por espaço em branco.
Linhas iniciando com `#` são consideradas comentários.
Um exemplo de [.filename]#securenets# pode ser assim:

[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1     255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0      255.255.240.0
....

Se o man:ypserv[8] receber uma solicitação de um endereço que corresponda a uma dessas regras, ele processará a solicitação normalmente.
Se o endereço não corresponder a uma regra, a solicitação será ignorada e uma mensagem de aviso será registrada.
Se o arquivo [.filename]#securenets# não existir, o `ypserv` permitirá conexões de qualquer host.

O crossref:security[tcpwrappers,TCP Wrapper] é um mecanismo alternativo para fornecer controle de acesso em vez do [.filename]#securenets#.
Embora ambos os mecanismos de controle de acesso forneçam alguma segurança, eles são vulneráveis a ataques como "IP spoofing".
Todo o tráfego relacionado ao NIS deve ser bloqueado no firewall.

Servidores que usam [.filename]#securenets# podem deixar de servir clientes legítimos de NIS com implementações arcaicas de TCP/IP.
Algumas dessas implementações definem todos os bits de host como zero ao fazer broadcasts ou não observam a máscara de sub-rede ao calcular o endereço de broadcast.
Embora alguns desses problemas possam ser corrigidos alterando a configuração do cliente, outros podem forçar a desativação desses sistemas clientes ou o abandono do [.filename]#securenets#.

O uso de TCP Wrapper aumenta a latência do servidor NIS.
O atraso adicional pode ser longo o suficiente para causar timeouts em programas clientes, especialmente em redes ocupadas com servidores NIS lentos.
Se um ou mais clientes sofrerem com a latência, converta esses clientes em servidores NIS slave e force-os a fazer bind em si mesmos.

==== Barrando alguns usuários

Neste exemplo, o sistema `basie` é uma estação de trabalho do corpo docente dentro do domínio NIS.
O mapa [.filename]#passwd# no servidor NIS master contém contas para professores e alunos.
Esta seção demonstra como permitir o login do corpo docente neste sistema e, ao mesmo tempo, recusar logins de alunos.

Para impedir que usuários específicos façam logon em um sistema, mesmo que eles estejam presentes no banco de dados do NIS, use `vipw` para adicionar `-_username_`, com o número correto de vírgulas, próximo ao final do arquivo [.filename]#/etc/master.passwd# no cliente, onde _username_ é o nome do usuário a ser impedido de logar.
A linha com o usuário bloqueado deve estar antes da linha `+` que permite usuários do NIS.
Neste exemplo, `bill` está impedido de logar no `basie`:

[source,shell]
....
basie# cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/usr/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin
-bill:::::::::
+:::::::::

basie#
....

[[network-netgroups]]
=== Usando Netgroups

A exclusão de usuários específicos do logon em sistemas individuais torna-se inviável em redes maiores e rapidamente perde o principal benefício do NIS: a administração _centralizada_.

Os netgroups foram desenvolvidos para lidar com redes grandes e complexas, com centenas de usuários e máquinas.
Seu uso é comparável ao dos grupos UNIX(TM), onde as principais diferenças são a falta de um ID numérico e a capacidade de definir um netgroup incluindo tanto contas de usuário quanto outros netgroups.

Para expandir o exemplo usado neste capítulo, o domínio NIS será estendido para adicionar os usuários e sistemas mostrados nas Tabelas 28.2 e 28.3:

.Usuários Adicionais
[cols="1,1", frame="none", options="header"]
|===
| Nome(s) de usuário
| Descrição

|`alpha`, `beta`
|Funcionários do departamento de TI

|`charlie`, `delta`
|Aprendizes do departamento de TI

|`echo`, `foxtrott`, `golf`, ...
|funcionários

|`able`, `baker`, ...
|estagiários
|===

.Sistemas Adicionais
[cols="1,1", frame="none", options="header"]
|===
| Nome(s) de máquina
| Descrição

|`war`, `death`, `famine`, `pollution`
|Somente funcionários de TI podem fazer logon nesses servidores.

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|Todos os membros do departamento de TI podem fazer login nesses servidores.

|`one`, `two`, `three`, `four`, ...
|Estações de trabalho comuns usadas pelos funcionários.

|`trashcan`
|Uma máquina muito antiga sem dados críticos.
Até os estagiários podem usar este sistema.
|===

Ao usar netgroups para configurar esse cenário, cada usuário é atribuído a um ou mais netgroups e os logins são permitidos ou proibidos para todos os membros do netgroup.
Ao adicionar uma nova máquina, as restrições de login devem ser definidas para todos os netgroups.
Quando um novo usuário é adicionado, a conta deve ser adicionada a um ou mais netgroups.
Se a configuração do NIS for planejada com cuidado, somente um arquivo de configuração central precisará ser modificado para conceder ou negar acesso às máquinas.

O primeiro passo é a inicialização do mapa NIS `netgroup`.
No FreeBSD, este mapa não é criado por padrão.
No servidor NIS master, use um editor para criar um mapa chamado [.filename]#/var/yp/netgroup#.
Este exemplo cria quatro netgroups para representar funcionários de TI, aprendizes de TI, funcionários e estagiários:

[.programlisting]
....
IT_EMP  (,alpha,test-domain)    (,beta,test-domain)
IT_APP  (,charlie,test-domain)  (,delta,test-domain)
USERS   (,echo,test-domain)     (,foxtrott,test-domain) \
        (,golf,test-domain)
INTERNS (,able,test-domain)     (,baker,test-domain)
....

Cada entrada configura um netgroup.
A primeira coluna em uma entrada é o nome do netgroup.
Cada conjunto entre parênteses representa um grupo de um ou mais usuários ou o nome de outro netgroup.
Ao especificar um usuário, os três campos delimitados por vírgula dentro de cada grupo representam:
. O nome do(s) host(s) onde os outros campos que representam o usuário são válidos.
Se um nome de host não for especificado, a entrada será válida em todos os hosts.
. O nome da conta que pertence a este netgroup.
. O domínio NIS da conta.
As contas podem ser importadas de outros domínios NIS para um netgroup.

Se um grupo contiver vários usuários, separe cada usuário com espaço em branco.
Além disso, cada campo pode conter curingas.
Veja man:netgroup[5] para detalhes.

Nomes de netgroup maiores que 8 caracteres não devem ser usados.
Os nomes diferenciam maiúsculas de minúsculas, e usar letras maiúsculas para os nomes de netgroup é uma maneira fácil de distinguir entre nomes de usuários, de máquinas e de netgroups.

Alguns clientes NIS não-FreeBSD não conseguem lidar com netgroups contendo mais de 15 entradas.
Esse limite pode ser contornado criando vários sub-netgroups com 15 usuários ou menos e um netgroup real consistindo dos sub-netgroups, como visto neste exemplo:

[.programlisting]
....
BIGGRP1  (,joe1,domain)  (,joe2,domain)  (,joe3,domain) [...]
BIGGRP2  (,joe16,domain)  (,joe17,domain) [...]
BIGGRP3  (,joe31,domain)  (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3
....

Repita este processo se mais de 225 (15 vezes 15) usuários existirem dentro de um único netgroup.

Para ativar e distribuir o novo mapa NIS:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

Isso gerará três mapas NIS: [.filename]#netgroup#, [.filename]#netgroup.byhost# e [.filename]#netgroup.byuser#.
Use a opção de chave de mapa do man:ypcat[1] para verificar se os novos mapas NIS estão disponíveis:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

A saída do primeiro comando deve lembrar o conteúdo de [.filename]#/var/yp/netgroup#.
O segundo comando só produz saída se netgroups específicos de host tiverem sido criados.
O terceiro comando é usado para obter a lista de netgroups de um usuário.
Para configurar um cliente, use man:vipw[8] para especificar o nome do netgroup.
Por exemplo, no servidor chamado `war`, substitua esta linha:

[.programlisting]
....
+:::::::::
....

por esta:

[.programlisting]
....
+@IT_EMP:::::::::
....

Isso especifica que apenas os usuários definidos no netgroup `IT_EMP` serão importados para o banco de dados de senhas deste sistema e que somente esses usuários terão permissão para efetuar login nele.

Essa configuração também se aplica à função `~` do shell e a todas as rotinas que convertem entre nomes de usuário e IDs numéricos de usuário.
Em outras palavras, `cd ~_user_` não funcionará, `ls -l` mostrará o ID numérico em vez do nome de usuário e `find . -user joe -print` falhará com a mensagem `No such user`.
Para corrigir isso, importe todas as entradas de usuário sem permitir que elas efetuem login nos servidores.
Isto pode ser conseguido adicionando uma linha extra:

[.programlisting]
....
+:::::::::/usr/sbin/nologin
....

Esta linha configura o cliente para importar todas as entradas, mas substituindo o shell nessas entradas por [.filename]#/usr/sbin/nologin#.
Certifique-se de que a linha extra seja colocada _após_ `+@IT_EMP:::::::::`.
Caso contrário, todas as contas de usuário importadas do NIS terão [.filename]#/usr/sbin/nologin# como shell de login e ninguém poderá efetuar login no sistema.

Para configurar os servidores menos importantes, substitua o antigo `+:::::::::` nesses servidores por estas linhas:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/usr/sbin/nologin
....

As linhas correspondentes para as estações de trabalho seriam:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/usr/sbin/nologin
....

O NIS suporta a criação de netgroups a partir de outros netgroups, o que pode ser útil se a política relacionada ao acesso de usuários for alterada.
Uma possibilidade é a criação de netgroups baseados em funções.
Por exemplo, pode-se criar um netgroup chamado `BIGSRV` para definir as restrições de login para os servidores importantes, outro netgroup chamado `SMALLSRV` para os servidores menos importantes e um terceiro netgroup chamado `USERBOX` para as estações de trabalho.
Cada um desses netgroups contém os netgroups com permissão para efetuar login nessas máquinas.
As novas entradas para o mapa NIS `netgroup` seriam assim:

[.programlisting]
....
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP  ITINTERN
USERBOX   IT_EMP  ITINTERN USERS
....

Esse método de definir restrições de login funciona razoavelmente bem quando é possível definir grupos de máquinas com restrições idênticas.
Infelizmente, esta é a exceção, e não a regra.
Na maioria das vezes, é necessária a capacidade de definir restrições de login por máquina.

As definições de netgroup específicas por máquina são outra possibilidade para lidar com as mudanças na política.
Neste cenário, o [.filename]#/etc/master.passwd# de cada sistema contém duas linhas que começam com "+".
A primeira linha adiciona um netgroup com as contas com permissão para entrar nesta máquina e a segunda linha adiciona todas as outras contas com [.filename]#/usr/sbin/nologin# como shell.
Recomenda-se usar a versão em letras maiúsculas do nome da máquina como o nome do netgroup:

[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/usr/sbin/nologin
....

Quando esta tarefa estiver completa em todas as máquinas, não haverá mais necessidade de modificar as versões locais do [.filename]#/etc/master.passwd# novamente.
Todas as alterações posteriores podem ser feitas modificando o mapa NIS.
Aqui está um exemplo de um possível mapa `netgroup` para este cenário:

[.programlisting]
....
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1   DEPT2     DEPT3
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP    ITINTERN
USERBOX   IT_EMP  ITINTERN  USERS
#
# And a groups for a special tasks
# Allow echo and golf to access our anti-virus-machine
SECURITY  IT_EMP  (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV  (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus-machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

Pode não ser sempre aconselhável usar netgroups baseados em máquina.
Ao implantar algumas dúzias ou centenas de sistemas, netgroups baseados em funções, em vez de netgroups baseados em máquina, podem ser usados para manter o tamanho do mapa NIS dentro de limites razoáveis.

=== Formatos de Senha

O NIS requer que todos os hosts em um domínio NIS usem o mesmo formato para criptografar senhas.
Se os usuários tiverem problemas para autenticar em um cliente NIS, isso pode ser devido a um formato de senha diferente.
Em uma rede heterogênea, o formato deve ser suportado por todos os sistemas operacionais, onde DES é o padrão comum mais baixo.

Para verificar qual formato um servidor ou cliente está usando, veja esta seção do [.filename]#/etc/login.conf#:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

Neste exemplo, o sistema está usando o formato DES.
Outros valores possíveis são `blf` para Blowfish e `md5` para senhas criptografadas com MD5.
Se o formato em um host precisar ser editado para corresponder ao que está sendo usado no domínio NIS, o banco de dados de recursos de login deve ser reconstruído após salvar a alteração:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
O formato das senhas das contas de usuários existentes não será atualizado até que cada usuário mude sua senha _após_ a reconstrução do banco de dados de recursos de login.
====

[[network-ldap]]
== Protocolo Leve de Acesso a Diretórios (LDAP)

O Protocolo Leve de Acesso a Diretórios (LDAP) é um protocolo da camada de aplicação usado para acessar, modificar e autenticar objetos usando um serviço distribuído de informações de diretório.
Pense nele como uma lista telefônica ou livro de registros que armazena vários níveis de informações hierárquicas e homogêneas.
Ele é usado em redes Active Directory e OpenLDAP e permite que os usuários acessem vários níveis de informações internas utilizando uma única conta.
Por exemplo, a autenticação de email, a obtenção de informações de contato dos funcionários e a autenticação interna de sites podem usar uma única conta de usuário na base de registros do servidor LDAP.

Esta seção fornece um guia de início rápido para configurar um servidor LDAP em um sistema FreeBSD.
Ela pressupõe que o administrador já tenha um plano de design que inclua o tipo de informação a ser armazenada, para que essas informações serão usadas, quais usuários devem ter acesso a elas e como protegê-las contra acesso não autorizado.

=== Terminologia e Estrutura do LDAP

O LDAP usa vários termos que devem ser entendidos antes de iniciar a configuração.
Todas as entradas de diretório consistem em um grupo de _attributes_.
Cada um desses conjuntos de atributos contém um identificador exclusivo conhecido como _Distinguished Name_ (DN), que é normalmente criado a partir de vários outros atributos, como o Common Name ou o _Relative Distinguished Name_ (RDN).
De maneira semelhante a como os diretórios têm caminhos absolutos e relativos, considere um DN como um caminho absoluto e o RDN como o caminho relativo.

Um exemplo de entrada LDAP é semelhante ao seguinte.
Este exemplo procura a entrada para a conta de usuário especificada (`uid`), unidade organizacional (`ou`) e organização (`o`):

[source,shell]
....
% ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base <uid=trhodes,ou=users,o=example.com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

Esta entrada de exemplo mostra os valores para os atributos `dn`, `mail`, `cn`, `uid` e `telephoneNumber`.
O atributo `cn` é o RDN.

Mais informações sobre o LDAP e sua terminologia podem ser encontradas em http://www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/intro.html].

[[ldap-config]]
=== Configurando um servidor LDAP

O FreeBSD não provê um servidor LDAP embutido.
Comece a configuração instalando o pacote ou port package:net/openldap-server[]:

[source,shell]
....
# pkg install openldap-server
....

Há um grande conjunto de opções padrão habilitadas no extref:{linux-users}[pacote, software].
Reveja-as executando `pkg info openldap-server`.
Se não forem suficientes (por exemplo, se o suporte a SQL for necessário), considere recompilar o port usando o framework crossref:ports[ports-using,apropriado].

A instalação cria o diretório [.filename]#/var/db/openldap-data# para conter os dados.
O diretório para armazenar os certificados deve ser criado:

[source,shell]
....
# mkdir /usr/local/etc/openldap/private
....

A próxima fase é configurar a autoridade de certificação.
Os seguintes comandos devem ser executados em [.filename]#/usr/local/etc/openldap/private#.
Isso é importante, pois as permissões de arquivo precisam ser restritivas e os usuários não devem ter acesso a esses arquivos.
Informações mais detalhadas sobre certificados e seus parâmetros podem ser encontradas em crossref:security[openssl,OpenSSL].
Para criar a Autoridade de Certificação, comece com este comando e siga os prompts:

[source,shell]
....
# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt
....

As entradas para os prompts podem ser genéricas, _exceto_ o `Common Name`.
Esta entrada deve ser _diferente_ do nome de host do sistema.
Se este será um certificado auto-assinado, prefixe o nome do host com `CA` para indicar a Autoridade de Certificação.

A próxima tarefa é criar uma solicitação de assinatura de certificado e uma chave privada.
Insira este comando e siga os prompts:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout server.key -out server.csr
....

Durante o processo de geração do certificado, certifique-se de configurar corretamente o atributo `Common Name`.
A Solicitação de Assinatura de Certificado deve ser assinada pela Autoridade de Certificação para se tornar um certificado válido:

[source,shell]
....
# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial
....

A parte final do processo de geração de certificados é gerar e assinar os certificados dos clientes:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout client.key -out client.csr
# openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key
....

Lembre-se de usar o mesmo atributo `Common Name` quando solicitado.
Quando terminar, assegure-se de que um total de oito (8) novos arquivos tenham sido gerados através dos comandos precedentes.
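O fluxo de assinatura acima pode ser conferido rapidamente com `openssl verify`. O esboço abaixo reproduz o processo de forma não interativa em um diretório temporário, usando `-subj` em vez dos prompts; os nomes de arquivo e o `CN=ldap.example.org` são apenas ilustrativos:

```shell
# Esboço: gera uma CA e um certificado de servidor de teste em um
# diretório temporário e valida a cadeia resultante.
cd "$(mktemp -d)"
openssl req -days 365 -nodes -new -x509 \
    -subj "/CN=CA ldap.example.org" -keyout ca.key -out ca.crt
openssl req -days 365 -nodes -new \
    -subj "/CN=ldap.example.org" -keyout server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -out server.crt \
    -CA ca.crt -CAkey ca.key -CAcreateserial
# O certificado assinado deve validar contra a CA:
openssl verify -CAfile ca.crt server.crt
# deve imprimir: server.crt: OK
```

A mesma verificação pode ser executada sobre os arquivos reais da seção (por exemplo, `openssl verify -CAfile ../ca.crt ../server.crt` a partir do diretório [.filename]#private#) antes de apontar o slapd para eles.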
O daemon que executa o servidor OpenLDAP é o [.filename]#slapd#.
Sua configuração é feita através do [.filename]#slapd.ldif#: o antigo [.filename]#slapd.conf# foi descontinuado pelo OpenLDAP.

http://www.openldap.org/doc/admin24/slapdconf2.html[Exemplos de configuração] para o [.filename]#slapd.ldif# estão disponíveis e também podem ser encontrados em [.filename]#/usr/local/etc/openldap/slapd.ldif.sample#.
As opções estão documentadas em slapd-config(5).
Cada seção do [.filename]#slapd.ldif#, como todos os outros conjuntos de atributos LDAP, é identificada exclusivamente por meio de um DN.
Certifique-se de que nenhuma linha em branco seja deixada entre a instrução `dn:` e o final desejado da seção.
No exemplo a seguir, o TLS será usado para implementar um canal seguro.
A primeira seção representa a configuração global:

[.programlisting]
....
#
# See slapd-config(5) for details on configuration options.
# This file should NOT be world readable.
#
dn: cn=config
objectClass: olcGlobal
cn: config
#
#
# Define global ACLs to disable default read access.
#
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
olcTLSCertificateFile: /usr/local/etc/openldap/server.crt
olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key
olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt
#olcTLSCipherSuite: HIGH
olcTLSProtocolMin: 3.1
olcTLSVerifyClient: never
....

A Autoridade de Certificação, o certificado do servidor e os arquivos de chave privada do servidor devem ser especificados aqui.
Recomenda-se deixar que os clientes escolham a cifra de segurança e omitir a opção `olcTLSCipherSuite` (incompatível com clientes TLS diferentes do [.filename]#openssl#).
A opção `olcTLSProtocolMin` permite que o servidor exija um nível mínimo de segurança: ela é recomendada.
Enquanto a verificação é obrigatória para o servidor, ela não é para o cliente: `olcTLSVerifyClient: never`.
A segunda seção é sobre os módulos de backend e pode ser configurada da seguinte maneira:

[.programlisting]
....
#
# Load dynamic backend modules:
#
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/local/libexec/openldap
olcModuleload: back_mdb.la
#olcModuleload: back_bdb.la
#olcModuleload: back_hdb.la
#olcModuleload: back_ldap.la
#olcModuleload: back_passwd.la
#olcModuleload: back_shell.la
....

A terceira seção é dedicada a carregar os esquemas `ldif` necessários para serem usados pelos bancos de dados: eles são essenciais.

[.programlisting]
....
dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
include: file:///usr/local/etc/openldap/schema/core.ldif
include: file:///usr/local/etc/openldap/schema/cosine.ldif
include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif
include: file:///usr/local/etc/openldap/schema/nis.ldif
....

Em seguida, a seção de configuração do frontend:

[.programlisting]
....
# Frontend settings
#
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
olcAccess: to * by * read
#
# Sample global access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
#  Allow self write access
#  Allow authenticated users read access
#  Allow anonymous users to authenticate
#
#olcAccess: to dn.base="" by * read
#olcAccess: to dn.base="cn=Subschema" by * read
#olcAccess: to *
# by self write
# by users read
# by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
#
olcPasswordHash: {SSHA}
# {SSHA} is already the default for olcPasswordHash
....
Outra seção é dedicada ao _backend de configuração_: a única maneira de acessar posteriormente a configuração do servidor OpenLDAP é como superusuário global.

[.programlisting]
....
dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: to * by * none
olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U
....

O nome de usuário administrador padrão é `cn=config`. Digite [.filename]#slappasswd# em um shell, escolha uma senha e use seu hash em `olcRootPW`. Se essa opção não for especificada agora, antes de o arquivo [.filename]#slapd.ldif# ser importado, ninguém poderá modificar a seção de _configuração global_.

A última seção é sobre o backend do banco de dados:

[.programlisting]
....
#######################################################################
# LMDB database definitions
#######################################################################
#
dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcDbMaxSize: 1073741824
olcSuffix: dc=domain,dc=example
olcRootDN: cn=mdbadmin,dc=domain,dc=example
# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd-config(5) for details.
# Use of strong authentication encouraged.
olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
olcDbDirectory: /var/db/openldap-data
# Indices to maintain
olcDbIndex: objectClass eq
....

Esse banco de dados hospeda os _conteúdos reais_ do diretório LDAP. Outros tipos diferentes de `mdb` estão disponíveis. Um superusuário deste banco de dados, que não deve ser confundido com o global, é configurado aqui: um nome de usuário (possivelmente customizado) em `olcRootDN` e o hash da senha em `olcRootPW`; [.filename]#slappasswd# pode ser usado como antes.
Esse http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;hb=HEAD[repositório] contém quatro exemplos do arquivo [.filename]#slapd.ldif#. Para converter um arquivo [.filename]#slapd.conf# existente em [.filename]#slapd.ldif#, consulte http://www.openldap.org/doc/admin24/slapdconf2.html[esta página] (note que isso pode introduzir algumas opções inúteis).

Quando a configuração estiver concluída, o [.filename]#slapd.ldif# deve ser colocado em um diretório vazio. Recomenda-se criá-lo assim:

[source,shell]
....
# mkdir /usr/local/etc/openldap/slapd.d/
....

Importe o banco de dados de configuração:

[source,shell]
....
# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif
....

Inicie o daemon [.filename]#slapd#:

[source,shell]
....
# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/
....

A opção `-d` pode ser usada para depuração, conforme especificado em slapd(8). Para verificar se o servidor está em execução e funcionando:

[source,shell]
....
# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
# extended LDIF
#
# LDAPv3
# base <> with scope baseObject
# filter: (objectclass=*)
# requesting: namingContexts
#

#
dn:
namingContexts: dc=domain,dc=example

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

Ainda é preciso estabelecer a confiança no servidor. Se isso nunca foi feito antes, siga estas instruções. Instale o pacote ou o port OpenSSL:

[source,shell]
....
# pkg install openssl
....

No diretório onde o [.filename]#ca.crt# está armazenado (neste exemplo, [.filename]#/usr/local/etc/openldap#), execute:

[source,shell]
....
# c_rehash .
....

Tanto a CA quanto o certificado do servidor agora são reconhecidos corretamente em suas respectivas funções. Para verificar isso, execute este comando no diretório do [.filename]#server.crt#:

[source,shell]
....
# openssl verify -verbose -CApath . server.crt
....

Se o [.filename]#slapd# estiver em execução, reinicie-o. Como declarado em [.filename]#/usr/local/etc/rc.d/slapd#, para executar corretamente o [.filename]#slapd# na inicialização, as seguintes linhas devem ser adicionadas ao [.filename]#/etc/rc.conf#:

[.programlisting]
....
slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/
ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"
slapd_cn_config="YES"
....

O [.filename]#slapd# não fornece depuração na inicialização. Verifique o [.filename]#/var/log/debug.log#, a saída de `dmesg -a` e o [.filename]#/var/log/messages# para este propósito.

O exemplo a seguir adiciona o grupo `team` e o usuário `john` ao banco de dados LDAP de `domain.example`, que ainda está vazio. Primeiro, crie o arquivo [.filename]#domain.ldif#:

[source,shell]
....
# cat domain.ldif
dn: dc=domain,dc=example
objectClass: dcObject
objectClass: organization
o: domain.example
dc: domain

dn: ou=groups,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: groups

dn: ou=users,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: users

dn: cn=team,ou=groups,dc=domain,dc=example
objectClass: top
objectClass: posixGroup
cn: team
gidNumber: 10001

dn: uid=john,ou=users,dc=domain,dc=example
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: John McUser
uid: john
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/john/
loginShell: /usr/bin/bash
userPassword: secret
....

Veja a documentação do OpenLDAP para mais detalhes. Use [.filename]#slappasswd# para substituir a senha `secret` em texto puro por um hash em `userPassword`. O caminho especificado em `loginShell` deve existir em todos os sistemas onde `john` pode se logar. Finalmente, use o administrador do `mdb` para modificar o banco de dados:

[source,shell]
....
# ldapadd -W -D "cn=mdbadmin,dc=domain,dc=example" -f domain.ldif
....
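Entradas existentes podem ser alteradas de forma análoga. Apenas como ilustração (usando os nomes do exemplo acima), para trocar o shell de login de `john`, crie um arquivo LDIF como o seguinte e aplique-o com `ldapmodify -W -D "cn=mdbadmin,dc=domain,dc=example" -f arquivo.ldif`:

[.programlisting]
....
dn: uid=john,ou=users,dc=domain,dc=example
changetype: modify
replace: loginShell
loginShell: /bin/sh
....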
Modificações na seção _configurações globais_ podem ser feitas apenas pelo superusuário global. Por exemplo, suponha que a opção `olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` foi inicialmente especificada e agora deve ser removida. Primeiro, crie um arquivo com o seguinte conteúdo:

[source,shell]
....
# cat global_mod
dn: cn=config
changetype: modify
delete: olcTLSCipherSuite
....

Em seguida, aplique as modificações:

[source,shell]
....
# ldapmodify -f global_mod -x -D "cn=config" -W
....

Quando solicitado, forneça a senha escolhida na seção _backend de configuração_. O nome de usuário não é necessário: aqui, `cn=config` representa o DN da seção do banco de dados a ser modificada. Como alternativa, use `ldapmodify` para excluir uma única linha do banco de dados e `ldapdelete` para excluir uma entrada inteira.

Se algo der errado, ou se o superusuário global não puder acessar o backend de configuração, é possível excluir e reescrever toda a configuração:

[source,shell]
....
# rm -rf /usr/local/etc/openldap/slapd.d/
....

O [.filename]#slapd.ldif# pode então ser editado e importado novamente. Por favor, siga este procedimento somente quando nenhuma outra solução estiver disponível.

Esta é a configuração apenas do servidor. A mesma máquina também pode hospedar um cliente LDAP, com sua própria configuração separada.

[[network-dhcp]]
== Protocolo de configuração dinâmica de hosts (DHCP)

O protocolo de configuração dinâmica de hosts (DHCP) permite que um sistema se conecte a uma rede para receber as informações de endereçamento necessárias para a comunicação nessa rede. O FreeBSD inclui a versão do `dhclient` do OpenBSD, que é usada pelo cliente para obter as informações de endereçamento. O FreeBSD não instala um servidor DHCP, mas vários servidores estão disponíveis na coleção de Ports do FreeBSD. O protocolo DHCP é totalmente descrito em http://www.freesoft.org/CIE/RFC/2131/[RFC 2131].
Recursos informativos também estão disponíveis em http://www.isc.org/downloads/dhcp/[isc.org/downloads/dhcp/]. Esta seção descreve como usar o cliente DHCP integrado. Em seguida, descreve como instalar e configurar um servidor DHCP.

[NOTE]
====
No FreeBSD, o dispositivo man:bpf[4] é necessário tanto pelo servidor DHCP quanto pelo cliente DHCP. Este dispositivo está incluído no kernel [.filename]#GENERIC# que é instalado com o FreeBSD. Usuários que preferem criar um kernel personalizado precisam manter este dispositivo se o DHCP for usado.

Deve-se notar que o [.filename]#bpf# também permite que usuários privilegiados executem sniffers de pacotes de rede naquele sistema.
====

=== Configurando um cliente DHCP

O suporte ao cliente DHCP está incluído no instalador do FreeBSD, facilitando a configuração de um sistema recém-instalado para receber automaticamente as informações de endereçamento de rede de um servidor DHCP existente. Consulte crossref:bsdinstall[bsdinstall-post,Pós-instalação] para exemplos de configuração de rede.

Quando o `dhclient` é executado na máquina cliente, ele começa a transmitir solicitações (broadcast) das informações de configuração. Por padrão, esses pedidos usam a porta UDP 68. O servidor responde na porta UDP 67, fornecendo ao cliente um endereço IP e outras informações de rede relevantes, como máscara de sub-rede, gateway padrão e endereços de servidores DNS. Essas informações vêm na forma de uma "concessão" (lease) de DHCP, válida por um tempo configurável. Isso permite que endereços IP obsoletos, de clientes que não estão mais conectados à rede, sejam reutilizados automaticamente. Clientes DHCP podem obter uma grande quantidade de informações do servidor. Uma lista exaustiva pode ser encontrada em man:dhcp-options[5].

Por padrão, quando um sistema FreeBSD inicializa, seu cliente DHCP é executado em segundo plano, ou _asynchronously_.
Outros scripts de inicialização continuam sendo executados enquanto o processo DHCP é concluído, o que acelera a inicialização do sistema.

O DHCP em segundo plano funciona bem quando o servidor DHCP responde rapidamente às solicitações do cliente. No entanto, o DHCP pode levar muito tempo para ser concluído em alguns sistemas. Se os serviços de rede tentarem executar antes que o DHCP tenha atribuído as informações de endereçamento de rede, eles falharão. O uso do DHCP no modo _synchronous_ evita esse problema, pois pausa a inicialização até que a configuração DHCP seja concluída.

Esta linha no [.filename]#/etc/rc.conf# é usada para configurar o modo em segundo plano ou assíncrono:

[.programlisting]
....
ifconfig_fxp0="DHCP"
....

Esta linha pode já existir se o sistema foi configurado para usar o DHCP durante a instalação. Substitua o _fxp0_ mostrado nesses exemplos pelo nome da interface a ser configurada dinamicamente, conforme descrito em crossref:config[config-network-setup,Configurando Placas de Interface de Rede].

Para configurar o sistema para usar o modo síncrono e pausar durante a inicialização enquanto o DHCP é concluído, use "`SYNCDHCP`":

[.programlisting]
....
ifconfig_fxp0="SYNCDHCP"
....

Opções adicionais do cliente estão disponíveis. Procure por `dhclient` em man:rc.conf[5] para detalhes.

O cliente DHCP usa os seguintes arquivos:

* [.filename]#/etc/dhclient.conf#
+
O arquivo de configuração usado pelo `dhclient`. Normalmente, esse arquivo contém apenas comentários, pois os padrões são adequados para a maioria dos clientes. Este arquivo de configuração é descrito em man:dhclient.conf[5].

* [.filename]#/sbin/dhclient#
+
Maiores informações sobre o comando em si podem ser encontradas em man:dhclient[8].

* [.filename]#/sbin/dhclient-script#
+
O script de configuração do cliente DHCP específico do FreeBSD. Ele é descrito em man:dhclient-script[8], mas não deve precisar de nenhuma modificação do usuário para funcionar corretamente.
* [.filename]#/var/db/dhclient.leases.interface# + O cliente DHCP mantém um banco de dados de concessões válidas neste arquivo, que é escrito como um log e é descrito em man:dhclient.leases[5]. [[network-dhcp-server]] === Instalando e configurando um servidor DHCP Esta seção demonstra como configurar um sistema FreeBSD para atuar como um servidor DHCP usando a implementação do servidor DHCP do Internet Systems Consortium (ISC). Esta implementação e a sua documentação podem ser instaladas usando o pacote ou port package:net/isc-dhcp44-server[]. A instalação do package:net/isc-dhcp44-server[] instala um arquivo de configuração de exemplo. Copie o [.filename]#/usr/local/etc/dhcpd.conf.example# para [.filename]#/usr/local/etc/dhcpd.conf# e faça as alterações neste novo arquivo. O arquivo de configuração é composto de declarações para sub-redes e hosts que definem as informações que são fornecidas aos clientes DHCP. Por exemplo, essas linhas configuram o seguinte: [.programlisting] .... option domain-name "example.org";<.> option domain-name-servers ns1.example.org;<.> option subnet-mask 255.255.255.0;<.> default-lease-time 600;<.> max-lease-time 72400;<.> ddns-update-style none;<.> subnet 10.254.239.0 netmask 255.255.255.224 { range 10.254.239.10 10.254.239.20;<.> option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.> } host fantasia { hardware ethernet 08:00:07:26:c0:a5;<.> fixed-address fantasia.fugue.com;<.> } .... <.> Esta opção especifica o domínio de pesquisa padrão que será fornecido aos clientes. Consulte man:resolv.conf[5] para obter maiores informações. <.> Esta opção especifica uma lista separada por vírgula de servidores DNS que o cliente deve usar. Eles podem ser listados por seus nomes de domínio totalmente qualificados (FQDN), como visto no exemplo, ou por seus endereços de IP. <.> A máscara de sub-rede que será fornecida aos clientes. <.> O tempo de expiração da concessão padrão em segundos. 
Um cliente pode ser configurado para substituir esse valor.
<.> O período máximo de tempo permitido, em segundos, para uma concessão. Se um cliente solicitar uma concessão mais longa, uma concessão ainda será emitida, mas será válida apenas pelo tempo especificado em `max-lease-time`.
<.> O padrão `none` desabilita as atualizações de DNS dinâmicas. Alterar isso para `interim` configura o servidor DHCP para atualizar um servidor DNS sempre que uma concessão for emitida, para que o servidor DNS saiba quais endereços IP estão associados a quais computadores na rede. Não altere a configuração padrão, a menos que o servidor DNS tenha sido configurado para suportar DNS dinâmico.
<.> Esta linha cria um conjunto de endereços IP disponíveis que são reservados para alocação a clientes DHCP. O intervalo de endereços deve ser válido para a rede ou sub-rede especificada na linha anterior.
<.> Declara o gateway padrão que é válido para a rede ou sub-rede especificada antes do colchete de abertura `{`.
<.> Especifica o endereço de hardware MAC de um cliente para que o servidor DHCP possa reconhecer o cliente quando ele fizer uma solicitação.
<.> Especifica que este host deve sempre receber o mesmo endereço IP. A utilização do nome do host está correta, pois o servidor DHCP resolverá o nome do host antes de retornar as informações de concessão.

Este arquivo de configuração suporta muito mais opções. Consulte o man:dhcpd.conf[5], instalado com o servidor, para obter detalhes e exemplos.

Uma vez que a configuração do [.filename]#dhcpd.conf# estiver completa, habilite o servidor DHCP em [.filename]#/etc/rc.conf#:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Substitua o `dc0` pela interface (ou interfaces, separadas por espaço em branco) nas quais o servidor DHCP deverá escutar por solicitações de clientes DHCP.

Inicie o servidor executando o seguinte comando:

[source,shell]
....
# service isc-dhcpd start
....
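Depois que o servidor começa a conceder endereços, as concessões ativas podem ser inspecionadas em [.filename]#/var/db/dhcpd.leases#. Uma entrada típica (com valores meramente ilustrativos, seguindo o host `fantasia` do exemplo acima) se parece com:

[.programlisting]
....
lease 10.254.239.10 {
  starts 3 2020/10/21 18:33:07;
  ends 3 2020/10/21 18:43:07;
  binding state active;
  hardware ethernet 08:00:07:26:c0:a5;
  client-hostname "fantasia";
}
....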
Quaisquer mudanças futuras na configuração do servidor exigirão que o serviço dhcpd seja interrompido e, em seguida, iniciado usando man:service[8]. O servidor DHCP usa os seguintes arquivos. Observe que as páginas de manual são instaladas com o software do servidor. * [.filename]#/usr/local/sbin/dhcpd# + Maiores informações sobre o servidor dhcpd podem ser encontradas em man:dhcpd[8]. * [.filename]#/usr/local/etc/dhcpd.conf# + O arquivo de configuração do servidor precisa conter todas as informações que devem ser fornecidas aos clientes, juntamente com informações sobre a operação do servidor. Este arquivo de configuração é descrito no man:dhcpd.conf[5]. * [.filename]#/var/db/dhcpd.leases# + O servidor DHCP mantém um banco de dados das concessões que ele emitiu neste arquivo, que é gravado como um log. Consulte man:dhcpd.leases[5], o qual fornece uma descrição um pouco mais longa. * [.filename]#/usr/local/sbin/dhcrelay# + Esse daemon é usado em ambientes avançados, onde um servidor DHCP encaminha uma solicitação de um cliente para outro servidor DHCP em uma rede separada. Se esta funcionalidade for necessária, instale o pacote ou port package:net/isc-dhcp44-relay[]. A instalação inclui o man:dhcrelay[8], que fornece maiores detalhes. [[network-dns]] == Sistema de Nomes de Domínio (DNS) O Sistema de Nomes de Domínio (DNS) é o protocolo através do qual os nomes de domínio são mapeados para endereços de IP e vice-versa. O DNS é coordenado pela Internet através de um sistema complexo de raiz de autoridade, Top Level Domain (TLD) e outros servidores de nomes de menor escala, que hospedam e armazenam em cache domínios individuais. Não é necessário executar um servidor de nomes para executar pesquisas de DNS em um sistema. A tabela a seguir descreve alguns dos termos associados ao DNS: .Terminologia DNS [cols="1,1", frame="none", options="header"] |=== | Termo | Definição |Encaminhamento de DNS |Mapeamento de nomes de hosts para endereços de IP. 
|Origem |Refere-se ao domínio coberto em um arquivo de zona específico. |Resolver |Um processo do sistema através do qual uma máquina consulta um servidor de nomes para informações de zona. |DNS Reverso |Mapeamento de endereços IP para hostnames. |Root zone |O início da hierarquia da zona da Internet. Todas as zonas se enquadram na zona de raiz, semelhante a como todos os arquivos em um sistema de arquivos se enquadram no diretório raiz. |Zona |Um domínio individual, subdomínio ou parte do DNS administrado pela mesma autoridade. |=== Exemplos de zonas: * `.` é como a zona root é geralmente referida na documentação. * `org.` é um domínio de nível superior (TLD) sob a zona raiz. * `example.org.` é uma zona sob o TLD `org.`. * `1.168.192.in-addr.arpa` é uma zona que faz referência a todos os endereços IP que se enquadram no espaço de endereçamento IP `192.168.1.*` . Como se pode ver, a parte mais específica de um nome de host aparece à esquerda. Por exemplo, `example.org.` é mais específico que `org.`, como `org.` é mais específico que a zona raiz . O layout de cada parte de um nome de host é muito parecido com um sistema de arquivos: o diretório [.filename]#/dev# está dentro da raiz e assim por diante. === Razões para executar um servidor de nomes Os servidores de nomes geralmente vêm em duas formas: servidores de nomes autoritativos e servidores de nomes de armazenamento em cache (também conhecidos como servidores de resolução). Um servidor de nomes autoritativo é necessário quando: * Alguém quer servir ao mundo informações de DNS, respondendo autoritariamente a consultas. * Um domínio, como `example.org`, está registrado e os endereços IP precisam ser atribuídos a nomes de host sob ele. * Um bloco de endereços IP requer entradas reversas de DNS (IP para hostname). * Um servidor de nomes de backup ou secundário, chamado de escravo, responderá às consultas. 
Um servidor de nomes em cache é necessário quando:

* Um servidor DNS local pode armazenar em cache e responder mais rapidamente do que consultar um servidor de nomes externo.

Quando alguém pergunta por `www.FreeBSD.org`, o resolvedor geralmente consulta o servidor de nomes do ISP e recupera a resposta. Com um servidor DNS de cache local, a consulta só precisa ser feita uma única vez ao mundo externo pelo servidor de cache DNS. Consultas adicionais não precisarão sair da rede local, pois as informações já estão armazenadas em um cache local.

=== Configuração do servidor de DNS

O Unbound é fornecido no sistema básico do FreeBSD. Por padrão, ele fornecerá a resolução de DNS apenas para a máquina local. Embora o pacote do sistema básico possa ser configurado para fornecer serviços de resolução além da máquina local, é recomendável que esses requisitos sejam atendidos instalando o Unbound da coleção de ports do FreeBSD.

Para ativar o Unbound, adicione o seguinte ao [.filename]#/etc/rc.conf#:

[.programlisting]
....
local_unbound_enable="YES"
....

Quaisquer servidores de nomes existentes em [.filename]#/etc/resolv.conf# serão configurados como forwarders na nova configuração do Unbound.

[NOTE]
====
Se algum dos servidores de nomes listados não suportar o DNSSEC, a resolução local de DNS falhará. Certifique-se de testar cada servidor de nomes e remover qualquer um que falhe no teste. O seguinte comando mostrará a árvore de confiança, ou uma falha, para um servidor de nomes em execução em `192.168.1.1`:
====

[source,shell]
....
% drill -S FreeBSD.org @192.168.1.1
....

Quando cada servidor de nomes for confirmado como suportando DNSSEC, inicie o Unbound:

[source,shell]
....
# service local_unbound onestart
....

Isso cuidará da atualização do arquivo [.filename]#/etc/resolv.conf# para que as consultas a domínios protegidos por DNSSEC funcionem. Por exemplo, execute a seguinte validação DNSSEC da árvore de confiança de FreeBSD.org:

[source,shell]
....
% drill -S FreeBSD.org ;; Number of trusted keys: 1 ;; Chasing: freebsd.org. A DNSSEC Trust tree: freebsd.org. (A) |---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256) |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257) |---freebsd.org. (DS keytag: 32659 digest type: 2) |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256) |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257) |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257) |---org. (DS keytag: 21366 digest type: 1) | |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) | |---. (DNSKEY keytag: 19036 alg: 8 flags: 257) |---org. (DS keytag: 21366 digest type: 2) |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) |---. (DNSKEY keytag: 19036 alg: 8 flags: 257) ;; Chase successful .... [[network-apache]] == Servidor HTTP Apache O open source Apache HTTP Server é o servidor Web mais utilizado. O FreeBSD não instala este servidor web por padrão, mas ele pode ser instalado a partir do pacote ou Port package:www/apache24[]. Esta seção resume como configurar e iniciar a versão 2._x_ do Servidor HTTP Apache no FreeBSD. Para informações mais detalhadas sobre o Apache2.X e suas diretivas de configuração, consulte http://httpd.apache.org/[httpd.apache.org]. === Configurando e Iniciando o Apache No FreeBSD, o arquivo de configuração principal do Apache HTTP Server é instalado como [.filename]#/usr/local/etc/apache2x/httpd.conf#, onde _x_ representa o número da versão. Este arquivo ASCII de texto inicia as linhas de comentário com um `#`. As diretivas modificadas com mais freqüência são: `ServerRoot "/usr/local"`:: Especifica a hierarquia de diretório padrão para a instalação do Apache. Os binários são armazenados nos subdiretórios [.filename]#bin# e [.filename]#sbin# da raiz do servidor e os arquivos de configuração são armazenados no subdiretório [.filename]#etc/apache2x#. `ServerAdmin you@example.com`:: Altere isso para seu endereço de e-mail para receber problemas com o servidor. 
Esse endereço também aparece em algumas páginas geradas pelo servidor, como documentos de erro.

`ServerName www.example.com:80`::
Permite que um administrador defina um nome de host que é enviado de volta aos clientes pelo servidor. Por exemplo, `www` pode ser usado em vez do nome do host real. Se o sistema não tiver um nome registrado no DNS, insira seu endereço IP. Se o servidor for escutar em uma porta alternativa, altere `80` para o número da porta alternativa.

`DocumentRoot "/usr/local/www/apache2__x__/data"`::
O diretório a partir do qual os documentos serão servidos. Por padrão, todas as solicitações são atendidas a partir desse diretório, mas links simbólicos e aliases podem ser usados para apontar para outros locais.

É sempre uma boa ideia fazer uma cópia de backup do arquivo de configuração padrão do Apache antes de fazer alterações. Quando a configuração do Apache estiver concluída, salve o arquivo e verifique a configuração usando o `apachectl`. A execução de `apachectl configtest` deve retornar `Syntax OK`.

Para iniciar o Apache na inicialização do sistema, adicione a seguinte linha ao [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache24_enable="YES"
....

Se o Apache deve ser iniciado com opções não-padrão, a seguinte linha pode ser adicionada ao [.filename]#/etc/rc.conf# para especificar os flags necessários:

[.programlisting]
....
apache24_flags=""
....

Se o `apachectl` não relatar erros de configuração, inicie o `httpd` agora:

[source,shell]
....
# service apache24 start
....

O serviço `httpd` pode ser testado inserindo `http://_localhost_` em um navegador da Web, substituindo _localhost_ pelo nome de domínio totalmente qualificado da máquina que está executando o `httpd`. A página da Web padrão exibida é [.filename]#/usr/local/www/apache24/data/index.html#.

A configuração do Apache pode ser testada quanto a erros depois de fazer alterações subsequentes de configuração, enquanto o `httpd` está em execução, usando o seguinte comando:

[source,shell]
....
# service apache24 configtest
....

[NOTE]
====
É importante notar que o `configtest` não é um comando padrão do man:rc[8] e não se espera que funcione para todos os scripts de inicialização.
====

=== Hospedagem Virtual

A hospedagem virtual permite que vários sites sejam executados em um servidor Apache. Os hosts virtuais podem ser _baseados em IP_ ou _baseados em nome_. A hospedagem virtual baseada em IP usa um endereço IP diferente para cada site. A hospedagem virtual baseada em nome usa os cabeçalhos HTTP/1.1 do cliente para descobrir o nome do host, o que permite que os sites compartilhem o mesmo endereço IP.

Para configurar o Apache para usar hospedagem virtual baseada em nome, adicione um bloco `VirtualHost` para cada site. Por exemplo, para o servidor Web denominado `www.domain.tld` com um domínio virtual de `www.someotherdomain.tld`, adicione as seguintes entradas ao arquivo [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
    ServerName www.domain.tld
    DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
    ServerName www.someotherdomain.tld
    DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

Para cada host virtual, substitua os valores de `ServerName` e `DocumentRoot` pelos valores a serem usados.

Para obter mais informações sobre como configurar hosts virtuais, consulte a documentação oficial do Apache em: http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Módulos Apache

O Apache usa módulos para aumentar a funcionalidade fornecida pelo servidor básico. Consulte http://httpd.apache.org/docs/current/mod/[http://httpd.apache.org/docs/current/mod/] para uma lista completa e detalhes de configuração dos módulos disponíveis.

No FreeBSD, alguns módulos podem ser compilados com o port package:www/apache24[]. Digite `make config` dentro do diretório [.filename]#/usr/ports/www/apache24# para ver quais módulos estão disponíveis e quais estão ativados por padrão.
Se o módulo não é compilado com o port, a Coleção de Ports do FreeBSD fornece uma maneira fácil de instalar vários módulos. Esta seção descreve três dos módulos mais usados.

==== Suporte SSL

Houve um tempo em que o suporte a SSL dentro do Apache exigia um módulo secundário chamado [.filename]#mod_ssl#. Esse não é mais o caso e a instalação padrão do Apache vem com o SSL embutido no servidor web. Um exemplo de como habilitar o suporte a páginas com SSL está disponível no arquivo [.filename]#httpd-ssl.conf#, instalado dentro do diretório [.filename]#/usr/local/etc/apache24/extra#. Dentro desse diretório também está um arquivo de exemplo chamado [.filename]#ssl.conf-sample#. É recomendado que ambos os arquivos sejam avaliados para configurar apropriadamente páginas seguras no servidor web Apache.

Depois que a configuração do SSL estiver completa, o comentário da linha a seguir deve ser removido no arquivo [.filename]#httpd.conf# principal para ativar as mudanças no próximo restart ou reload do Apache:

[.programlisting]
....
#Include etc/apache24/extra/httpd-ssl.conf
....

[WARNING]
====
As versões dois e três do SSL possuem vulnerabilidades conhecidas. É altamente recomendado que as versões 1.2 e 1.3 do TLS sejam habilitadas no lugar das opções antigas do SSL. Isso pode ser feito configurando as seguintes opções no arquivo [.filename]#ssl.conf#:
====

[.programlisting]
....
SSLProtocol all -SSLv3 -SSLv2 +TLSv1.2 +TLSv1.3
SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
....

Para completar a configuração do SSL no servidor web, remova o comentário da seguinte linha para garantir que a configuração será carregada pelo Apache durante o restart ou reload:

[.programlisting]
....
# Secure (SSL/TLS) connections
Include etc/apache24/extra/httpd-ssl.conf
....

As linhas a seguir também devem ser descomentadas no [.filename]#httpd.conf# para suportar totalmente o SSL no Apache:

[.programlisting]
....
LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so
LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
LoadModule ssl_module libexec/apache24/mod_ssl.so
....

O próximo passo é trabalhar com uma autoridade certificadora para ter os certificados apropriados instalados no sistema. Isso configurará uma cadeia de confiança para a página e evitará avisos de certificados autoassinados.

==== [.filename]#mod_perl#

O módulo [.filename]#mod_perl# torna possível escrever módulos Apache em Perl. Além disso, o intérprete persistente embutido no servidor evita a sobrecarga de iniciar um intérprete externo e a penalidade do tempo de inicialização do Perl.

O [.filename]#mod_perl# pode ser instalado usando o pacote ou port package:www/mod_perl2[]. A documentação para usar este módulo pode ser encontrada em http://perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/index.html].

==== [.filename]#mod_php#

_PHP: Pré-processador de Hipertexto_ (PHP) é uma linguagem de script de propósito geral especialmente adequada para o desenvolvimento web. Capaz de ser incorporada em HTML, sua sintaxe se baseia em C, Java(TM) e Perl, com a intenção de permitir que desenvolvedores web escrevam rapidamente páginas da web geradas dinamicamente. O suporte a PHP no Apache, bem como a qualquer outro recurso escrito na linguagem, pode ser adicionado instalando o port apropriado.

Para todas as versões suportadas, procure os dados do pacote usando o comando `pkg`:

[source,shell]
....
# pkg search php
....

Uma lista será exibida incluindo as versões e os recursos adicionais que elas proveem. Os componentes são completamente modulares, o que significa que recursos específicos são habilitados instalando o port apropriado. Para instalar o PHP na versão 7.4 para o Apache, use o seguinte comando:

[source,shell]
....
# pkg install mod_php74
....

Se algum pacote dependente precisar ser instalado, ele será instalado também.
Por padrão, o PHP não estará habilitado. As seguintes linhas precisam ser adicionadas ao arquivo de configuração do Apache localizado em [.filename]#/usr/local/etc/apache24# para ativá-lo:

[.programlisting]
....
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>
....

Além disso, a opção `DirectoryIndex` no arquivo de configuração também precisará ser atualizada, e será preciso reiniciar ou recarregar o Apache para que as mudanças surtam efeito.

O suporte a muitos recursos do PHP também pode ser instalado usando o comando `pkg`. Por exemplo, para instalar suporte a XML ou a SSL, instale os seguintes ports:

[source,shell]
....
# pkg install php74-xml php74-openssl
....

Como antes, a configuração do Apache precisará ser recarregada para as mudanças surtirem efeito, mesmo em casos onde foi feita apenas a instalação de um módulo.

Para realizar uma reinicialização suave (graceful) e recarregar a configuração, digite o seguinte comando:

[source,shell]
....
# apachectl graceful
....

Uma vez que a instalação esteja completa, há dois métodos para obter informações sobre os módulos de suporte ao PHP instalados e sobre o ambiente dessa instalação. O primeiro é instalar o binário completo do PHP e rodar o seguinte comando para obter a informação:

[source,shell]
....
# pkg install php74
....

[source,shell]
....
# php -i | less
....

É necessário passar a saída para um paginador, como o comando `more` ou `less`, para visualizá-la melhor.

Finalmente, para fazer alguma mudança na configuração global do PHP, há um arquivo bem documentado instalado em [.filename]#/usr/local/etc/php.ini#. No momento da instalação esse arquivo não existirá, porque há duas versões para escolher: o arquivo [.filename]#php.ini-development# e o [.filename]#php.ini-production#. Esses são pontos iniciais para ajudar os administradores na implementação.
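Por exemplo, para partir do perfil de produção (um ponto de partida comum; ajuste o arquivo copiado conforme necessário para o ambiente em questão):

[source,shell]
....
# cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini
....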
==== HTTP2 Support

Apache support for the HTTP2 protocol is included by default when installing the port with `pkg`. The new version of HTTP includes many improvements over the previous version, including utilizing a single connection to a website, which reduces the overall roundtrips of TCP connections. Also, packet header data is compressed, and HTTP2 requires encryption by default.

When Apache is configured to only use HTTP2, web browsers will require secure, encrypted HTTPS connections. When Apache is configured to use both versions, HTTP1.1 will be considered a fallback option should any issues arise during the connection. While this change does require administrators to make changes, they are positive ones and equate to a more secure Internet for everyone. The changes are only required for sites not currently implementing SSL and TLS.

[NOTE]
====
This configuration depends on the previous sections, including TLS support. It is recommended those instructions be followed before continuing with this configuration.
====

Begin the process by enabling the http2 module by uncommenting its line in [.filename]#/usr/local/etc/apache24/httpd.conf#, and by replacing the mpm_prefork module with mpm_event, as the former does not support HTTP2.

[.programlisting]
....
LoadModule http2_module libexec/apache24/mod_http2.so
LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so
....

[NOTE]
====
There is a separate [.filename]#mod_http2# port available. It exists to deliver security and bug fixes more quickly than the module installed with the bundled [.filename]#apache24# port. It is not required for HTTP2 support but is available. When it is installed, [.filename]#mod_h2.so# should be used in place of [.filename]#mod_http2.so# in the Apache configuration.
====

There are two methods to implement HTTP2 in Apache: one way is globally, for all sites and every VirtualHost running on the system. To enable HTTP2 globally, add the following line below the ServerName directive:

[.programlisting]
....
Protocols h2 http/1.1
....

[NOTE]
====
To enable HTTP2 over plain text, use h2 h2c http/1.1 in [.filename]#httpd.conf#.
====

Having the h2c here will allow plain-text HTTP2 data to pass through the system, but this is not recommended. In addition, using http/1.1 here will allow fallback to the HTTP1.1 version of the protocol should it be needed by the system.

To enable HTTP2 for individual VirtualHosts, add the same line within the VirtualHost directive in either [.filename]#httpd.conf# or [.filename]#httpd-ssl.conf#.

Reload the configuration using `apachectl reload` and test the configuration with either of the following methods after visiting one of the hosted pages:

[source,shell]
....
# grep "HTTP/2.0" /var/log/httpd-access.log
....

The output should be similar to the following:

[.programlisting]
....
192.168.1.205 - - [18/Oct/2020:18:34:36 -0400] "GET / HTTP/2.0" 304 -
192.0.2.205 - - [18/Oct/2020:19:19:57 -0400] "GET / HTTP/2.0" 304 -
192.0.0.205 - - [18/Oct/2020:19:20:52 -0400] "GET / HTTP/2.0" 304 -
192.0.2.205 - - [18/Oct/2020:19:23:10 -0400] "GET / HTTP/2.0" 304 -
....

The other method is to use the web browser's built-in site debugger or `tcpdump`; however, using either method is beyond the scope of this document.

Support for HTTP2 reverse proxy connections is provided by the [.filename]#mod_proxy_http2.so# module. When configuring ProxyPass or RewriteRules [P] statements, they should use h2:// for the connection.

=== Dynamic Websites

In addition to mod_perl and mod_php, other languages are available for creating dynamic web content. These include Django and Ruby on Rails.
==== Django

Django is a BSD-licensed framework designed to allow developers to write high-performance, elegant web applications quickly. It provides an object-relational mapper so that data types are developed as Python objects. A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL. It also provides an extensible template system so that the logic of the application is separated from the HTML presentation.

Django depends on [.filename]#mod_python# and an SQL database engine. In FreeBSD, the package:www/py-django[] port automatically installs [.filename]#mod_python# and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite. To change the database engine, type `make config` within [.filename]#/usr/ports/www/py-django#, then install the port.

Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter. This interpreter is used to call the application for specific URLs on the site.

To configure Apache to pass requests for certain URLs to the web application, add the following to [.filename]#httpd.conf#, specifying the full path to the project directory:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/dir/to/the/django/packages/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonAutoReload On
    PythonDebug On
</Location>
....

Refer to https://docs.djangoproject.com[https://docs.djangoproject.com] for more information on how to use Django.

==== Ruby on Rails

Ruby on Rails is another open source web framework that provides a full development stack. It is optimized to make web developers more productive and capable of writing powerful applications quickly.
In FreeBSD, it can be installed using the package:www/rubygem-rails[] package or port. Refer to http://guides.rubyonrails.org[http://guides.rubyonrails.org] for more information on how to use Ruby on Rails.

[[network-ftp]]
== File Transfer Protocol (FTP)

The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. FreeBSD includes FTP server software, ftpd, in the base system.

FreeBSD provides several configuration files for controlling access to the FTP server. This section summarizes these files. Refer to man:ftpd[8] for more details about the built-in FTP server.

=== Configuration

The most important configuration step is deciding which accounts will be allowed access to the FTP server. A FreeBSD system has a number of system accounts which should not be allowed FTP access. The list of users disallowed any FTP access can be found in [.filename]#/etc/ftpusers#. By default, it includes the system accounts. Additional users that should not be allowed access to FTP can be added.

In some cases it may be desirable to restrict the access of some users without preventing them completely from using FTP. This can be accomplished by creating [.filename]#/etc/ftpchroot# as described in man:ftpchroot[5]. This file lists users and groups subject to FTP access restrictions.

To enable anonymous FTP access to the server, create a user named `ftp` on the FreeBSD system. Users will then be able to log on to the FTP server with a username of `ftp` or `anonymous`. When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password. The FTP server will call man:chroot[2] when an anonymous user logs in, to restrict access to only the home directory of the `ftp` user.

There are two text files that can be created to specify welcome messages to be displayed to FTP clients.
The contents of [.filename]#/etc/ftpwelcome# will be displayed to users before they reach the login prompt. After a successful login, the contents of [.filename]#/etc/ftpmotd# will be displayed. Note that the path to this file is relative to the login environment, so the contents of [.filename]#~ftp/etc/ftpmotd# would be displayed for anonymous users.

Once the FTP server has been configured, set the appropriate variable in [.filename]#/etc/rc.conf# to start the service during boot:

[.programlisting]
....
ftpd_enable="YES"
....

To start the service now:

[source,shell]
....
# service ftpd start
....

Test the connection to the FTP server by typing:

[source,shell]
....
% ftp localhost
....

The ftpd daemon uses man:syslog[3] to log messages. By default, the system log daemon will write messages related to FTP in [.filename]#/var/log/xferlog#. The location of the FTP log can be modified by changing the following line in [.filename]#/etc/syslog.conf#:

[.programlisting]
....
ftp.info      /var/log/xferlog
....

[NOTE]
====
Be aware of the potential problems involved with running an anonymous FTP server. In particular, think twice about allowing anonymous users to upload files. It may turn out that the FTP site becomes a forum for the trade of unlicensed commercial software or worse. If anonymous FTP uploads are required, then verify the permissions so that these files cannot be read by other anonymous users until they have been reviewed by an administrator.
====

[[network-samba]]
== File and Print Services for Microsoft(TM) Windows(TM) Clients (Samba)

Samba is a popular open source software package that provides file and print services using the SMB/CIFS protocol. This protocol is built into Microsoft(TM) Windows(TM) systems. It can be added to non-Microsoft(TM) Windows(TM) systems by installing the Samba client libraries.
The protocol allows clients to access shared data and printers. These shares can be mapped as a local disk drive, and shared printers can be used as if they were local printers.

On FreeBSD, the Samba client libraries can be installed using the package:net/samba410[] port or package. The client provides the ability for a FreeBSD system to access SMB/CIFS shares in a Microsoft(TM) Windows(TM) network.

A FreeBSD system can also be configured to act as a Samba server by installing the package:net/samba410[] port or package. This allows the administrator to create SMB/CIFS shares on the FreeBSD system which can be accessed by clients running Microsoft(TM) Windows(TM) or the Samba client libraries.

=== Server Configuration

Samba is configured in [.filename]#/usr/local/etc/smb4.conf#. This file must be created before Samba can be used.

A simple [.filename]#smb4.conf# to share directories and printers with Windows(TM) clients in a workgroup is shown here. For more complex setups involving LDAP or Active Directory, it is easier to use man:samba-tool[8] to create the [.filename]#smb4.conf#.

[.programlisting]
....
[global]
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = ExampleMachine
wins support = Yes
security = user
passdb backend = tdbsam

# Example: share /usr/src accessible only to 'developer' user
[src]
path = /usr/src
valid users = developer
writable = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755
....

==== Global Settings

Settings that describe the network are added in [.filename]#/usr/local/etc/smb4.conf#:

`workgroup`:: The name of the workgroup to be served.

`netbios name`:: The NetBIOS name by which a Samba server is known. By default, it is the same as the first component of the host's DNS name.
`server string`:: The string that will be displayed in the output of `net view` and some other networking tools that seek to display descriptive text about the server.

`wins support`:: Whether Samba will act as a WINS server. Do not enable support for WINS on more than one server on the network.

==== Security Settings

The most important settings in [.filename]#/usr/local/etc/smb4.conf# are the security model and the backend password format. These directives control the options:

`security`:: The most common settings are `security=share` and `security=user`. If the clients use usernames that are the same as their usernames on the FreeBSD machine, user level security should be used. This is the default security policy and it requires clients to first log on before they can access shared resources.
+
In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource. This was the default security model in older versions of Samba.

`passdb backend`:: Samba has several different backend authentication models. Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file. The recommended authentication method, `tdbsam`, is ideal for simple networks and is covered here. For larger or more complex networks, `ldapsam` is recommended. `smbpasswd` was the previous default and is now obsolete.

==== Samba Users

FreeBSD user accounts must be mapped to the `SambaSAMAccount` database for Windows(TM) clients to access the share. Map existing FreeBSD user accounts using man:pdbedit[8]:

[source,shell]
....
# pdbedit -a username
....

This section has only mentioned the most commonly used settings.
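As an illustration of how these options combine, a hypothetical read-only guest share could be added to [.filename]#smb4.conf# alongside the `[src]` share from the earlier example. The share name and path here are assumptions for demonstration, not part of the sample configuration above:

[.programlisting]
....
# Hypothetical public share: anyone may browse and read, nobody may write.
[public]
path = /usr/local/shared
public = yes
guest ok = yes
read only = yes
browsable = yes
....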
Refer to the https://wiki.samba.org[Official Samba Wiki] for additional information about the available configuration options.

=== Starting Samba

To enable Samba at boot time, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
samba_server_enable="YES"
....

To start Samba now:

[source,shell]
....
# service samba_server start
Performing sanity check on Samba configuration: OK
Starting nmbd.
Starting smbd.
....

Samba consists of three separate daemons. Both the nmbd and smbd daemons are started by `samba_server_enable`. If winbind name resolution is also required, set:

[.programlisting]
....
winbindd_enable="YES"
....

Samba can be stopped at any time by typing:

[source,shell]
....
# service samba_server stop
....

Samba is a complex software suite with functionality that allows broad integration with Microsoft(TM) Windows(TM) networks. For more information about functionality beyond the basic configuration described here, refer to https://www.samba.org[https://www.samba.org].

[[network-ntp]]
== Clock Synchronization with NTP

Over time, a computer's clock is prone to drift. This is problematic as many network services require the computers on a network to share the same accurate time. Accurate time is also needed to ensure that file timestamps stay consistent. The Network Time Protocol (NTP) is one way to provide clock accuracy in a network.

FreeBSD includes man:ntpd[8], which can be configured to query other NTP servers to synchronize the clock on that machine, or to provide time services to other computers in the network. This section describes how to configure ntpd on FreeBSD. Further documentation can be found in [.filename]#/usr/share/doc/ntp/# in HTML format.

=== NTP Configuration

On FreeBSD, the built-in ntpd can be used to synchronize the system's clock.
Ntpd is configured using man:rc.conf[5] variables and [.filename]#/etc/ntp.conf#, as detailed in the following sections.

Ntpd communicates with its network peers using UDP packets. Any firewalls between your machine and its NTP peers must be configured to allow UDP packets in and out on port 123.

==== The [.filename]#/etc/ntp.conf# file

Ntpd reads [.filename]#/etc/ntp.conf# to determine which NTP servers to query. Choosing several NTP servers is recommended in case one of the servers becomes unreachable or its clock proves unreliable. As ntpd receives responses, it favors reliable servers over the less reliable ones. The servers which are queried can be local to the network, provided by an ISP, or selected from an http://support.ntp.org/bin/view/Servers/WebHome[online list of publicly accessible NTP servers]. When choosing a public NTP server, select one that is geographically close and review its usage policy.

The `pool` configuration keyword selects one or more servers from a pool of servers. An http://support.ntp.org/bin/view/Servers/NTPPoolServers[online list of publicly accessible NTP pools] is available, organized by geographic area. In addition, FreeBSD provides a project-sponsored pool, `0.freebsd.pool.ntp.org`.

.Sample [.filename]#/etc/ntp.conf#
[example]
====
This is a simple example of an [.filename]#ntp.conf# file. It can safely be used as-is; it contains the recommended `restrict` options for operation on a publicly accessible network connection.

[.programlisting]
....
# Disallow ntpq control/query access.  Allow peers to be added only
# based on pool and server statements in this file.
restrict default limited kod nomodify notrap noquery nopeer
restrict source  limited kod nomodify notrap noquery

# Allow unrestricted access from localhost for queries and control.
restrict 127.0.0.1
restrict ::1

# Add a specific server.
server ntplocal.example.com iburst

# Add FreeBSD pool servers until 3-6 good servers are available.
tos minclock 3 maxclock 6
pool 0.freebsd.pool.ntp.org iburst

# Use a local leap-seconds file.
leapfile "/var/db/ntpd.leap-seconds.list"
....
====

The format of this file is described in man:ntp.conf[5]. The descriptions below provide a quick overview of just the keywords used in the sample file above.

By default, an NTP server is accessible to any network host. The `restrict` keyword controls which systems can access the server. Multiple `restrict` entries are supported, each one refining the restrictions given in previous statements. The values shown in the example grant the local system full query and control access, while allowing remote systems only the ability to query the time. For more details, refer to the `Access Control Support` subsection of man:ntp.conf[5].

The `server` keyword specifies a single server to query. The file can contain multiple server keywords, with one server listed on each line. The `pool` keyword specifies a pool of servers. Ntpd will add one or more servers from this pool as needed to reach the number of peers specified using the `tos minclock` value. The `iburst` keyword directs ntpd to perform a burst of eight quick packet exchanges with a server when contact is first established, to help quickly synchronize the system time.

The `leapfile` keyword specifies the location of a file containing information about leap seconds. The file is updated automatically by man:periodic[8]. The file location specified by this keyword must match the location set in the `ntp_db_leapfile` variable in [.filename]#/etc/rc.conf#.
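As a quick sanity check of which time sources a configuration will use, the `server` and `pool` statements can be extracted with a one-liner. The sketch below runs against a hypothetical sample file; on a real system, point it at [.filename]#/etc/ntp.conf# instead:

```shell
# Create a small ntp.conf-style sample to demonstrate the extraction.
cat > /tmp/ntp.conf.sample <<'EOF'
restrict default limited kod nomodify notrap noquery nopeer
server ntplocal.example.com iburst
pool 0.freebsd.pool.ntp.org iburst
leapfile "/var/db/ntpd.leap-seconds.list"
EOF

# Print the keyword and host of every server/pool statement.
awk '$1 == "server" || $1 == "pool" { print $1, $2 }' /tmp/ntp.conf.sample
```

For the sample above, this lists `server ntplocal.example.com` and `pool 0.freebsd.pool.ntp.org`, one per line.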
==== NTP entries in [.filename]#/etc/rc.conf#

Set `ntpd_enable=YES` to start ntpd at boot time. Once `ntpd_enable=YES` has been added to [.filename]#/etc/rc.conf#, ntpd can be started immediately without rebooting the system by typing:

[source,shell]
....
# service ntpd start
....

Only `ntpd_enable` must be set to use ntpd. The [.filename]#rc.conf# variables listed below may also be set as needed.

Set `ntpd_sync_on_start=YES` to allow ntpd to step the clock once at startup. Normally, ntpd logs an error message and exits if the clock is off by more than 1000 seconds. This option is especially useful on systems without a battery-backed realtime clock.

Set `ntpd_oomprotect=YES` to protect the ntpd daemon from being killed when the system attempts to recover from an Out Of Memory (OOM) condition.

Set `ntpd_config=` to the location of an alternate [.filename]#ntp.conf# file.

Set `ntpd_flags=` to contain any other ntpd flags as needed, but avoid the following flags, which are managed internally by [.filename]#/etc/rc.d/ntpd#:

* `-p` (pid file location)
* `-c` (set `ntpd_config=` instead)

==== Ntpd and the unprivileged `ntpd` user

Ntpd on FreeBSD can start and run as an unprivileged user. Doing so requires the man:mac_ntpd[4] policy module. The [.filename]#/etc/rc.d/ntpd# startup script first examines the NTP configuration. If possible, it loads the `mac_ntpd` module and starts ntpd as the unprivileged user `ntpd` (user id 123). To avoid problems with file and directory access, the startup script will not automatically start ntpd as `ntpd` when the configuration contains any file-related options.
The presence of any of the following in `ntpd_flags` requires manual configuration, as described below, to run as the `ntpd` user:

* -f or --driftfile
* -i or --jaildir
* -k or --keyfile
* -l or --logfile
* -s or --statsdir

The presence of any of the following keywords in [.filename]#ntp.conf# requires manual configuration, as described below, to run as the `ntpd` user:

* crypto
* driftfile
* key
* logdir
* statsdir

To manually configure ntpd to run as user `ntpd` you must:

* Ensure that the `ntpd` user has access to all the files and directories specified in the configuration.
* Arrange for the `mac_ntpd` module to be loaded or compiled into the kernel. See man:mac_ntpd[4] for details.
* Set `ntpd_user="ntpd"` in [.filename]#/etc/rc.conf#

=== Using NTP with a PPP Connection

ntpd does not need a permanent connection to the Internet to function properly. However, if a PPP connection is configured to dial out on demand, NTP traffic should be prevented from triggering a dial out or keeping the connection alive. This can be configured with `filter` directives in [.filename]#/etc/ppp/ppp.conf#. For example:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

For more details, refer to the `PACKET FILTERING` section in man:ppp[8] and the examples in [.filename]#/usr/share/examples/ppp/#.

[NOTE]
====
Some Internet access providers block low-numbered ports, preventing NTP from functioning since the replies never reach the machine.
====

[[network-iscsi]]
== iSCSI Initiator and Target Configuration

iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level.

In iSCSI terminology, the system that shares the storage is known as the _target_. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disks are formatted with ZFS, a zvol can be created to use as the iSCSI storage.

The clients which access the iSCSI storage are called _initiators_. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in [.filename]#/dev/#, and the device must be separately formatted and mounted.

FreeBSD provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a FreeBSD system as a target or an initiator.

[[network-iscsi-target]]
=== Configuring an iSCSI Target

To configure an iSCSI target, create the [.filename]#/etc/ctl.conf# configuration file, add a line to [.filename]#/etc/rc.conf# to make sure the man:ctld[8] daemon is automatically started at boot, and then start the daemon.

The following is an example of a simple [.filename]#/etc/ctl.conf# configuration file. Refer to man:ctl.conf[5] for a more complete description of this file's available options.

[.programlisting]
....
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
	listen [::]
}

target iqn.2012-06.com.example:target0 {
	auth-group no-authentication
	portal-group pg0

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

The first entry defines the `pg0` portal group. Portal groups define which network addresses the man:ctld[8] daemon will listen on.
The `discovery-auth-group no-authentication` entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure man:ctld[8] to listen on all IPv4 (`listen 0.0.0.0`) and IPv6 (`listen [::]`) addresses on the default port of 3260.

It is not necessary to define a portal group, as there is a built-in portal group called `default`. In this case, the difference between `default` and `pg0` is that with `default`, target discovery is always denied, while with `pg0`, it is always allowed.

The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI, or a named group of LUNs. This example uses the latter meaning, where `iqn.2012-06.com.example:target0` is the target name. This target name is suitable for testing purposes. For actual use, change `com.example` to the real domain name, reversed. The `2012-06` stands for the year and month of acquiring control of that domain name, and `target0` can be any value. Any number of targets can be defined in this configuration file.

The `auth-group no-authentication` line allows all initiators to connect to the specified target, and `portal-group pg0` makes the target reachable through the `pg0` portal group.

The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. The `path /data/target0-0` line defines the full path to a file or zvol backing the LUN. That path must exist before starting man:ctld[8]. The second line is optional and specifies the size of the LUN.

Next, to make sure the man:ctld[8] daemon is started at boot, add this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
ctld_enable="YES"
....

To start man:ctld[8] now, run this command:

[source,shell]
....
# service ctld start
....

When the man:ctld[8] daemon is started, it reads [.filename]#/etc/ctl.conf#. If this file is edited after the daemon starts, use this command so the changes take effect immediately:

[source,shell]
....
# service ctld reload
....

==== Authentication

The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets. To require a username and password to access targets, modify the configuration as follows:

[.programlisting]
....
auth-group ag0 {
	chap username1 secretsecret
	chap username2 anothersecret
}

portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
	listen [::]
}

target iqn.2012-06.com.example:target0 {
	auth-group ag0
	portal-group pg0
	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

The `auth-group` section defines username and password pairs. An initiator trying to connect to `iqn.2012-06.com.example:target0` must first specify a defined username and secret. However, target discovery is still permitted without authentication. To require target discovery authentication, set `discovery-auth-group` to a defined `auth-group` name instead of `no-authentication`.

It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry:

[.programlisting]
....
target iqn.2012-06.com.example:target0 {
	portal-group pg0
	chap username1 secretsecret

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

[[network-iscsi-initiator]]
=== Configuring an iSCSI Initiator

[NOTE]
====
The iSCSI initiator described in this section is supported starting with FreeBSD 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to man:iscontrol[8].
====

The iSCSI initiator requires that the man:iscsid[8] daemon is running. This daemon does not use a configuration file.
To start it automatically at boot, add this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
iscsid_enable="YES"
....

To start man:iscsid[8] now, run this command:

[source,shell]
....
# service iscsid start
....

Connecting to a target can be done with or without an [.filename]#/etc/iscsi.conf# configuration file. This section demonstrates both types of connections.

==== Connecting to a Target Without a Configuration File

To connect an initiator to a single target, specify the IP address of the portal and the name of the target:

[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0
....

To verify if the connection succeeded, run `iscsictl` without any arguments. The output should look similar to this:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Connected: da0
....

In this example, the iSCSI session was successfully established, with [.filename]#/dev/da0# representing the attached LUN. If the `iqn.2012-06.com.example:target0` target exports more than one LUN, multiple device nodes will be shown in that section of the output:

[source,shell]
....
Connected: da0 da1 da2.
....

Any errors will be reported in the output, as well as in the system logs. For example, this message usually means that the man:iscsid[8] daemon is not running:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Waiting for iscsid(8)
....

The following message suggests a networking problem, such as a wrong port or IP address:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.11     Connection refused
....

This message means that the specified target name is wrong:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Not found
....

This message means that the target requires authentication:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Authentication failed
....

To specify a CHAP username and secret, use this syntax:

[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret
....

==== Connecting to a Target with a Configuration File

To connect using a configuration file, create [.filename]#/etc/iscsi.conf# with contents like this:

[.programlisting]
....
t0 {
	TargetAddress = 10.10.10.10
	TargetName    = iqn.2012-06.com.example:target0
	AuthMethod    = CHAP
	chapIName     = user
	chapSecret    = secretsecret
}
....

The `t0` specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The `TargetAddress` and `TargetName` are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown.

To connect to the defined target, specify the nickname:

[source,shell]
....
# iscsictl -An t0
....

Alternatively, to connect to all targets defined in the configuration file, use:

[source,shell]
....
# iscsictl -Aa
....

To make the initiator automatically connect to all targets in [.filename]#/etc/iscsi.conf#, add the following to [.filename]#/etc/rc.conf#:

[.programlisting]
....
iscsictl_enable="YES"
iscsictl_flags="-Aa"
....
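For scripting against session state, the `iscsictl` listing shown earlier can be filtered for the attached device nodes. The sketch below works on a saved copy of the sample output from the text; on a live system, the command output would be piped in directly instead:

```shell
# Reproduce the sample iscsictl output from the text.
cat > /tmp/iscsictl.out <<'EOF'
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Connected: da0 da1
EOF

# Print each attached device node (da0, da1, ...) on its own line.
awk '/Connected:/ { for (i = 1; i <= NF; i++) if ($i ~ /^da[0-9]+$/) print $i }' /tmp/iscsictl.out
```

For the sample above, this prints `da0` and `da1`, one per line.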
diff --git a/documentation/content/ru/books/handbook/mac/_index.adoc b/documentation/content/ru/books/handbook/mac/_index.adoc
index 1265968613..9a001f0f36 100644
--- a/documentation/content/ru/books/handbook/mac/_index.adoc
+++ b/documentation/content/ru/books/handbook/mac/_index.adoc
@@ -1,810 +1,808 @@
---
description: 'This chapter focuses on the MAC framework and the set of security policy modules FreeBSD provides for enabling various security mechanisms'
next: books/handbook/audit
params:
  path: /books/handbook/mac/
part: 'Part III. System Administration'
prev: books/handbook/jails
showBookMenu: 'true'
tags: ["MAC", "labels", "security", "configuration", "nagios"]
title: 'Chapter 18. Mandatory Access Control (MAC)'
weight: 22
---

[[mac]]
= Mandatory Access Control (MAC)
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 18
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/mac/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]

ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[mac-synopsis]]
== Synopsis

FreeBSD supports security extensions based on the POSIX(R).1e draft. These security mechanisms include file system Access Control Lists (crossref:security[fs-acl,"Access Control Lists"]) and Mandatory Access Control (MAC). MAC allows access control modules to be loaded in order to implement security policies.
Some modules provide protections over a narrow subset of the system, hardening a particular service. Others provide comprehensive labeled security across all subjects and objects. The mandatory part of the definition indicates that enforcement of the controls is performed by administrators and the operating system. This is in contrast to the default security mechanism of Discretionary Access Control (DAC), where enforcement is left to the discretion of users.

This chapter focuses on the MAC framework and the set of security policy modules FreeBSD provides for enabling various security mechanisms.

After reading this chapter, you will know:

* The terminology associated with the MAC framework.
* The capabilities of MAC security policy modules as well as the difference between a labeled and a non-labeled policy.
* The considerations to take into account before configuring a system to use the MAC framework.
* Which MAC security policy modules are included in FreeBSD and how to configure them.
* How to implement a more secure environment using the MAC framework.
* How to test the MAC configuration to ensure the framework has been properly implemented.

Before reading this chapter, you should:

* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD Basics]).
* Have some familiarity with security and how it pertains to FreeBSD (crossref:security[security,Security]).

[WARNING]
====
Improper MAC configuration may cause loss of system access, aggravation of users, or inability to access the features provided by Xorg. More importantly, MAC should not be relied upon to completely secure a system. The MAC framework only augments an existing security policy. Without sound security practices and regular security checks, the system will never be completely secure.

The examples contained in this chapter are for demonstration purposes and the example settings should _not_ be implemented on a production system.
Implementing any security policy takes considerable understanding, proper design, and thorough testing.
====

While this chapter covers a broad range of security issues relating to the MAC framework, the development of new MAC security policy modules will not be covered. A number of security policy modules included with the MAC framework have specific characteristics which are provided for both testing and new module development. Refer to man:mac_test[4], man:mac_stub[4] and man:mac_none[4] for more information on these security policy modules and the various mechanisms they provide.

[[mac-inline-glossary]]
== Key Terms

The following key terms are used when referring to the MAC framework in the FreeBSD documentation:

* _compartment_: a set of programs and data to be partitioned or separated, where users are given explicit access to specific components of a system. A compartment represents a grouping, such as a work group, department, project, or topic. Compartments make it possible to implement a need-to-know-basis security policy.
* _integrity_: the level of trust which can be placed on data. As the integrity of the data is elevated, so does the ability to trust that data.
* _level_: the increased or decreased setting of a security attribute. As the level increases, its security is considered to elevate as well.
* _label_: a security attribute which can be applied to files, directories, or other items in the system. It could be considered a confidentiality stamp. When a label is placed on a file, it describes the security properties of that file and will only permit access by files, users, and resources with a similar security setting. The meaning and interpretation of label values depends on the policy configuration.
Some policies treat a label as representing the integrity or secrecy of an object while other policies might use labels to hold rules for access.
* _multilabel_: this property is a file system option which can be set in single-user mode using man:tunefs[8], during boot using man:fstab[5], or during the creation of a new file system. This option permits an administrator to apply different MAC labels to different objects. This option only applies to security policy modules which support labeling.
* _single label_: a policy where the entire file system uses one label to enforce access control over the flow of data. Whenever `multilabel` is not set, all files will conform to the same label setting.
* _object_: an entity through which information flows under the direction of a _subject_. This includes directories, files, fields, screens, keyboards, memory, magnetic storage, printers or any other data storage or moving device. An object is a data container or a system resource. Access to an object effectively means access to its data.
* _subject_: any active entity that causes information to flow between _objects_, such as a user, user process, or system process. On FreeBSD, this is almost always a thread acting in a process on behalf of a user.
* _policy_: a collection of rules which defines how objectives are to be achieved. A policy usually documents how certain items are to be handled. This chapter considers a policy to be a collection of rules which controls the flow of data and information and defines who has access to that data and information.
* _high-watermark_: this type of policy permits the raising of security levels for the purpose of accessing higher level information.
In most cases, the original level is restored after the process is complete. Currently, the FreeBSD MAC framework does not include this type of policy.
* _low-watermark_: this type of policy permits lowering security levels for the purpose of accessing information which is less secure. In most cases, the original security level of the user is restored after the process is complete. The only security policy module in FreeBSD to use this is man:mac_lomac[4].
* _sensitivity_: usually used when discussing Multilevel Security (MLS). A sensitivity level describes how important or secret the data should be. As the sensitivity level increases, so does the importance of the secrecy, or confidentiality, of the data.

[[mac-understandlabel]]
== MAC Labels

A MAC label is a security attribute which may be applied to subjects and objects throughout the system. When setting a label, the administrator must understand its implications in order to prevent unexpected or undesired behavior of the system. The attributes available on an object depend on the loaded policy module, as policy modules interpret their attributes in different ways.

The security label on an object is used as a part of a security access control decision by a policy. With some policies, the label contains all of the information necessary to make a decision. In other policies, the labels may be processed as part of a larger rule set.

There are two types of label policies: single label and multi label. By default, the system uses single label. The administrator should weigh the pros and cons of each in order to implement policies which meet the requirements of the system's security model.

A single label security policy only permits one label to be used for every subject or object.
Since a single label policy enforces one set of access permissions across the entire system, it provides lower administration overhead, but decreases the flexibility of policies which support labeling. However, in many environments, a single label policy may be all that is required.

A single label policy is somewhat similar to DAC, as `root` configures the policies so that users are placed in the appropriate categories and access levels. A notable difference is that many policy modules can also restrict `root`. Basic control over objects will then be released to the group, but `root` may revoke or modify the settings at any time.

When appropriate, a multi label policy can be set on a UFS file system by passing `multilabel` to man:tunefs[8]. A multi label policy permits each subject or object to have its own independent MAC label. The decision to use a multi label or single label policy is only required for policies which implement the labeling feature, such as `biba`, `lomac`, and `mls`. Some policies, such as `seeotheruids`, `portacl` and `partition`, do not use labels at all.

Using a multi label policy on a partition and establishing a multi label security model can increase administrative overhead as everything in that file system has a label. This includes directories, files, and even device nodes.

The following command will set `multilabel` on the specified UFS file system. This may only be done in single-user mode and is not a requirement for the swap file system:

[source, shell]
....
# tunefs -l enable /
....

[NOTE]
====
Some users have experienced problems with setting the `multilabel` flag on the root partition. If this is the case, please review crossref:mac[mac-troubleshoot, Troubleshooting the MAC Framework].
====

Since a multi label policy is set on a per-file system basis, a multi label policy may not be needed if the file system layout is well designed. Consider an example security MAC model for a FreeBSD web server. This machine uses the single label, `biba/high`, for everything in the default file systems. If the web server needs to run at `biba/low` to prevent write up capabilities, it could be installed to a separate UFS [.filename]#/usr/local# file system set at `biba/low`.

=== Label Configuration

Virtually all aspects of label policy module configuration will be performed using the base system utilities. These commands provide a simple interface for object or subject configuration, and for the manipulation and verification of the configuration.

All configuration may be done using `setfmac`, which is used to set MAC labels on system objects, and `setpmac`, which is used to set the labels on system subjects. For example, to set the `biba` MAC label to `high` on [.filename]#test#:

[source, shell]
....
# setfmac biba/high test
....

If the configuration is successful, the prompt will be returned without error. A common error is `Permission denied`, which usually occurs when the label is being set or modified on a restricted object. Other conditions may produce different failures. For example, the file may not be owned by the user attempting to relabel the object, the object may not exist, or the object may be read-only. A mandatory policy will not allow the process to relabel the file, maybe because of a property of the file, a property of the process, or a property of the proposed new label value. For example, if a user running at low integrity tries to change the label of a high integrity file, or if a user running at low integrity tries to change the label of a low integrity file to a high integrity label, these operations will fail.
The system administrator may use `setpmac` to override the policy module's settings by assigning a different label to the invoked process:

[source, shell]
....
# setfmac biba/high test
Permission denied
# setpmac biba/low setfmac biba/high test
# getfmac test
test: biba/high
....

For currently running processes, such as sendmail, `getpmac` is usually used instead. This command takes a process ID (PID) in place of a command name. If users attempt to manipulate a file not accessible to them, subject to the rules of the loaded policy modules, the error `Operation not permitted` will be displayed.

=== Predefined Labels

A few FreeBSD policy modules which support the labeling feature offer three predefined labels: `low`, `equal`, and `high`, where:

* `low` is considered the lowest label setting an object or subject may have. Setting this on objects or subjects blocks their access to objects or subjects marked `high`.
* `equal` sets the subject or object to be disabled or unaffected and should only be placed on objects considered to be exempt from the policy.
* `high` grants an object or subject the highest setting available in the Biba and MLS policy modules.

Such policy modules include man:mac_biba[4], man:mac_mls[4] and man:mac_lomac[4]. Each of the predefined labels establishes a different information flow directive. Refer to the manual page of the module to determine the traits of the generic label configurations.

=== Numeric Labels

The Biba and MLS policy modules support a numeric label which may be set to indicate the precise level of hierarchical control. This numeric level is used to partition or sort information into different groups of classification, only permitting access to that group or a higher group level. For example:

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....
is interpreted as "Biba Policy Label/Grade 10: compartments 2, 3 and 6: (grade 5 ...)"

In this example, the first grade would be considered the effective grade with effective compartments, the second grade is the low grade, and the last one is the high grade. In most configurations, such fine-grained settings are not needed, as they are considered to be advanced configurations.

System objects only have a current grade and compartment. System subjects reflect the range of available rights in the system, and network interfaces, where they are used for access control.

The grade and compartments in a subject and object pair are used to construct a relationship known as _dominance_, in which a subject dominates an object, the object dominates the subject, neither dominates the other, or both dominate each other. The "both dominate" case occurs when the two labels are equal. Due to the information flow nature of Biba, a user has rights to a set of compartments that might correspond to projects, but objects also have a set of compartments. Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted.

=== User Labels

Users are required to have labels so that their files and processes properly interact with the security policy defined on the system. This is configured in [.filename]#/etc/login.conf# using login classes. Every policy module that uses labels will implement the user class setting.

To set the user class default label which will be enforced by MAC, add a `label` entry. An example `label` entry containing every policy module is displayed below. Note that in a real configuration, the administrator would never enable every policy module. It is recommended that the rest of this chapter be reviewed before any configuration is implemented.
[.programlisting]
....
default:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/share/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:
....

While users can not modify the default value, they may change their label after they log in, subject to the constraints of the policy. The example above tells the Biba policy that a process's minimum integrity is `5`, its maximum is `15`, and the default effective label is `10`. The process will run at `10` until it chooses to change label, perhaps due to the user using `setpmac`, which will be constrained by Biba to the configured range.

After any change to [.filename]#login.conf#, the login class capability database must be rebuilt using `cap_mkdb`.

Many sites have a large number of users requiring several different user classes. In depth planning is required, as this can become difficult to manage.

=== Network Interface Labels

Labels may be set on network interfaces to help control the flow of data across the network. Policies using network interface labels function in the same way that policies function with respect to objects. Users at high settings in Biba, for example, will not be permitted to access network interfaces with a label of `low`.

When setting the MAC label on network interfaces, `maclabel` may be passed to `ifconfig`:

[source, shell]
....
# ifconfig bge0 maclabel biba/equal
....

This example will set the MAC label of `biba/equal` on the `bge0` interface.
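Labels that include a grade range contain parentheses, which the shell would otherwise interpret; as a sketch, quoting protects the full label:

[source, shell]
....
# ifconfig bge0 maclabel "biba/high(low-high)"
....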
When using a setting similar to `biba/high(low-high)`, the entire label should be quoted to prevent an error from being returned.

Each policy module which supports labeling has a tunable which may be used to disable the MAC label on network interfaces. Setting the label to `equal` will have a similar effect. Review the output of `sysctl`, the policy manual pages, and the information in the rest of this chapter for more information on those tunables.

[[mac-planning]]
== Planning the Security Configuration

Before implementing any MAC policies, a planning phase is recommended. During the planning stages, an administrator should consider the implementation requirements and goals, such as:

* How to classify information and resources available on the target systems.
* Which information or resources to restrict access to, along with the type of restrictions that should be applied.
* Which MAC modules will be required to achieve this goal.

A trial run of the trusted system and its configuration should occur _before_ a MAC implementation is used on production systems. Since different environments have different needs and requirements, establishing a complete security profile will decrease the need of changes once the system goes live.

Consider how the MAC framework augments the security of the system as a whole. The various security policy modules provided by the MAC framework could be used to protect the network and file systems or to block users from accessing certain ports and sockets. Perhaps the best use of the policy modules is to load several security policy modules at a time in order to provide a MLS environment. This approach differs from a hardening policy, which typically hardens elements of a system which are used only for specific purposes. The downside to MLS is increased administrative overhead.
The overhead is minimal when compared to the lasting effect of a framework which provides the ability to pick and choose which policies are required for a specific configuration and which keeps performance overhead down. The reduction of support for unneeded policies can increase the overall performance of the system as well as offer flexibility of choice. A good implementation would consider the overall security requirements and effectively implement the various security policy modules offered by the framework.

A system utilizing MAC guarantees that a user will not be permitted to change security attributes at will. All user utilities, programs, and scripts must work within the constraints of the access rules provided by the selected security policy modules, and control of the MAC access rules is in the hands of the system administrator.

It is the duty of the system administrator to carefully select the correct security policy modules. For an environment that needs to limit access control over the network, the man:mac_portacl[4], man:mac_ifoff[4], and man:mac_biba[4] policy modules make good starting points. For an environment where strict confidentiality of file system objects is required, consider the man:mac_bsdextended[4] and man:mac_mls[4] policy modules.

Policy decisions could be made based on network configuration. If only certain users should be permitted access to man:ssh[1], the man:mac_portacl[4] policy module is a good choice. In the case of file systems, access to objects might be considered confidential to some users, but not to others. As an example, a large development team might be broken off into smaller projects where developers in project A might not be permitted to access objects written by developers in project B. Yet both projects might need to access objects created by developers in project C.
Using the different security policy modules provided by the MAC framework, users could be divided into these groups and then given access to the appropriate objects.

Each security policy module has a unique way of dealing with the overall security of a system. Module selection should be based on a well thought out security policy, which may require revision and reimplementation. Understanding the different security policy modules offered by the MAC framework will help administrators choose the best policies for their situations. The rest of this chapter covers the available modules, describes their use and configuration, and in some cases, provides insight on applicable situations.

[CAUTION]
====
Implementing MAC is much like implementing a firewall since care must be taken to prevent being completely locked out of the system. The ability to revert back to a previous configuration should be considered, and the implementation of MAC over a remote connection should be done with extreme caution.
====

[[mac-policies]]
== Available MAC Policies

The default FreeBSD kernel includes `options MAC`. This means that every module included with the MAC framework can be loaded with `kldload` as a run-time kernel module. After testing the module, add the module name to [.filename]#/boot/loader.conf# so that it will load during boot. Each module also provides a kernel option for those administrators who choose to compile their own custom kernel.

FreeBSD includes a group of policies that will cover most security requirements. Each policy is summarized below. The last three policies support integer settings in place of the three default labels.
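The run-time loading step described above can be sketched using the seeotheruids module as the example; the module is loaded with `kldload` and its enforcement switched on with its enable tunable, without a reboot:

[source, shell]
....
# kldload mac_seeotheruids
# sysctl security.mac.seeotheruids.enabled=1
....

Once the module behaves as expected, its boot option listed in the matching section below makes the load persistent across reboots.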
[[mac-seeotheruids]]
=== The MAC See Other UIDs Policy

Module name: [.filename]#mac_seeotheruids.ko#

Kernel configuration line: `options MAC_SEEOTHERUIDS`

Boot option: `mac_seeotheruids_load="YES"`

The man:mac_seeotheruids[4] module extends the `security.bsd.see_other_uids` and `security.bsd.see_other_gids` `sysctl` tunables. This option does not require any labels to be set before configuration and can operate transparently with other modules.

After loading the module, the following `sysctl` tunables may be used to control its features:

* `security.mac.seeotheruids.enabled` enables the module and implements the default settings which deny users the ability to view processes and sockets owned by other users.
* `security.mac.seeotheruids.specificgid_enabled` allows specified groups to be exempt from this policy. To exempt specific groups, use the `security.mac.seeotheruids.specificgid=_XXX_` `sysctl` tunable, replacing _XXX_ with the numeric group ID to be exempted.
* `security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific primary groups from this policy. When using this tunable, `security.mac.seeotheruids.specificgid_enabled` may not be set.

[[mac-bsdextended]]
=== The MAC BSD Extended Policy

Module name: [.filename]#mac_bsdextended.ko#

Kernel configuration line: `options MAC_BSDEXTENDED`

Boot option: `mac_bsdextended_load="YES"`

The man:mac_bsdextended[4] module enables a file system firewall. It provides an extension to the standard file system permissions model, permitting an administrator to create a firewall-like ruleset to protect files, utilities, and directories in the file system hierarchy. When access to a file system object is attempted, the list of rules is iterated until either a matching rule is located or the end is reached.
This behavior may be changed using the `security.mac.bsdextended.firstmatch_enabled` tunable. Similar to other firewall modules in FreeBSD, a file containing the access control rules can be created and read by the system at boot time using a man:rc.conf[5] variable.

The rule list may be entered using man:ugidfw[8], which has a syntax similar to man:ipfw[8]. More tools can be written by using the functions in the man:libugidfw[3] library.

After the man:mac_bsdextended[4] module has been loaded, the following command may be used to list the current rule configuration:

[source, shell]
....
# ugidfw list
0 slots, 0 rules
....

By default, no rules are defined and everything is completely accessible. To create a rule which blocks all access by users other than `root`:

[source, shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

While this rule is simple to implement, it is a very bad idea as it blocks all users from issuing any commands. A more realistic example blocks `user1` all access, including directory listings, to ``_user2_``'s home directory:

[source, shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

Instead of `user1`, `not uid _user2_` could be used in order to enforce the same access restrictions for all users. However, the `root` user is unaffected by these rules.

[NOTE]
====
Extreme caution should be taken when working with this module as incorrect use could block access to certain parts of the file system.
====

[[mac-ifoff]]
=== The MAC Interface Silencing Policy

Module name: [.filename]#mac_ifoff.ko#

Kernel configuration line: `options MAC_IFOFF`

Boot option: `mac_ifoff_load="YES"`

The man:mac_ifoff[4] module is used to disable network interfaces on the fly and to keep network interfaces from being brought up during system boot.
It does not use labels and does not depend on any other MAC modules.

Most of the control is performed through these `sysctl` tunables:

* `security.mac.ifoff.lo_enabled` enables or disables all traffic on the loopback, man:lo[4], interface.
* `security.mac.ifoff.bpfrecv_enabled` enables or disables all traffic on the Berkeley Packet Filter interface, man:bpf[4].
* `security.mac.ifoff.other_enabled` enables or disables traffic on all other interfaces.

One of the most common uses of man:mac_ifoff[4] is network monitoring in an environment where network traffic should not be permitted during the boot sequence. Another use would be to write a script which uses an application such as package:security/aide[] to automatically block network traffic if it finds new or altered files in protected directories.

[[mac-portacl]]
=== The MAC Port Access Control List Policy

Module name: [.filename]#mac_portacl.ko#

Kernel configuration line: `MAC_PORTACL`

Boot option: `mac_portacl_load="YES"`

The man:mac_portacl[4] module is used to limit binding to local TCP and UDP ports, making it possible to allow non-`root` users to bind to specified privileged ports below 1024. Once loaded, this module enables the MAC policy on all sockets. The following tunables are available:

* `security.mac.portacl.enabled` enables or disables the policy completely.
* `security.mac.portacl.port_high` sets the highest port number that man:mac_portacl[4] protects.
* `security.mac.portacl.suser_exempt`, when set to a non-zero value, exempts the `root` user from this policy.
* `security.mac.portacl.rules` specifies the policy as a text string of the form `rule[,rule,...]`, with as many rules as needed, where each rule is of the form `idtype:id:protocol:port`. The `idtype` is either `uid` or `gid`.
The `protocol` parameter can be `tcp` or `udp`. The `port` parameter is the port number to allow the specified user or group to bind to. Only numeric values can be used for the user ID, group ID, and port parameters.

By default, ports below 1024 can only be used by privileged processes which run as `root`. For man:mac_portacl[4] to allow non-privileged processes to bind to ports below 1024, set the following tunables as follows:

[source, shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0
# sysctl net.inet.ip.portrange.reservedhigh=0
....

To prevent the `root` user from being affected by this policy, set `security.mac.portacl.suser_exempt` to a non-zero value:

[source, shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....

To allow the `www` user with UID 80 to bind to port 80 without ever needing `root` privilege:

[source, shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

This next example permits the user with the UID of 1001 to bind to TCP ports 110 (POP3) and 995 (POP3s):

[source, shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

[[mac-partition]]
=== The MAC Partition Policy

Module name: [.filename]#mac_partition.ko#

Kernel configuration line: `options MAC_PARTITION`

Boot option: `mac_partition_load="YES"`

The man:mac_partition[4] policy drops processes into specific "partitions" based on their MAC label. Most configuration for this policy is done using man:setpmac[8]. One `sysctl` tunable is available for this policy:

* `security.mac.partition.enabled` enables the enforcement of MAC process partitions.
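As a minimal sketch, assuming the module is already loaded, that enforcement can be switched on at run time:

[source, shell]
....
# sysctl security.mac.partition.enabled=1
....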
Когда эта политика включена, пользователи смогут видеть только свои процессы и процессы в своем разделе, но не смогут работать с утилитами за пределами этого раздела. Например, пользователь из класса `insecure` не сможет получить доступ к `top`, а также ко многим другим командам, которые должны запускать процессы. В этом примере `top` добавляется к набору меток пользователей в классе `insecure`. Все процессы, запущенные пользователями из класса `insecure`, останутся с меткой `partition/13`. [source, shell] .... # setpmac partition/13 top .... Эта команда отображает метку раздела и список процессов: [source, shell] .... # ps Zax .... Эта команда отображает метку раздела процессов другого пользователя и его текущие запущенные процессы: [source, shell] .... # ps -ZU trhodes .... [NOTE] ==== Пользователи могут видеть процессы с меткой ``root``, если не загружена политика man:mac_seeotheruids[4]. ==== [[mac-mls]] === Модуль многоуровневой безопасности MAC Имя модуля: [.filename]#mac_mls.ko# Строка конфигурации ядра: `options MAC_MLS` Параметр загрузки: `mac_mls_load="YES"` Политика man:mac_mls[4] контролирует доступ между субъектами и объектами в системе, применяя строгую политику управления потоком информации. В средах MLS в метке каждого субъекта или объекта устанавливается уровень "допуска" вместе с компартментами. Поскольку эти уровни допуска могут достигать значений, превышающих несколько тысяч, тщательная настройка каждого субъекта или объекта была бы сложной задачей. Для снижения административной нагрузки в эту политику включены три метки: `mls/low`, `mls/equal` и `mls/high`, где: * Все объекты, помеченные меткой `mls/low`, будут иметь низкий уровень доступа и не смогут обращаться к информации более высокого уровня. Эта метка также предотвращает запись или передачу информации от объектов с более высоким уровнем доступа к объектам с более низким уровнем. * `mls/equal` следует размещать на объектах, которые должны быть освобождены от политики. 
* `mls/high` — это наивысший возможный уровень допуска. Объекты с этой меткой будут доминировать над всеми остальными объектами в системе; однако они не допустят утечки информации к объектам более низкого класса. MLS предоставляет: * Иерархический уровень безопасности с набором неиерархических категорий. * Фиксированные правила `нет чтения вверх, нет записи вниз`. Это означает, что субъект может иметь право чтения объектов на своём уровне или ниже, но не выше. Аналогично, субъект может иметь право записи объектов на своём уровне или выше, но не ниже. * Секретность, или предотвращение несанкционированного раскрытия данных. * Основы проектирования систем, которые одновременно обрабатывают данные с разными уровнями конфиденциальности, не допуская утечки информации между секретными и конфиденциальными данными. Доступны следующие настраиваемые параметры `sysctl`: * `security.mac.mls.enabled` используется для включения или отключения политики MLS. * `security.mac.mls.ptys_equal` помечает все устройства man:pty[4] как `mls/equal` при создании. * `security.mac.mls.revocation_enabled` отзывает доступ к объектам после изменения их метки на метку более низкого уровня. * `security.mac.mls.max_compartments` устанавливает максимальное количество уровней компартментов, разрешенных в системе. Для работы с метками MLS используйте man:setfmac[8]. Чтобы назначить метку объекту: [source, shell] .... # setfmac mls/5 test .... Чтобы получить метку MLS для файла [.filename]#test#: [source, shell] .... # getfmac test .... Другой подход заключается в создании основного файла политики в [.filename]#/etc/#, который определяет информацию о политике MLS, и передаче этого файла в `setfmac`. При использовании модуля политики MLS администратор планирует контролировать поток конфиденциальной информации. В конфигурации по умолчанию (`block read up block write down`) всё находится в состоянии low: вся информация доступна, и администратор постепенно повышает её конфиденциальность.
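Описанные выше правила `нет чтения вверх, нет записи вниз` можно проиллюстрировать упрощённым наброском на sh. Числовые уровни и имена функций здесь условны и никак не связаны с реальной реализацией man:mac_mls[4] — это лишь модель логики сравнения уровней:

```shell
#!/bin/sh
# Упрощённая модель правил MLS "нет чтения вверх, нет записи вниз".
# Уровни допуска заданы числами; имена функций придуманы для примера.

mls_can_read()  { [ "$1" -ge "$2" ]; }  # чтение: уровень субъекта >= уровня объекта
mls_can_write() { [ "$1" -le "$2" ]; }  # запись: уровень субъекта <= уровня объекта

subject=5
mls_can_read  "$subject" 3 && echo "mls/5 читает mls/3: разрешено"
mls_can_read  "$subject" 8 || echo "mls/5 читает mls/8: запрещено (чтение вверх)"
mls_can_write "$subject" 8 && echo "mls/5 пишет в mls/8: разрешено"
mls_can_write "$subject" 3 || echo "mls/5 пишет в mls/3: запрещено (запись вниз)"
```

Такая модель отражает только направление потока информации: данные могут двигаться лишь от менее секретных объектов к более секретным.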
Помимо трех основных вариантов меток, администратор может группировать пользователей и группы по мере необходимости, чтобы блокировать поток информации между ними. Возможно, будет проще рассматривать информацию на уровнях допуска, используя описательные слова, такие как классификации `Confidential` (`Конфиденциально`), `Secret` (`Секретно`) и `Top Secret` (`Совершенно секретно`). Некоторые администраторы вместо этого создают разные группы на основе уровней проектов. Независимо от метода классификации, перед внедрением ограничительной политики должен существовать продуманный план. Некоторые примеры ситуаций для модуля политики MLS включают веб-сервер электронной коммерции, файловый сервер с критически важной информацией компании и среды финансовых учреждений. [[mac-biba]] === Модуль MAC Biba Имя модуля: [.filename]#mac_biba.ko# Строка конфигурации ядра: `options MAC_BIBA` Опция загрузки: `mac_biba_load="YES"` Модуль man:mac_biba[4] загружает политику MAC Biba. Эта политика похожа на политику MLS, за исключением того, что правила передачи информации слегка изменены в обратном порядке. Это предотвращает поток конфиденциальной информации вниз, тогда как политика MLS предотвращает поток конфиденциальной информации вверх. В средах Biba для каждого субъекта или объекта устанавливается метка «целостности». Эти метки состоят из иерархических уровней целостности и неиерархических компонентов. По мере повышения уровня увеличивается и его целостность. Поддерживаемые метки: `biba/low`, `biba/equal` и `biba/high`, где: * `biba/low` считается самой низкой целостностью, которую может иметь объект или субъект. Установка этого уровня на объекты или субъекты блокирует их запись в объекты или субъекты с меткой `biba/high`, но не предотвращает чтение. * `biba/equal` следует размещать только на объектах, которые считаются исключёнными из политики. * `biba/high` разрешает запись в объекты с более низкой меткой, но запрещает чтение этих объектов. 
Рекомендуется устанавливать эту метку для объектов, которые влияют на целостность всей системы. Biba обеспечивает: * Иерархические уровни целостности с набором неиерархических категорий целостности. * Фиксированные правила — это `нет записи вверх, нет чтения вниз`, что противоположно MLS. Субъект может иметь право записи в объекты на своём уровне или ниже, но не выше. Аналогично, субъект может иметь право чтения объектов на своём уровне или выше, но не ниже. * Целостность за счет предотвращения нежелательного изменения данных. * Уровни целостности вместо уровней чувствительности MLS. Следующие настраиваемые параметры могут быть использованы для управления политикой Biba: * `security.mac.biba.enabled` используется для включения или отключения принудительного применения политики Biba на целевой машине. * `security.mac.biba.ptys_equal` используется для отключения политики Biba на устройствах man:pty[4]. * `security.mac.biba.revocation_enabled` принудительно отзывает доступ к объектам, если их метка изменяется так, чтобы доминировать над субъектом. Для доступа к настройкам политики Biba для системных объектов используйте `setfmac` и `getfmac`: [source, shell] .... # setfmac biba/low test # getfmac test test: biba/low .... Целостность, которая отличается от конфиденциальности, используется для гарантии того, что информация не будет изменена ненадёжными сторонами. Это включает информацию, передаваемую между субъектами и объектами. Она обеспечивает пользователям возможность изменять или получать доступ только к той информации, к которой у них есть явный доступ. Модуль политики безопасности man:mac_biba[4] позволяет администратору настроить, какие файлы и программы пользователь может просматривать и запускать, гарантируя, что эти программы и файлы считаются системой доверенными для данного пользователя. В ходе начального этапа планирования администратор должен быть готов разделить пользователей по уровням целостности (grade), уровням объектов (level) и областям. 
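Зеркальные по отношению к MLS правила Biba `нет записи вверх, нет чтения вниз` можно изобразить таким же упрощённым наброском на sh. Числовые уровни целостности и имена функций условны и не имеют отношения к коду man:mac_biba[4]:

```shell
#!/bin/sh
# Упрощённая модель правил Biba "нет записи вверх, нет чтения вниз".
# Уровни целостности заданы числами; имена функций придуманы для примера.

biba_can_read()  { [ "$1" -le "$2" ]; }  # чтение: объект на своём уровне или выше
biba_can_write() { [ "$1" -ge "$2" ]; }  # запись: объект на своём уровне или ниже

subject=5
biba_can_read  "$subject" 8 && echo "субъект 5 читает biba/8: разрешено"
biba_can_read  "$subject" 3 || echo "субъект 5 читает biba/3: запрещено (чтение вниз)"
biba_can_write "$subject" 3 && echo "субъект 5 пишет в biba/3: разрешено"
biba_can_write "$subject" 8 || echo "субъект 5 пишет в biba/8: запрещено (запись вверх)"
```

Сравнение с наброском для MLS наглядно показывает, что Biba защищает целостность (данные текут только вниз), тогда как MLS защищает секретность (данные текут только вверх).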
После включения этого модуля политики система по умолчанию перейдет на высокий уровень метки, и администратору потребуется настроить различные уровни целостности и уровни объектов для пользователей. Вместо использования уровней доступа хорошим методом планирования может стать использование тематик. Например, разрешить разработчикам доступ на изменение только к репозиторию исходного кода, компилятору исходного кода и другим инструментам разработки. Остальные пользователи будут распределены по другим категориям, таким как тестировщики, дизайнеры или конечные пользователи, и им будет разрешен только доступ на чтение. Субъект с более низким уровнем целостности не может записывать данные в субъект с более высоким уровнем целостности, а субъект с более высоким уровнем целостности не может просматривать или читать объект с более низким уровнем целостности. Установка метки на минимально возможном уровне может сделать объект недоступным для субъектов. Перспективными средами для использования этого модуля политики безопасности могут быть ограниченный веб-сервер, машина для разработки и тестирования, а также репозиторий исходного кода. Менее полезной реализацией будет персональная рабочая станция, машина, используемая в качестве маршрутизатора, или межсетевой экран. [[mac-lomac]] === Модуль MAC Low-watermark (нижний порог) Имя модуля: [.filename]#mac_lomac.ko# Строка конфигурации ядра: `options MAC_LOMAC` Параметр загрузки: `mac_lomac_load="YES"` В отличие от политики MAC Biba, политика man:mac_lomac[4] разрешает доступ к объектам с более низким уровнем целостности только после понижения уровня целостности субъекта, чтобы не нарушать правила целостности. Политика целостности Low-watermark работает почти идентично Biba, за исключением использования плавающих меток для поддержки понижения уровня субъекта через компартмент со вспомогательным уровнем целостности. Этот вторичный компартмент имеет вид `[auxgrade]`.
При назначении политики со вспомогательным уровнем целостности используйте синтаксис `lomac/10[2]`, где `2` — это вспомогательный уровень целостности. Данная политика основывается на повсеместной маркировке всех системных объектов метками целостности, позволяя субъектам читать из объектов с низкой целостностью, а затем понижая уровень метки на субъекте с помощью `[auxgrade]`, чтобы предотвратить последующие записи в объекты с высокой целостностью. Эта политика может обеспечить большую совместимость и потребовать меньше начальной настройки по сравнению с Biba. Как и в политиках Biba и MLS, `setfmac` и `setpmac` используются для назначения меток объектам системы: [source, shell] .... # setfmac /usr/home/trhodes lomac/high[low] # getfmac /usr/home/trhodes lomac/high[low] .... Вспомогательный уровень целостности `low` — это функция, предоставляемая только политикой MAC LOMAC. [[mac-userlocked]] == Блокировка пользователя Этот пример рассматривает относительно небольшую систему хранения данных с менее чем пятьюдесятью пользователями. Пользователи будут иметь возможность входа в систему и могут хранить данные и получать доступ к ресурсам. Для данного сценария модули политик man:mac_bsdextended[4] и man:mac_seeotheruids[4] могут сосуществовать и блокировать доступ к системным объектам, скрывая при этом пользовательские процессы. Начните с добавления следующей строки в [.filename]#/boot/loader.conf#: [.programlisting] .... mac_seeotheruids_load="YES" .... Модуль политики безопасности man:mac_bsdextended[4] может быть активирован добавлением следующей строки в [.filename]#/etc/rc.conf#: [.programlisting] .... ugidfw_enable="YES" .... Файл с правилами по умолчанию, хранящийся в [.filename]#/etc/rc.bsdextended#, будет загружен при инициализации системы. Однако стандартные записи могут потребовать изменения.
Поскольку предполагается, что данная машина будет обслуживать только пользователей, все строки можно оставить закомментированными, кроме двух последних, чтобы по умолчанию принудительно загружались правила для системных объектов, принадлежащих пользователям. Добавьте необходимых пользователей на эту машину и перезагрузитесь. Для тестирования попробуйте войти в систему под разными пользователями на двух консолях. Выполните `ps -aux`, чтобы проверить, видны ли процессы других пользователей. Убедитесь, что выполнение man:ls[1] для домашнего каталога другого пользователя завершается ошибкой. Не пытайтесь проводить тестирование от пользователя `root`, если специальные параметры ``sysctl`` не были изменены для блокировки доступа суперпользователя. [NOTE] ==== При добавлении нового пользователя его правило man:mac_bsdextended[4] не появится в наборе правил автоматически. Чтобы быстро обновить набор правил, выгрузите модуль политики безопасности и загрузите его снова с помощью man:kldunload[8] и man:kldload[8]. ==== [[mac-implementing]] == Nagios в клетке MAC В этом разделе показаны шаги, необходимые для внедрения системы мониторинга сети Nagios в среде MAC. Это лишь пример: администратор обязан убедиться, что реализованная политика отвечает требованиям безопасности сети, прежде чем использовать её в рабочей среде. Этот пример требует установки `multilabel` на каждой файловой системе. Также предполагается, что package:net-mgmt/nagios-plugins[], package:net-mgmt/nagios[] и package:www/apache22[] установлены, настроены и корректно работают до попытки интеграции в инфраструктуру MAC. === Создайте небезопасный класс пользователя Начните процедуру, добавив следующий класс пользователя в [.filename]#/etc/login.conf#: [.programlisting] ....
insecure:\ :copyright=/etc/COPYRIGHT:\ :welcome=/etc/motd:\ :setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\ :path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\ :manpath=/usr/share/man /usr/local/man:\ :nologin=/usr/sbin/nologin:\ :cputime=1h30m:\ :datasize=8M:\ :vmemoryuse=100M:\ :stacksize=2M:\ :memorylocked=4M:\ :memoryuse=8M:\ :filesize=8M:\ :coredumpsize=8M:\ :openfiles=24:\ :maxproc=32:\ :priority=0:\ :requirehome:\ :passwordtime=91d:\ :umask=022:\ :ignoretime@:\ :label=biba/10(10-10): .... Затем добавьте следующую строку в раздел класса пользователя по умолчанию: [.programlisting] .... :label=biba/high: .... Сохраните изменения и выполните следующую команду для перестроения базы данных: [source, shell] .... # cap_mkdb /etc/login.conf .... === Настройте пользователей Установите пользователя `root` в класс по умолчанию с помощью: [source, shell] .... # pw usermod root -L default .... Все пользовательские учетные записи, кроме `root`, теперь требуют указания класса входа, иначе пользователям будет отказано в доступе к распространённым командам. Следующий скрипт на `sh` должен помочь: [source, shell] .... # for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \ /etc/passwd`; do pw usermod $x -L default; done; .... Затем добавьте учетные записи `nagios` и `www` в класс `insecure`: [source, shell] .... # pw usermod nagios -L insecure # pw usermod www -L insecure .... === Создайте файл контекстов Файл контекстов теперь должен быть создан как [.filename]#/etc/policy.contexts#: [.programlisting] .... # This is the default BIBA policy for this system. # System: /var/run(/.*)? biba/equal /dev/(/.*)? biba/equal /var biba/equal /var/spool(/.*)? biba/equal /var/log(/.*)? biba/equal /tmp(/.*)? biba/equal /var/tmp(/.*)? biba/equal /var/spool/mqueue biba/equal /var/spool/clientmqueue biba/equal # For Nagios: /usr/local/etc/nagios(/.*)? biba/10 /var/spool/nagios(/.*)? biba/10 # For apache /usr/local/etc/apache(/.*)?
biba/10 .... Эта политика обеспечивает безопасность, устанавливая ограничения на поток информации. В данной конкретной конфигурации пользователям, включая `root`, никогда не должно быть разрешено обращаться к Nagios. Конфигурационные файлы и процессы, являющиеся частью Nagios, будут полностью самодостаточными или изолированными. Этот файл будет прочитан после выполнения `setfsmac` для каждой файловой системы. В этом примере устанавливается политика для корневой файловой системы: [source, shell] .... # setfsmac -ef /etc/policy.contexts / .... Далее добавьте эти изменения в основной раздел файла [.filename]#/etc/mac.conf#: [.programlisting] .... default_labels file ?biba default_labels ifnet ?biba default_labels process ?biba default_labels socket ?biba .... === Конфигурация загрузчика Для завершения настройки добавьте следующие строки в [.filename]#/boot/loader.conf#: [.programlisting] .... mac_biba_load="YES" mac_seeotheruids_load="YES" security.mac.biba.trust_all_interfaces=1 .... Добавьте следующую строку в конфигурацию сетевой карты, хранящуюся в [.filename]#/etc/rc.conf#. Если основная настройка сети выполняется через DHCP, это может потребовать ручной настройки после каждой загрузки системы: [.programlisting] .... maclabel biba/equal .... === Проверка конфигурации Сначала убедитесь, что веб-сервер и Nagios не будут запускаться при инициализации системы и перезагрузке. Убедитесь, что `root` не имеет доступа к любым файлам в конфигурационном каталоге Nagios. Если `root` может просматривать содержимое [.filename]#/var/spool/nagios#, значит что-то не так. Вместо этого должна возвращаться ошибка "permission denied". Если все выглядит нормально, можно запустить Nagios, Apache и Sendmail: [source, shell] .... # cd /etc/mail && make stop && \ setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \ setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart .... Тщательно проверьте, чтобы всё работало правильно. 
Если нет, проверьте файлы журналов на наличие сообщений об ошибках. При необходимости используйте man:sysctl[8] для отключения модуля политики безопасности man:mac_biba[4] и попробуйте запустить всё снова как обычно. [NOTE] ==== Пользователь `root` всё ещё может изменять параметры безопасности и редактировать конфигурационные файлы. Следующая команда разрешит понижение уровня целостности политики безопасности для нового запущенного shell: [source, shell] .... # setpmac biba/10 csh .... Чтобы предотвратить это, принудительно ограничьте пользователя диапазоном с помощью man:login.conf[5]. Если man:setpmac[8] попытается выполнить команду вне пределов компартмента, будет возвращена ошибка и команда не выполнится. В данном случае установите для `root` метку `biba/high(high-high)`. ==== [[mac-troubleshoot]] == Устранение проблем с инфраструктурой MAC Этот раздел посвящён распространённым ошибкам конфигурации и способам их устранения. Флаг `multilabel` не сохраняется на корневом ([.filename]#/#) разделе::: Следующие действия могут помочь устранить эту временную ошибку: [.procedure] ==== . Отредактируйте файл [.filename]#/etc/fstab# и установите корневой раздел в `ro` для режима только для чтения. . Перезагрузитесь в однопользовательском режиме. . Выполните команду `tunefs -l enable` для раздела [.filename]#/#. . Перезагрузите систему. . Выполните `mount -urw` [.filename]#/#, измените `ro` обратно на `rw` в [.filename]#/etc/fstab# и перезагрузите систему снова. . Перепроверьте вывод команды `mount`, чтобы убедиться, что опция `multilabel` корректно установлена для корневой файловой системы. ==== После настройки безопасной среды с MAC Xorg больше не запускается::: Это может быть вызвано политикой MAC `partition` или ошибкой маркировки в одной из политик маркировки MAC. Для диагностики попробуйте следующее: [.procedure] ==== . Проверьте сообщение об ошибке. Если пользователь находится в классе `insecure`, проблема может быть в политике `partition`.
Попробуйте вернуть пользователя в класс `default` и пересобрать базу данных с помощью `cap_mkdb`. Если это не решит проблему, перейдите ко второму шагу. . Перепроверьте, что политики меток правильно установлены для пользователя, Xorg и записей в [.filename]#/dev#. . Если ни один из этих способов не решит проблему, отправьте сообщение об ошибке и описание окружения на {freebsd-questions}. ==== Появляется ошибка `_secure_path: unable to stat .login_conf`::: Эта ошибка может возникать, когда пользователь пытается переключиться с `root` на другого пользователя системы. Сообщение обычно появляется, когда у пользователя метка выше, чем у того пользователя, в которого он пытается переключиться. Например, если у `joe` метка по умолчанию `biba/low`, а у `root` — `biba/high`, `root` не сможет просмотреть домашний каталог ``joe``. Это произойдет независимо от того, использовал ли `root` команду `su` для переключения на `joe`, так как модель целостности Biba не позволяет `root` просматривать объекты с более низким уровнем целостности. Система больше не распознает `root`::: Когда это происходит, `whoami` возвращает `0`, а `su` выводит `who are you?`. + Это может произойти, если политика меток была отключена через man:sysctl[8] или модуль политики был выгружен. Если политика отключена, необходимо перенастроить базу данных возможностей входа. Проверьте файл [.filename]#/etc/login.conf#, чтобы убедиться, что все опции `label` удалены, и перестройте базу данных с помощью `cap_mkdb`. + Это также может произойти, если политика ограничивает доступ к [.filename]#master.passwd#. Обычно это происходит, когда администратор изменяет файл под меткой, которая конфликтует с общей политикой, используемой системой. В таких случаях система прочитает информацию о пользователе, но доступ будет заблокирован, так как файл унаследовал новую метку. Отключите политику с помощью man:sysctl[8], и всё должно вернуться в норму.
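Проверку того, что в [.filename]#/etc/login.conf# не осталось опций `label`, можно автоматизировать, например, таким наброском на sh. Набросок работает на копии файла; имя [.filename]#login.conf.test# условно и приведено только для примера:

```shell
#!/bin/sh
# Набросок: перед перестроением базы командой cap_mkdb убедиться,
# что в копии login.conf не осталось опций label.
# Имя файла login.conf.test условно и выбрано только для примера.

cat > login.conf.test <<'EOF'
default:\
	:passwd_format=sha512:\
	:umask=022:
EOF

if grep -q ':label=' login.conf.test; then
	echo "опции label всё ещё присутствуют"
else
	echo "опций label не найдено, можно выполнять cap_mkdb"
fi
rm login.conf.test
```

Такая проверка лишь ищет строку `:label=` и не заменяет внимательного просмотра файла вручную.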
diff --git a/documentation/content/ru/books/handbook/mac/_index.po b/documentation/content/ru/books/handbook/mac/_index.po index fcd95a7fc1..d4cc3f5580 100644 --- a/documentation/content/ru/books/handbook/mac/_index.po +++ b/documentation/content/ru/books/handbook/mac/_index.po @@ -1,3114 +1,3110 @@ # SOME DESCRIPTIVE TITLE # Copyright (C) YEAR The FreeBSD Project # This file is distributed under the same license as the FreeBSD Documentation package. # Vladlen Popolitov , 2025. msgid "" msgstr "" "Project-Id-Version: FreeBSD Documentation VERSION\n" "POT-Creation-Date: 2025-11-08 16:17+0000\n" "PO-Revision-Date: 2025-11-20 04:45+0000\n" "Last-Translator: Vladlen Popolitov \n" "Language-Team: Russian \n" "Language: ru\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && " "n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\n" "X-Generator: Weblate 4.17\n" #. type: YAML Front Matter: description #: documentation/content/en/books/handbook/mac/_index.adoc:1 #, no-wrap msgid "This chapter focuses on the MAC framework and the set of pluggable security policy modules FreeBSD provides for enabling various security mechanisms" msgstr "Эта глава посвящена инфраструктуре MAC и набору модулей политики безопасности, предоставляемых FreeBSD для включения различных механизмов безопасности" #. type: YAML Front Matter: part #: documentation/content/en/books/handbook/mac/_index.adoc:1 #, no-wrap msgid "Part III. System Administration" msgstr "Часть III. Администрирование системы" #. type: YAML Front Matter: title #: documentation/content/en/books/handbook/mac/_index.adoc:1 #, no-wrap msgid "Chapter 18. Mandatory Access Control" msgstr "Глава 18. Принудительное управление доступом (MAC)" #. type: Title = #: documentation/content/en/books/handbook/mac/_index.adoc:15 #, no-wrap msgid "Mandatory Access Control" msgstr "Принудительный контроль доступа (MAC)" #. 
type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:53 #, no-wrap msgid "Synopsis" msgstr "Обзор" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:62 msgid "" "FreeBSD supports security extensions based on the POSIX(R).1e draft. These " "security mechanisms include file system Access Control Lists (crossref:" "security[fs-acl,“Access Control Lists”]) and Mandatory Access Control " "(MAC). MAC allows access control modules to be loaded in order to implement " "security policies. Some modules provide protections for a narrow subset of " "the system, hardening a particular service. Others provide comprehensive " "labeled security across all subjects and objects. The mandatory part of the " "definition indicates that enforcement of controls is performed by " "administrators and the operating system. This is in contrast to the default " "security mechanism of Discretionary Access Control (DAC) where enforcement " "is left to the discretion of users." msgstr "" "FreeBSD поддерживает расширения безопасности, основанные на черновике " "POSIX(R).1e. Эти механизмы безопасности включают списки контроля доступа к " "файловой системе (crossref:security[fs-acl,“Списки контроля доступа”]) и " "принудительный контроль доступа (MAC). MAC позволяет загружать модули " "контроля доступа для реализации политик безопасности. Некоторые модули " "обеспечивают защиту узкого подмножества системы, укрепляя определённую " "службу. Другие предоставляют всеобъемлющую безопасность с метками на всех " "субъектах и объектах. Обязательная часть определения указывает на то, что " "применение контроля осуществляется администраторами и операционной системой. " "Это отличается от механизма безопасности по умолчанию — избирательного контроля " "доступа (Discretionary Access Control, DAC), где применение контроля " "остаётся на усмотрение пользователей." #.
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:64 msgid "" "This chapter focuses on the MAC framework and the set of pluggable security " "policy modules FreeBSD provides for enabling various security mechanisms." msgstr "" "Эта глава посвящена инфраструктуре MAC и набору модулей политики " "безопасности, предоставляемых FreeBSD для включения различных механизмов " "безопасности." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:66 msgid "Read this chapter to learn:" msgstr "Прочитайте эту главу, чтобы узнать:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:68 msgid "The terminology associated with the MAC framework." msgstr "Терминология, связанная с инфраструктурой MAC." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:69 msgid "" "The capabilities of MAC security policy modules as well as the difference " "between a labeled and non-labeled policy." msgstr "" "Возможности модулей политики безопасности MAC, а также разница между " "политикой с метками и без меток." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:70 msgid "" "The considerations to take into account before configuring a system to use " "the MAC framework." msgstr "" "Соображения, которые необходимо учитывать перед настройкой системы для " "использования инфраструктуры MAC." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:71 msgid "" "Which MAC security policy modules are included in FreeBSD and how to " "configure them." msgstr "" "Какие модули политики безопасности MAC включены в FreeBSD и как их настроить." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:72 msgid "How to implement a more secure environment using the MAC framework." msgstr "" "Как реализовать более безопасную среду с использованием инфраструктуры MAC." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:73 msgid "" "How to test the MAC configuration to ensure the framework has been properly " "implemented." msgstr "" "Как проверить конфигурацию MAC, чтобы убедиться в правильном использовании " "инфраструктуры." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:75 msgid "Before reading this chapter:" msgstr "Прежде чем читать эту главу, необходимо:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:77 msgid "" "Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD " "Basics])." msgstr "" "Понимать основы UNIX(R) и FreeBSD (crossref:basics[basics,Основы FreeBSD])." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:78 msgid "" "Have some familiarity with security and how it pertains to FreeBSD (crossref:" "security[security,Security])." msgstr "" "Иметь некоторое представление о безопасности и о том, как она относится к " "FreeBSD (crossref:security[security,Безопасность])." #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:85 msgid "" "Improper MAC configuration may cause loss of system access, aggravation of " "users, or inability to access the features provided by Xorg. More " "importantly, MAC should not be relied upon to completely secure a system. " "The MAC framework only augments an existing security policy. Without sound " "security practices and regular security checks, the system will never be " "completely secure." msgstr "" "Неправильная настройка MAC может привести к потере доступа к системе, " "недовольству пользователей или невозможности использования функций, " "предоставляемых Xorg. Важно отметить, что нельзя полагаться исключительно на " "MAC для полной защиты системы. Инфраструктура MAC лишь дополняет " "существующую политику безопасности. 
Без грамотных мер безопасности и " "регулярных проверок система никогда не будет полностью защищена." #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:88 msgid "" "The examples contained within this chapter are for demonstration purposes " "and the example settings should _not_ be implemented on a production " "system. Implementing any security policy takes a good deal of " "understanding, proper design, and thorough testing." msgstr "" "Примеры, приведённые в этой главе, предназначены для демонстрации, и их " "настройки _не_ следует применять в рабочей системе. Внедрение любой политики " "безопасности требует глубокого понимания, правильного проектирования и " "тщательного тестирования." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:93 msgid "" "While this chapter covers a broad range of security issues relating to the " "MAC framework, the development of new MAC security policy modules will not " "be covered. A number of security policy modules included with the MAC " "framework have specific characteristics which are provided for both testing " "and new module development. Refer to man:mac_test[4], man:mac_stub[4] and " "man:mac_none[4] for more information on these security policy modules and " "the various mechanisms they provide." msgstr "" "Хотя в этой главе рассматривается широкий спектр вопросов безопасности, " "связанных с инфраструктурой MAC, разработка новых модулей политики " "безопасности MAC не будет освещена. Ряд модулей политики безопасности, " "включённых в инфраструктуру MAC, обладает специфическими характеристиками, " "которые предоставляются как для тестирования, так и для разработки новых " "модулей. Дополнительную информацию об этих модулях политики безопасности и " "предоставляемых ими механизмах можно найти в man:mac_test[4], man:" "mac_stub[4] и man:mac_none[4]." #.
type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:95 #, no-wrap msgid "Key Terms" msgstr "Ключевые термины" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:98 msgid "The following key terms are used when referring to the MAC framework:" msgstr "" "В документации FreeBSD при обращении к инфраструктуре MAC используются " "следующие ключевые термины:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:100 msgid "" "_compartment_: a set of programs and data to be partitioned or separated, " "where users are given explicit access to specific component of a system. A " "compartment represents a grouping, such as a work group, department, " "project, or topic. Compartments make it possible to implement a need-to-know-" "basis security policy." msgstr "" "_компартмент (compartment)_: набор программ и данных, которые должны быть " "разделены или изолированы, при этом пользователи получают явный доступ к " "определенным компонентам системы. Компартмент представляет собой группу, " "такую как рабочая группа, отдел, проект или тема. Компартменты позволяют " "реализовать политику безопасности на основе принципа \"необходимо знать\"." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:101 msgid "" "_integrity_: the level of trust which can be placed on data. As the " "integrity of the data is elevated, so does the ability to trust that data." msgstr "" "_целостность (integrity)_: уровень доверия, который может быть оказан " "данным. По мере повышения целостности данных возрастает и возможность " "доверять этим данным." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:102 msgid "" "_level_: the increased or decreased setting of a security attribute. As the " "level increases, its security is considered to elevate as well." msgstr "" "_уровень (level)_: увеличенное или уменьшенное значение атрибута " "безопасности. 
По мере повышения уровня его безопасность также считается " "более высокой." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:103 msgid "" "_label_: a security attribute which can be applied to files, directories, or " "other items in the system. It could be considered a confidentiality stamp. " "When a label is placed on a file, it describes the security properties of " "that file and will only permit access by files, users, and resources with a " "similar security setting. The meaning and interpretation of label values " "depends on the policy configuration. Some policies treat a label as " "representing the integrity or secrecy of an object while other policies " "might use labels to hold rules for access." msgstr "" "_метка (label)_: атрибут безопасности, который может быть применён к файлам, " "каталогам или другим элементам системы. Его можно рассматривать как штамп " "конфиденциальности. Когда метка назначается файлу, она описывает его " "свойства безопасности и разрешает доступ только файлам, пользователям и " "ресурсам с аналогичными настройками безопасности. Значение и интерпретация " "меток зависят от конфигурации политики. Некоторые политики рассматривают " "метку как показатель целостности или секретности объекта, тогда как другие " "могут использовать метки для хранения правил доступа." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:104 msgid "" "_multilabel_: this property is a file system option which can be set in " "single-user mode using man:tunefs[8], during boot using man:fstab[5], or " "during the creation of a new file system. This option permits an " "administrator to apply different MAC labels on different objects. This " "option only applies to security policy modules which support labeling." 
msgstr "" "_множественные метки (multilabel)_: это свойство является параметром " "файловой системы, которое можно установить в однопользовательском режиме с " "помощью man:tunefs[8], во время загрузки с использованием man:fstab[5] или " "при создании новой файловой системы. Эта опция позволяет администратору " "применять различные метки MAC к разным объектам. Данная опция применяется " "только к модулям политики безопасности, которые поддерживают маркировку " "метками (далее — маркировку)." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:105 msgid "" "_single label_: a policy where the entire file system uses one label to " "enforce access control over the flow of data. Whenever `multilabel` is not " "set, all files will conform to the same label setting." msgstr "" "_одиночная метка (single label)_: политика, при которой вся файловая система " "использует одну метку для контроля доступа к потокам данных. Если параметр " "`multilabel` не установлен, все файлы будут соответствовать одной и той же " "настройке метки." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:106 msgid "" "_object_: an entity through which information flows under the direction of a " "_subject_. This includes directories, files, fields, screens, keyboards, " "memory, magnetic storage, printers or any other data storage or moving " "device. An object is a data container or a system resource. Access to an " "object effectively means access to its data." msgstr "" "_объект (object)_: сущность, через которую информация передаётся под " "управлением _субъекта_. Это включает каталоги, файлы, поля, экраны, " "клавиатуры, память, магнитные накопители, принтеры или любые другие " "устройства хранения или передачи данных. Объект является контейнером данных " "или системным ресурсом. Доступ к объекту фактически означает доступ к его " "данным." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:107 msgid "" "_subject_: any active entity that causes information to flow between " "_objects_ such as a user, user process, or system process. On FreeBSD, this " "is almost always a thread acting in a process on behalf of a user." msgstr "" "_субъект (subject)_: любая активная сущность, вызывающая передачу информации " "между _объектами_, например, пользователь, пользовательский процесс или " "системный процесс. В FreeBSD это почти всегда поток, действующий в процессе " "от имени пользователя." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:108 msgid "" "_policy_: a collection of rules which defines how objectives are to be " "achieved. A policy usually documents how certain items are to be handled. " "This chapter considers a policy to be a collection of rules which controls " "the flow of data and information and defines who has access to that data and " "information." msgstr "" "_политика (policy)_: набор правил, определяющий, как достичь поставленных " "целей. Политика обычно документирует, каким образом следует обращаться с " "определёнными элементами. В этой главе под политикой понимается набор " "правил, контролирующих поток данных и информации, а также определяющих, кто " "имеет доступ к этим данным и информации." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:109 msgid "" "_high-watermark_: this type of policy permits the raising of security levels " "for the purpose of accessing higher level information. In most cases, the " "original level is restored after the process is complete. Currently, the " "FreeBSD MAC framework does not include this type of policy." msgstr "" "_верхний уровень (high-watermark)_: этот тип политики позволяет повышать " "уровни безопасности для доступа к информации более высокого уровня. В " "большинстве случаев исходный уровень восстанавливается после завершения " "процесса. 
В настоящее время в инфраструктуре MAC в FreeBSD отсутствует такой " "тип политики." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:110 msgid "" "_low-watermark_: this type of policy permits lowering security levels for " "the purpose of accessing information which is less secure. In most cases, " "the original security level of the user is restored after the process is " "complete. The only security policy module in FreeBSD to use this is man:" "mac_lomac[4]." msgstr "" "_нижний уровень (low-watermark)_: этот тип политики позволяет понижать " "уровни безопасности для доступа к информации с более низким уровнем защиты. " "В большинстве случаев исходный уровень безопасности пользователя " "восстанавливается после завершения процесса. Единственный модуль политики " "безопасности в FreeBSD, использующий этот подход, — это man:mac_lomac[4]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:111 msgid "" "_sensitivity_: usually used when discussing Multilevel Security (MLS). A " "sensitivity level describes how important or secret the data should be. As " "the sensitivity level increases, so does the importance of the secrecy, or " "confidentiality, of the data." msgstr "" "_чувствительность (sensitivity)_: обычно используется при обсуждении " "многоуровневой безопасности (Multilevel Security, MLS). Уровень " "чувствительности описывает, насколько важными или секретными должны быть " "данные. По мере увеличения уровня чувствительности возрастает и важность " "секретности, или конфиденциальности, данных." #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:113 #, no-wrap msgid "Understanding MAC Labels" msgstr "Метки MAC" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:118 msgid "" "A MAC label is a security attribute which may be applied to subjects and " "objects throughout the system. 
When setting a label, the administrator must " "understand its implications in order to prevent unexpected or undesired " "behavior of the system. The attributes available on an object depend on the " "loaded policy module, as policy modules interpret their attributes in " "different ways." msgstr "" "Метка MAC — это атрибут безопасности, который может быть применён к " "субъектам и объектам в системе. При установке метки администратор должен " "понимать её последствия, чтобы избежать неожиданного или нежелательного " "поведения системы. Доступные атрибуты объекта зависят от загруженного модуля " "политики, так как модули политики интерпретируют свои атрибуты по-разному." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:122 msgid "" "The security label on an object is used as a part of a security access " "control decision by a policy. With some policies, the label contains all of " "the information necessary to make a decision. In other policies, the labels " "may be processed as part of a larger rule set." msgstr "" "Метка безопасности объекта используется как часть решения по контролю " "доступа в соответствии с политикой. В некоторых политиках метка содержит всю " "информацию, необходимую для принятия решения. В других политиках метки могут " "обрабатываться как часть более обширного набора правил." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:126 msgid "" "There are two types of label policies: single label and multi label. By " "default, the system will use single label. The administrator should be " "aware of the pros and cons of each in order to implement policies which meet " "the requirements of the system's security model." msgstr "" "Существует два типа политик меток: с одной меткой и с несколькими " "метками. По умолчанию система использует одну метку. 
Администратор должен " "учитывать преимущества и недостатки каждого типа, чтобы реализовать " "политики, соответствующие требованиям модели безопасности системы." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:130 msgid "" "A single label security policy only permits one label to be used for every " "subject or object. Since a single label policy enforces one set of access " "permissions across the entire system, it provides lower administration " "overhead, but decreases the flexibility of policies which support labeling. " "However, in many environments, a single label policy may be all that is " "required." msgstr "" "Политика безопасности с одной меткой разрешает использование только одной " "метки для каждого субъекта или объекта. Поскольку политика с одной меткой " "применяет единый набор прав доступа во всей системе, это снижает нагрузку на " "администрирование, но уменьшает гибкость политик, поддерживающих маркировку. " "Однако во многих средах политика с одной меткой может оказаться вполне " "достаточной." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:134 msgid "" "A single label policy is somewhat similar to DAC as `root` configures the " "policies so that users are placed in the appropriate categories and access " "levels. A notable difference is that many policy modules can also restrict " "`root`. Basic control over objects will then be released to the group, but " "`root` may revoke or modify the settings at any time." msgstr "" "Политика с одной меткой несколько похожа на DAC, поскольку `root` " "настраивает политики так, чтобы пользователи попадали в соответствующие " "категории и уровни доступа. Заметное отличие заключается в том, что многие " "модули политики также могут ограничивать пользователя `root`. Базовый " "контроль над объектами затем передаётся группе, но `root` может отозвать или " "изменить настройки в любое время." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:139 msgid "" "When appropriate, a multi label policy can be set on a UFS file system by " "passing `multilabel` to man:tunefs[8]. A multi label policy permits each " "subject or object to have its own independent MAC label. The decision to " "use a multi label or single label policy is only required for policies which " "implement the labeling feature, such as `biba`, `lomac`, and `mls`. Some " "policies, such as `seeotheruids`, `portacl` and `partition`, do not use " "labels at all." msgstr "" "Когда это уместно, политику с несколькими метками можно установить на " "файловой системе UFS, передав `multilabel` в man:tunefs[8]. Политика с " "несколькими метками позволяет каждому субъекту или объекту иметь свою " "собственную независимую метку MAC. Решение использовать политику с " "несколькими метками или одной меткой требуется только для политик, " "реализующих функцию маркировки, таких как `biba`, `lomac` и `mls`. Некоторые " "политики, такие как `seeotheruids`, `portacl` и `partition`, вообще не " "используют метки." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:142 msgid "" "Using a multi label policy on a partition and establishing a multi label " "security model can increase administrative overhead as everything in that " "file system has a label. This includes directories, files, and even device " "nodes." msgstr "" "Использование политики с несколькими метками на разделе и установление " "модели безопасности с несколькими метками может увеличить административную " "нагрузку, так как всё в этой файловой системе имеет метку. Это включает " "каталоги, файлы и даже узлы устройств." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:145 msgid "" "The following command will set `multilabel` on the specified UFS file " "system. 
This may only be done in single-user mode and is not a requirement " "for the swap file system:" msgstr "" "Следующая команда установит `multilabel` для указанной файловой системы UFS. " "Это можно сделать только в однопользовательском режиме; для файловой " "системы подкачки это не требуется:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:149 #, no-wrap msgid "# tunefs -l enable /\n" msgstr "# tunefs -l enable /\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:155 msgid "" "Some users have experienced problems with setting the `multilabel` flag on " "the root partition. If this is the case, please review crossref:mac[mac-" "troubleshoot, Troubleshooting the MAC Framework]." msgstr "" "Некоторые пользователи столкнулись с проблемами при установке флага " "`multilabel` на корневом разделе. Если это ваш случай, ознакомьтесь с " "crossref:mac[mac-troubleshoot, Устранение неполадок инфраструктуры MAC]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:161 msgid "" "Since the multi label policy is set on a per-file system basis, a multi " "label policy may not be needed if the file system layout is well designed. " "Consider an example security MAC model for a FreeBSD web server. This " "machine uses the single label, `biba/high`, for everything in the default " "file systems. If the web server needs to run at `biba/low` to prevent write " "up capabilities, it could be installed to a separate UFS [.filename]#/usr/" "local# file system set at `biba/low`." msgstr "" "Поскольку политика с несколькими метками устанавливается для каждой файловой " "системы, она может не потребоваться, если структура файловых систем хорошо " "продумана. Рассмотрим пример модели безопасности MAC для веб-сервера " "FreeBSD. На этой машине используется единая метка `biba/high` для всего в " "стандартных файловых системах. 
Если веб-сервер должен работать с `biba/low`, " "чтобы предотвратить возможность записи вверх, его можно установить в " "отдельную файловую систему UFS [.filename]#/usr/local# с меткой `biba/low`." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:162 #, no-wrap msgid "Label Configuration" msgstr "Конфигурация меток" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:166 msgid "" "Virtually all aspects of label policy module configuration will be performed " "using the base system utilities. These commands provide a simple interface " "for object or subject configuration or the manipulation and verification of " "the configuration." msgstr "" "Практически все аспекты настройки модуля политики меток будут выполняться с " "помощью базовых системных утилит. Эти команды предоставляют простой " "интерфейс для настройки объекта или субъекта, а также для изменения и " "проверки конфигурации." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:169 msgid "" "All configuration may be done using `setfmac`, which is used to set MAC " "labels on system objects, and `setpmac`, which is used to set the labels on " "system subjects. For example, to set the `biba` MAC label to `high` on [." "filename]#test#:" msgstr "" "Вся настройка может быть выполнена с помощью `setfmac`, который используется " "для установки меток MAC на объектах системы, и `setpmac`, который " "используется для установки меток на субъектах системы. Например, чтобы " "установить метку MAC `biba` в значение `high` для [.filename]#test#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:173 #, no-wrap msgid "# setfmac biba/high test\n" msgstr "# setfmac biba/high test\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:181 msgid "" "If the configuration is successful, the prompt will be returned without " "error. 
A common error is `Permission denied` which usually occurs when the " "label is being set or modified on a restricted object. Other conditions may " "produce different failures. For instance, the file may not be owned by the " "user attempting to relabel the object, the object may not exist, or the " "object may be read-only. A mandatory policy will not allow the process to " "relabel the file, maybe because of a property of the file, a property of the " "process, or a property of the proposed new label value. For example, if a " "user running at low integrity tries to change the label of a high integrity " "file, or a user running at low integrity tries to change the label of a low " "integrity file to a high integrity label, these operations will fail." msgstr "" "Если конфигурация выполнена успешно, командная строка вернётся без ошибок. " "Частая ошибка — `Permission denied`, которая обычно возникает при установке " "или изменении метки на защищённом объекте. Другие условия могут вызывать " "иные сбои. Например, файл может не принадлежать пользователю, пытающемуся " "изменить метку объекта, объект может не существовать или быть доступным " "только для чтения. Мандатная политика не позволит процессу изменить метку " "файла, возможно, из-за свойств самого файла, процесса или предлагаемого " "нового значения метки. Например, если пользователь с низким уровнем " "целостности попытается изменить метку файла с высоким уровнем целостности " "или если пользователь с низким уровнем целостности попытается изменить метку " "файла с низкого уровня на высокий, эти операции завершатся неудачей." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:183 msgid "" "The system administrator may use `setpmac` to override the policy module's " "settings by assigning a different label to the invoked process:" msgstr "" "Системный администратор может использовать `setpmac` для переопределения " "настроек модуля политики, назначив другую метку вызываемому процессу:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:191 #, no-wrap msgid "" "# setfmac biba/high test\n" "Permission denied\n" "# setpmac biba/low setfmac biba/high test\n" "# getfmac test\n" "test: biba/high\n" msgstr "" "# setfmac biba/high test\n" "Permission denied\n" "# setpmac biba/low setfmac biba/high test\n" "# getfmac test\n" "test: biba/high\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:196 msgid "" "For currently running processes, such as sendmail, `getpmac` is usually used " "instead. This command takes a process ID (PID) in place of a command name. " "If users attempt to manipulate a file not in their access, subject to the " "rules of the loaded policy modules, the `Operation not permitted` error will " "be displayed." msgstr "" "Для уже запущенных процессов, таких как sendmail, вместо этого обычно " "используется `getpmac`. Эта команда принимает идентификатор процесса (PID) " "вместо имени команды. Если пользователи пытаются работать с файлом, к " "которому у них нет доступа, в соответствии с правилами загруженных модулей " "политики, будет отображаться ошибка `Operation not permitted` («Операция не " "разрешена»)." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:197 #, no-wrap msgid "Predefined Labels" msgstr "Предопределенные метки" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:200 msgid "" "A few FreeBSD policy modules which support the labeling feature offer three " "predefined labels: `low`, `equal`, and `high`, where:" msgstr "" "Несколько модулей политики FreeBSD, поддерживающих функцию меток, предлагают " "три предопределённых метки: `low`, `equal` и `high`, где:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:202 msgid "" "`low` is considered the lowest label setting an object or subject may have. " "Setting this on objects or subjects blocks their access to objects or " "subjects marked high." msgstr "" "`low` считается минимальным уровнем метки, который может быть установлен для " "объекта или субъекта. Установка этого уровня для объектов или субъектов " "блокирует их доступ к объектам или субъектам, помеченным как `high`." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:203 msgid "" "`equal` sets the subject or object to be disabled or unaffected and should " "only be placed on objects considered to be exempt from the policy." msgstr "" "`equal` устанавливает, что субъект или объект отключён или не затронут, и " "должен использоваться только для объектов, считающихся исключёнными из " "политики." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:204 msgid "" "`high` grants an object or subject the highest setting available in the Biba " "and MLS policy modules." msgstr "" "`high` предоставляет объекту или субъекту наивысший уровень доступа, " "доступный в модулях политик Biba и MLS." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:208 msgid "" "Such policy modules include man:mac_biba[4], man:mac_mls[4] and man:" "mac_lomac[4]. Each of the predefined labels establishes a different " "information flow directive. Refer to the manual page of the module to " "determine the traits of the generic label configurations." 
msgstr "" "Такие модули политик включают man:mac_biba[4], man:mac_mls[4] и man:" "mac_lomac[4]. Каждая из предопределённых меток задаёт свою " "директиву потока информации. Обратитесь к справочной странице модуля, чтобы " "определить особенности стандартных конфигураций меток." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:209 #, no-wrap msgid "Numeric Labels" msgstr "Числовые метки" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:214 msgid "" "The Biba and MLS policy modules support a numeric label which may be set to " "indicate the precise level of hierarchical control. This numeric level is " "used to partition or sort information into different groups of " "classification, only permitting access to that group or a higher group " "level. For example:" msgstr "" "Модули политик Biba и MLS поддерживают числовую метку, которая может быть " "установлена для указания точного уровня иерархического контроля. Этот " "числовой уровень используется для разделения или сортировки информации по " "различным группам классификации, разрешая доступ только к этой группе или к " "группе более высокого уровня. Например:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:218 #, no-wrap msgid "biba/10:2+3+6(5:2+3-20:2+3+4+5+6)\n" msgstr "biba/10:2+3+6(5:2+3-20:2+3+4+5+6)\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:221 msgid "" "may be interpreted as \"Biba Policy Label/Grade 10:Compartments 2, 3 and 6: " "(grade 5 ...\")" msgstr "" "может интерпретироваться как \"Метка/уровень целостности (grade) политики " "Biba 10: компартменты 2, 3 и 6: (уровень 5 ...)\"" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:224 msgid "" "In this example, the first grade would be considered the effective grade " "with effective compartments, the second grade is the low grade, and the last " "one is the high grade. 
In most configurations, such fine-grained settings " "are not needed as they are considered to be advanced configurations." msgstr "" "В этом примере первый уровень целостности будет считаться эффективным с " "эффективными компартментами, второй уровень — низкий уровень, а последний — " "высокий. В большинстве конфигураций такие детальные настройки не требуются, " "так как они считаются расширенными конфигурациями." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:227 msgid "" "System objects only have a current grade and compartment. System subjects " "reflect the range of available rights in the system, and network interfaces, " "where they are used for access control." msgstr "" "Системные объекты имеют только текущий уровень целостности и компартмент. " "Системные субъекты отражают диапазон доступных прав в системе, а сетевые " "интерфейсы используются для контроля доступа." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:232 msgid "" "The grade and compartments in a subject and object pair are used to " "construct a relationship known as _dominance_, in which a subject dominates " "an object, the object dominates the subject, neither dominates the other, or " "both dominate each other. The \"both dominate\" case occurs when the two " "labels are equal. Due to the information flow nature of Biba, a user has " "rights to a set of compartments that might correspond to projects, but " "objects also have a set of compartments. Users may have to subset their " "rights using `su` or `setpmac` in order to access objects in a compartment " "from which they are not restricted." msgstr "" "Уровни целостности и компартменты в паре субъект-объект используются для " "построения отношения, известного как _доминирование_, при котором субъект " "доминирует над объектом, объект доминирует над субъектом, ни один не " "доминирует над другим или оба доминируют друг над другом. 
Случай «оба " "доминируют» возникает, когда две метки равны. В силу природы информационных " "потоков в модели Biba пользователь имеет права на набор компартментов, " "которые могут соответствовать проектам, но объекты также имеют набор " "компартментов. Пользователям может потребоваться ограничить свои права с " "помощью `su` или `setpmac`, чтобы получить доступ к объектам в " "компартментах, доступ к которым им не запрещён." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:233 #, no-wrap msgid "User Labels" msgstr "Пользовательские метки" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:238 msgid "" "Users are required to have labels so that their files and processes properly " "interact with the security policy defined on the system. This is configured " "in [.filename]#/etc/login.conf# using login classes. Every policy module " "that uses labels will implement the user class setting." msgstr "" "Пользователи должны иметь метки, чтобы их файлы и процессы корректно " "взаимодействовали с политикой безопасности, определенной в системе. Это " "настраивается в [.filename]#/etc/login.conf# с использованием классов входа. " "Каждый модуль политики, использующий метки, будет реализовывать настройку " "класса пользователя." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:243 msgid "" "To set the user class default label which will be enforced by MAC, add a " "`label` entry. An example `label` entry containing every policy module is " "displayed below. Note that in a real configuration, the administrator would " "never enable every policy module. It is recommended that the rest of this " "chapter be reviewed before any configuration is implemented." msgstr "" "Чтобы установить метку класса пользователя по умолчанию, которая будет " "применяться MAC, добавьте запись `label`. Ниже приведен пример записи " "`label`, содержащей каждый модуль политики. 
Обратите внимание, что в " "реальной конфигурации администратор никогда не станет включать все модули политики. " "Рекомендуется ознакомиться с остальной частью этой главы перед внедрением " "любой конфигурации." #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:269 #, no-wrap msgid "" "default:\\\n" "\t:copyright=/etc/COPYRIGHT:\\\n" "\t:welcome=/etc/motd:\\\n" "\t:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\\\n" "\t:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\\\n" "\t:manpath=/usr/share/man /usr/local/man:\\\n" "\t:nologin=/usr/sbin/nologin:\\\n" "\t:cputime=1h30m:\\\n" "\t:datasize=8M:\\\n" "\t:vmemoryuse=100M:\\\n" "\t:stacksize=2M:\\\n" "\t:memorylocked=4M:\\\n" "\t:memoryuse=8M:\\\n" "\t:filesize=8M:\\\n" "\t:coredumpsize=8M:\\\n" "\t:openfiles=24:\\\n" "\t:maxproc=32:\\\n" "\t:priority=0:\\\n" "\t:requirehome:\\\n" "\t:passwordtime=91d:\\\n" "\t:umask=022:\\\n" "\t:ignoretime@:\\\n" "\t:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:\n" msgstr "" "default:\\\n" "\t:copyright=/etc/COPYRIGHT:\\\n" "\t:welcome=/etc/motd:\\\n" "\t:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\\\n" "\t:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\\\n" "\t:manpath=/usr/share/man /usr/local/man:\\\n" "\t:nologin=/usr/sbin/nologin:\\\n" "\t:cputime=1h30m:\\\n" "\t:datasize=8M:\\\n" "\t:vmemoryuse=100M:\\\n" "\t:stacksize=2M:\\\n" "\t:memorylocked=4M:\\\n" "\t:memoryuse=8M:\\\n" "\t:filesize=8M:\\\n" "\t:coredumpsize=8M:\\\n" "\t:openfiles=24:\\\n" "\t:maxproc=32:\\\n" "\t:priority=0:\\\n" "\t:requirehome:\\\n" "\t:passwordtime=91d:\\\n" "\t:umask=022:\\\n" "\t:ignoretime@:\\\n" "\t:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:274 msgid "" "While users can not modify the default value, they may change their label " "after they login, subject to the constraints of the policy. 
The example " "above tells the Biba policy that a process's minimum integrity is `5`, its " "maximum is `15`, and the default effective label is `10`. The process will " "run at `10` until it chooses to change label, perhaps due to the user using " "`setpmac`, which will be constrained by Biba to the configured range." msgstr "" "Хотя пользователи не могут изменить значение по умолчанию, они могут " "изменить свою метку после входа в систему, в соответствии с ограничениями " "политики. В приведённом выше примере политика Biba указывает, что " "минимальный уровень целостности процесса — `5`, максимальный — `15`, а " "эффективная метка по умолчанию — `10`. Процесс будет выполняться с меткой " "`10`, пока не решит изменить её, например, если пользователь воспользуется " "`setpmac`, что будет ограничено политикой Biba в рамках заданного диапазона." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:276 msgid "" "After any change to [.filename]#login.conf#, the login class capability " "database must be rebuilt using `cap_mkdb`." msgstr "" "После любого изменения в файле [.filename]#login.conf# необходимо " "перестроить базу данных возможностей классов входа с помощью команды " "`cap_mkdb`." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:279 msgid "" "Many sites have a large number of users requiring several different user " "classes. In depth planning is required as this can become difficult to " "manage." msgstr "" "Во многих организациях имеется большое число пользователей, которым " "требуется несколько различных классов пользователей. Требуется тщательное " "планирование, так как это может стать сложным в управлении." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:280 #, no-wrap msgid "Network Interface Labels" msgstr "Метки сетевых интерфейсов" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:285 msgid "" "Labels may be set on network interfaces to help control the flow of data " "across the network. Policies using network interface labels function in the " "same way that policies function with respect to objects. Users at high " "settings in Biba, for example, will not be permitted to access network " "interfaces with a label of `low`." msgstr "" "Метки могут быть установлены на сетевых интерфейсах для помощи в контроле " "потока данных в сети. Политики, использующие метки сетевых интерфейсов, " "работают так же, как политики, применяемые к объектам. Например, " "пользователи с высокими уровнями в Biba не смогут получить доступ к сетевым " "интерфейсам с меткой `low`." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:287 msgid "" "When setting the MAC label on network interfaces, `maclabel` may be passed " "to `ifconfig`:" msgstr "" "При установке MAC-метки на сетевых интерфейсах, `maclabel` может быть " "передан в `ifconfig`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:291 #, no-wrap msgid "# ifconfig bge0 maclabel biba/equal\n" msgstr "# ifconfig bge0 maclabel biba/equal\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:295 msgid "" "This example will set the MAC label of `biba/equal` on the `bge0` " "interface. When using a setting similar to `biba/high(low-high)`, the " "entire label should be quoted to prevent an error from being returned." msgstr "" "Этот пример установит метку MAC `biba/equal` на интерфейсе `bge0`. При " "использовании настройки вида `biba/high(low-high)` всю метку следует " "заключить в кавычки, чтобы избежать ошибки." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:299 msgid "" "Each policy module which supports labeling has a tunable which may be used " "to disable the MAC label on network interfaces. 
Setting the label to " "`equal` will have a similar effect. Review the output of `sysctl`, the " "policy manual pages, and the information in the rest of this chapter for " "more information on those tunables." msgstr "" "Каждый модуль политики, поддерживающий метки, имеет настраиваемый параметр, " "который может использоваться для отключения метки MAC на сетевых " "интерфейсах. Установка метки в значение `equal` даст аналогичный эффект. Для " "получения дополнительной информации об этих настраиваемых параметрах " "ознакомьтесь с выводом `sysctl`, справочными страницами политик и " "информацией в остальной части этой главы." #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:301 #, no-wrap msgid "Planning the Security Configuration" msgstr "Планирование конфигурации безопасности" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:305 msgid "" "Before implementing any MAC policies, a planning phase is recommended. " "During the planning stages, an administrator should consider the " "implementation requirements and goals, such as:" msgstr "" "Прежде чем внедрять какие-либо политики MAC, рекомендуется этап " "планирования. На этапе планирования администратор должен учесть требования и " "цели внедрения, такие как:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:307 msgid "" "How to classify information and resources available on the target systems." msgstr "" "Как классифицировать информацию и ресурсы, доступные в целевых системах." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:308 msgid "" "Which information or resources to restrict access to along with the type of " "restrictions that should be applied." msgstr "" "Какую информацию или ресурсы следует ограничить в доступе, а также тип " "ограничений, которые должны быть применены." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:309 msgid "Which MAC modules will be required to achieve this goal." msgstr "Какие модули MAC потребуются для достижения этой цели." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:312 msgid "" "A trial run of the trusted system and its configuration should occur " "_before_ a MAC implementation is used on production systems. Since " "different environments have different needs and requirements, establishing a " "complete security profile will decrease the need of changes once the system " "goes live." msgstr "" "Пробный запуск доверенной системы и её конфигурации должен быть выполнен " "_до_ использования реализации MAC в производственных системах. Поскольку " "различные среды имеют разные потребности и требования, создание полного " "профиля безопасности уменьшит необходимость изменений после ввода системы в " "эксплуатацию." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:318 msgid "" "Consider how the MAC framework augments the security of the system as a " "whole. The various security policy modules provided by the MAC framework " "could be used to protect the network and file systems or to block users from " "accessing certain ports and sockets. Perhaps the best use of the policy " "modules is to load several security policy modules at a time in order to " "provide a MLS environment. This approach differs from a hardening policy, " "which typically hardens elements of a system which are used only for " "specific purposes. The downside to MLS is increased administrative overhead." msgstr "" "Рассмотрим, как инфраструктура MAC усиливает безопасность системы в целом. " "Различные модули политики безопасности, предоставляемые инфраструктурой MAC, " "могут использоваться для защиты сети и файловых систем или для блокировки " "доступа пользователей к определённым портам и сокетам. 
Возможно, наилучшее " "применение модулей политики — это загрузка нескольких модулей политики " "безопасности одновременно для создания среды MLS. Такой подход отличается от " "политики усиления защиты, которая обычно укрепляет элементы системы, " "используемые только для определённых целей. Недостатком MLS является " "увеличение административных затрат." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:322 msgid "" "The overhead is minimal when compared to the lasting effect of a framework " "which provides the ability to pick and choose which policies are required " "for a specific configuration and which keeps performance overhead down. The " "reduction of support for unneeded policies can increase the overall " "performance of the system as well as offer flexibility of choice. A good " "implementation would consider the overall security requirements and " "effectively implement the various security policy modules offered by the " "framework." msgstr "" "Накладные расходы минимальны по сравнению с долгосрочным эффектом от " "инфраструктуры, который предоставляет возможность выбирать необходимые " "политики для конкретной конфигурации и минимизирует снижение " "производительности. Уменьшение поддержки ненужных политик может повысить " "общую производительность системы, а также обеспечить гибкость выбора. " "Хорошая реализация должна учитывать общие требования безопасности и " "эффективно внедрять различные модули политик безопасности, предоставляемые " "инфраструктурой." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:325 msgid "" "A system utilizing MAC guarantees that a user will not be permitted to " "change security attributes at will. All user utilities, programs, and " "scripts must work within the constraints of the access rules provided by the " "selected security policy modules and control of the MAC access rules is in " "the hands of the system administrator." 
msgstr "" "Система, использующая MAC, гарантирует, что пользователь не сможет по своему " "желанию изменять атрибуты безопасности. Все пользовательские утилиты, " "программы и скрипты должны работать в рамках ограничений, установленных " "правилами доступа выбранных модулей политики безопасности, а управление " "правилами доступа MAC находится в руках системного администратора." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:329 msgid "" "It is the duty of the system administrator to carefully select the correct " "security policy modules. For an environment that needs to limit access " "control over the network, the man:mac_portacl[4], man:mac_ifoff[4], and man:" "mac_biba[4] policy modules make good starting points. For an environment " "where strict confidentiality of file system objects is required, consider " "the man:mac_bsdextended[4] and man:mac_mls[4] policy modules." msgstr "" "Обязанностью системного администратора является тщательный выбор подходящих " "модулей политики безопасности. Для среды, где требуется ограничить контроль " "доступа по сети, модули политик man:mac_portacl[4], man:mac_ifoff[4] и man:" "mac_biba[4] могут стать хорошей отправной точкой. В среде, где необходима " "строгая конфиденциальность объектов файловой системы, следует рассмотреть " "модули политик man:mac_bsdextended[4] и man:mac_mls[4]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:336 msgid "" "Policy decisions could be made based on network configuration. If only " "certain users should be permitted access to man:ssh[1], the man:" "mac_portacl[4] policy module is a good choice. In the case of file systems, " "access to objects might be considered confidential to some users, but not to " "others. As an example, a large development team might be broken off into " "smaller projects where developers in project A might not be permitted to " "access objects written by developers in project B. 
Yet both projects might " "need to access objects created by developers in project C. Using the " "different security policy modules provided by the MAC framework, users could " "be divided into these groups and then given access to the appropriate " "objects." msgstr "" "Решения о политиках могут приниматься на основе конфигурации сети. Если " "только определенные пользователи должны иметь доступ к man:ssh[1], модуль " "политики man:mac_portacl[4] является хорошим выбором. В случае файловых " "систем доступ к объектам может считаться конфиденциальным для одних " "пользователей, но не для других. Например, большая команда разработчиков " "может быть разделена на меньшие проекты, где разработчики из проекта A не " "должны иметь доступа к объектам, созданным разработчиками из проекта B. " "Однако оба проекта могут нуждаться в доступе к объектам, созданным " "разработчиками из проекта C. Используя различные модули политик " "безопасности, предоставляемые MAC-фреймворком, пользователи могут быть " "разделены на эти группы и затем получить доступ к соответствующим объектам." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:340 msgid "" "Each security policy module has a unique way of dealing with the overall " "security of a system. Module selection should be based on a well thought " "out security policy which may require revision and reimplementation. " "Understanding the different security policy modules offered by the MAC " "framework will help administrators choose the best policies for their " "situations." msgstr "" "Каждый модуль политики безопасности имеет уникальный способ обработки общей " "безопасности системы. Выбор модуля должен основываться на продуманной " "политике безопасности, которая может потребовать пересмотра и повторной " "реализации. Понимание различных модулей политики безопасности, " "предоставляемых инфраструктурой MAC, поможет администраторам выбрать " "наилучшие политики для их ситуаций." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:342 msgid "" "The rest of this chapter covers the available modules, describes their use " "and configuration, and in some cases, provides insight on applicable " "situations." msgstr "" "Остальная часть этой главы посвящена доступным модулям, описанию их " "использования и настройки, а в некоторых случаях содержит рекомендации по их " "применению в различных ситуациях." #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:347 msgid "" "Implementing MAC is much like implementing a firewall since care must be " "taken to prevent being completely locked out of the system. The ability to " "revert back to a previous configuration should be considered and the " "implementation of MAC over a remote connection should be done with extreme " "caution." msgstr "" "Внедрение MAC во многом похоже на настройку межсетевого экрана, так как " "необходимо соблюдать осторожность, чтобы не оказаться полностью " "заблокированным в системе. Следует предусмотреть возможность возврата к " "предыдущей конфигурации, а реализацию MAC через удалённое соединение следует " "выполнять с особой осторожностью." #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:350 #, no-wrap msgid "Available MAC Policies" msgstr "Доступные политики MAC" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:356 msgid "" "The default FreeBSD kernel includes `options MAC`. This means that every " "module included with the MAC framework can be loaded with `kldload` as a run-" "time kernel module. After testing the module, add the module name to [." "filename]#/boot/loader.conf# so that it will load during boot. Each module " "also provides a kernel option for those administrators who choose to compile " "their own custom kernel." msgstr "" "Стандартное ядро FreeBSD включает `options MAC`. 
Это означает, что каждый " "модуль, входящий в состав инфраструктуры MAC, может быть загружен с помощью " "`kldload` как модуль ядра во время выполнения. После тестирования модуля " "добавьте его имя в [.filename]#/boot/loader.conf#, чтобы он загружался при " "старте системы. Каждый модуль также предоставляет опцию ядра для " "администраторов, которые предпочитают компилировать собственное ядро системы." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:359 msgid "" "FreeBSD includes a group of policies that will cover most security " "requirements. Each policy is summarized below. The last three policies " "support integer settings in place of the three default labels." msgstr "" "FreeBSD включает набор политик, которые охватывают большинство требований " "безопасности. Каждая политика кратко описана ниже. Последние три политики " "поддерживают целочисленные настройки вместо трёх стандартных меток." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:361 #, no-wrap msgid "The MAC See Other UIDs Policy" msgstr "Политика MAC — See Other UIDs" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:364 msgid "Module name: [.filename]#mac_seeotheruids.ko#" msgstr "Название модуля: [.filename]#mac_seeotheruids.ko#" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:366 msgid "Kernel configuration line: `options MAC_SEEOTHERUIDS`" msgstr "Строка конфигурации ядра: `options MAC_SEEOTHERUIDS`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:368 msgid "Boot option: `mac_seeotheruids_load=\"YES\"`" msgstr "Опция загрузки: `mac_seeotheruids_load=\"YES\"`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:371 msgid "" "The man:mac_seeotheruids[4] module extends the `security.bsd.see_other_uids` " "and `security.bsd.see_other_gids sysctl` tunables. 
This option does not " "require any labels to be set before configuration and can operate " "transparently with other modules." msgstr "" "Модуль man:mac_seeotheruids[4] расширяет настройки `security.bsd." "see_other_uids` и `security.bsd.see_other_gids sysctl`. Эта опция не требует " "предварительной установки меток для настройки и может работать прозрачно с " "другими модулями." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:373 msgid "" "After loading the module, the following `sysctl` tunables may be used to " "control its features:" msgstr "" "После загрузки модуля следующие настройки `sysctl` могут быть использованы " "для управления его функциями:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:375 msgid "" "`security.mac.seeotheruids.enabled` enables the module and implements the " "default settings which deny users the ability to view processes and sockets " "owned by other users." msgstr "" "`security.mac.seeotheruids.enabled` включает модуль и реализует настройки по " "умолчанию, которые запрещают пользователям возможность просматривать " "процессы и сокеты, принадлежащие другим пользователям." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:376 msgid "" "`security.mac.seeotheruids.specificgid_enabled` allows specified groups to " "be exempt from this policy. To exempt specific groups, use the `security.mac." "seeotheruids.specificgid=_XXX_ sysctl` tunable, replacing _XXX_ with the " "numeric group ID to be exempted." msgstr "" "`security.mac.seeotheruids.specificgid_enabled` позволяет исключить " "указанные группы из действия этой политики. Чтобы исключить определённые " "группы, используйте параметр `security.mac.seeotheruids.specificgid=_XXX_ " "sysctl`, заменив _XXX_ на числовой идентификатор группы, которую нужно " "исключить." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:377 msgid "" "`security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific " "primary groups from this policy. When using this tunable, `security.mac." "seeotheruids.specificgid_enabled` may not be set." msgstr "" "`security.mac.seeotheruids.primarygroup_enabled` используется для исключения " "определённых первичных групп из этой политики. При использовании этого " "параметра `security.mac.seeotheruids.specificgid_enabled` не может быть " "установлен." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:379 #, no-wrap msgid "The MAC BSD Extended Policy" msgstr "Политика MAC — BSD Extended" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:382 msgid "Module name: [.filename]#mac_bsdextended.ko#" msgstr "Имя модуля: [.filename]#mac_bsdextended.ko#" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:384 msgid "Kernel configuration line: `options MAC_BSDEXTENDED`" msgstr "Строка конфигурации ядра: `options MAC_BSDEXTENDED`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:386 msgid "Boot option: `mac_bsdextended_load=\"YES\"`" msgstr "Параметр загрузки: `mac_bsdextended_load=\"YES\"`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:392 msgid "" "The man:mac_bsdextended[4] module enforces a file system firewall. It " "provides an extension to the standard file system permissions model, " "permitting an administrator to create a firewall-like ruleset to protect " "files, utilities, and directories in the file system hierarchy. When access " "to a file system object is attempted, the list of rules is iterated until " "either a matching rule is located or the end is reached. This behavior may " "be changed using `security.mac.bsdextended.firstmatch_enabled`. 
Similar to " "other firewall modules in FreeBSD, a file containing the access control " "rules can be created and read by the system at boot time using an man:rc." "conf[5] variable." msgstr "" "Модуль man:mac_bsdextended[4] обеспечивает файловый межсетевой экран. Он " "расширяет стандартную модель прав доступа к файловой системе, позволяя " "администратору создавать набор правил, подобный межсетевому экрану, для " "защиты файлов, утилит и каталогов в иерархии файловой системы. При попытке " "доступа к объекту файловой системы происходит перебор списка правил до тех " "пор, пока не будет найдено соответствующее правило или не будет достигнут " "конец списка. Это поведение можно изменить с помощью параметра `security.mac." "bsdextended.firstmatch_enabled`. Подобно другим модулям межсетевого экрана в " "FreeBSD, файл с правилами контроля доступа может быть создан и прочитан " "системой во время загрузки с использованием переменной man:rc.conf[5]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:395 msgid "" "The rule list may be entered using man:ugidfw[8] which has a syntax similar " "to man:ipfw[8]. More tools can be written by using the functions in the man:" "libugidfw[3] library." msgstr "" "Список правил может быть введён с помощью man:ugidfw[8], синтаксис которого " "похож на man:ipfw[8]. Дополнительные инструменты могут быть написаны с " "использованием функций из библиотеки man:libugidfw[3]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:397 msgid "" "After the man:mac_bsdextended[4] module has been loaded, the following " "command may be used to list the current rule configuration:" msgstr "" "После загрузки модуля man:mac_bsdextended[4] для просмотра текущей " "конфигурации правил можно использовать следующую команду:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/mac/_index.adoc:402 #, no-wrap msgid "" "# ugidfw list\n" "0 slots, 0 rules\n" msgstr "" "# ugidfw list\n" "0 slots, 0 rules\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:406 msgid "" "By default, no rules are defined and everything is completely accessible. " "To create a rule which blocks all access by users but leaves `root` " "unaffected:" msgstr "" "По умолчанию никакие правила не определены, и доступ полностью открыт. Чтобы " "создать правило, которое блокирует доступ для всех пользователей, кроме " "`root`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:410 #, no-wrap msgid "# ugidfw add subject not uid root new object not uid root mode n\n" msgstr "# ugidfw add subject not uid root new object not uid root mode n\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:414 msgid "" "While this rule is simple to implement, it is a very bad idea as it blocks " "all users from issuing any commands. A more realistic example blocks " "`user1` all access, including directory listings, to ``_user2_``'s home " "directory:" msgstr "" "Хотя это правило просто реализовать, это очень плохая идея, так как оно " "лишает всех пользователей возможности выполнять какие-либо команды. Более " "реалистичный пример запрещает `user1` любой доступ, включая просмотр " "каталогов, к домашнему каталогу ``_user2_``:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:419 #, no-wrap msgid "" "# ugidfw set 2 subject uid user1 object uid user2 mode n\n" "# ugidfw set 3 subject uid user1 object gid user2 mode n\n" msgstr "" "# ugidfw set 2 subject uid user1 object uid user2 mode n\n" "# ugidfw set 3 subject uid user1 object gid user2 mode n\n" #.
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:423 msgid "" "Instead of `user1`, `not uid _user2_` could be used in order to enforce the " "same access restrictions for all users. However, the `root` user is " "unaffected by these rules." msgstr "" "Вместо `user1` можно использовать `not uid _user2_`, чтобы применять " "одинаковые ограничения доступа для всех пользователей. Однако пользователь " "`root` не подвержен влиянию этих правил." #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:427 msgid "" "Extreme caution should be taken when working with this module as incorrect " "use could block access to certain parts of the file system." msgstr "" "Следует проявлять крайнюю осторожность при работе с этим модулем, так как " "неправильное использование может заблокировать доступ к определенным частям " "файловой системы." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:430 #, no-wrap msgid "The MAC Interface Silencing Policy" msgstr "Политика MAC — подавление интерфейсов" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:433 msgid "Module name: [.filename]#mac_ifoff.ko#" msgstr "Имя модуля: [.filename]#mac_ifoff.ko#" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:435 msgid "Kernel configuration line: `options MAC_IFOFF`" msgstr "Строка конфигурации ядра: `options MAC_IFOFF`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:437 msgid "Boot option: `mac_ifoff_load=\"YES\"`" msgstr "Параметр загрузки: `mac_ifoff_load=\"YES\"`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:440 msgid "" "The man:mac_ifoff[4] module is used to disable network interfaces on the fly " "and to keep network interfaces from being brought up during system boot. It " "does not use labels and does not depend on any other MAC modules." 
msgstr "" "Модуль man:mac_ifoff[4] используется для отключения сетевых интерфейсов на " "лету и предотвращения их включения во время загрузки системы. Он не " "использует метки и не зависит от других модулей MAC." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:442 msgid "" "Most of this module's control is performed through these `sysctl` tunables:" msgstr "" "Большая часть управления этим модулем осуществляется через следующие " "настраиваемые параметры `sysctl`:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:444 msgid "" "`security.mac.ifoff.lo_enabled` enables or disables all traffic on the " "loopback, man:lo[4], interface." msgstr "" "`security.mac.ifoff.lo_enabled` включает или отключает весь трафик на " "интерфейсе loopback, man:lo[4]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:445 msgid "" "`security.mac.ifoff.bpfrecv_enabled` enables or disables all traffic on the " "Berkeley Packet Filter interface, man:bpf[4]." msgstr "" "`security.mac.ifoff.bpfrecv_enabled` включает или отключает весь трафик на " "интерфейсе Berkeley Packet Filter, man:bpf[4]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:446 msgid "" "`security.mac.ifoff.other_enabled` enables or disables traffic on all other " "interfaces." msgstr "" "`security.mac.ifoff.other_enabled` включает или отключает трафик на всех " "остальных интерфейсах." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:449 msgid "" "One of the most common uses of man:mac_ifoff[4] is network monitoring in an " "environment where network traffic should not be permitted during the boot " "sequence. Another use would be to write a script which uses an application " "such as package:security/aide[] to automatically block network traffic if it " "finds new or altered files in protected directories." 
msgstr "" "Одним из наиболее распространённых применений man:mac_ifoff[4] является " "мониторинг сети в среде, где сетевой трафик не должен разрешаться во время " "последовательности загрузки. Другое применение — написание скрипта, который " "использует приложение, например package:security/aide[], для автоматической " "блокировки сетевого трафика при обнаружении новых или изменённых файлов в " "защищённых каталогах." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:451 #, no-wrap msgid "The MAC Port Access Control List Policy" msgstr "Политика MAC — списки управления доступом к портам" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:454 msgid "Module name: [.filename]#mac_portacl.ko#" msgstr "Имя модуля: [.filename]#mac_portacl.ko#" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:456 msgid "Kernel configuration line: `MAC_PORTACL`" msgstr "Строка конфигурации ядра: `MAC_PORTACL`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:458 msgid "Boot option: `mac_portacl_load=\"YES\"`" msgstr "Параметр загрузки: `mac_portacl_load=\"YES\"`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:460 msgid "" "The man:mac_portacl[4] module is used to limit binding to local TCP and UDP " "ports, making it possible to allow non-`root` users to bind to specified " "privileged ports below 1024." msgstr "" "Модуль man:mac_portacl[4] используется для ограничения привязки к локальным " "TCP- и UDP-портам, позволяя непривилегированным пользователям (не `root`) " "привязываться к указанным привилегированным портам ниже 1024." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:463 msgid "" "Once loaded, this module enables the MAC policy on all sockets. The " "following tunables are available:" msgstr "" "После загрузки этот модуль включает политику MAC для всех сокетов.
Доступны " "следующие настраиваемые параметры:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:465 msgid "" "`security.mac.portacl.enabled` enables or disables the policy completely." msgstr "" "`security.mac.portacl.enabled` включает или отключает политику полностью." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:466 msgid "" "`security.mac.portacl.port_high` sets the highest port number that man:" "mac_portacl[4] protects." msgstr "" "`security.mac.portacl.port_high` устанавливает наибольший номер порта, " "который защищает man:mac_portacl[4]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:467 msgid "" "`security.mac.portacl.suser_exempt`, when set to a non-zero value, exempts " "the `root` user from this policy." msgstr "" "`security.mac.portacl.suser_exempt` при установке в ненулевое значение " "освобождает пользователя `root` от действия данной политики." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:468 msgid "" "`security.mac.portacl.rules` specifies the policy as a text string of the " "form `rule[,rule,...]`, with as many rules as needed, and where each rule is " "of the form `idtype:id:protocol:port`. The `idtype` is either `uid` or " "`gid`. The `protocol` parameter can be `tcp` or `udp`. The `port` parameter " "is the port number to allow the specified user or group to bind to. Only " "numeric values can be used for the user ID, group ID, and port parameters." msgstr "" "`security.mac.portacl.rules` задаёт политику в виде текстовой строки формата " "`правило[,правило,...]`, с любым необходимым количеством правил, где каждое " "правило имеет вид `тип_идентификатора:идентификатор:протокол:порт`. " "`тип_идентификатора` может быть `uid` или `gid`. Параметр `протокол` " "принимает значения `tcp` или `udp`. Параметр `порт` указывает номер порта, к " "которому разрешено привязываться указанному пользователю или группе.
Для " "параметров `идентификатор пользователя`, `идентификатор группы` и `порт` " "можно использовать только числовые значения." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:471 msgid "" "By default, ports below 1024 can only be used by privileged processes which " "run as `root`. For man:mac_portacl[4] to allow non-privileged processes to " "bind to ports below 1024, set the following tunables as follows:" msgstr "" "По умолчанию порты ниже 1024 могут использоваться только привилегированными " "процессами, работающими от имени `root`. Чтобы разрешить непривилегированным " "процессам привязываться к портам ниже 1024 с помощью man:mac_portacl[4], " "задайте следующие настраиваемые параметры следующим образом:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:477 #, no-wrap msgid "" "# sysctl security.mac.portacl.port_high=1023\n" "# sysctl net.inet.ip.portrange.reservedlow=0\n" "# sysctl net.inet.ip.portrange.reservedhigh=0\n" msgstr "" "# sysctl security.mac.portacl.port_high=1023\n" "# sysctl net.inet.ip.portrange.reservedlow=0\n" "# sysctl net.inet.ip.portrange.reservedhigh=0\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:480 msgid "" "To prevent the `root` user from being affected by this policy, set `security." "mac.portacl.suser_exempt` to a non-zero value." msgstr "" "Чтобы предотвратить влияние этой политики на пользователя `root`, установите " "`security.mac.portacl.suser_exempt` в ненулевое значение." #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:484 #, no-wrap msgid "# sysctl security.mac.portacl.suser_exempt=1\n" msgstr "# sysctl security.mac.portacl.suser_exempt=1\n" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:487 msgid "" "To allow the `www` user with UID 80 to bind to port 80 without ever needing " "`root` privilege:" msgstr "" "Чтобы пользователь `www` с UID 80 мог привязываться к порту 80 без " "необходимости в привилегиях `root`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:491 #, no-wrap msgid "# sysctl security.mac.portacl.rules=uid:80:tcp:80\n" msgstr "# sysctl security.mac.portacl.rules=uid:80:tcp:80\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:494 msgid "" "This next example permits the user with the UID of 1001 to bind to TCP ports " "110 (POP3) and 995 (POP3s):" msgstr "" "Следующий пример разрешает пользователю с UID 1001 привязываться к TCP-" "портам 110 (POP3) и 995 (POP3s):" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:498 #, no-wrap msgid "# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995\n" msgstr "# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995\n" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:501 #, no-wrap msgid "The MAC Partition Policy" msgstr "Политика MAC — разделы процессов" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:504 msgid "Module name: [.filename]#mac_partition.ko#" msgstr "Имя модуля: [.filename]#mac_partition.ko#" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:506 msgid "Kernel configuration line: `options MAC_PARTITION`" msgstr "Строка конфигурации ядра: `options MAC_PARTITION`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:508 msgid "Boot option: `mac_partition_load=\"YES\"`" msgstr "Опция загрузки: `mac_partition_load=\"YES\"`" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:512 msgid "" "The man:mac_partition[4] policy drops processes into specific \"partitions\" " "based on their MAC label. Most configuration for this policy is done using " "man:setpmac[8]. One `sysctl` tunable is available for this policy:" msgstr "" "Политика man:mac_partition[4] помещает процессы в определенные \"разделы\" " "на основе их метки MAC. Большая часть настройки этой политики выполняется с " "помощью man:setpmac[8]. Для этой политики доступна одна настраиваемая " "переменная `sysctl`:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:514 msgid "" "`security.mac.partition.enabled` enables the enforcement of MAC process " "partitions." msgstr "" "`security.mac.partition.enabled` включает принудительное применение разделов " "процессов MAC." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:517 msgid "" "When this policy is enabled, users will only be permitted to see their " "processes, and any others within their partition, but will not be permitted " "to work with utilities outside the scope of this partition. For instance, a " "user in the `insecure` class will not be permitted to access `top` as well " "as many other commands that must spawn a process." msgstr "" "Когда эта политика включена, пользователи смогут видеть только свои процессы " "и процессы в своем разделе, но не смогут работать с утилитами за пределами " "этого раздела. Например, пользователь из класса `insecure` не сможет " "получить доступ к `top`, а также ко многим другим командам, которые должны " "запускать процессы." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:520 msgid "" "This example adds `top` to the label set on users in the `insecure` class. " "All processes spawned by users in the `insecure` class will stay in the " "`partition/13` label." 
msgstr "" "В этом примере `top` добавляется к набору меток пользователей в классе " "`insecure`. Все процессы, запущенные пользователями из класса `insecure`, " "останутся с меткой `partition/13`." #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:524 #, no-wrap msgid "# setpmac partition/13 top\n" msgstr "# setpmac partition/13 top\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:527 msgid "This command displays the partition label and the process list:" msgstr "Эта команда отображает метку раздела и список процессов:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:531 #, no-wrap msgid "# ps Zax\n" msgstr "# ps Zax\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:534 msgid "" "This command displays another user's process partition label and that user's " "currently running processes:" msgstr "" "Эта команда отображает метку раздела процессов другого пользователя и его " "текущие запущенные процессы:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:538 #, no-wrap msgid "# ps -ZU trhodes\n" msgstr "# ps -ZU trhodes\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:543 msgid "" "Users can see processes in ``root``'s label unless the man:" "mac_seeotheruids[4] policy is loaded." msgstr "" "Пользователи могут видеть процессы с меткой ``root``, если не загружена " "политика man:mac_seeotheruids[4]." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:546 #, no-wrap msgid "The MAC Multi-Level Security Module" msgstr "Модуль многоуровневой безопасности MAC" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:549 msgid "Module name: [.filename]#mac_mls.ko#" msgstr "Имя модуля: [.filename]#mac_mls.ko#" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:551 msgid "Kernel configuration line: `options MAC_MLS`" msgstr "Строка конфигурации ядра: `options MAC_MLS`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:553 msgid "Boot option: `mac_mls_load=\"YES\"`" msgstr "Параметр загрузки: `mac_mls_load=\"YES\"`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:555 msgid "" "The man:mac_mls[4] policy controls access between subjects and objects in " "the system by enforcing a strict information flow policy." msgstr "" "Политика man:mac_mls[4] контролирует доступ между субъектами и объектами в " "системе, применяя строгую политику управления потоком информации." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:559 msgid "" "In MLS environments, a \"clearance\" level is set in the label of each " "subject or object, along with compartments. Since these clearance levels " "can reach numbers greater than several thousand, it would be a daunting task " "to thoroughly configure every subject or object. To ease this " "administrative overhead, three labels are included in this policy: `mls/" "low`, `mls/equal`, and `mls/high`, where:" msgstr "" "В средах MLS в метке каждого субъекта или объекта устанавливается уровень " "\"допуска\" вместе с компартментами. Поскольку эти уровни допуска могут " "достигать значений, превышающих несколько тысяч, тщательная настройка " "каждого субъекта или объекта была бы сложной задачей. Для снижения " "административной нагрузки в эту политику включены три метки: `mls/low`, `mls/" "equal` и `mls/high`, где:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:561 msgid "" "Anything labeled with `mls/low` will have a low clearance level and not be " "permitted to access information of a higher level. 
This label also prevents " "objects of a higher clearance level from writing or passing information to a " "lower level." msgstr "" "Все объекты, помеченные меткой `mls/low`, будут иметь низкий уровень допуска " "и не смогут обращаться к информации более высокого уровня. Эта метка также " "предотвращает запись или передачу информации от объектов с более высоким " "уровнем допуска к объектам с более низким уровнем." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:562 msgid "" "`mls/equal` should be placed on objects which should be exempt from the " "policy." msgstr "" "`mls/equal` следует размещать на объектах, которые должны быть освобождены " "от политики." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:563 msgid "" "`mls/high` is the highest level of clearance possible. Objects assigned this " "label will hold dominance over all other objects in the system; however, " "they will not permit the leaking of information to objects of a lower class." msgstr "" "`mls/high` — это наивысший возможный уровень допуска. Объекты с этой меткой " "будут доминировать над всеми остальными объектами в системе; однако они не " "допустят утечки информации к объектам более низкого класса." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:565 msgid "MLS provides:" msgstr "MLS предоставляет:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:567 msgid "" "A hierarchical security level with a set of non-hierarchical categories." msgstr "" "Иерархический уровень безопасности с набором неиерархических категорий." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:568 msgid "" "Fixed rules of `no read up, no write down`. This means that a subject can " "have read access to objects on its own level or below, but not above. " "Similarly, a subject can have write access to objects on its own level or " "above, but not beneath." 
msgstr "" "Фиксированные правила `нет чтения вверх, нет записи вниз`. Это означает, что " "субъект может иметь право чтения объектов на своём уровне или ниже, но не " "выше. Аналогично, субъект может иметь право записи в объекты на своём уровне " "или выше, но не ниже." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:569 msgid "Secrecy, or the prevention of inappropriate disclosure of data." msgstr "Секретность, или предотвращение несанкционированного раскрытия данных." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:570 msgid "" "A basis for the design of systems that concurrently handle data at multiple " "sensitivity levels without leaking information between secret and " "confidential." msgstr "" "Основа для проектирования систем, которые одновременно обрабатывают данные с " "разными уровнями конфиденциальности, не допуская утечки информации между " "секретными и конфиденциальными данными." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:572 msgid "The following `sysctl` tunables are available:" msgstr "Доступны следующие настраиваемые параметры `sysctl`:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:574 msgid "`security.mac.mls.enabled` is used to enable or disable the MLS policy." msgstr "" "`security.mac.mls.enabled` используется для включения или отключения " "политики MLS." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:575 msgid "" "`security.mac.mls.ptys_equal` labels all man:pty[4] devices as `mls/equal` " "during creation." msgstr "" "`security.mac.mls.ptys_equal` помечает все устройства man:pty[4] как `mls/" "equal` при создании." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:576 msgid "" "`security.mac.mls.revocation_enabled` revokes access to objects after their " "label changes to a label of a lower grade." 
msgstr "" "`security.mac.mls.revocation_enabled` отзывает доступ к объектам после " "изменения их метки на метку более низкого уровня допуска." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:577 msgid "" "`security.mac.mls.max_compartments` sets the maximum number of compartment " "levels allowed on a system." msgstr "" "`security.mac.mls.max_compartments` устанавливает максимальное количество " "уровней компартментов, разрешенных в системе." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:579 msgid "" "To manipulate MLS labels, use man:setfmac[8]. To assign a label to an object:" msgstr "" "Для работы с метками MLS используйте man:setfmac[8]. Чтобы назначить метку " "объекту:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:583 #, no-wrap msgid "# setfmac mls/5 test\n" msgstr "# setfmac mls/5 test\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:586 msgid "To get the MLS label for the file [.filename]#test#:" msgstr "Чтобы получить метку MLS для файла [.filename]#test#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:590 #, no-wrap msgid "# getfmac test\n" msgstr "# getfmac test\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:593 msgid "" "Another approach is to create a master policy file in [.filename]#/etc/# " "which specifies the MLS policy information and to feed that file to " "`setfmac`." msgstr "" "Другой подход заключается в создании основного файла политики в [.filename]#/" "etc/#, который определяет информацию о политике MLS, и передаче этого файла " "в `setfmac`." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:597 msgid "" "When using the MLS policy module, an administrator plans to control the flow " "of sensitive information. The default `block read up block write down` sets " "everything to a low state. 
Everything is accessible and an administrator " "slowly augments the confidentiality of the information." msgstr "" "При использовании модуля политики MLS администратор планирует контролировать " "поток конфиденциальной информации. Значение по умолчанию `block read up " "block write down` устанавливает всё в состояние low. Вся информация " "доступна, и администратор постепенно повышает её конфиденциальность." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:602 msgid "" "Beyond the three basic label options, an administrator may group users and " "groups as required to block the information flow between them. It might be " "easier to look at the information in clearance levels using descriptive " "words, such as classifications of `Confidential`, `Secret`, and `Top " "Secret`. Some administrators instead create different groups based on " "project levels. Regardless of the classification method, a well thought out " "plan must exist before implementing a restrictive policy." msgstr "" "Помимо трех основных вариантов меток, администратор может группировать " "пользователей и группы по мере необходимости, чтобы блокировать поток " "информации между ними. Возможно, будет проще рассматривать информацию на " "уровнях допуска, используя описательные слова, такие как классификации " "`Confidential` (`Конфиденциально`), `Secret` (`Секретно`) и `Top Secret` " "(`Совершенно секретно`). Некоторые администраторы вместо этого создают " "разные группы на основе уровней проектов. Независимо от метода " "классификации, перед внедрением ограничительной политики должен существовать " "продуманный план." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:604 msgid "" "Some example situations for the MLS policy module include an e-commerce web " "server, a file server holding critical company information, and financial " "institution environments." 
msgstr "" "Некоторые примеры ситуаций для модуля политики MLS включают веб-сервер " "электронной коммерции, файловый сервер с критически важной информацией " "компании и среды финансовых учреждений." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:606 #, no-wrap msgid "The MAC Biba Module" msgstr "Модуль MAC Biba" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:609 msgid "Module name: [.filename]#mac_biba.ko#" msgstr "Имя модуля: [.filename]#mac_biba.ko#" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:611 msgid "Kernel configuration line: `options MAC_BIBA`" msgstr "Строка конфигурации ядра: `options MAC_BIBA`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:613 msgid "Boot option: `mac_biba_load=\"YES\"`" msgstr "Опция загрузки: `mac_biba_load=\"YES\"`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:617 msgid "" "The man:mac_biba[4] module loads the MAC Biba policy. This policy is " "similar to the MLS policy with the exception that the rules for information " "flow are slightly reversed. This is to prevent the downward flow of " "sensitive information whereas the MLS policy prevents the upward flow of " "sensitive information." msgstr "" "Модуль man:mac_biba[4] загружает политику MAC Biba. Эта политика похожа на " "политику MLS, за исключением того, что правила передачи информации слегка " "изменены в обратном порядке. Это предотвращает поток конфиденциальной " "информации вниз, тогда как политика MLS предотвращает поток конфиденциальной " "информации вверх." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:621 msgid "" "In Biba environments, an \"integrity\" label is set on each subject or " "object. These labels are made up of hierarchical grades and non-" "hierarchical components. As a grade ascends, so does its integrity." 
msgstr "" "В средах Biba для каждого субъекта или объекта устанавливается метка " "«целостности». Эти метки состоят из иерархических уровней целостности и " "неиерархических компонентов. По мере повышения уровня увеличивается и его " "целостность." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:623 msgid "Supported labels are `biba/low`, `biba/equal`, and `biba/high`, where:" msgstr "Поддерживаемые метки: `biba/low`, `biba/equal` и `biba/high`, где:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:625 msgid "" "`biba/low` is considered the lowest integrity an object or subject may have. " "Setting this on objects or subjects blocks their write access to objects or " "subjects marked as `biba/high`, but will not prevent read access." msgstr "" "`biba/low` считается самой низкой целостностью, которую может иметь объект " "или субъект. Установка этого уровня на объекты или субъекты блокирует их " "запись в объекты или субъекты с меткой `biba/high`, но не предотвращает " "чтение." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:626 msgid "" "`biba/equal` should only be placed on objects considered to be exempt from " "the policy." msgstr "" "`biba/equal` следует размещать только на объектах, которые считаются " "исключёнными из политики." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:627 msgid "" "`biba/high` permits writing to objects set at a lower label, but does not " "permit reading that object. It is recommended that this label be placed on " "objects that affect the integrity of the entire system." msgstr "" "`biba/high` разрешает запись в объекты с более низкой меткой, но запрещает " "чтение этих объектов. Рекомендуется устанавливать эту метку для объектов, " "которые влияют на целостность всей системы." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:629 msgid "Biba provides:" msgstr "Biba обеспечивает:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:631 msgid "" "Hierarchical integrity levels with a set of non-hierarchical integrity " "categories." msgstr "" "Иерархические уровни целостности с набором неиерархических категорий " "целостности." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:632 msgid "" "Fixed rules are `no write up, no read down`, the opposite of MLS. A subject " "can have write access to objects on its own level or below, but not above. " "Similarly, a subject can have read access to objects on its own level or " "above, but not below." msgstr "" "Фиксированные правила — это `нет записи вверх, нет чтения вниз`, что " "противоположно MLS. Субъект может иметь право записи в объекты на своём " "уровне или ниже, но не выше. Аналогично, субъект может иметь право чтения " "объектов на своём уровне или выше, но не ниже." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:633 msgid "Integrity by preventing inappropriate modification of data." msgstr "Целостность за счет предотвращения нежелательного изменения данных." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:634 msgid "Integrity levels instead of MLS sensitivity levels." msgstr "Уровни целостности вместо уровней чувствительности MLS." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:636 msgid "The following tunables can be used to manipulate the Biba policy:" msgstr "" "Следующие настраиваемые параметры могут быть использованы для управления " "политикой Biba:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:638 msgid "" "`security.mac.biba.enabled` is used to enable or disable enforcement of the " "Biba policy on the target machine." 
msgstr "" "`security.mac.biba.enabled` используется для включения или отключения " "принудительного применения политики Biba на целевой машине." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:639 msgid "" "`security.mac.biba.ptys_equal` is used to disable the Biba policy on man:" "pty[4] devices." msgstr "" "`security.mac.biba.ptys_equal` используется для отключения политики Biba на " "устройствах man:pty[4]." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:640 msgid "" "`security.mac.biba.revocation_enabled` forces the revocation of access to " "objects if the label is changed to dominate the subject." msgstr "" "`security.mac.biba.revocation_enabled` принудительно отзывает доступ к " "объектам, если их метка изменяется так, чтобы доминировать над субъектом." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:642 msgid "" "To access the Biba policy setting on system objects, use `setfmac` and " "`getfmac`:" msgstr "" "Для доступа к настройкам политики Biba для системных объектов используйте " "`setfmac` и `getfmac`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:648 #, no-wrap msgid "" "# setfmac biba/low test\n" "# getfmac test\n" "test: biba/low\n" msgstr "" "# setfmac biba/low test\n" "# getfmac test\n" "test: biba/low\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:654 msgid "" "Integrity, which is different from sensitivity, is used to guarantee that " "information is not manipulated by untrusted parties. This includes " "information passed between subjects and objects. It ensures that users will " "only be able to modify or access information they have been given explicit " "access to. 
The man:mac_biba[4] security policy module permits an " "administrator to configure which files and programs a user may see and " "invoke while assuring that the programs and files are trusted by the system " "for that user." msgstr "" "Целостность, которая отличается от конфиденциальности, используется для " "гарантии того, что информация не будет изменена ненадёжными сторонами. Это " "включает информацию, передаваемую между субъектами и объектами. Она " "обеспечивает пользователям возможность изменять или получать доступ только к " "той информации, к которой у них есть явный доступ. Модуль политики " "безопасности man:mac_biba[4] позволяет администратору настроить, какие файлы " "и программы пользователь может просматривать и запускать, гарантируя, что " "эти программы и файлы считаются системой доверенными для данного " "пользователя." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:660 msgid "" "During the initial planning phase, an administrator must be prepared to " "partition users into grades, levels, and areas. The system will default to " "a high label once this policy module is enabled, and it is up to the " "administrator to configure the different grades and levels for users. " "Instead of using clearance levels, a good planning method could include " "topics. For instance, only allow developers modification access to the " "source code repository, source code compiler, and other development " "utilities. Other users would be grouped into other categories such as " "testers, designers, or end users and would only be permitted read access." msgstr "" "В ходе начального этапа планирования администратор должен быть готов " "разделить пользователей по уровням целостности (grade), уровням объектов " "(level) и областям. После включения этого модуля политики система по " "умолчанию перейдет на высокий уровень метки, и администратору потребуется " "настроить различные уровни целостности и уровни объектов для пользователей. 
" "Вместо использования уровней доступа хорошим методом планирования может " "стать использование тематик. Например, разрешить разработчикам доступ на " "изменение только к репозиторию исходного кода, компилятору исходного кода и " "другим инструментам разработки. Остальные пользователи будут распределены по " "другим категориям, таким как тестировщики, дизайнеры или конечные " "пользователи, и им будет разрешен только доступ на чтение." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:665 msgid "" "A lower integrity subject is unable to write to a higher integrity subject " "and a higher integrity subject cannot list or read a lower integrity " "object. Setting a label at the lowest possible grade could make it " "inaccessible to subjects. Some prospective environments for this security " "policy module would include a constrained web server, a development and test " "machine, and a source code repository. A less useful implementation would " "be a personal workstation, a machine used as a router, or a network firewall." msgstr "" "Субъект с более низким уровнем целостности не может записывать данные в " "субъект с более высоким уровнем целостности, а субъект с более высоким " "уровнем целостности не может просматривать или читать объект с более низким " "уровнем целостности. Установка метки на минимально возможном уровне может " "сделать объект недоступным для субъектов. Перспективными средами для " "использования этого модуля политики безопасности могут быть ограниченный веб-" "сервер, машина для разработки и тестирования, а также репозиторий исходного " "кода. Менее полезной реализацией будет персональная рабочая станция, машина, " "используемая в качестве маршрутизатора, или межсетевой экран." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:667 #, no-wrap msgid "The MAC Low-watermark Module" msgstr "Модуль MAC Low-watermark (нижний порог)" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:670 msgid "Module name: [.filename]#mac_lomac.ko#" msgstr "Имя модуля: [.filename]#mac_lomac.ko#" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:672 msgid "Kernel configuration line: `options MAC_LOMAC`" msgstr "Строка конфигурации ядра: `options MAC_LOMAC`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:674 msgid "Boot option: `mac_lomac_load=\"YES\"`" msgstr "Параметр загрузки: `mac_lomac_load=\"YES\"`" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:676 msgid "" "Unlike the MAC Biba policy, the man:mac_lomac[4] policy permits access to " "lower integrity objects only after decreasing the integrity level to not " "disrupt any integrity rules." msgstr "" "В отличие от политики MAC Biba, политика man:mac_lomac[4] разрешает доступ к " "объектам с более низким уровнем целостности только после понижения уровня " "целостности, чтобы не нарушать правила целостности." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:680 msgid "" "The Low-watermark integrity policy works almost identically to Biba, with " "the exception of using floating labels to support subject demotion via an " "auxiliary grade compartment. This secondary compartment takes the form " "`[auxgrade]`. When assigning a policy with an auxiliary grade, use the " "syntax `lomac/10[2]`, where `2` is the auxiliary grade." msgstr "" "Политика целостности Low-watermark работает почти идентично Biba, за " "исключением использования плавающих меток для поддержки понижения уровня " "субъекта через компартмент с вспомогательным уровнем целостности. Этот " "вторичный компартмент имеет вид `[auxgrade]`. При назначении политики с " "вспомогательным уровнем целостности используйте синтаксис `lomac/10[2]`, где " "`2` — это вспомогательный уровень целостности." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:683 msgid "" "This policy relies on the ubiquitous labeling of all system objects with " "integrity labels, permitting subjects to read from low integrity objects and " "then downgrading the label on the subject to prevent future writes to high " "integrity objects using `[auxgrade]`. The policy may provide greater " "compatibility and require less initial configuration than Biba." msgstr "" "Данная политика основывается на повсеместной маркировке всех системных " "объектов метками целостности, позволяя субъектам читать из объектов с низкой " "целостностью, а затем понижая уровень метки на субъекте с помощью " "`[auxgrade]`, чтобы предотвратить последующие записи в объекты с высокой " "целостностью. Эта политика может обеспечить большую совместимость и " "потребовать меньше начальной настройки по сравнению с Biba." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:685 msgid "" "Like the Biba and MLS policies, `setfmac` and `setpmac` are used to place " "labels on system objects:" msgstr "" "Как и в политиках Biba и MLS, `setfmac` и `setpmac` используются для " "назначения меток объектам системы:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:690 #, no-wrap msgid "" "# setfmac /usr/home/trhodes lomac/high[low]\n" "# getfmac /usr/home/trhodes lomac/high[low]\n" msgstr "" "# setfmac /usr/home/trhodes lomac/high[low]\n" "# getfmac /usr/home/trhodes lomac/high[low]\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:693 msgid "" "The auxiliary grade `low` is a feature provided only by the MACLOMAC policy." msgstr "" "Вспомогательный уровень целостности `low` — это функция, предоставляемая " "только политикой MACLOMAC." #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:695 #, no-wrap msgid "User Lock Down" msgstr "Блокировка пользователя" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:699 msgid "" "This example considers a relatively small storage system with fewer than " "fifty users. Users will have login capabilities and are permitted to store " "data and access resources." msgstr "" "Этот пример рассматривает относительно небольшую систему хранения данных с " "менее чем пятьюдесятью пользователями. Пользователи будут иметь возможность " "входа в систему и могут хранить данные и получать доступ к ресурсам." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:701 msgid "" "For this scenario, the man:mac_bsdextended[4] and man:mac_seeotheruids[4] " "policy modules could co-exist and block access to system objects while " "hiding user processes." msgstr "" "Для данного сценария модули политик man:mac_bsdextended[4] и man:" "mac_seeotheruids[4] могут сосуществовать и блокировать доступ к системным " "объектам, скрывая при этом пользовательские процессы." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:703 msgid "Begin by adding the following line to [.filename]#/boot/loader.conf#:" msgstr "" "Начните с добавления следующей строки в [.filename]#/boot/loader.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:707 #, no-wrap msgid "mac_seeotheruids_load=\"YES\"\n" msgstr "mac_seeotheruids_load=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:710 msgid "" "The man:mac_bsdextended[4] security policy module may be activated by adding " "this line to [.filename]#/etc/rc.conf#:" msgstr "" "Модуль политики безопасности man:mac_bsdextended[4] может быть активирован " "добавлением следующей строки в [.filename]#/etc/rc.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:714 #, no-wrap msgid "ugidfw_enable=\"YES\"\n" msgstr "ugidfw_enable=\"YES\"\n" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:719 msgid "" "Default rules stored in [.filename]#/etc/rc.bsdextended# will be loaded at " "system initialization. However, the default entries may need modification. " "Since this machine is expected only to service users, everything may be left " "commented out except the last two lines in order to force the loading of " "user owned system objects by default." msgstr "" "Файл с правилами по умолчанию, хранящийся в [.filename]#/etc/rc." "bsdextended#, будет загружен при инициализации системы. Однако стандартные " "записи могут потребовать изменения. Поскольку предполагается, что данная " "машина будет обслуживать только пользователей, все строки можно оставить " "закомментированными, за исключением последних двух, чтобы обеспечить " "принудительную загрузку системных объектов, принадлежащих пользователям, по " "умолчанию." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:724 msgid "" "Add the required users to this machine and reboot. For testing purposes, " "try logging in as a different user across two consoles. Run `ps aux` to see " "if processes of other users are visible. Verify that running man:ls[1] on " "another user's home directory fails." msgstr "" "Добавьте необходимых пользователей на эту машину и перезагрузитесь. Для " "тестирования попробуйте войти в систему под разными пользователями на двух " "консолях. Выполните `ps aux`, чтобы проверить, видны ли процессы других " "пользователей. Убедитесь, что выполнение man:ls[1] для домашнего каталога " "другого пользователя завершается ошибкой." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:726 msgid "" "Do not try to test with the `root` user unless the specific ``sysctl``s have " "been modified to block super user access."
msgstr "" "Не пытайтесь проводить тестирование от пользователя `root`, если специальные " "параметры ``sysctl`` не были изменены для блокировки доступа " "суперпользователя." #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:731 msgid "" "When a new user is added, their man:mac_bsdextended[4] rule will not be in " "the ruleset list. To update the ruleset quickly, unload the security policy " "module and reload it again using man:kldunload[8] and man:kldload[8]." msgstr "" "При добавлении нового пользователя его правило man:mac_bsdextended[4] не " "будет присутствовать в списке набора правил. Чтобы быстро обновить набор " "правил, выгрузите модуль политики безопасности и загрузите его снова с " "помощью man:kldunload[8] и man:kldload[8]." #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:734 #, no-wrap msgid "Nagios in a MAC Jail" msgstr "Nagios в клетке MAC" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:738 msgid "" "This section demonstrates the steps that are needed to implement the Nagios " "network monitoring system in a MAC environment. This is meant as an example " "which still requires the administrator to test that the implemented policy " "meets the security requirements of the network before using in a production " "environment." msgstr "" "В этом разделе показаны шаги, необходимые для внедрения системы мониторинга " "сети Nagios в среде MAC. Это задумано как пример, который всё же требует от " "администратора проверки соответствия реализованной политики требованиям " "безопасности сети перед использованием в рабочей среде." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:741 msgid "" "This example requires `multilabel` to be set on each file system.
It also " "assumes that package:net-mgmt/nagios-plugins[], package:net-mgmt/nagios[], " "and package:www/apache22[] are all installed, configured, and working " "correctly before attempting the integration into the MAC framework." msgstr "" "Этот пример требует установки `multilabel` на каждой файловой системе. Также " "предполагается, что package:net-mgmt/nagios-plugins[], package:net-mgmt/" "nagios[] и package:www/apache22[] установлены, настроены и корректно " "работают до попытки интеграции в инфраструктуру MAC." #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:742 #, no-wrap msgid "Create an Insecure User Class" msgstr "Создайте небезопасный класс пользователя" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:745 msgid "" "Begin the procedure by adding the following user class to [.filename]#/etc/" "login.conf#:" msgstr "" "Начните процедуру, добавив следующий класс пользователя в [.filename]#/etc/" "login.conf#:" #. type: delimited block .
4 #: documentation/content/en/books/handbook/mac/_index.adoc:771 #, no-wrap msgid "" "insecure:\\\n" -":copyright=/etc/COPYRIGHT:\\\n" ":welcome=/etc/motd:\\\n" ":setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\\\n" ":path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin\n" ":manpath=/usr/share/man /usr/local/man:\\\n" ":nologin=/usr/sbin/nologin:\\\n" ":cputime=1h30m:\\\n" ":datasize=8M:\\\n" ":vmemoryuse=100M:\\\n" ":stacksize=2M:\\\n" ":memorylocked=4M:\\\n" ":memoryuse=8M:\\\n" ":filesize=8M:\\\n" ":coredumpsize=8M:\\\n" ":openfiles=24:\\\n" ":maxproc=32:\\\n" ":priority=0:\\\n" ":requirehome:\\\n" ":passwordtime=91d:\\\n" ":umask=022:\\\n" ":ignoretime@:\\\n" ":label=biba/10(10-10):\n" msgstr "" "insecure:\\\n" -":copyright=/etc/COPYRIGHT:\\\n" ":welcome=/etc/motd:\\\n" ":setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\\\n" ":path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin\n" ":manpath=/usr/share/man /usr/local/man:\\\n" ":nologin=/usr/sbin/nologin:\\\n" ":cputime=1h30m:\\\n" ":datasize=8M:\\\n" ":vmemoryuse=100M:\\\n" ":stacksize=2M:\\\n" ":memorylocked=4M:\\\n" ":memoryuse=8M:\\\n" ":filesize=8M:\\\n" ":coredumpsize=8M:\\\n" ":openfiles=24:\\\n" ":maxproc=32:\\\n" ":priority=0:\\\n" ":requirehome:\\\n" ":passwordtime=91d:\\\n" ":umask=022:\\\n" ":ignoretime@:\\\n" ":label=biba/10(10-10):\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:774 msgid "Then, add the following line to the default user class section:" msgstr "" "Затем добавьте следующую строку в раздел класса пользователя по умолчанию:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:778 #, no-wrap msgid ":label=biba/high:\n" msgstr ":label=biba/high:\n" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:781 msgid "Save the edits and issue the following command to rebuild the database:" msgstr "" "Сохраните изменения и выполните следующую команду для перестроения базы " "данных:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:785 #, no-wrap msgid "# cap_mkdb /etc/login.conf\n" msgstr "# cap_mkdb /etc/login.conf\n" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:787 #, no-wrap msgid "Configure Users" msgstr "Настройте пользователей" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:790 msgid "Set the `root` user to the default class using:" msgstr "Установите пользователя `root` в класс по умолчанию с помощью:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:794 #, no-wrap msgid "# pw usermod root -L default\n" msgstr "# pw usermod root -L default\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:799 msgid "" "All user accounts that are not `root` will now require a login class. The " "login class is required, otherwise users will be refused access to common " "commands. The following `sh` script should do the trick:" msgstr "" "Все пользовательские учетные записи, кроме `root`, теперь требуют указания " "класса входа. Класс входа обязателен, в противном случае пользователям будет " "отказано в доступе к распространённым командам. Следующий скрипт на `sh` " "должен помочь:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:804 #, no-wrap msgid "" "# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \\\n" "\t/etc/passwd`; do pw usermod $x -L default; done;\n" msgstr "" "# for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \\\n" "\t/etc/passwd`; do pw usermod $x -L default; done;\n" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:807 msgid "Next, drop the `nagios` and `www` accounts into the insecure class:" msgstr "Затем добавьте учетные записи `nagios` и `www` в класс insecure:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:812 #, no-wrap msgid "" "# pw usermod nagios -L insecure\n" "# pw usermod www -L insecure\n" msgstr "" "# pw usermod nagios -L insecure\n" "# pw usermod www -L insecure\n" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:814 #, no-wrap msgid "Create the Contexts File" msgstr "Создайте файл контекстов" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:817 msgid "" "A contexts file should now be created as [.filename]#/etc/policy.contexts#:" msgstr "" "Файл контекстов теперь должен быть создан как [.filename]#/etc/policy." "contexts#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:821 #, no-wrap msgid "# This is the default BIBA policy for this system.\n" msgstr "# This is the default BIBA policy for this system.\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:824 #, no-wrap msgid "" "# System:\n" "/var/run(/.*)?\t\t\tbiba/equal\n" msgstr "" "# System:\n" "/var/run(/.*)?\t\t\tbiba/equal\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:826 #, no-wrap msgid "/dev/(/.*)?\t\t\tbiba/equal\n" msgstr "/dev/(/.*)?\t\t\tbiba/equal\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:829 #, no-wrap msgid "" "/var\t\t\t\tbiba/equal\n" "/var/spool(/.*)?\t\tbiba/equal\n" msgstr "" "/var\t\t\t\tbiba/equal\n" "/var/spool(/.*)?\t\tbiba/equal\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:831 #, no-wrap msgid "/var/log(/.*)?\t\t\tbiba/equal\n" msgstr "/var/log(/.*)?\t\t\tbiba/equal\n" #. 
type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:834 #, no-wrap msgid "" "/tmp(/.*)?\t\t\tbiba/equal\n" "/var/tmp(/.*)?\t\t\tbiba/equal\n" msgstr "" "/tmp(/.*)?\t\t\tbiba/equal\n" "/var/tmp(/.*)?\t\t\tbiba/equal\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:837 #, no-wrap msgid "" "/var/spool/mqueue\t\tbiba/equal\n" "/var/spool/clientmqueue\t\tbiba/equal\n" msgstr "" "/var/spool/mqueue\t\tbiba/equal\n" "/var/spool/clientmqueue\t\tbiba/equal\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:840 #, no-wrap msgid "" "# For Nagios:\n" "/usr/local/etc/nagios(/.*)?\tbiba/10\n" msgstr "" "# For Nagios:\n" "/usr/local/etc/nagios(/.*)?\tbiba/10\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:842 #, no-wrap msgid "/var/spool/nagios(/.*)?\t\tbiba/10\n" msgstr "/var/spool/nagios(/.*)?\t\tbiba/10\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:845 #, no-wrap msgid "" "# For apache\n" "/usr/local/etc/apache(/.*)?\tbiba/10\n" msgstr "" "# For apache\n" "/usr/local/etc/apache(/.*)?\tbiba/10\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:850 msgid "" "This policy enforces security by setting restrictions on the flow of " "information. In this specific configuration, users, including `root`, " "should never be allowed to access Nagios. Configuration files and processes " "that are a part of Nagios will be completely self contained or jailed." msgstr "" "Эта политика обеспечивает безопасность, устанавливая ограничения на поток " "информации. В данной конкретной конфигурации пользователям, включая `root`, " "никогда не должно быть разрешено обращаться к Nagios. Конфигурационные файлы " "и процессы, являющиеся частью Nagios, будут полностью самодостаточными или " "изолированными." #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:853 msgid "" "This file will be read after running `setfsmac` on every file system. This " "example sets the policy on the root file system:" msgstr "" "Этот файл будет прочитан после выполнения `setfsmac` для каждой файловой " "системы. В этом примере устанавливается политика для корневой файловой " "системы:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:857 #, no-wrap msgid "# setfsmac -ef /etc/policy.contexts /\n" msgstr "# setfsmac -ef /etc/policy.contexts /\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:860 msgid "" "Next, add these edits to the main section of [.filename]#/etc/mac.conf#:" msgstr "" "Далее добавьте эти изменения в основной раздел файла [.filename]#/etc/mac." "conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:867 #, no-wrap msgid "" "default_labels file ?biba\n" "default_labels ifnet ?biba\n" "default_labels process ?biba\n" "default_labels socket ?biba\n" msgstr "" "default_labels file ?biba\n" "default_labels ifnet ?biba\n" "default_labels process ?biba\n" "default_labels socket ?biba\n" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:869 #, no-wrap msgid "Loader Configuration" msgstr "Конфигурация загрузчика" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:872 msgid "" "To finish the configuration, add the following lines to [.filename]#/boot/" "loader.conf#:" msgstr "" "Для завершения настройки добавьте следующие строки в [.filename]#/boot/" "loader.conf#:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/mac/_index.adoc:878 #, no-wrap msgid "" "mac_biba_load=\"YES\"\n" "mac_seeotheruids_load=\"YES\"\n" "security.mac.biba.trust_all_interfaces=1\n" msgstr "" "mac_biba_load=\"YES\"\n" "mac_seeotheruids_load=\"YES\"\n" "security.mac.biba.trust_all_interfaces=1\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:882 msgid "" "And the following line to the network card configuration stored in [." "filename]#/etc/rc.conf#. If the primary network configuration is done via " "DHCP, this may need to be configured manually after every system boot:" msgstr "" "Добавьте следующую строку в конфигурацию сетевой карты, хранящуюся в [." "filename]#/etc/rc.conf#. Если основная настройка сети выполняется через " "DHCP, это может потребовать ручной настройки после каждой загрузки системы:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:886 #, no-wrap msgid "maclabel biba/equal\n" msgstr "maclabel biba/equal\n" #. type: Title === #: documentation/content/en/books/handbook/mac/_index.adoc:888 #, no-wrap msgid "Testing the Configuration" msgstr "Проверка конфигурации" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:894 msgid "" "First, ensure that the web server and Nagios will not be started on system " "initialization and reboot. Ensure that `root` cannot access any of the " "files in the Nagios configuration directory. If `root` can list the " "contents of [.filename]#/var/spool/nagios#, something is wrong. Instead, a " "\"permission denied\" error should be returned." msgstr "" "Сначала убедитесь, что веб-сервер и Nagios не будут запускаться при " "инициализации системы и перезагрузке. Убедитесь, что `root` не имеет доступа " "к любым файлам в конфигурационном каталоге Nagios. Если `root` может " "просматривать содержимое [.filename]#/var/spool/nagios#, значит что-то не " "так. 
Вместо этого должна возвращаться ошибка \"permission denied\"." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:896 msgid "If all seems well, Nagios, Apache, and Sendmail can now be started:" msgstr "" "Если все выглядит нормально, можно запустить Nagios, Apache и Sendmail:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:902 #, no-wrap msgid "" "# cd /etc/mail && make stop && \\\n" "setpmac biba/equal make start && setpmac biba/10\\(10-10\\) apachectl start && \\\n" "setpmac biba/10\\(10-10\\) /usr/local/etc/rc.d/nagios.sh forcestart\n" msgstr "" "# cd /etc/mail && make stop && \\\n" "setpmac biba/equal make start && setpmac biba/10\\(10-10\\) apachectl start && \\\n" "setpmac biba/10\\(10-10\\) /usr/local/etc/rc.d/nagios.sh forcestart\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:907 msgid "" "Double check to ensure that everything is working properly. If not, check " "the log files for error messages. If needed, use man:sysctl[8] to disable " "the man:mac_biba[4] security policy module and try starting everything again " "as usual." msgstr "" "Тщательно проверьте, чтобы всё работало правильно. Если нет, проверьте файлы " "журналов на наличие сообщений об ошибках. При необходимости используйте man:" "sysctl[8] для отключения модуля политики безопасности man:mac_biba[4] и " "попробуйте запустить всё снова как обычно." #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:912 msgid "" "The `root` user can still change the security enforcement and edit its " "configuration files. The following command will permit the degradation of " "the security policy to a lower grade for a newly spawned shell:" msgstr "" "Пользователь `root` всё ещё может изменять параметры безопасности и " "редактировать конфигурационные файлы.
Следующая команда разрешит понижение " "уровня целостности политики безопасности для нового запущенного shell:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/mac/_index.adoc:916 #, no-wrap msgid "# setpmac biba/10 csh\n" msgstr "# setpmac biba/10 csh\n" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:921 msgid "" "To block this from happening, force the user into a range using man:login." "conf[5]. If man:setpmac[8] attempts to run a command outside of the " "compartment's range, an error will be returned and the command will not be " "executed. In this case, set root to `biba/high(high-high)`." msgstr "" "Чтобы предотвратить это, принудительно ограничьте пользователя диапазоном с " "помощью man:login.conf[5]. Если man:setpmac[8] попытается выполнить команду " "вне пределов компартмента, будет возвращена ошибка и команда не выполнится. " "В данном случае установите root в `biba/high(high-high)`." #. type: Title == #: documentation/content/en/books/handbook/mac/_index.adoc:924 #, no-wrap msgid "Troubleshooting the MAC Framework" msgstr "Устранение проблем с инфраструктурой MAC" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:927 msgid "" "This section discusses common configuration errors and how to resolve them." msgstr "" "Этот раздел посвящён распространённым ошибкам конфигурации и способам их " "устранения." #. type: Labeled list #: documentation/content/en/books/handbook/mac/_index.adoc:928 #, no-wrap msgid "The `multilabel` flag does not stay enabled on the root ([.filename]#/#) partition" msgstr "Флаг `multilabel` не сохраняется на корневом ([.filename]#/#) разделе" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:930 msgid "The following steps may resolve this transient error:" msgstr "Следующие действия могут помочь устранить эту временную ошибку:" #. 
type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:934 msgid "" "Edit [.filename]#/etc/fstab# and set the root partition to `ro` for read-" "only." msgstr "" "Отредактируйте файл [.filename]#/etc/fstab# и установите корневой раздел в " "`ro` для режима только для чтения." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:935 msgid "Reboot into single user mode." msgstr "Перезагрузитесь в однопользовательский режим." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:936 msgid "Run `tunefs -l enable` on [.filename]#/#." msgstr "Выполните команду `tunefs -l enable` для раздела [.filename]#/#." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:937 msgid "Reboot the system." msgstr "Перезагрузите систему." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:938 msgid "" "Run `mount -urw`[.filename]#/# and change the `ro` back to `rw` in [." "filename]#/etc/fstab# and reboot the system again." msgstr "" "Выполните `mount -urw`[.filename]#/#, измените `ro` обратно на `rw` в [." "filename]#/etc/fstab# и перезагрузите систему снова." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:939 msgid "" "Double-check the output from `mount` to ensure that `multilabel` has been " "properly set on the root file system." msgstr "" "Перепроверьте вывод команды `mount`, чтобы убедиться, что опция `multilabel` " "корректно установлена для корневой файловой системы." #. type: Labeled list #: documentation/content/en/books/handbook/mac/_index.adoc:941 #, no-wrap msgid "After establishing a secure environment with MAC, Xorg no longer starts" msgstr "После настройки безопасной среды с MAC, Xorg больше не запускается" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:944 msgid "" "This could be caused by the MAC `partition` policy or by a mislabeling in " "one of the MAC labeling policies.
To debug, try the following:" msgstr "" "Это может быть вызвано политикой MAC `partition` или ошибкой маркировки в " "одной из политик маркировки MAC. Для диагностики попробуйте следующее:" #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:948 msgid "" "Check the error message. If the user is in the `insecure` class, the " "`partition` policy may be the culprit. Try setting the user's class back to " "the `default` class and rebuild the database with `cap_mkdb`. If this does " "not alleviate the problem, go to step two." msgstr "" "Проверьте сообщение об ошибке. Если пользователь находится в классе " "`insecure`, проблема может быть в политике `partition`. Попробуйте вернуть " "пользователя в класс `default` и пересобрать базу данных с помощью " "`cap_mkdb`. Если это не решит проблему, перейдите ко второму шагу." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:949 msgid "" "Double-check that the label policies are set correctly for the user, Xorg, " "and the [.filename]#/dev# entries." msgstr "" "Перепроверьте, что политики меток правильно установлены для пользователя, " "Xorg и записей в [.filename]#/dev#." #. type: Plain text #: documentation/content/en/books/handbook/mac/_index.adoc:950 msgid "" "If neither of these resolve the problem, send the error message and a " "description of the environment to the {freebsd-questions}." msgstr "" "Если ни один из этих способов не решит проблему, отправьте сообщение об " "ошибке и описание окружения на {freebsd-questions}." #. type: Labeled list #: documentation/content/en/books/handbook/mac/_index.adoc:952 #, no-wrap msgid "The `_secure_path: unable to stat .login_conf` error appears" msgstr "Появляется ошибка `_secure_path: unable to stat .login_conf`" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:957 msgid "" "This error can appear when a user attempts to switch from the `root` user to " "another user in the system. 
This message usually occurs when the user has a " "higher label setting than that of the user they are attempting to become. " "For instance, if `joe` has a default label of `biba/low` and `root` has a " "label of `biba/high`, `root` cannot view ``joe``'s home directory. This " "will happen whether or not `root` has used `su` to become `joe` as the Biba " "integrity model will not permit `root` to view objects set at a lower " "integrity level." msgstr "" "Эта ошибка может возникать, когда пользователь пытается переключиться с " "пользователя `root` на другого пользователя в системе. Это сообщение обычно " "появляется, когда у пользователя установлена более высокая метка, чем у " "пользователя, в которого он пытается переключиться. Например, если у `joe` " "метка по умолчанию `biba/low`, а у `root` — `biba/high`, `root` не сможет " "просмотреть домашний каталог ``joe``. Это произойдет независимо от того, " "использовал ли `root` команду `su` для переключения на `joe`, так как модель " "целостности Biba не позволяет `root` просматривать объекты с более низким " "уровнем целостности." #. type: Labeled list #: documentation/content/en/books/handbook/mac/_index.adoc:958 #, no-wrap msgid "The system no longer recognizes `root`" msgstr "Система больше не распознает `root`" #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:960 msgid "When this occurs, `whoami` returns `0` and `su` returns `who are you?`." msgstr "" "Когда это происходит, `whoami` возвращает `0`, а `su` выводит `who are you?`." #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:964 msgid "" "This can happen if a labeling policy has been disabled by man:sysctl[8] or " "the policy module was unloaded. If the policy is disabled, the login " "capabilities database needs to be reconfigured. Double check [.filename]#/" "etc/login.conf# to ensure that all `label` options have been removed and " "rebuild the database with `cap_mkdb`."
msgstr "" "Это может произойти, если политика меток была отключена через man:sysctl[8] " "или модуль политики был выгружен. Если политика отключена, необходимо " "перенастроить базу данных возможностей входа. Проверьте файл [.filename]#/" "etc/login.conf#, чтобы убедиться, что все опции `label` удалены, и " "перестройте базу данных с помощью `cap_mkdb`." #. type: delimited block = 4 #: documentation/content/en/books/handbook/mac/_index.adoc:968 msgid "" "This may also happen if a policy restricts access to [.filename]#master." "passwd#. This is usually caused by an administrator altering the file under " "a label which conflicts with the general policy being used by the system. " "In these cases, the user information would be read by the system and access " "would be blocked as the file has inherited the new label. Disable the " "policy using man:sysctl[8] and everything should return to normal." msgstr "" "Это также может произойти, если политика ограничивает доступ к [." "filename]#master.passwd#. Обычно это происходит, когда администратор " "изменяет файл под меткой, которая конфликтует с общей политикой, " "используемой системой. В таких случаях система прочитает информацию о " "пользователе, но доступ будет заблокирован, так как файл унаследовал новую " "метку. Отключите политику с помощью man:sysctl[8], и всё должно вернуться в " "норму." diff --git a/documentation/content/ru/books/handbook/network-servers/_index.adoc b/documentation/content/ru/books/handbook/network-servers/_index.adoc index 4a34bf7ccd..9253aa4f63 100644 --- a/documentation/content/ru/books/handbook/network-servers/_index.adoc +++ b/documentation/content/ru/books/handbook/network-servers/_index.adoc @@ -1,2591 +1,2590 @@ --- description: 'Эта глава рассказывает о некоторых из наиболее часто используемых сетевых служб в системах UNIX' next: books/handbook/firewalls params: path: /books/handbook/network-servers/ part: 'IV. 
Сетевое взаимодействие' prev: books/handbook/mail showBookMenu: 'true' tags: ["network", "servers", "inetd", "NFS", "NIS", "LDAP", "DHCP", "DNS", "Apache HTTP", "FTP", "Samba", "NTP", "iSCSI"] title: 'Глава 32. Сетевые серверы' weight: 37 --- [[network-servers]] = Сетевые серверы :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 32 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Обзор В этой главе рассматриваются некоторые из наиболее часто используемых сетевых служб в системах UNIX(R). Сюда входит установка, настройка, тестирование и поддержка различных типов сетевых служб. В этой главе приведены примеры конфигурационных файлов для справки. К концу этой главы читатели будут знать: * Как управлять демоном inetd. * Как настроить Network File System (NFS). * Как настроить сервер сетевой информации (NIS) для централизации и совместного использования учетных записей пользователей. * Как настроить FreeBSD в качестве сервера или клиента LDAP * Как настроить автоматические параметры сети с использованием DHCP. * Как настроить сервер доменных имен (DNS). * Как настроить веб-сервер Apache HTTP. * Как настроить сервер протокола передачи файлов (FTP). * Как настроить файловый и печатный сервер для клиентов Windows(R) с использованием Samba. 
* Как синхронизировать время и дату, а также настроить сервер времени с использованием протокола Network Time Protocol (NTP). * Как настроить iSCSI. Эта глава предполагает базовые знания о: * Скриптах [.filename]#/etc/rc#. * Сетевой терминологии. * Установке дополнительного стороннего программного обеспечения (crossref:ports[ports,Установка приложений: Пакеты и Порты]). [[network-inetd]] == Суперсервер inetd Демон man:inetd[8] иногда называют суперсервером, потому что он управляет соединениями для многих служб. Вместо запуска множества приложений достаточно запустить только службу inetd. Когда поступает соединение для службы, управляемой inetd, он определяет, какой программе предназначено соединение, создает процесс для этой программы и делегирует программе сокет. Использование inetd для служб, которые не используются интенсивно, может снизить нагрузку на систему по сравнению с запуском каждого демона отдельно в автономном режиме. Прежде всего, inetd используется для запуска других демонов, но несколько простых протоколов, таких как chargen, auth, time, echo, discard и daytime, обрабатываются им самим. Этот раздел охватывает основы настройки inetd. [[network-inetd-conf]] === Файл конфигурации Настройка inetd выполняется путем редактирования [.filename]#/etc/inetd.conf#. Каждая строка этого файла конфигурации представляет приложение, которое может быть запущено inetd. По умолчанию каждая строка начинается с комментария (`+#+`), что означает, что inetd не ожидает подключений для каких-либо приложений. Чтобы настроить inetd на ожидание подключений для приложения, удалите `+#+` в начале соответствующей строки. После сохранения изменений настройте inetd для запуска при загрузке системы, отредактировав [.filename]#/etc/rc.conf#: [.programlisting] .... inetd_enable="YES" .... Чтобы inetd запустился немедленно и начал прослушивать настроенные службы, введите: [source, shell] .... # service inetd start ....
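Убедиться, что inetd действительно принимает подключения на настроенных портах, можно с помощью man:sockstat[1]. Набросок ниже предполагает, что в [.filename]#/etc/inetd.conf# включена хотя бы одна служба; конкретный вывод зависит от конфигурации:

[source, shell]
....
# sockstat -4l | grep inetd
....

Флаг `-4l` ограничивает вывод прослушивающими сокетами IPv4; для IPv6 используйте `-6l`.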
После запуска inetd необходимо уведомлять его о каждом изменении в файле [.filename]#/etc/inetd.conf#: [[network-inetd-reread]] .Перезагрузка конфигурационного файла inetd [example] ==== [source, shell] .... # service inetd reload .... ==== Обычно запись по умолчанию для приложения не требует редактирования, кроме удаления `+#+`. В некоторых ситуациях может быть целесообразно изменить запись по умолчанию. В качестве примера, это стандартная запись для man:ftpd[8] по IPv4: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... Семь столбцов в записи следующие: [.programlisting] .... service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments .... где: service-name:: Имя службы демона для запуска. Оно должно соответствовать службе, указанной в [.filename]#/etc/services#. Это определяет, на каком порту inetd ожидает входящие соединения для этой службы. При использовании пользовательской службы она сначала должна быть добавлена в [.filename]#/etc/services#. socket-type:: Либо `stream`, `dgram`, `raw`, или `seqpacket`. Используйте `stream` для TCP-соединений и `dgram` для UDP-сервисов. protocol:: Используйте одно из следующих названий протоколов: + [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Имя протокола | Объяснение |tcp или tcp4 |TCP IPv4 |udp или udp4 |UDP IPv4 |tcp6 |TCP IPv6 |udp6 |UDP IPv6 |tcp46 |Как TCP IPv4, так и IPv6 |udp46 |Как UDP IPv4, так и IPv6 |=== {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]:: В этом поле необходимо указать `wait` или `nowait`. Параметры `max-child`, `max-connections-per-ip-per-minute` и `max-child-per-ip` являются необязательными. + `wait|nowait` указывает, способна ли служба обрабатывать свой собственный сокет. 
Типы сокетов `dgram` должны использовать `wait`, в то время как для демонов `stream`, которые обычно являются многопоточными, следует использовать `nowait`. `wait` обычно передаёт несколько сокетов одному демону, тогда как `nowait` создаёт дочерний демон для каждого нового сокета.
+
Максимальное количество дочерних демонов, которые может породить inetd, задается параметром `max-child`. Например, чтобы ограничить демон десятью экземплярами, укажите `/10` после `nowait`. Указание `/0` позволяет создавать неограниченное количество дочерних процессов.
+
`max-connections-per-ip-per-minute` ограничивает количество соединений с любого конкретного IP-адреса в минуту. Как только лимит достигнут, последующие соединения с этого IP-адреса будут отбрасываться до конца минуты. Например, значение `/10` ограничивает любой конкретный IP-адрес десятью попытками соединения в минуту. `max-child-per-ip` ограничивает количество дочерних процессов, которые могут быть запущены от имени любого отдельного IP-адреса в любой момент времени. Эти опции позволяют ограничить чрезмерное потребление ресурсов и помогают предотвратить атаки типа "Отказ в обслуживании".
+
Пример можно увидеть в настройках по умолчанию для man:fingerd[8]:
+
[.programlisting]
....
finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s
....

user::
Имя пользователя, от имени которого будет работать демон. Демоны обычно работают от имени `root`, `daemon` или `nobody`.

server-program::
Полный путь к демону. Если демон является службой, предоставляемой inetd внутренне, используйте `internal`.

server-program-arguments::
Используется для указания любых аргументов командной строки, передаваемых демону при его запуске. Если демон является внутренней службой, используйте `internal`.

[[network-inetd-cmdline]]
=== Параметры командной строки

Как и большинство серверных демонов, inetd имеет ряд опций, которые можно использовать для изменения его поведения. По умолчанию inetd запускается с параметрами `-wW -C 60`.
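Собрав все семь полей вместе, запись с ограничениями может выглядеть так (условный пример на основе стандартной записи ftpd; значения лимитов выбраны произвольно):

[.programlisting]
....
# Не более 5 одновременных экземпляров ftpd и не более 10
# соединений с одного IP-адреса в минуту (значения условны)
ftp     stream  tcp     nowait/5/10     root    /usr/libexec/ftpd       ftpd -l
....

Такая запись сочетает `nowait` с параметрами `max-child` и `max-connections-per-ip-per-minute`, описанными выше.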
Эти опции включают TCP wrappers для всех сервисов, включая внутренние, и предотвращают запросы любого IP-адреса к любому сервису чаще 60 раз в минуту. Для изменения параметров по умолчанию, передаваемых inetd, добавьте запись `inetd_flags` в файл [.filename]#/etc/rc.conf#. Если inetd уже запущен, перезапустите его командой `service inetd restart`. Доступные варианты ограничения скорости: -c maximum:: Укажите максимальное количество одновременных вызовов каждой службы по умолчанию, где по умолчанию значение не ограничено. Может быть переопределено для каждой службы отдельно с помощью параметра `max-child` в [.filename]#/etc/inetd.conf#. -C rate:: Укажите максимальное количество вызовов службы с одного IP-адреса в минуту по умолчанию. Это значение может быть переопределено для отдельной службы с помощью параметра `max-connections-per-ip-per-minute` в файле [.filename]#/etc/inetd.conf#. -R rate:: Укажите максимальное количество вызовов службы в течение одной минуты, где значение по умолчанию — `256`. Значение `0` позволяет неограниченное количество. -s maximum:: Укажите максимальное количество раз, которое служба может быть вызвана с одного IP-адреса одновременно, по умолчанию значение не ограничено. Может быть переопределено для каждой службы отдельно с помощью параметра `max-child-per-ip` в [.filename]#/etc/inetd.conf#. Доступны дополнительные параметры. Полный список параметров смотрите в man:inetd[8]. [[network-inetd-security]] === Безопасность Многие демоны, которыми может управлять inetd, не обладают достаточной защитой. Некоторые демоны, такие как fingerd, могут предоставлять информацию, полезную для злоумышленника. Включайте только необходимые службы и отслеживайте систему на предмет чрезмерных попыток подключения. Параметры `max-connections-per-ip-per-minute`, `max-child` и `max-child-per-ip` могут быть использованы для ограничения подобных атак. По умолчанию TCP wrappers включены. 
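Поскольку TCP wrappers включены, доступ к отдельным службам inetd можно ограничивать через [.filename]#/etc/hosts.allow#. Ниже приведён условный набросок: сеть 192.168.1.0/24 и демон ftpd выбраны только для иллюстрации синтаксиса:

[.programlisting]
....
# Условный пример: разрешить ftpd только из локальной сети,
# всем остальным отказать
ftpd : 192.168.1.0/255.255.255.0 : allow
ftpd : ALL : deny
....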
Дополнительную информацию о наложении TCP-ограничений на различные демоны, запускаемые через inetd, можно найти в man:hosts_access[5]. [[network-nfs]] == Сетевая файловая система (NFS — Network File System) FreeBSD поддерживает Network File System (NFS), что позволяет серверу делиться каталогами и файлами с клиентами по сети. С помощью NFS пользователи и программы могут обращаться к файлам на удалённых системах так, как если бы они хранились локально. NFS имеет множество практических применений. Некоторые из наиболее распространённых вариантов использования включают: * Данные, которые в противном случае дублировались бы на каждом клиенте, могут храниться в одном месте и быть доступными для клиентов в сети. * Несколько клиентов могут нуждаться в доступе к каталогу [.filename]#/usr/ports/distfiles#. Общий доступ к этому каталогу позволяет быстро получить исходные файлы без необходимости загрузки их на каждый клиент. * На крупных сетях часто удобнее настроить центральный NFS-сервер, на котором хранятся все домашние каталоги пользователей. Пользователи могут входить в систему с любого клиента в сети и получать доступ к своим домашним каталогам. * Управление экспортом NFS упрощено. Например, существует только одна файловая система, в которой необходимо настраивать политики безопасности или резервного копирования. * Съемные устройства хранения данных могут использоваться другими компьютерами в сети. Это уменьшает количество устройств в сети и обеспечивает централизованное управление их безопасностью. Часто бывает удобнее устанавливать программное обеспечение на несколько компьютеров с централизованного носителя для установки. NFS состоит из сервера и одного или нескольких клиентов. Клиент удалённо получает доступ к данным, хранящимся на машине сервера. Для корректной работы необходимо настроить и запустить несколько процессов. 
Эти демоны должны быть запущены на сервере: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Демон | Описание |nfsd |Демон NFS, обслуживающий запросы от клиентов NFS. |mountd |Демон монтирования NFS, который выполняет запросы, полученные от nfsd. |rpcbind | Этот демон позволяет клиентам NFS определять, какой порт использует сервер NFS. |=== Запуск man:nfsiod[8] на клиенте может повысить производительность, но не является обязательным. [[network-configuring-nfs]] === Настройка сервера Файловые системы, которые сервер NFS будет предоставлять в общий доступ, указаны в [.filename]#/etc/exports#. Каждая строка в этом файле определяет файловую систему для экспорта, клиентов, которые имеют доступ к этой файловой системе, и любые параметры доступа. При добавлении записей в этот файл каждая экспортируемая файловая система, её свойства и разрешённые хосты должны быть указаны в одной строке. Если в записи не указаны клиенты, то любой клиент в сети может подключить эту файловую систему. Следующие записи в [.filename]#/etc/exports# демонстрируют, как экспортировать файловые системы. Примеры могут быть изменены в соответствии с файловыми системами и именами клиентов в сети читателя. В этом файле можно использовать множество опций, но здесь упомянуты лишь некоторые. Полный список опций смотрите в man:exports[5]. В этом примере показано, как экспортировать [.filename]#/media# на три хоста с именами _alpha_, _bravo_ и _charlie_: [.programlisting] .... /media -ro alpha bravo charlie .... Флаг `-ro` делает файловую систему доступной только для чтения, предотвращая внесение клиентами изменений в экспортированную файловую систему. В этом примере предполагается, что имена хостов находятся либо в DNS, либо в [.filename]#/etc/hosts#. Обратитесь к man:hosts[5], если в сети нет DNS-сервера. Следующий пример экспортирует [.filename]#/home# трём клиентам по IP-адресу. Это может быть полезно для сетей без DNS или записей в [.filename]#/etc/hosts#. 
Флаг `-alldirs` позволяет подкаталогам быть точками монтирования. Другими словами, он не будет автоматически монтировать подкаталоги, но разрешит клиенту монтировать необходимые каталоги по мере надобности. [.programlisting] .... /usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... Следующий пример экспортирует [.filename]#/a#, чтобы два клиента из разных доменов могли получить доступ к этой файловой системе. Параметр `-maproot=root` позволяет пользователю `root` на удалённой системе записывать данные в экспортированную файловую систему как `root`. Если параметр `-maproot=root` не указан, пользователь `root` на клиенте будет отображён на учётную запись `nobody` на сервере и будет ограничен правами доступа, определёнными для `nobody`. [.programlisting] .... /a -maproot=root host.example.com box.example.org .... Клиент может быть указан только один раз для каждой файловой системы. Например, если [.filename]#/usr# представляет собой одну файловую систему, следующие записи будут недопустимыми, так как обе указывают на один и тот же узел: [.programlisting] .... # Invalid when /usr is one file system /usr/src client /usr/ports client .... Правильный формат для данной ситуации — использовать одну запись: [.programlisting] .... /usr/src /usr/ports client .... Ниже приведён пример корректного списка экспорта, где [.filename]#/usr# и [.filename]#/exports# являются локальными файловыми системами: [.programlisting] .... # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... Чтобы включить процессы, необходимые для работы сервера NFS при загрузке, добавьте следующие параметры в [.filename]#/etc/rc.conf#: [.programlisting] .... 
rpcbind_enable="YES" nfs_server_enable="YES" mountd_enable="YES" .... Сервер можно запустить, выполнив следующую команду: [source, shell] .... # service nfsd start .... Всякий раз, когда запускается сервер NFS, также автоматически запускается mountd. Однако mountd читает [.filename]#/etc/exports# только при запуске. Чтобы последующие изменения в [.filename]#/etc/exports# вступили в силу немедленно, заставьте mountd перечитать его: [source, shell] .... # service mountd reload .... Обратитесь к man:zfs-share[8] для описания экспорта наборов данных ZFS через NFS с использованием свойства ZFS `sharenfs` вместо файла man:exports[5]. Обратитесь к man:nfsv4[4] для описания настройки NFS версии 4. === Настройка клиента Чтобы включить клиенты NFS, установите эту опцию в файле [.filename]#/etc/rc.conf# каждого клиента: [.programlisting] .... nfs_client_enable="YES" .... Затем выполните эту команду на каждом клиенте NFS: [source, shell] .... # service nfsclient start .... Клиент теперь имеет всё необходимое для монтирования удалённой файловой системы. В этих примерах имя сервера — `server`, а имя клиента — `client`. Чтобы смонтировать [.filename]#/home# с сервера `server` в точку монтирования [.filename]#/mnt# на клиенте `client`: [source, shell] .... # mount server:/home /mnt .... Файлы и каталоги в [.filename]#/home# теперь будут доступны на `client`, в каталоге [.filename]#/mnt#. Для монтирования удаленной файловой системы при каждой загрузке клиента добавьте её в [.filename]#/etc/fstab#: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... Обратитесь к man:fstab[5] для описания всех доступных опций. === Блокировка Некоторые приложения требуют блокировки файлов для корректной работы. Чтобы включить блокировку, выполните следующую команду как на клиенте, так и на сервере: [source, shell] .... # sysrc rpc_lockd_enable="YES" .... Затем запустите службу man:rpc.lockd[8]: [source, shell] .... # service lockd start .... 
Если блокировка не требуется на сервере, клиент NFS можно настроить для локальной блокировки, добавив параметр `-L` при выполнении команды mount. Дополнительные сведения см. в man:mount_nfs[8]. [[network-autofs]] === Автоматизация монтирования с помощью man:autofs[5] [NOTE] ==== Автомонтирование man:autofs[5] поддерживается начиная с FreeBSD 10.1-RELEASE. Для использования функциональности автомонтирования в более старых версиях FreeBSD используйте man:amd[8]. В этой главе описывается только автомонтирование man:autofs[5]. ==== Утилита man:autofs[5] — это общее название для нескольких компонентов, которые вместе позволяют автоматически монтировать удалённые и локальные файловые системы при обращении к файлу или каталогу внутри этих файловых систем. Она состоит из компонента ядра man:autofs[5] и нескольких пользовательских приложений: man:automount[8], man:automountd[8] и man:autounmountd[8]. Она служит альтернативой для man:amd[8] из предыдущих выпусков FreeBSD. amd по-прежнему предоставляется для обратной совместимости, так как эти утилиты используют разные форматы карт; формат, используемый autofs, совпадает с форматом других автомонтировщиков SVR4, таких как в Solaris, MacOS X и Linux. Виртуальная файловая система man:autofs[5] монтируется на указанные точки монтирования с помощью man:automount[8], который обычно запускается во время загрузки. Всякий раз, когда процесс пытается получить доступ к файлу в точке монтирования man:autofs[5], ядро уведомляет демон man:automountd[8] и приостанавливает вызвавший процесс. Демон man:automountd[8] обрабатывает запросы ядра, находя соответствующую карту и монтируя файловую систему в соответствии с ней, после чего сигнализирует ядру о разблокировке процесса. Демон man:autounmountd[8] автоматически размонтирует автомонтируемые файловые системы по истечении некоторого времени, если они больше не используются. Основной файл конфигурации autofs — это [.filename]#/etc/auto_master#. 
Он связывает отдельные карты с корневыми точками монтирования. Для объяснения синтаксиса [.filename]#auto_master# и карт обратитесь к man:auto_master[5].

Существует специальная карта автомонтирования, смонтированная в [.filename]#/net#. При обращении к файлу в этом каталоге man:autofs[5] ищет соответствующую удалённую точку монтирования и автоматически монтирует её. Например, попытка доступа к файлу в [.filename]#/net/foobar/usr# приведёт к тому, что man:automountd[8] смонтирует экспорт [.filename]#/usr# с хоста `foobar`.

.Подключение экспорта с помощью man:autofs[5]
[example]
====
В этом примере `showmount -e` показывает экспортированные файловые системы, которые могут быть подключены с NFS-сервера `foobar`:

[source, shell]
....
% showmount -e foobar
Exports list on foobar:
/usr 10.10.10.0
/a 10.10.10.0
% cd /net/foobar/usr
....
====

Результат выполнения `showmount` показывает, что [.filename]#/usr# экспортируется. При переходе в каталог [.filename]#/net/foobar/usr# man:automountd[8] перехватывает запрос и пытается разрешить имя хоста `foobar`. В случае успеха man:automountd[8] автоматически монтирует исходный экспорт.

Чтобы включить man:autofs[5] при загрузке, добавьте следующую строку в [.filename]#/etc/rc.conf#:

[.programlisting]
....
autofs_enable="YES"
....

Затем man:autofs[5] может быть запущен выполнением:

[source, shell]
....
# service automount start
# service automountd start
# service autounmountd start
....

Формат карты man:autofs[5] такой же, как и в других операционных системах. Информация об этом формате из других источников может быть полезной, например, из http://web.archive.org/web/20160813071113/http://images.apple.com/business/docs/Autofs.pdf[документации Mac OS X].

Обратитесь к справочным страницам man:automount[8], man:automountd[8], man:autounmountd[8] и man:auto_master[5] для получения дополнительной информации.
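Помимо специальной карты для [.filename]#/net#, в [.filename]#/etc/auto_master# можно подключать собственные карты. Ниже приведён условный набросок: имена карты, сервера и каталогов выбраны только для примера:

[.programlisting]
....
# /etc/auto_master (условный пример): каталог /nfs обслуживается
# картой /etc/auto_nfs
/nfs    /etc/auto_nfs

# /etc/auto_nfs (условный пример): обращение к /nfs/home
# смонтирует экспорт /home с условного хоста server
home    server:/home
....

При обращении к [.filename]#/nfs/home# man:automountd[8] смонтирует [.filename]#/home# с хоста `server` в соответствии с этой картой.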
[[network-nis]]
== Сетевая информационная система (NIS)

Сетевая информационная система (NIS — Network Information System) предназначена для централизованного администрирования UNIX(R)-подобных систем, таких как Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD и FreeBSD. Изначально NIS была известна как Yellow Pages, но название было изменено из-за проблем с товарными знаками. Именно поэтому команды NIS начинаются с `yp`.

NIS — это клиент-серверная система на основе удалённых вызовов процедур (RPC), которая позволяет группе машин в домене NIS использовать общий набор конфигурационных файлов. Это позволяет системному администратору настраивать клиентские системы NIS с минимальным объёмом конфигурационных данных, а также добавлять, удалять или изменять конфигурационные данные из единого места.

FreeBSD использует вторую версию протокола NIS.

=== Термины и процессы NIS

Таблица 28.1 обобщает термины и важные процессы, используемые NIS:

.Терминология NIS
[cols="1,1", frame="none", options="header"]
|===
| Термин
| Описание

|Имя домена NIS
|Серверы и клиенты NIS используют общее имя домена NIS. Как правило, это имя не связано с DNS.

|man:rpcbind[8]
|Эта служба включает RPC и должна работать для запуска сервера NIS или работы в качестве клиента NIS.

|man:ypbind[8]
|Эта служба связывает клиент NIS с его сервером NIS. Она принимает имя домена NIS и использует RPC для подключения к серверу. Это основа клиент-серверного взаимодействия в среде NIS. Если эта служба не запущена на клиентской машине, она не сможет получить доступ к серверу NIS.

|man:ypserv[8]
|Это процесс сервера NIS. Если эта служба перестанет работать, сервер больше не сможет отвечать на запросы NIS, поэтому желательно, чтобы в домене был подчинённый сервер, способный взять обслуживание на себя. Некоторые клиенты, не относящиеся к FreeBSD, не будут пытаться переподключиться с использованием подчиненного сервера, и процесс ypbind, возможно, потребуется перезапустить на этих клиентах.
|man:rpc.yppasswdd[8] |Этот процесс работает только на основных серверах NIS. Этот демон позволяет клиентам NIS изменять свои пароли в NIS. Если этот демон не запущен, пользователям придется входить на главный сервер NIS и изменять пароли там. |=== === Типы машин В среде NIS существует три типа хостов: * Основной сервер NIS + Этот сервер выступает в роли центрального хранилища информации о конфигурации хостов и содержит авторитетные копии файлов, используемых всеми клиентами NIS. Файлы [.filename]#passwd#, [.filename]#group# и другие, используемые клиентами NIS, хранятся на главном сервере. Хотя возможно, чтобы одна машина была основным сервером NIS для нескольких доменов NIS, такая конфигурация не рассматривается в этой главе, так как предполагается относительно небольшая среда NIS. * Подчиненные серверы NIS + Подчинённые серверы NIS хранят копии файлов данных NIS главного сервера для обеспечения избыточности. Подчинённые серверы также помогают распределить нагрузку основного сервера, так как клиенты NIS всегда подключаются к NIS серверу, который отвечает первым. * Клиенты NIS + Клиенты NIS проходят аутентификацию на сервере NIS при входе в систему. Информация из многих файлов может быть совместно использована с помощью NIS. Файлы [.filename]#master.passwd#, [.filename]#group# и [.filename]#hosts# часто распространяются через NIS. Когда процессу на клиенте требуется информация, которая обычно находится в этих файлах локально, он отправляет запрос к связанному с ним NIS-серверу. === Планирование и подготовка В этом разделе описывается пример среды NIS, состоящей из 15 машин FreeBSD без централизованной точки администрирования. На каждой машине есть свои файлы [.filename]#/etc/passwd# и [.filename]#/etc/master.passwd#. Эти файлы синхронизируются между собой только вручную. В настоящее время, когда в лабораторию добавляется новый пользователь, этот процесс необходимо повторять на всех 15 машинах. 
Конфигурация лаборатории будет следующей: [.informaltable] [cols="1,1,1", frame="none", options="header"] |=== | Имя машины | IP-адрес | Роль машины |`ellington` |`10.0.0.2` |Основной сервер NIS |`coltrane` |`10.0.0.3` |Подчиненный сервер NIS |`basie` |`10.0.0.4` |Факультетская рабочая станция |`bird` |`10.0.0.5` |Клиентская машина |`cli[1-11]` |`10.0.0.[6-17]` |Другие клиентские машины |=== Если это первый раз, когда разрабатывается схема NIS, её следует тщательно спланировать заранее. Независимо от размера сети, в процессе планирования необходимо принять несколько решений. ==== Выбор имени домена NIS Когда клиент рассылает широковещательные запросы на получение информации, он включает имя домена NIS, к которому принадлежит. Таким образом, несколько серверов в одной сети могут определить, какой сервер должен отвечать на конкретный запрос. Думайте о доменном имени NIS как об имени для группы хостов. Некоторые организации предпочитают использовать своё доменное имя интернета в качестве имени домена NIS. Это не рекомендуется, так как может вызвать путаницу при попытках отладки сетевых проблем. Имя домена NIS должно быть уникальным в пределах сети, и полезно, если оно описывает группу машин, которую представляет. Например, художественный отдел компании Acme Inc. может находиться в домене NIS "acme-art". В этом примере будет использоваться имя домена `test-domain`. Однако некоторые операционные системы, отличные от FreeBSD, требуют, чтобы имя домена NIS совпадало с именем интернет-домена. Если одна или несколько машин в сети имеют это ограничение, _необходимо_ использовать имя интернет-домена в качестве имени домена NIS. ==== Требования к физическому серверу Есть несколько моментов, которые следует учитывать при выборе машины для использования в качестве сервера NIS. Поскольку клиенты NIS зависят от доступности сервера, следует выбрать машину, которая не перезагружается часто. 
Идеально, чтобы сервер NIS был отдельной машиной, единственной целью которой является быть сервером NIS. Если сеть не сильно загружена, допустимо разместить сервер NIS на машине, где выполняются другие службы. Однако, если сервер NIS станет недоступен, это негативно скажется на всех клиентах NIS. === Настройка основного сервера NIS Канонические копии всех NIS-файлов хранятся на основном сервере. Базы данных, используемые для хранения информации, называются NIS-картами. В FreeBSD эти карты хранятся в [.filename]#/var/yp/[domainname]#, где [.filename]#[domainname]# — это имя NIS-домена. Поскольку поддерживается несколько доменов, возможно наличие нескольких каталогов, по одному для каждого домена. Каждый домен будет иметь свой независимый набор карт. Основные и подчинённые серверы NIS обрабатывают все запросы NIS через man:ypserv[8]. Этот демон отвечает за приём входящих запросов от клиентов NIS, преобразование запрошенного домена и имени карты в путь к соответствующему файлу базы данных и передачу данных из базы обратно клиенту. Настройка основного NIS-сервера может быть относительно простой, в зависимости от потребностей окружения. Поскольку FreeBSD предоставляет встроенную поддержку NIS, её достаточно включить, добавив следующие строки в [.filename]#/etc/rc.conf#: [.programlisting] .... nisdomainname="test-domain" <.> nis_server_enable="YES" <.> nis_yppasswdd_enable="YES" <.> .... <.> Эта строка устанавливает имя домена NIS в `test-domain`. <.> Это автоматизирует запуск процессов сервера NIS при загрузке системы. <.> Это включает демон man:rpc.yppasswdd[8], позволяющий пользователям изменять свой NIS-пароль с клиентской машины. В многосерверном домене, где серверные машины также являются клиентами NIS, необходимо соблюдать осторожность. Обычно рекомендуется принудительно заставлять серверы привязываться к самим себе, а не разрешать им рассылать запросы на привязку и потенциально привязываться друг к другу. 
Могут возникнуть странные режимы сбоев, если один сервер выйдет из строя, а другие будут зависеть от него. В конечном итоге все клиенты превысят время ожидания и попытаются привязаться к другим серверам, но задержка может быть значительной, а режим сбоя сохранится, поскольку серверы могут снова привязаться друг к другу. Сервер, который также является клиентом, может быть принудительно привязан к определённому серверу путём добавления следующих строк в [.filename]#/etc/rc.conf#: [.programlisting] .... nis_client_enable="YES" <.> nis_client_flags="-S test-domain,server" <.> .... <.> Это позволяет также запускать клиентские приложения. <.> Эта строка устанавливает имя домена NIS в `test-domain` и привязывает к себе. После сохранения изменений введите `/etc/netstart`, чтобы перезапустить сеть и применить значения, указанные в [.filename]#/etc/rc.conf#. Перед инициализацией карт NIS запустите man:ypserv[8]: [source, shell] .... # service ypserv start .... ==== Инициализация карт NIS NIS-карты создаются из конфигурационных файлов в [.filename]#/etc# на NIS-мастере, за исключением одного: [.filename]#/etc/master.passwd#. Это сделано для предотвращения распространения паролей на все серверы в NIS-домене. Поэтому перед инициализацией NIS-карт необходимо настроить основные файлы паролей: [source, shell] .... # cp /etc/master.passwd /var/yp/master.passwd # cd /var/yp # vi master.passwd .... Рекомендуется удалить все записи системных учетных записей, а также любые пользовательские учетные записи, которые не нужно распространять на клиенты NIS, такие как `root` и другие административные учетные записи. [NOTE] ==== Убедитесь, что файл [.filename]#/var/yp/master.passwd# не доступен для чтения группе или всем, установив его права доступа на `600`. ==== После завершения этой задачи инициализируйте карты NIS. FreeBSD включает скрипт man:ypinit[8] для этого. При создании карт для главного сервера укажите `-m` и задайте имя домена NIS: [source, shell] .... 
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line.
When you are done with the list, type a <control D>.
master server   : ellington
next host to add: coltrane
next host to add: ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct? [y/n: y] y

[..output from map generation..]
NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

Это создаст файл [.filename]#/var/yp/Makefile# на основе [.filename]#/var/yp/Makefile.dist#. По умолчанию этот файл предполагает, что в окружении есть единственный NIS-сервер только с клиентами FreeBSD. Поскольку у `test-domain` есть подчиненный сервер, отредактируйте эту строку в [.filename]#/var/yp/Makefile#, чтобы она начиналась с комментария (`+#+`):

[.programlisting]
....
NOPUSH = "True"
....

==== Добавление новых пользователей

Каждый раз при создании нового пользователя учетная запись должна быть добавлена на основной NIS-сервер, а NIS-карты должны быть перестроены. До этого новый пользователь не сможет войти в систему нигде, кроме главного NIS-сервера. Например, чтобы добавить нового пользователя `jsmith` в домен `test-domain`, выполните следующие команды на основном сервере:

[source, shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....

Пользователь также может быть добавлен с помощью `adduser jsmith` вместо `pw useradd jsmith`.
=== Настройка подчиненного сервера NIS Для настройки подчиненного сервера NIS войдите на подчиненный сервер и отредактируйте [.filename]#/etc/rc.conf#, как для основного сервера. Не генерируйте карты NIS, так как они уже существуют на основном сервере. При запуске `ypinit` на подчиненном сервере используйте `-s` (для подчиненного) вместо `-m` (для основного). Эта опция требует указания имени основного сервера NIS в дополнение к имени домена, как показано в этом примере: [source, shell] .... coltrane# ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... 
ypxfr: Exiting: Map successfully transferred Transferring networks.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Remember to update map ypservers on ellington. .... Это создаст каталог на подчиненном сервере с именем [.filename]#/var/yp/test-domain#, который содержит копии карт основного сервера NIS. Добавление этих записей в [.filename]#/etc/crontab# на каждом подчиненном сервере заставит их синхронизировать свои карты с картами на основном сервере: [.programlisting] .... 20 * * * * root /usr/libexec/ypxfr passwd.byname 21 * * * * root /usr/libexec/ypxfr passwd.byuid .... Эти записи не являются обязательными, поскольку основной сервер автоматически пытается передать любые изменения карт своим подчинённым серверам. Однако, поскольку клиенты могут зависеть от подчинённого сервера для предоставления корректной информации о паролях, рекомендуется принудительно выполнять частые обновления карт паролей. Это особенно важно в загруженных сетях, где обновления карт могут не всегда завершаться. Для завершения настройки выполните `/etc/netstart` на подчинённом сервере, чтобы запустить службы NIS. === Настройка клиента NIS Клиент NIS связывается с сервером NIS с помощью man:ypbind[8]. Этот демон рассылает RPC-запросы в локальной сети. Эти запросы указывают доменное имя, настроенное на клиенте. Если NIS-сервер в том же домене получает один из таких запросов, он отвечает, и ypbind записывает адрес сервера. 
Если доступно несколько серверов, клиент будет использовать адрес первого ответившего сервера и направлять все свои NIS-запросы к нему. Клиент автоматически отправляет ping-запросы серверу через регулярные промежутки времени, чтобы убедиться, что он всё ещё доступен. Если ответ не получен в разумные сроки, ypbind пометит домен как несвязанный и снова начнёт рассылку запросов в надежде найти другой сервер. Для настройки машины FreeBSD в качестве клиента NIS: [.procedure] ==== . Отредактируйте файл [.filename]#/etc/rc.conf# и добавьте следующие строки, чтобы установить имя домена NIS и запустить man:ypbind[8] при старте сети: + [.programlisting] .... nisdomainname="test-domain" nis_client_enable="YES" .... . Для импорта всех возможных записей паролей с сервера NIS, используйте `vipw`, чтобы удалить все учетные записи пользователей, кроме одной, из [.filename]#/etc/master.passwd#. При удалении учетных записей учитывайте, что хотя бы одна локальная учетная запись должна остаться, и эта учетная запись должна быть членом группы `wheel`. Если возникнут проблемы с NIS, эту локальную учетную запись можно использовать для удаленного входа, получения прав суперпользователя и устранения проблемы. Перед сохранением изменений добавьте следующую строку в конец файла: + [.programlisting] .... +::::::::: .... + Эта строка настраивает клиент для предоставления любому пользователю с действительной учётной записью в картах паролей NIS-сервера учётной записи на клиенте. Существует множество способов настройки NIS-клиента путём изменения этой строки. Один из методов описан в crossref:network-servers[network-netgroups, Использование групп сети]. Для более подробного ознакомления обратитесь к книге `Managing NFS and NIS`, опубликованной O'Reilly Media. . Для импорта всех возможных записей групп с сервера NIS добавьте следующую строку в [.filename]#/etc/group#: + [.programlisting] .... +:*:: .... 
====

To start the NIS client immediately, execute the following commands as the superuser:

[source, shell]
....
# /etc/netstart
# service ypbind start
....

After completing these steps, running `ypcat passwd` on the client should display the server's [.filename]#passwd# map.

=== NIS Security

Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. By default, this information is stored in [.filename]#/var/yp/securenets#, unless man:ypserv[8] is started with `-p` and an alternate path. This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with `+"#"+` are considered to be comments. A sample [.filename]#securenets# might look like this:

[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1     255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0      255.255.240.0
....

If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If [.filename]#securenets# does not exist, `ypserv` will allow connections from any host.

crossref:security[tcpwrappers,"TCP Wrapper"] is an alternate mechanism for providing access control instead of [.filename]#securenets#. While either access control mechanism adds some security, they are both vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at the firewall.
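The securenets rule matching described above amounts to a bitwise comparison: a client address matches a rule when the address ANDed with the netmask equals the rule's network. The following Python sketch illustrates this logic using the sample rules above; the helper name `is_allowed` is hypothetical and is not part of ypserv itself.

```python
import ipaddress

# Rules from the sample securenets file above: (network, netmask) pairs.
RULES = [
    ("127.0.0.1", "255.255.255.255"),
    ("192.168.128.0", "255.255.255.0"),
    ("10.0.0.0", "255.255.240.0"),
]

def is_allowed(client_ip: str) -> bool:
    """Return True if client_ip matches any securenets-style rule."""
    addr = int(ipaddress.ip_address(client_ip))
    for network, netmask in RULES:
        net = int(ipaddress.ip_address(network))
        mask = int(ipaddress.ip_address(netmask))
        # A request matches when the masked client address equals the network.
        if addr & mask == net:
            return True
    return False  # no rule matched: ignore the request and log a warning
```

With these rules, `is_allowed("10.0.15.255")` is true (it falls inside 10.0.0.0/255.255.240.0) while `is_allowed("10.0.16.1")` is false, matching the testlab range in the comments.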
Servers using [.filename]#securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of [.filename]#securenets#.

The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves.

==== Barring Some Users

In this example, the `basie` system is a faculty workstation within the NIS domain. The [.filename]#passwd# map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins.

To prevent specified users from logging on to a system, even if they are present in the NIS database, use `vipw` to add `-_username_` with the correct number of colons towards the end of [.filename]#/etc/master.passwd# on the client, where _username_ is the username of a user to bar from logging in. The line with the blocked user must be before the `+` line that allows NIS users. In this example, `bill` is barred from logging on to `basie`:

[source, shell]
....
basie# cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/usr/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin
-bill:::::::::
+:::::::::
basie#
....

[[network-netgroups]]
=== Using Netgroups

Barring specified users from logging on to individual systems becomes unscaleable on larger networks and quickly loses the main benefit of NIS: _centralized_ administration.

Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to UNIX(R) groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups.

To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in Tables 28.2 and 28.3:

.Additional Users
[cols="1,1", frame="none", options="header"]
|===
| User Names | Description

|`alpha`, `beta`
|IT department employees

|`charlie`, `delta`
|IT department apprentices

|`echo`, `foxtrott`, `golf`, ...
|employees

|`able`, `baker`, ...
|interns
|===

.Additional Systems
[cols="1,1", frame="none", options="header"]
|===
| Machine Names | Description

|`war`, `death`, `famine`, `pollution`
|Only IT employees are allowed to log onto these servers.

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|All members of the IT department are allowed to login onto these servers.

|`one`, `two`, `three`, `four`, ...
|Ordinary workstations used by employees.

|`trashcan`
|A very old machine without any critical data. Even interns are allowed to use this system.
|===

When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines.

The first step is the initialization of the NIS `netgroup` map. In FreeBSD, this map is not created by default. On the NIS master server, use an editor to create a map named [.filename]#/var/yp/netgroup#. This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns:

[.programlisting]
....
IT_EMP  (,alpha,test-domain)    (,beta,test-domain)
IT_APP  (,charlie,test-domain)  (,delta,test-domain)
USERS   (,echo,test-domain)     (,foxtrott,test-domain) \
        (,golf,test-domain)
INTERNS (,able,test-domain)     (,baker,test-domain)
....

Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of parentheses represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each group represent:
. The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup.

If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See man:netgroup[5] for details.

Netgroup names longer than 8 characters should not be used. The names are case sensitive, and using capital letters for netgroup names is an easy way to distinguish between user, machine, and netgroup names.

Some non-FreeBSD NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example:

[.programlisting]
....
BIGGRP1  (,joe1,domain)  (,joe2,domain)  (,joe3,domain) [...]
BIGGRP2  (,joe16,domain)  (,joe17,domain) [...]
BIGGRP3  (,joe31,domain)  (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3
....

Repeat this process if more than 225 (15 times 15) users exist within a single netgroup.

To activate and distribute the new NIS map:

[source, shell]
....
ellington# cd /var/yp
ellington# make
....

This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost# and [.filename]#netgroup.byuser#. Use the map key option of man:ypcat[1] to check whether the new NIS maps are available:

[source, shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command only produces output if host-specific netgroups were created.
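The (host, user, domain) triple format described above can be illustrated with a small parser. This is an illustrative sketch only, not part of the NIS tools; the helper name `parse_netgroup_line` is hypothetical.

```python
import re

def parse_netgroup_line(line: str):
    """Split one netgroup map entry into its name, member triples,
    and references to other netgroups."""
    fields = line.split()
    name, members = fields[0], fields[1:]
    triples, groups = [], []
    for member in members:
        match = re.fullmatch(r"\((.*),(.*),(.*)\)", member)
        if match:
            # An empty host field means the triple is valid on all hosts.
            triples.append(tuple(match.groups()))
        else:
            # Anything else is a nested netgroup, such as BIGGRP1.
            groups.append(member)
    return name, triples, groups
```

For example, parsing `IT_EMP (,alpha,test-domain) (,beta,test-domain)` yields the name `IT_EMP` and two triples with an empty host field, while parsing `BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3` yields only nested netgroup references.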
The third command is used to get the list of netgroups for a user.

To configure a client, use man:vipw[8] to specify the name of the netgroup. For example, on the server named `war`, replace this line:

[.programlisting]
....
+:::::::::
....

with

[.programlisting]
....
+@IT_EMP:::::::::
....

This specifies that only the users defined in the netgroup `IT_EMP` will be imported into this system's password database and only those users are allowed to login to this system.

This configuration also applies to the `~` function of the shell and all routines which convert between user names and numerical user IDs. In other words, `cd ~_user_` will not work, `ls -l` will show the numerical ID instead of the username, and `find . -user joe -print` will fail with the message `No such user`. To fix this, import all of the user entries without allowing them to login into the servers. This can be achieved by adding an extra line:

[.programlisting]
....
+:::::::::/usr/sbin/nologin
....

This line configures the client to import all entries but to replace the shell in those entries with [.filename]#/usr/sbin/nologin#.

Make sure that the extra line is placed _after_ `+@IT_EMP:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/usr/sbin/nologin# as their login shell and no one will be able to log in.

To configure the less important servers, replace the old `+:::::::::` on those servers with these lines:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/usr/sbin/nologin
....

The corresponding lines for the workstations would be:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/usr/sbin/nologin
....

NIS supports the creation of netgroups from other netgroups, which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups.
For example, one might create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers, and a third netgroup called `USERBOX` for the workstations. Each of these netgroups contains the netgroups that are allowed to login onto these machines. The new entries for the NIS `netgroup` map would look like this:

[.programlisting]
....
BIGSRV    IT_EMP   IT_APP
SMALLSRV  IT_EMP   IT_APP   ITINTERN
USERBOX   IT_EMP   ITINTERN USERS
....

This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception rather than the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required.

Machine-specific netgroup definitions are another possibility to deal with policy changes. In this scenario, the [.filename]#/etc/master.passwd# of each system contains two lines starting with "+". The first line adds a netgroup with the accounts allowed to login onto this machine and the second line adds all other accounts with [.filename]#/usr/sbin/nologin# as shell. It is recommended to use the "ALL-CAPS" version of the hostname as the name of the netgroup:

[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/usr/sbin/nologin
....

Once this task is completed on all the machines, there is no longer a need to modify the local versions of [.filename]#/etc/master.passwd# ever again. All further changes can be handled by modifying the NIS map. Here is an example of a possible `netgroup` map for this scenario:

[.programlisting]
....
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1   DEPT2     DEPT3
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP    ITINTERN
USERBOX   IT_EMP  ITINTERN  USERS
#
# And a groups for a special tasks
# Allow echo and golf to access our anti-virus-machine
SECURITY  IT_EMP  (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV  (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus-machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

It may not always be advisable to use machine-based netgroups. When deploying a couple of dozen or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.

=== Password Formats

NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard.

To check which format a server or client is using, look at this section of [.filename]#/etc/login.conf#:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

In this example, the system is using the DES format for password hashing.
Other possible values include `blf` for Blowfish, `md5` for MD5, and `sha256` and `sha512` for SHA-256 and SHA-512, respectively. For more information and the up-to-date list of what is available on your system, consult man:crypt[3].

If the format on a host needs to be changed to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change:

[source, shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords for existing user accounts will not be updated until each user changes their password _after_ the login capability database is rebuilt.
====

[[network-ldap]]
== Lightweight Directory Access Protocol (LDAP)

The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users to access several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base.

This section provides a quick start guide for configuring an LDAP server on a FreeBSD system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access.

=== LDAP Terminology and Structure

LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of _attributes_.
Each of these attribute sets contains a unique identifier known as a _Distinguished Name_ (DN) which is normally built from several other attributes, such as the common or _Relative Distinguished Name_ (RDN). Similar to how directories have absolute and relative paths, consider a DN to be an absolute path and the RDN to be a relative path.

An example LDAP entry looks like the following. This example searches for the entry for the specified user account (`uid`), organizational unit (`ou`), and organization (`o`):

[source, shell]
....
% ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

This example entry shows the values for the `dn`, `mail`, `cn`, `uid`, and `telephoneNumber` attributes. The `cn` attribute is the RDN.

More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/intro.html].

[[ldap-config]]
=== Configuring an LDAP Server

FreeBSD does not provide a built-in LDAP server. Begin the configuration by installing the package:net/openldap-server[] package or port:

[source, shell]
....
# pkg install openldap-server
....

There is a large set of default options enabled in the extref:{linux-users}[package, software]. They can be reviewed by running `pkg info openldap-server`. If they are not sufficient (for example, if SQL support is needed), please consider recompiling the port using the appropriate crossref:ports[ports-using,framework].

The installation creates the directory [.filename]#/var/db/openldap-data# to hold the data.
The directory to store the certificates needs to be created:

[source, shell]
....
# mkdir /usr/local/etc/openldap/private
....

The next phase is to configure the Certificate Authority. The following commands must be executed from [.filename]#/usr/local/etc/openldap/private#. This is important as the file permissions need to be restrictive and users should not have access to these files. More detailed information about certificates and their parameters can be found in crossref:security[openssl,"OpenSSL"]. To create the Certificate Authority, start with this command and follow the prompts:

[source, shell]
....
# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt
....

The entries for the prompts may be generic, _except_ for the `Common Name`. This entry must be _different_ from the system hostname. If this will be a self-signed certificate, prefix the hostname with `CA`, for Certificate Authority.

The next task is to create a certificate signing request and a private key. Input this command and follow the prompts:

[source, shell]
....
# openssl req -days 365 -nodes -new -keyout server.key -out server.csr
....

During the certificate generation process, be sure to correctly set the `Common Name` attribute. The Certificate Signing Request must be signed with the Certificate Authority in order to be used as a valid certificate:

[source, shell]
....
# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial
....

The final part of the certificate generation process is to generate and sign the client certificates:

[source, shell]
....
# openssl req -days 365 -nodes -new -keyout client.key -out client.csr
# openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key
....

Remember to use the same `Common Name` attribute when prompted. When finished, ensure that a total of eight (8) new files have been generated through the proceeding commands.
The daemon running the OpenLDAP server is [.filename]#slapd#. Its configuration is performed through [.filename]#slapd.ldif#: the old [.filename]#slapd.conf# is no longer used by OpenLDAP.

http://www.openldap.org/doc/admin24/slapdconf2.html[Configuration examples] for [.filename]#slapd.ldif# are available, and they can also be found in [.filename]#/usr/local/etc/openldap/slapd.ldif.sample#. The options are documented in slapd-config(5). Each section of [.filename]#slapd.ldif#, like all the other LDAP attribute sets, is uniquely identified through a DN. Be sure that no blank lines are left between the `dn:` statement and the desired end of the section. In the following example, TLS will be used to implement a secure channel. The first section represents the global configuration:

[.programlisting]
....
#
# See slapd-config(5) for details on configuration options.
# This file should NOT be world readable.
#
dn: cn=config
objectClass: olcGlobal
cn: config
#
#
# Define global ACLs to disable default read access.
#
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
olcTLSCertificateFile: /usr/local/etc/openldap/server.crt
olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key
olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt
#olcTLSCipherSuite: HIGH
olcTLSProtocolMin: 3.1
olcTLSVerifyClient: never
....

Here, the Certificate Authority, server certificate, and server private key files must be specified. It is recommended to let the clients choose the security cipher and to omit the `olcTLSCipherSuite` option (which is incompatible with TLS clients other than [.filename]#openssl#). The `olcTLSProtocolMin` option lets the server require a minimum security level: it is recommended. While verification is mandatory for the server, it is not for the client: `olcTLSVerifyClient: never`.

The second section is about the backend modules and can be configured as follows:

[.programlisting]
....
#
# Load dynamic backend modules:
#
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath:	/usr/local/libexec/openldap
olcModuleload:	back_mdb.la
#olcModuleload:	back_bdb.la
#olcModuleload:	back_hdb.la
#olcModuleload:	back_ldap.la
#olcModuleload:	back_passwd.la
#olcModuleload:	back_shell.la
....

The third section is devoted to loading the needed `ldif` schemas to be used by the databases: they are essential.

[.programlisting]
....
dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
include: file:///usr/local/etc/openldap/schema/core.ldif
include: file:///usr/local/etc/openldap/schema/cosine.ldif
include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif
include: file:///usr/local/etc/openldap/schema/nis.ldif
....

Next, the frontend configuration section (the layer that interacts with clients):

[.programlisting]
....
# Frontend settings
#
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
olcAccess: to * by * read
#
# Sample global access control policy:
#	Root DSE: allow anyone to read it
#	Subschema (sub)entry DSE: allow anyone to read it
#	Other DSEs:
#		Allow self write access
#		Allow authenticated users read access
#		Allow anonymous users to authenticate
#
#olcAccess: to dn.base="" by * read
#olcAccess: to dn.base="cn=Subschema" by * read
#olcAccess: to *
#	by self write
#	by users read
#	by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn.  (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
#
olcPasswordHash: {SSHA}
# {SSHA} is already the default for olcPasswordHash
....

Another section is devoted to the _configuration backend_, the only way to later access the OpenLDAP server configuration, which is restricted to the global super-user.

[.programlisting]
....
dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: to * by * none
olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U
....

The default administrator username is `cn=config`. Type [.filename]#slappasswd# in a shell, choose a password, and use its hash in `olcRootPW`. If this option is not specified now, before [.filename]#slapd.ldif# is imported, no one will be able to modify the _global configuration_ section later.

The last section is about the database backend (the data storage layer):

[.programlisting]
....
#######################################################################
# LMDB database definitions
#######################################################################
#
dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcDbMaxSize: 1073741824
olcSuffix: dc=domain,dc=example
olcRootDN: cn=mdbadmin,dc=domain,dc=example
# Cleartext passwords, especially for the rootdn, should
# be avoided.  See slappasswd(8) and slapd-config(5) for details.
# Use of strong authentication encouraged.
olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
olcDbDirectory:	/var/db/openldap-data
# Indices to maintain
olcDbIndex:	objectClass	eq
....

This database hosts the _actual contents_ of the LDAP directory. Types other than `mdb` are available. Its super-user, not to be confused with the global one, is configured here: a (possibly custom) username in `olcRootDN` and the password hash in `olcRootPW`; [.filename]#slappasswd# can be used as before.

This http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;hb=HEAD[repository] contains four examples of [.filename]#slapd.ldif#.
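The `{SSHA}` values used above for `olcRootPW` are salted SHA-1 hashes: the base64 encoding of the 20-byte SHA-1 digest of the password concatenated with a random salt, followed by that salt. The following Python sketch shows how such a value can be generated and verified; it is illustrative only, and in practice [.filename]#slappasswd# should be used.

```python
import base64
import hashlib
import os

def ssha_hash(password, salt=None):
    """Produce an {SSHA} value like slappasswd's default scheme."""
    if salt is None:
        salt = os.urandom(4)  # OpenLDAP commonly uses a short random salt
    digest = hashlib.sha1(password.encode() + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode()

def ssha_check(password, hashed):
    """Verify a password against an {SSHA} value."""
    raw = base64.b64decode(hashed[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes long
    return hashlib.sha1(password.encode() + salt).digest() == digest
```

Because the salt is random, two hashes of the same password differ, yet both verify, which is why the `olcRootPW` values shown in the examples cannot be reproduced byte for byte.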
To convert an existing [.filename]#slapd.conf# into [.filename]#slapd.ldif#, refer to http://www.openldap.org/doc/admin24/slapdconf2.html[this page] (note that this may introduce some unuseful options).

When the configuration is completed, [.filename]#slapd.ldif# must be placed in an empty directory. It is recommended to create it as:

[source, shell]
....
# mkdir /usr/local/etc/openldap/slapd.d/
....

Import the configuration database:

[source, shell]
....
# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif
....

Start the [.filename]#slapd# daemon:

[source, shell]
....
# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/
....

The `-d` option can be used for debugging, as specified in slapd(8). To verify the server is running and working:

[source, shell]
....
# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
# extended LDIF
#
# LDAPv3
# base <> with scope baseObject
# filter: (objectclass=*)
# requesting: namingContexts
#

#
dn:
namingContexts: dc=domain,dc=example

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

The server still has to be trusted. If that has never been done before, follow these instructions. Install the OpenSSL package or port:

[source, shell]
....
# pkg install openssl
....

From the directory where [.filename]#ca.crt# is stored (in this example, [.filename]#/usr/local/etc/openldap#), run:

[source, shell]
....
# c_rehash .
....

Both the CA and the server certificate are now correctly recognized in their respective roles. To verify this, run this command from the directory where [.filename]#server.crt# is stored:

[source, shell]
....
# openssl verify -verbose -CApath . server.crt
....

If [.filename]#slapd# was running, restart it.
As stated in [.filename]#/usr/local/etc/rc.d/slapd#, to properly run [.filename]#slapd# at boot the following lines must be added to [.filename]#/etc/rc.conf#:

[.programlisting]
....
slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/
ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"
slapd_cn_config="YES"
....

[.filename]#slapd# does not provide debugging at boot time. Check [.filename]#/var/log/debug.log#, [.filename]#dmesg -a#, and [.filename]#/var/log/messages# for this purpose.

The following example adds the group `team` and the user `john` to the `domain.example` LDAP database, which is still empty. First, create the file [.filename]#domain.ldif#:

[source, shell]
....
# cat domain.ldif
dn: dc=domain,dc=example
objectClass: dcObject
objectClass: organization
o: domain.example
dc: domain

dn: ou=groups,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: groups

dn: ou=users,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: users

dn: cn=team,ou=groups,dc=domain,dc=example
objectClass: top
objectClass: posixGroup
cn: team
gidNumber: 10001

dn: uid=john,ou=users,dc=domain,dc=example
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: John McUser
uid: john
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/john/
loginShell: /usr/bin/bash
userPassword: secret
....

See the OpenLDAP documentation for more details. Use [.filename]#slappasswd# to replace the plain text password `secret` with a hash in `userPassword`. The path specified as `loginShell` must exist in all the systems where `john` is allowed to login.

Finally, use the `mdb` administrator to modify the database:

[source, shell]
....
# ldapadd -W -D "cn=mdbadmin,dc=domain,dc=example" -f domain.ldif
....

Modifications to the _global configuration_ section can only be performed by the global super-user.
For example, assume that the `olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` option was originally specified and must now be deleted. First, create a file that contains the following:

[source, shell]
....
# cat global_mod
dn: cn=config
changetype: modify
delete: olcTLSCipherSuite
....

Then, apply the modifications:

[source, shell]
....
# ldapmodify -f global_mod -x -D "cn=config" -W
....

When requested, provide the password chosen in the _configuration backend_ section. The username is not required: here, `cn=config` represents the DN of the database section to be modified. Alternatively, use `ldapmodify` to delete a single database line, or `ldapdelete` to delete a whole entry.

If something goes wrong, or if the global super-user cannot access the configuration backend, it is possible to delete and re-write the whole configuration:

[source, shell]
....
# rm -rf /usr/local/etc/openldap/slapd.d/
....

[.filename]#slapd.ldif# can then be edited and imported again. Please follow this procedure only when no other solution is available.

This is the configuration of the server only. The same machine can also host an LDAP client, with its own separate configuration.

[[network-dhcp]]
== Dynamic Host Configuration Protocol (DHCP)

The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to a network in order to be assigned the necessary addressing information for communication on that network. FreeBSD includes the OpenBSD version of `dhclient`, which is used by the client to obtain the addressing information. FreeBSD does not install a DHCP server, but several servers are available in the FreeBSD Ports Collection. The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Informational resources are also available at http://www.isc.org/downloads/dhcp/[isc.org/downloads/dhcp/].

This section describes how to use the built-in DHCP client. It then describes how to install and configure a DHCP server.
[NOTE]
====
In FreeBSD, the man:bpf[4] device is needed by both the DHCP server and the DHCP client. This device is included in the [.filename]#GENERIC# kernel that is installed with FreeBSD. Users who prefer to build a custom kernel need to keep this device if DHCP is used.

It should be noted that [.filename]#bpf# also allows privileged users to run network packet sniffers on that system.
====

=== Configuring a DHCP Client

DHCP client support is included in the FreeBSD installer, making it easy to configure a newly installed system to automatically receive its networking addressing information from an existing DHCP server. Refer to crossref:bsdinstall[bsdinstall-post,"Accounts, Time Zone, Services and Hardening"] for examples of network configuration.

When `dhclient` is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests use UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as a subnet mask, default gateway, and DNS server addresses. This information is in the form of a DHCP "lease" and is valid for a configurable time. This allows stale IP addresses for clients no longer connected to the network to be automatically reused. DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in man:dhcp-options[5].

By default, when a FreeBSD system boots, its DHCP client runs in the background, or _asynchronously_. Other startup scripts continue to run while the DHCP process completes, which speeds up system startup.

Background DHCP works well when the DHCP server responds quickly to the client's requests. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has assigned the network addressing information, they will fail.
Using DHCP in _synchronous_ mode prevents this problem as it pauses startup until the DHCP configuration has completed.

This line in [.filename]#/etc/rc.conf# is used to configure background or asynchronous mode:

[.programlisting]
....
ifconfig_fxp0="DHCP"
....

This line may already exist if the system was configured to use DHCP during installation. Replace the `_fxp0_` shown in these examples with the name of the interface to be dynamically configured, as described in crossref:config[config-network-setup,"Setting Up Network Interfaces"].

To instead configure the system to use synchronous mode, and to pause during startup while DHCP completes, use "`SYNCDHCP`":

[.programlisting]
....
ifconfig_fxp0="SYNCDHCP"
....

Additional client options are available. Search for `dhclient` in man:rc.conf[5] for details.

The DHCP client uses the following files:

* [.filename]#/etc/dhclient.conf#
+
The configuration file used by `dhclient`. Typically, this file contains only comments as the defaults are suitable for most clients. This configuration file is described in man:dhclient.conf[5].

* [.filename]#/sbin/dhclient#
+
More information about the command itself can be found in man:dhclient[8].

* [.filename]#/sbin/dhclient-script#
+
The FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any user modification to function properly.

* [.filename]#/var/db/dhclient.leases.interface#
+
The DHCP client keeps a database of valid leases in this file, which is written as a log and is described in man:dhclient.leases[5].

[[network-dhcp-server]]
=== Installing and Configuring a DHCP Server

This section demonstrates how to configure a FreeBSD system to act as a DHCP server using the Internet Systems Consortium (ISC) implementation of the DHCP server.
This implementation and its documentation can be installed using the package:net/isc-dhcp44-server[] package or port.

The installation of package:net/isc-dhcp44-server[] includes a sample configuration file. Copy [.filename]#/usr/local/etc/dhcpd.conf.example# to [.filename]#/usr/local/etc/dhcpd.conf# and make any edits in this new file.

The configuration file is comprised of declarations for subnets and hosts which define the information that is provided to DHCP clients. For example, these lines configure the following:

[.programlisting]
....
option domain-name "example.org";<.>
option domain-name-servers ns1.example.org;<.>
option subnet-mask 255.255.255.0;<.>

default-lease-time 600;<.>
max-lease-time 72400;<.>
ddns-update-style none;<.>

subnet 10.254.239.0 netmask 255.255.255.224 {
  range 10.254.239.10 10.254.239.20;<.>
  option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.>
}

host fantasia {
  hardware ethernet 08:00:07:26:c0:a5;<.>
  fixed-address fantasia.fugue.com;<.>
}
....

<.> This option specifies the default search domain that will be provided to clients. Refer to man:resolv.conf[5] for more information.
<.> This option specifies a comma separated list of DNS servers that the client should use. They can be listed by their Fully Qualified Domain Names (FQDN), as seen in the example, or by their IP addresses.
<.> The subnet mask that will be provided to clients.
<.> The default lease expiry time in seconds. A client can be configured to override this value.
<.> The maximum allowed length of time, in seconds, for a lease. Should a client request a longer lease, a lease will still be issued, but it will only be valid for `max-lease-time`.
<.> The default of `none` disables dynamic DNS updates.
Changing this to `interim` configures the DHCP server to update a DNS server whenever it hands out a lease, so that the DNS server knows which IP addresses are associated with which computers in the network. Do not change the default setting unless the DNS server has been configured to support dynamic DNS.
<.> This line creates a pool of available IP addresses which are reserved for allocation to DHCP clients. The range of addresses must be valid for the network or subnet specified in the previous line.
<.> Declares the default gateway that is valid for the network or subnet specified before the opening `{` bracket.
<.> Specifies the hardware MAC address of a client so that the DHCP server can recognize the client when it makes a request.
<.> Specifies that this host should always be given the same IP address. Using the hostname is correct, since the DHCP server will resolve the hostname before returning the lease information.

This configuration file supports many more options. Refer to dhcpd.conf(5), installed with the server, for details and examples.

Once the configuration of [.filename]#dhcpd.conf# is complete, enable the DHCP server in [.filename]#/etc/rc.conf#:

[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....

Replace `dc0` with the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests.

Start the server by issuing the following command:

[source, shell]
....
# service isc-dhcpd start
....

Any future changes to the configuration of the server will require the dhcpd service to be stopped and then started using man:service[8].

The DHCP server uses the following files. Note that the manual pages are installed with the server software.

* [.filename]#/usr/local/sbin/dhcpd#
+
More information about the dhcpd server can be found in dhcpd(8).

* [.filename]#/usr/local/etc/dhcpd.conf#
+
The server configuration file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server.
This configuration file is described in dhcpd.conf(5).

* [.filename]#/var/db/dhcpd.leases#
+
The DHCP server keeps a database of leases it has issued in this file, which is written as a log. Refer to dhcpd.leases(5), which gives a slightly longer description.

* [.filename]#/usr/local/sbin/dhcrelay#
+
This daemon is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. If this functionality is required, install the package:net/isc-dhcp44-relay[] package or port. The installation includes dhcrelay(8) which provides more detail.

[[network-dns]]
== Domain Name System (DNS)

The Domain Name System (DNS) is the protocol through which domain names are mapped to IP addresses, and vice versa. DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information. It is not necessary to run a name server to perform DNS lookups on a system.

The following table describes some of the terms associated with DNS:

.DNS Terminology
[cols="1,1", frame="none", options="header"]
|===
| Term
| Definition

|Forward DNS
|Mapping of hostnames to IP addresses.

|Origin
|Refers to the domain covered in a particular zone file.

|Resolver
|A system process through which a machine queries a name server for zone information.

|Reverse DNS
|Mapping of IP addresses to hostnames.

|Root zone
|The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory.

|Zone
|An individual domain, subdomain, or portion of the DNS administered by the same authority.
|===

Examples of zones:

* `.` is how the root zone is usually referred to in documentation.
* `org.` is a Top Level Domain (TLD) under the root zone.
* `example.org.` is a zone under the `org.` TLD.
* `1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall under the `192.168.1.*` IP address space.

As one can see, the more specific part of a hostname appears to its left. For example, `example.org.` is more specific than `org.`, as `org.` is more specific than the root zone. The layout of each part of a hostname is much like a file system: the [.filename]#/dev# directory falls within the root, and so on.

=== Reasons to Run a Name Server

Name servers generally come in two forms: authoritative name servers, and caching (also known as resolving) name servers.

An authoritative name server is needed when:

* One wants to serve DNS information to the world, replying authoritatively to queries.
* A domain, such as `example.org`, is registered and IP addresses need to be assigned to hostnames under it.
* An IP address block requires reverse DNS entries (IP to hostname).
* A backup or second name server, called a slave, will reply to queries.

A caching name server is needed when:

* A local DNS server may cache and respond more quickly than querying an outside name server.

When one queries for `www.FreeBSD.org`, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Additional queries will not have to go outside the local network, since the information is cached locally.

=== DNS Server Configuration

Unbound is provided in the FreeBSD base system. By default, it will provide DNS resolution to the local machine only. While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the FreeBSD Ports Collection.
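The `1.168.192.in-addr.arpa` zone mentioned above illustrates a general rule: the name of an IPv4 reverse zone is the network portion of the address with its octets reversed, followed by `in-addr.arpa`. As a quick illustration with standard awk (this one-liner is only a sketch, not part of any FreeBSD DNS tool):

```shell
# Build the in-addr.arpa reverse zone name for the 192.168.1.0/24 network
# by reversing the three network octets.
echo "192.168.1" | awk -F. '{ printf "%s.%s.%s.in-addr.arpa\n", $3, $2, $1 }'
# prints: 1.168.192.in-addr.arpa
```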
To enable Unbound, add the following to [.filename]#/etc/rc.conf#:

[.programlisting]
....
local_unbound_enable="YES"
....

Any existing name servers in [.filename]#/etc/resolv.conf# will be configured as forwarders in the new Unbound configuration.

[NOTE]
====
If any of the listed name servers do not support DNSSEC, local DNS resolution will fail. Be sure to test each name server and remove any that fail the test. The following command will show the trust tree or a failure for a name server running on `192.168.1.1`:

[source, shell]
....
% drill -S FreeBSD.org @192.168.1.1
....
====

Once each name server is confirmed to support DNSSEC, start Unbound:

[source, shell]
....
# service local_unbound onestart
....

This will take care of updating [.filename]#/etc/resolv.conf# so that queries for DNSSEC secured domains will now work. For example, run the following to validate the FreeBSD.org DNSSEC trust tree:

[source, shell]
....
% drill -S FreeBSD.org
;; Number of trusted keys: 1
;; Chasing: freebsd.org. A

DNSSEC Trust tree:
freebsd.org. (A)
|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)
    |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)
    |---freebsd.org. (DS keytag: 32659 digest type: 2)
        |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)
            |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)
            |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)
            |---org. (DS keytag: 21366 digest type: 1)
            |   |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
            |       |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
            |---org. (DS keytag: 21366 digest type: 2)
                |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
                    |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
;; Chase successful
....

=== Authoritative Name Server Configuration

FreeBSD does not provide authoritative name server software in the base system.
Users are encouraged to install third party applications, like the package:dns/nsd[] or package:dns/bind918[] packages or ports.

[[network-zeroconf]]
== Zero-configuration networking (mDNS/DNS-SD)

https://en.wikipedia.org/wiki/Zero-configuration_networking[Zero-configuration networking] (sometimes referred to as _Zeroconf_) is a set of technologies that simplify network configuration. The main Zeroconf building blocks are:

- Link-local addressing, providing automatic assignment of numeric network addresses.
- Multicast DNS (_mDNS_), providing automatic distribution and resolution of hostnames.
- DNS-Based Service Discovery (_DNS-SD_), providing automatic discovery of service instances.

=== Setting up and running Avahi

One popular implementation of zeroconf is https://avahi.org/[Avahi]. Avahi can be installed and configured with the following commands:

[source, shell]
....
# pkg install avahi-app nss_mdns
# grep -q '^hosts:.*\<mdns\>' /etc/nsswitch.conf || sed -i "" 's/^hosts: .*/& mdns/' /etc/nsswitch.conf
# service dbus enable
# service avahi-daemon enable
# service dbus start
# service avahi-daemon start
....

[[network-apache]]
== Apache HTTP Server

The open source Apache HTTP Server is the most widely used web server. FreeBSD does not install this web server by default, but it can be installed from the package:www/apache24[] package or port.

This section summarizes how to configure and start version 2._x_ of the Apache HTTP Server on FreeBSD. For more detailed information about Apache 2.X and its configuration directives, refer to http://httpd.apache.org/[httpd.apache.org].

=== Configuring and Starting Apache

In FreeBSD, the main Apache HTTP Server configuration file is installed as [.filename]#/usr/local/etc/apache2x/httpd.conf#, where _x_ represents the version number. This ASCII text file begins comment lines with a `+#+`.
The most frequently modified directives include:

`ServerRoot "/usr/local"`::
Specifies the default directory hierarchy for the Apache installation. Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root and configuration files are stored in the [.filename]#etc/apache2x# subdirectory.

`ServerAdmin \you@example.com`::
Change this to the email address to receive problems with the server. This address also appears on some server-generated pages, such as error documents.

`ServerName www.example.com:80`::
Allows an administrator to set a hostname which is sent back to clients for the server. For example, `www` can be used instead of the actual hostname. If the system does not have a registered DNS name, enter its IP address instead. If the server will listen on an alternate port, change `80` to the alternate port number.

`DocumentRoot "/usr/local/www/apache2_x_/data"`::
The directory where documents will be served from. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.

It is always a good idea to make a backup copy of the default Apache configuration file before making changes. When the configuration of Apache is complete, save the file and verify the configuration using `apachectl`. Running `apachectl configtest` should return `Syntax OK`.

To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache24_enable="YES"
....

If Apache should be started with non-default options, the following line may be added to [.filename]#/etc/rc.conf# to specify the needed flags:

[.programlisting]
....
apache24_flags=""
....

If apachectl does not report configuration errors, start `httpd` now:

[source, shell]
....
# service apache24 start
....
The `httpd` service can be tested by entering `http://_localhost_` in a web browser, replacing _localhost_ with the fully-qualified domain name of the machine running `httpd`. The default web page that is displayed is [.filename]#/usr/local/www/apache24/data/index.html#.

The Apache configuration can be tested for errors after making subsequent configuration changes while `httpd` is running using the following command:

[source, shell]
....
# service apache24 configtest
....

[NOTE]
====
It is important to note that `configtest` is not an man:rc[8] standard, and should not be expected to work for all startup scripts.
====

=== Virtual Hosting

Virtual hosting allows multiple websites to run on one Apache server. The virtual hosts can be _IP-based_ or _name-based_. IP-based virtual hosting uses a different IP address for each website. Name-based virtual hosting uses the clients HTTP/1.1 headers to figure out the hostname, which allows the websites to share the same IP address.

To setup Apache to use name-based virtual hosting, add a `VirtualHost` block for each website. For example, for the webserver named `www.domain.tld` with a virtual domain of `www.someotherdomain.tld`, add the following entries to [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
    ServerName www.domain.tld
    DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
    ServerName www.someotherdomain.tld
    DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

For each virtual host, replace the values for `ServerName` and `DocumentRoot` with the values to be used.

For more information about setting up virtual hosts, consult the official Apache documentation at: http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].

=== Apache Modules

Apache uses modules to augment the functionality provided by the basic server.
Refer to http://httpd.apache.org/docs/current/mod/[http://httpd.apache.org/docs/current/mod/] for a complete listing of and the configuration details for the available modules.

In FreeBSD, some modules can be compiled with the package:www/apache24[] port. Type `make config` within [.filename]#/usr/ports/www/apache24# to see which modules are available and which are enabled by default. If the module is not compiled with the port, the FreeBSD Ports Collection provides an easy way to install many modules. This section describes three of the most commonly used modules.

==== SSL support

At one point, support for SSL inside of Apache required a secondary module called [.filename]#mod_ssl#. This is no longer the case and the default install of Apache comes with SSL built into the web server. An example of how to enable support for SSL websites is available in the installed file [.filename]#httpd-ssl.conf# inside of the [.filename]#/usr/local/etc/apache24/extra# directory. Inside this directory is also a sample file called [.filename]#ssl.conf-sample#. It is recommended that both files be evaluated to properly set up secure websites in the Apache web server.

After the configuration of SSL is complete, the following line must be uncommented in the main [.filename]#http.conf# to activate the changes on the next restart or reload of Apache:

[.programlisting]
....
#Include etc/apache24/extra/httpd-ssl.conf
....

[WARNING]
====
SSL versions two and three have known vulnerability issues. It is highly recommended TLS versions 1.2 and 1.3 be enabled in place of the older SSL options. This can be accomplished by setting the following options in [.filename]#ssl.conf#:
====

[.programlisting]
....
SSLProtocol all -SSLv3 -SSLv2 +TLSv1.2 +TLSv1.3
SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
....

To complete the configuration of SSL in the web server, uncomment the following line to ensure that the configuration will be pulled into Apache on restart or reload:

[.programlisting]
....
# Secure (SSL/TLS) connections
Include etc/apache24/extra/httpd-ssl.conf
....
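The [.filename]#httpd-ssl.conf# configuration expects a server certificate and key to exist. For local testing before certificates from a certificate authority are available, a self-signed pair can be generated with OpenSSL. This is only a sketch: the CN value and output file names are placeholders, and browsers will warn about self-signed certificates:

```shell
# Generate a self-signed certificate and key for testing only.
# The CN and output file names are placeholders; adjust SSLCertificateFile
# and SSLCertificateKeyFile in httpd-ssl.conf to match the chosen paths.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt \
    -subj "/CN=www.example.com"
```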
The following lines must also be uncommented in [.filename]#httpd.conf# to fully support SSL in Apache:

[.programlisting]
....
LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so
LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
LoadModule ssl_module libexec/apache24/mod_ssl.so
....

The next step is to work with a certificate authority to have the appropriate certificates installed on the system. This will set up a chain of trust for the site and prevent any self-signed certificate warnings.

==== [.filename]#mod_perl#

The [.filename]#mod_perl# module makes it possible to write Apache modules in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time.

[.filename]#mod_perl# can be installed using the package:www/mod_perl2[] package or port. Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/index.html].

==== [.filename]#mod_php#

_PHP: Hypertext Preprocessor_ (PHP) is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of allowing web developers to write dynamically generated webpages quickly.

Support for PHP for Apache and any other feature written in the language can be added by installing the appropriate port.

For all supported versions, search the package database using `pkg`:

[source, shell]
....
# pkg search php
....

A list will be displayed including the versions and additional features they provide. The components are completely modular, meaning features are enabled by installing the appropriate port. To install PHP version 7.4 for Apache, issue the following command:

[source, shell]
....
# pkg install mod_php74
....
If any dependency packages need to be installed, they will be installed as well.

By default, PHP will not be enabled. The following lines will need to be added to the Apache configuration file located in [.filename]#/usr/local/etc/apache24# to make it active:

[.programlisting]
....
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>
....

In addition, the `DirectoryIndex` in the configuration file will also need to be updated and Apache will either need to be restarted or reloaded for the changes to take effect.

Support for many of the PHP features may also be installed by using `pkg`. For example, to install support for XML or SSL, install their respective ports:

[source, shell]
....
# pkg install php74-xml php74-openssl
....

As before, the Apache configuration will need to be reloaded for the changes to take effect, even in cases where it was just a module install. To perform a graceful restart to reload the configuration, issue the following command:

[source, shell]
....
# apachectl graceful
....

Once the install is complete, there are two methods of obtaining the installed PHP support modules and the environmental information of the build. The first is to install the full PHP binary and run the command to gain the information:

[source, shell]
....
# pkg install php74
....

[source, shell]
....
# php -i | less
....

It is necessary to pass the output to a pager, such as the `more` or `less` commands, to more easily digest the amount of output.

Finally, to make any changes to the global configuration of PHP there is a well documented file installed into [.filename]#/usr/local/etc/php.ini#. At the time of install, this file will not exist because there are two versions to choose from: [.filename]#php.ini-development# and [.filename]#php.ini-production#. These are starting points to assist administrators in their deployment.

==== HTTP2 Support

Apache support for the HTTP2 protocol is included by default when installing the port with `pkg`.
The new version of HTTP includes many improvements over the previous version, including utilizing a single connection to a website, reducing overall roundtrips of TCP connections. Also, packet header data is compressed and HTTP2 requires encryption by default.

When Apache is configured to only use HTTP2, web browsers will require secure, encrypted HTTPS connections. When Apache is configured to use both versions, HTTP1.1 will be considered a fall back option if any issues arise during the connection. While this change does require administrators to make changes, they are positive and equate to a more secure Internet for everyone. The changes are only required for sites not currently implementing SSL and TLS.

[NOTE]
====
This configuration depends on the previous sections, including TLS support. It is recommended those instructions be followed before continuing with this configuration.
====

Start the process by enabling the http2 module by uncommenting the line in [.filename]#/usr/local/etc/apache24/httpd.conf# and replacing the mpm_prefork module with mpm_event, as the former does not support HTTP2.

[.programlisting]
....
LoadModule http2_module libexec/apache24/mod_http2.so
LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so
....

[NOTE]
====
There is a separate [.filename]#mod_http2# port that is available. It exists to deliver security and bug fixes quicker than the module installed with the bundled [.filename]#apache24# port. It is not required for HTTP2 support but is available. When it is installed, the [.filename]#mod_h2.so# should be used in place of [.filename]#mod_http2.so# in the Apache configuration.
====

There are two methods of implementing HTTP2 in Apache; one way is globally for all sites and each VirtualHost running on the system. To enable HTTP2 globally, add the following line under the ServerName directive:

[.programlisting]
....
Protocols h2 http/1.1
....
[NOTE]
====
To enable HTTP2 over plain text, use `Protocols h2 h2c http/1.1` in [.filename]#httpd.conf#.
====

Having h2c here will allow plain text HTTP2 data to pass on the system but is not recommended. In addition, using http/1.1 here will allow fallback to the HTTP1.1 version of the protocol should it be needed by the system.

To enable HTTP2 for individual VirtualHosts, add the same line within the VirtualHost directive in either [.filename]#httpd.conf# or [.filename]#httpd-ssl.conf#.

Reload the configuration using `apachectl reload` and test the configuration either by using either of the following methods after visiting one of the hosted pages:

[source, shell]
....
# grep "HTTP/2.0" /var/log/httpd-access.log
....

This should return something similar to the following:

[.programlisting]
....
192.168.1.205 - - [18/Oct/2020:18:34:36 -0400] "GET / HTTP/2.0" 304 -
192.0.2.205 - - [18/Oct/2020:19:19:57 -0400] "GET / HTTP/2.0" 304 -
192.0.0.205 - - [18/Oct/2020:19:20:52 -0400] "GET / HTTP/2.0" 304 -
192.0.2.205 - - [18/Oct/2020:19:23:10 -0400] "GET / HTTP/2.0" 304 -
....

The other method is using the web browser's built-in site debugger or `tcpdump`; however, using either method is beyond the scope of this document.

HTTP2 reverse proxy connections are supported through the [.filename]#mod_proxy_http2.so# module. When configuring the ProxyPass or RewriteRules [P] statements, they should use h2:// for the connection.

=== Dynamic Websites

In addition to mod_perl and mod_php, other languages are available for creating dynamic web content. These include Django and Ruby on Rails.

==== Django

Django is a BSD-licensed framework designed to allow developers to write high performance, elegant web applications quickly. It provides an object-relational mapper so that data types are developed as Python objects.
A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL. It also provides an extensible template system so that the logic of the application is separated from the HTML presentation.

Django depends on [.filename]#mod_python#, and an SQL database engine. In FreeBSD, the package:www/py-django[] port automatically installs [.filename]#mod_python# and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite. To change the database engine, type `make config` within [.filename]#/usr/ports/www/py-django#, then install the port.

Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter. This interpreter is used to call the application for specific URLs on the site.

To configure Apache to pass requests for certain URLs to the web application, add the following to [.filename]#httpd.conf#, specifying the full path to the project directory:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/dir/to/the/django/packages/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonAutoReload On
    PythonDebug On
</Location>
....

Refer to https://docs.djangoproject.com[https://docs.djangoproject.com] for more information on how to use Django.

==== Ruby on Rails

Ruby on Rails is another open source web framework that provides a full development stack. It is optimized to make web developers more productive and capable of writing powerful applications quickly. On FreeBSD, it can be installed using the package:www/rubygem-rails[] package or port.

Refer to http://guides.rubyonrails.org[http://guides.rubyonrails.org] for more information on how to use Ruby on Rails.
[[network-ftp]]
== File Transfer Protocol (FTP)

The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. FreeBSD includes FTP server software, ftpd, in the base system.

FreeBSD provides several configuration files for controlling access to the FTP server. This section summarizes these files. Refer to man:ftpd[8] for more details about the built-in FTP server.

=== Configuration

The most important configuration step is deciding which accounts will be allowed access to the FTP server. A FreeBSD system has a number of system accounts which should not be allowed FTP access. The list of users disallowed any FTP access can be found in [.filename]#/etc/ftpusers#. By default, it includes system accounts. Additional users that should not be allowed access to FTP can be added.

In some cases it may be desirable to restrict the access of some users without preventing them completely from using FTP. This can be accomplished by creating [.filename]#/etc/ftpchroot# as described in man:ftpchroot[5]. This file lists users and groups subject to FTP access restrictions.

To enable anonymous FTP access to the server, create a user named `ftp` on the FreeBSD system. Users will then be able to log on to the FTP server with a username of `ftp` or `anonymous`. When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password. The FTP server will call man:chroot[2] when an anonymous user logs in, to restrict access to only the home directory of the `ftp` user.

There are two text files that can be created to specify welcome messages to be displayed to FTP clients. The contents of [.filename]#/etc/ftpwelcome# will be displayed to users before they reach the login prompt. After a successful login, the contents of [.filename]#/etc/ftpmotd# will be displayed.
Обратите внимание, что путь к этому файлу указывается относительно окружения входа, поэтому для анонимных пользователей будет отображаться содержимое файла [.filename]#~ftp/etc/ftpmotd#. После настройки FTP-сервера установите соответствующую переменную в [.filename]#/etc/rc.conf#, чтобы служба запускалась при загрузке: [.programlisting] .... ftpd_enable="YES" .... Чтобы запустить службу сейчас: [source, shell] .... # service ftpd start .... Протестируйте подключение к FTP-серверу, набрав: [source, shell] .... % ftp localhost .... Демон ftpd использует man:syslog[3] для записи сообщений. По умолчанию, демон системного журнала записывает сообщения, связанные с FTP, в [.filename]#/var/log/xferlog#. Местоположение журнала FTP может быть изменено путём редактирования следующей строки в [.filename]#/etc/syslog.conf#: [.programlisting] .... ftp.info /var/log/xferlog .... [NOTE] ==== Имейте в виду потенциальные проблемы, связанные с запуском анонимного FTP-сервера. В частности, хорошо подумайте, прежде чем разрешать анонимным пользователям загружать файлы. Может оказаться, что FTP-сайт станет площадкой для обмена нелицензионным коммерческим программным обеспечением или даже чем-то хуже. Если загрузка файлов анонимными пользователями необходима, убедитесь в правильности настроек прав доступа, чтобы эти файлы не могли быть прочитаны другими анонимными пользователями до тех пор, пока их не проверит администратор. ==== [[network-samba]] == Услуги файлов и печати для клиентов Microsoft(R) Windows(R) (Samba) Samba — это популярный пакет открытого программного обеспечения, предоставляющий файловые и печатные услуги с использованием протокола SMB/CIFS. Этот протокол встроен в системы Microsoft(R) Windows(R). Он может быть добавлен в системы, отличные от Microsoft(R) Windows(R), путем установки клиентских библиотек Samba. Протокол позволяет клиентам получать доступ к общим данным и принтерам. 
Эти ресурсы могут быть отображены как локальный диск, а общие принтеры могут использоваться так, как если бы они были локальными. На FreeBSD клиентские библиотеки Samba могут быть установлены с помощью порта или пакета package:net/samba416[]. Клиент предоставляет возможность системе FreeBSD получать доступ к общим ресурсам SMB/CIFS в сети Microsoft(R) Windows(R). Система FreeBSD также может быть настроена в качестве сервера Samba путем установки порта или пакета package:net/samba416[]. Это позволяет администратору создавать общие ресурсы SMB/CIFS на системе FreeBSD, к которым могут обращаться клиенты под управлением Microsoft(R) Windows(R) или использующие клиентские библиотеки Samba. === Конфигурация сервера Samba настраивается в файле [.filename]#/usr/local/etc/smb4.conf#. Этот файл должен быть создан до начала использования Samba. Простой пример [.filename]#smb4.conf# для общего доступа к каталогам и принтерам с клиентами Windows(R) в рабочей группе показан ниже. Для более сложных настроек, включающих LDAP или Active Directory, проще использовать man:samba-tool[8] для создания начального [.filename]#smb4.conf#. [.programlisting] .... [global] workgroup = WORKGROUP server string = Samba Server Version %v netbios name = ExampleMachine wins support = Yes security = user passdb backend = tdbsam # Example: share /usr/src accessible only to 'developer' user [src] path = /usr/src valid users = developer writable = yes browsable = yes read only = no guest ok = no public = no create mask = 0666 directory mask = 0755 .... ==== Глобальные Настройки Настройки, описывающие сеть, добавляются в [.filename]#/usr/local/etc/smb4.conf#: `workgroup`:: Имя рабочей группы, которая будет обслуживаться. `netbios name`:: Имя NetBIOS, под которым известен сервер Samba. По умолчанию оно совпадает с первой частью DNS-имени хоста. 
`server string`:: Строка, которая будет отображаться в выводе команды `net view` и некоторых других сетевых инструментов, предназначенных для отображения описательного текста о сервере. `wins support`:: Будет ли Samba выступать в качестве сервера WINS. Не следует включать поддержку WINS более чем на одном сервере в сети. ==== Настройки Безопасности Важнейшие настройки в [.filename]#/usr/local/etc/smb4.conf# — это модель безопасности и формат хранения паролей. Эти параметры управляются следующими директивами: `security`:: Если клиенты используют имена пользователей, совпадающие с их именами на машине FreeBSD, следует использовать уровень безопасности пользователя. `security = user` — это политика безопасности по умолчанию, которая требует от клиентов сначала войти в систему, прежде чем они смогут получить доступ к общим ресурсам. + Обратитесь к man:smb.conf[5], чтобы узнать о других поддерживаемых настройках для опции `security`. `passdb backend`:: Samba поддерживает несколько различных моделей аутентификации на стороне сервера. Клиенты могут быть аутентифицированы с помощью LDAP, NIS+, SQL-базы данных или модифицированного файла паролей. Рекомендуемый метод аутентификации `tdbsam` идеально подходит для простых сетей, и мы его рассмотрим здесь. Для более крупных или сложных сетей рекомендуется `ldapsam`. `smbpasswd` был прежним методом по умолчанию и теперь устарел. ==== Пользователи Samba Пользовательские учетные записи FreeBSD должны быть сопоставлены с базой данных `SambaSAMAccount` для доступа клиентов Windows(R) к общему ресурсу. Сопоставьте существующие учетные записи FreeBSD с помощью man:pdbedit[8]: [source, shell] .... # pdbedit -a -u username .... В этом разделе упомянуты только наиболее часто используемые настройки. Дополнительную информацию о доступных параметрах конфигурации можно найти на https://wiki.samba.org[Официальном вики Samba]. 
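Значения глобальных настроек из показанного выше [.filename]#smb4.conf# можно извлечь для проверки простым сценарием. Это лишь набросок при предположении, что используется временная копия файла; полноценную проверку синтаксиса выполняет утилита testparm из комплекта Samba.

```shell
#!/bin/sh
# Набросок: извлечение значения workgroup из копии smb4.conf.
# Предположение: файл временный; для настоящей проверки синтаксиса
# служит утилита testparm из комплекта Samba.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]
workgroup = WORKGROUP
netbios name = ExampleMachine
EOF

# Берём часть строки после '=' и убираем пробелы
awk -F'=' '/^workgroup/ { gsub(/[[:space:]]/, "", $2); print $2 }' "$conf"
rm -f "$conf"
```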
=== Начало работы с Samba Чтобы включить Samba при загрузке, добавьте следующую строку в [.filename]#/etc/rc.conf#: [.programlisting] .... samba_server_enable="YES" .... Чтобы сейчас запустить Samba: [source, shell] .... # service samba_server start Performing sanity check on Samba configuration: OK Starting nmbd. Starting smbd. .... Samba состоит из трёх отдельных демонов. Оба демона, nmbd и smbd, запускаются параметром `samba_server_enable`. Если также требуется разрешение имён через winbind, укажите: [.programlisting] .... winbindd_enable="YES" .... Samba можно остановить в любой момент, набрав: [source, shell] .... # service samba_server stop .... Samba — это комплексный программный комплект, функциональность которого обеспечивает широкую интеграцию с сетями Microsoft(R) Windows(R). Для получения дополнительной информации о возможностях, выходящих за рамки базовой конфигурации, описанной здесь, обратитесь к https://www.samba.org[https://www.samba.org]. [[network-ntp]] == Синхронизация времени с помощью NTP Со временем часы компьютера могут отставать или спешить. Это создаёт проблемы, так как многие сетевые службы требуют, чтобы компьютеры в сети использовали одинаковое точное время. Точное время также необходимо для обеспечения согласованности временных меток файлов. Протокол сетевого времени (NTP — Network Time Protocol) — это один из способов обеспечить точность часов в сети. FreeBSD включает man:ntpd[8], который можно настроить для запроса к другим серверам NTP с целью синхронизации часов на этом компьютере или для предоставления сервиса времени другим компьютерам в сети. В этом разделе описывается, как настроить ntpd в FreeBSD. Дополнительная документация доступна в [.filename]#/usr/share/doc/ntp/# в формате HTML. === Конфигурация NTP На FreeBSD встроенный ntpd может использоваться для синхронизации системных часов. Настройка ntpd осуществляется с помощью переменных man:rc.conf[5] и файла [.filename]#/etc/ntp.conf#, как подробно описано в следующих разделах.
ntpd взаимодействует с сетевыми узлами с помощью UDP-пакетов. Любые межсетевые экраны между вашей машиной и её NTP-узлами должны быть настроены так, чтобы разрешать входящие и исходящие UDP-пакеты через порт 123. ==== Файл [.filename]#/etc/ntp.conf# ntpd читает файл [.filename]#/etc/ntp.conf#, чтобы определить, к каким серверам NTP обращаться. Рекомендуется выбирать несколько серверов NTP на случай, если один из серверов станет недоступен или его часы окажутся ненадёжными. По мере получения ответов ntpd отдаёт предпочтение более надёжным серверам перед менее надёжными. Запрашиваемые серверы могут быть локальными в сети, предоставляться ISP или выбираться из http://support.ntp.org/bin/view/Servers/WebHome[онлайн-списка общедоступных серверов NTP]. При выборе общедоступного сервера NTP следует выбирать сервер, географически близкий к вам, и ознакомиться с его политикой использования. Ключевое слово `pool` в конфигурации выбирает один или несколько серверов из пула серверов. Доступен http://support.ntp.org/bin/view/Servers/NTPPoolServers[онлайн-список общедоступных пулов NTP], организованный по географическим регионам. Кроме того, FreeBSD предоставляет спонсируемый проектом пул `0.freebsd.pool.ntp.org`. .Пример [.filename]#/etc/ntp.conf# [example] ==== Вот простой пример файла [.filename]#ntp.conf#. Его можно безопасно использовать в таком виде; он содержит рекомендуемые параметры `restrict` для работы в общедоступном сетевом подключении. [.programlisting] .... # Disallow ntpq control/query access. Allow peers to be added only # based on pool and server statements in this file. restrict default limited kod nomodify notrap noquery nopeer restrict source limited kod nomodify notrap noquery # Allow unrestricted access from localhost for queries and control. restrict 127.0.0.1 restrict ::1 # Add a specific server. server ntplocal.example.com iburst # Add FreeBSD pool servers until 3-6 good servers are available. 
tos minclock 3 maxclock 6 pool 0.freebsd.pool.ntp.org iburst # Use a local leap-seconds file. leapfile "/var/db/ntpd.leap-seconds.list" .... ==== Формат этого файла описан в man:ntp.conf[5]. Приведённые ниже описания дают краткий обзор только ключевых слов, использованных в примере файла выше. По умолчанию сервер NTP доступен для любого узла сети. Ключевое слово `restrict` управляет тем, какие системы могут обращаться к серверу. Поддерживается несколько записей `restrict`, каждая из которых уточняет ограничения, заданные в предыдущих утверждениях. Значения, указанные в примере, предоставляют локальной системе полный доступ для запросов и управления, в то время как удалённые системы могут только запрашивать время. Для получения дополнительной информации обратитесь к подразделу `Access Control Support` в man:ntp.conf[5]. Ключевое слово `server` указывает отдельный сервер для запросов. Файл может содержать несколько ключевых слов `server`, по одному серверу на каждой строке. Ключевое слово `pool` определяет пул серверов. ntpd добавит один или несколько серверов из этого пула по мере необходимости, чтобы достичь количества узлов, указанного с помощью значения `tos minclock`. Ключевое слово `iburst` предписывает ntpd выполнить серию из восьми быстрых обменов пакетами с сервером при первом установлении соединения, чтобы быстро синхронизировать системное время. Ключевое слово `leapfile` указывает расположение файла, содержащего информацию о високосных секундах. Этот файл автоматически обновляется с помощью man:periodic[8]. Указанное расположение файла должно соответствовать значению переменной `ntp_db_leapfile` в файле [.filename]#/etc/rc.conf#. ==== Записи NTP в [.filename]#/etc/rc.conf# Установите `ntpd_enable=YES` для запуска ntpd при загрузке. После добавления `ntpd_enable=YES` в [.filename]#/etc/rc.conf#, ntpd можно немедленно запустить без перезагрузки системы, введя: [source, shell] .... # service ntpd start .... 
Для использования ntpd необходимо установить только `ntpd_enable`. При необходимости также могут быть заданы перечисленные ниже переменные [.filename]#rc.conf#. Установите `ntpd_sync_on_start=YES`, чтобы разрешить ntpd однократно корректировать время при запуске на любую величину. Обычно ntpd записывает сообщение об ошибке и завершает работу, если расхождение времени превышает 1000 секунд. Эта опция особенно полезна для систем без аккумуляторного резервного питания часов реального времени. Установите `ntpd_oomprotect=YES`, чтобы защитить демон ntpd от завершения системой при попытке восстановиться после состояния нехватки памяти (OOM). В переменной `ntpd_config=` укажите расположение альтернативного файла [.filename]#ntp.conf#. При необходимости задайте в `ntpd_flags=` любые другие флаги ntpd, но избегайте использования тех флагов, которые устанавливаются внутри файла [.filename]#/etc/rc.d/ntpd#: * `-p` (расположение pid-файла) * `-c` (вместо этого установите `ntpd_config=`) ==== ntpd и непривилегированный пользователь `ntpd` ntpd в FreeBSD может запускаться и работать как непривилегированный пользователь. Для этого требуется модуль политики man:mac_ntpd[4]. Скрипт запуска [.filename]#/etc/rc.d/ntpd# сначала проверяет конфигурацию NTP. Если возможно, он загружает модуль `mac_ntpd`, а затем запускает ntpd как непривилегированный пользователь `ntpd` (идентификатор пользователя 123). Чтобы избежать проблем с доступом к файлам и каталогам, скрипт запуска не будет автоматически запускать ntpd как `ntpd`, если конфигурация содержит любые файлозависимые параметры.
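Перечисленные переменные можно записать в файл в стиле [.filename]#rc.conf# и проверить их наличие. Это лишь набросок с временным файлом; на реальной системе FreeBSD для изменения [.filename]#/etc/rc.conf# предназначена утилита man:sysrc[8].

```shell
#!/bin/sh
# Набросок: запись переменных ntpd во временный файл в стиле rc.conf
# и проверка их наличия. На настоящей системе FreeBSD для этого
# предназначена утилита sysrc(8).
rcconf=$(mktemp)
cat > "$rcconf" <<'EOF'
ntpd_enable="YES"
ntpd_sync_on_start="YES"
EOF

grep -q '^ntpd_enable="YES"' "$rcconf" && echo "ntpd enabled at boot"
rm -f "$rcconf"
```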
Присутствие любого из следующих параметров в `ntpd_flags` требует ручной настройки, как описано ниже, для запуска от пользователя `ntpd`: * `-f` или `--driftfile` * `-i` или `--jaildir` * `-k` или `--keyfile` * `-l` или `--logfile` * `-s` или `--statsdir` Наличие любого из следующих ключевых слов в [.filename]#ntp.conf# требует ручной настройки, как описано ниже, для запуска от пользователя `ntpd`: * crypto * driftfile * key * logdir * statsdir Для ручной настройки ntpd для запуска от пользователя `ntpd` необходимо: * Убедитесь, что пользователь `ntpd` имеет доступ ко всем файлам и каталогам, указанным в конфигурации. * Обеспечьте загрузку или компиляцию модуля `mac_ntpd` в ядро. Подробности см. в man:mac_ntpd[4]. * Установите `ntpd_user="ntpd"` в [.filename]#/etc/rc.conf#. === Использование NTP с PPP-подключением ntpd не требует постоянного подключения к Интернету для корректной работы. Однако, если PPP-соединение настроено на дозвон по требованию, следует предотвратить инициацию дозвона или поддержание соединения из-за трафика NTP. Это можно настроить с помощью директив `filter` в [.filename]#/etc/ppp/ppp.conf#. Например: [.programlisting] .... set filter dial 0 deny udp src eq 123 # Prevent NTP traffic from initiating dial out set filter dial 1 permit 0 0 set filter alive 0 deny udp src eq 123 # Prevent incoming NTP traffic from keeping the connection open set filter alive 1 deny udp dst eq 123 # Prevent outgoing NTP traffic from keeping the connection open set filter alive 2 permit 0/0 0/0 .... Для получения более подробной информации обратитесь к разделу `ФИЛЬТРАЦИЯ ПАКЕТОВ` в man:ppp[8] и примерам в [.filename]#/usr/share/examples/ppp/#. [NOTE] ==== Некоторые интернет-провайдеры блокируют порты с низкими номерами, что мешает работе NTP, так как ответы никогда не достигают машины. ==== [[network-iscsi]] == Настройка инициатора и цели iSCSI iSCSI — это способ совместного использования хранилища по сети.
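Проверку, которую выполняет сценарий запуска перед автоматическим переходом на пользователя `ntpd`, можно схематично воспроизвести поиском перечисленных ключевых слов. Это лишь набросок по приведённому выше списку, работающий с временной копией [.filename]#ntp.conf#, а не с реальным сценарием [.filename]#/etc/rc.d/ntpd#.

```shell
#!/bin/sh
# Набросок: поиск в копии ntp.conf ключевых слов, из-за которых
# сценарий запуска не станет автоматически запускать ntpd от
# непривилегированного пользователя. Файл здесь временный.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pool 0.freebsd.pool.ntp.org iburst
driftfile /var/db/ntpd.drift
EOF

if grep -Eq '^[[:space:]]*(crypto|driftfile|key|logdir|statsdir)[[:space:]]' "$conf"; then
    echo "manual setup required"
else
    echo "automatic unprivileged start possible"
fi
rm -f "$conf"
```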
В отличие от NFS, который работает на уровне файловой системы, iSCSI работает на уровне блочного устройства. В терминологии iSCSI система, предоставляющая хранилище, называется _целью_. Хранилище может быть физическим диском, областью, представляющей несколько дисков, или частью физического диска. Например, если диск(и) отформатированы с использованием ZFS, можно создать zvol для использования в качестве хранилища iSCSI. Клиенты, которые обращаются к хранилищу iSCSI, называются _инициаторами_. Для инициаторов хранилище, доступное через iSCSI, отображается как неформатированный диск, известный как LUN (логический номер устройства). Узлы устройств для диска появляются в [.filename]#/dev/#, и устройство должно быть отдельно отформатировано и смонтировано. FreeBSD предоставляет встроенную поддержку iSCSI целевой системы и инициатора на уровне ядра. В этом разделе описывается, как настроить систему FreeBSD в качестве целевой системы или инициатора. [[network-iscsi-target]] === Настройка цели iSCSI Для настройки цели iSCSI создайте конфигурационный файл [.filename]#/etc/ctl.conf#, добавьте строку в [.filename]#/etc/rc.conf#, чтобы убедиться, что демон man:ctld[8] автоматически запускается при загрузке, а затем запустите демон. Вот пример простого файла конфигурации [.filename]#/etc/ctl.conf#. Полное описание доступных опций этого файла можно найти в man:ctl.conf[5]. [.programlisting] .... portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group no-authentication portal-group pg0 lun 0 { path /data/target0-0 size 4G } } .... Первая запись определяет группу порталов `pg0`. Группы порталов определяют, на каких сетевых адресах будет слушать демон man:ctld[8]. Запись `discovery-auth-group no-authentication` указывает, что любой инициатор может выполнять обнаружение целей iSCSI без аутентификации. 
Третья и четвёртая строки настраивают man:ctld[8] для прослушивания всех IPv4-адресов (`listen 0.0.0.0`) и IPv6-адресов (`listen [::]`) на стандартном порту 3260. Нет необходимости определять группу порталов, так как существует встроенная группа порталов с именем `default`. В этом случае разница между `default` и `pg0` заключается в том, что для `default` обнаружение целей всегда запрещено, а для `pg0` — всегда разрешено. Вторая запись определяет одну цель. У цели есть два возможных значения: машина, обслуживающая iSCSI, или именованная группа LUN. В этом примере используется второе значение, где `iqn.2012-06.com.example:target0` — это имя цели. Это имя цели подходит для тестирования. Для реального использования замените `com.example` на настоящий домен, записанный в обратном порядке. `2012-06` представляет год и месяц получения контроля над этим доменом, а `target0` может быть любым значением. В этом файле конфигурации можно определить любое количество целей. Строка `auth-group no-authentication` разрешает всем инициаторам подключаться к указанной цели, а `portal-group pg0` делает цель доступной через группу порталов `pg0`. Следующий раздел определяет LUN. Для инициатора каждый LUN будет виден как отдельное дисковое устройство. Для каждой цели можно определить несколько LUN. Каждый LUN идентифицируется числом, где LUN 0 является обязательным. Строка `path /data/target0-0` определяет полный путь к файлу или zvol, который используется для LUN. Этот путь должен существовать до запуска man:ctld[8]. Вторая строка необязательна и указывает размер LUN. Далее, чтобы убедиться, что демон man:ctld[8] запускается при загрузке, добавьте эту строку в [.filename]#/etc/rc.conf#: [.programlisting] .... ctld_enable="YES" .... Чтобы запустить man:ctld[8] сейчас, выполните следующую команду: [source, shell] .... # service ctld start .... При запуске демон man:ctld[8] читает файл [.filename]#/etc/ctl.conf#.
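Требование, что путь из строки `path` должен существовать до запуска man:ctld[8], можно выполнить, заранее создав разреженный файл-хранилище с помощью man:truncate[1]. Это лишь набросок: здесь используется временный файл и малый размер вместо [.filename]#/data/target0-0# и 4G.

```shell
#!/bin/sh
# Набросок: создание разреженного файла-хранилища для LUN с помощью
# truncate(1). Путь здесь временный; в примере из ctl.conf это был бы
# /data/target0-0 размером 4G.
backing=$(mktemp)
truncate -s 1M "$backing"

# Проверяем размер файла в байтах (wc -c считает байты)
wc -c < "$backing"
rm -f "$backing"
```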
Если этот файл был изменён после запуска демона, используйте следующую команду, чтобы изменения вступили в силу немедленно: [source, shell] .... # service ctld reload .... ==== Аутентификация Предыдущий пример изначально небезопасен, так как не использует аутентификацию, предоставляя любому полный доступ ко всем целям. Чтобы потребовать имя пользователя и пароль для доступа к целям, измените конфигурацию следующим образом: [.programlisting] .... auth-group ag0 { chap username1 secretsecret chap username2 anothersecret } portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group ag0 portal-group pg0 lun 0 { path /data/target0-0 size 4G } } .... Раздел `auth-group` определяет пары имени пользователя и пароля. Инициатор, пытающийся подключиться к `iqn.2012-06.com.example:target0`, должен сначала указать определённое имя пользователя и секрет. Однако обнаружение цели по-прежнему разрешено без аутентификации. Чтобы потребовать аутентификацию при обнаружении цели, установите `discovery-auth-group` в определённое имя `auth-group` вместо `no-authentication`. Обычно определяют один экспортируемый объект для каждого инициатора. В качестве сокращения для синтаксиса выше, имя пользователя и пароль могут быть указаны непосредственно в записи объекта: [.programlisting] .... target iqn.2012-06.com.example:target0 { portal-group pg0 chap username1 secretsecret lun 0 { path /data/target0-0 size 4G } } .... [[network-iscsi-initiator]] === Настройка инициатора iSCSI [NOTE] ==== Описанный в этом разделе инициатор iSCSI поддерживается начиная с FreeBSD 10.0-RELEASE. Для использования инициатора iSCSI, доступного в более старых версиях, обратитесь к man:iscontrol[8]. ==== Инициатору iSCSI требуется, чтобы демон man:iscsid[8] был запущен. Этот демон не использует файл конфигурации. Для его автоматического запуска при загрузке добавьте следующую строку в [.filename]#/etc/rc.conf#: [.programlisting] .... 
iscsid_enable="YES" .... Чтобы сейчас запустить man:iscsid[8], выполните следующую команду: [source, shell] .... # service iscsid start .... Подключение к цели может быть выполнено с файлом конфигурации [.filename]#/etc/iscsi.conf# или без него. В этом разделе показаны оба типа подключений. ==== Подключение к цели без файла конфигурации Для подключения инициатора к одному целевому устройству укажите IP-адрес портала и имя целевого устройства: [source, shell] .... # iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 .... Для проверки успешности соединения выполните команду `iscsictl` без аргументов. Вывод должен выглядеть примерно так: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0 .... В этом примере сеанс iSCSI был успешно установлен, где [.filename]#/dev/da0# представляет подключённый LUN. Если цель `iqn.2012-06.com.example:target0` экспортирует более одного LUN, в соответствующем разделе вывода будет показано несколько устройств: [source, shell] .... Connected: da0 da1 da2. .... Любые ошибки будут отображены в выводе, а также в системных журналах. Например, это сообщение обычно означает, что демон man:iscsid[8] не запущен: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8) .... Следующее сообщение указывает на проблему с сетью, например, неверный IP-адрес или порт: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.11 Connection refused .... Это сообщение означает, что указано неправильное имя цели: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Not found .... Это сообщение означает, что цель требует аутентификации: [.programlisting] .... Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed .... 
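Столбец `State` из вывода `iscsictl` удобно извлекать программно, например для мониторинга. Это лишь набросок: вывод команды здесь смоделирован по примеру выше, сама команда не выполняется.

```shell
#!/bin/sh
# Набросок: разбор столбца State из примерного вывода iscsictl.
# Сам вывод смоделирован, команда iscsictl не выполняется.
out='Target name                          Target portal   State
iqn.2012-06.com.example:target0      10.10.10.10     Connected: da0'

# Пропускаем строку заголовка и печатаем состояние с именем устройства
echo "$out" | awk 'NR > 1 { print $3, $4 }'
```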
Чтобы указать имя пользователя CHAP и секрет, используйте следующий синтаксис: [source, shell] .... # iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret .... ==== Подключение к цели с использованием файла конфигурации Для подключения с использованием файла конфигурации создайте файл [.filename]#/etc/iscsi.conf# с содержимым, подобным этому: [.programlisting] .... t0 { TargetAddress = 10.10.10.10 TargetName = iqn.2012-06.com.example:target0 AuthMethod = CHAP chapIName = user chapSecret = secretsecret } .... `t0` задаёт псевдоним для раздела конфигурационного файла. Он будет использоваться инициатором для указания, какую конфигурацию применять. Остальные строки определяют параметры, используемые при подключении. `TargetAddress` и `TargetName` являются обязательными, тогда как остальные параметры — опциональными. В этом примере показаны имя пользователя CHAP и секретный ключ. Для подключения к указанной цели укажите псевдоним: [source, shell] .... # iscsictl -An t0 .... Или для подключения ко всем целям, определенным в файле конфигурации, используйте: [source, shell] .... # iscsictl -Aa .... Чтобы инициатор автоматически подключался ко всем целям в [.filename]#/etc/iscsi.conf#, добавьте следующее в [.filename]#/etc/rc.conf#: [.programlisting] .... iscsictl_enable="YES" iscsictl_flags="-Aa" .... diff --git a/documentation/content/ru/books/handbook/network-servers/_index.po b/documentation/content/ru/books/handbook/network-servers/_index.po index 6dac9429ac..1d5ded0039 100644 --- a/documentation/content/ru/books/handbook/network-servers/_index.po +++ b/documentation/content/ru/books/handbook/network-servers/_index.po @@ -1,8368 +1,8366 @@ # SOME DESCRIPTIVE TITLE # Copyright (C) YEAR The FreeBSD Project # This file is distributed under the same license as the FreeBSD Documentation package. # Vladlen Popolitov , 2025. 
msgid "" msgstr "" "Project-Id-Version: FreeBSD Documentation VERSION\n" "POT-Creation-Date: 2025-11-08 16:17+0000\n" "PO-Revision-Date: 2025-11-20 04:45+0000\n" "Last-Translator: Vladlen Popolitov \n" "Language-Team: Russian \n" "Language: ru\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && " "n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\n" "X-Generator: Weblate 4.17\n" #. type: YAML Front Matter: description #: documentation/content/en/books/handbook/network-servers/_index.adoc:1 #, no-wrap msgid "This chapter covers some of the more frequently used network services on UNIX systems" msgstr "Эта глава рассказывает о некоторых из наиболее часто используемых сетевых служб в системах UNIX" #. type: YAML Front Matter: part #: documentation/content/en/books/handbook/network-servers/_index.adoc:1 #, no-wrap msgid "IV. Network Communication" msgstr "IV. Сетевое взаимодействие" #. type: YAML Front Matter: title #: documentation/content/en/books/handbook/network-servers/_index.adoc:1 #, no-wrap msgid "Chapter 32. Network Servers" msgstr "Глава 32. Сетевые серверы" #. type: Title = #: documentation/content/en/books/handbook/network-servers/_index.adoc:15 #, no-wrap msgid "Network Servers" msgstr "Сетевые серверы" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:53 #, no-wrap msgid "Synopsis" msgstr "Обзор" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:58 msgid "" "This chapter covers some of the more frequently used network services on " "UNIX(R) systems. This includes installing, configuring, testing, and " "maintaining many different types of network services. Example configuration " "files are included throughout this chapter for reference." msgstr "" "В этой главе рассматриваются некоторые из наиболее часто используемых " "сетевых служб в системах UNIX(R). 
Сюда входит установка, настройка, " "тестирование и поддержка различных типов сетевых служб. В этой главе " "приведены примеры конфигурационных файлов для справки." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:60 msgid "By the end of this chapter, readers will know:" msgstr "К концу этой главы читатели будут знать:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:62 msgid "How to manage the inetd daemon." msgstr "Как управлять демоном inetd." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:63 msgid "How to set up the Network File System (NFS)." msgstr "Как настроить Network File System (NFS)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:64 msgid "" "How to set up the Network Information Server (NIS) for centralizing and " "sharing user accounts." msgstr "" "Как настроить сервер сетевой информации (NIS) для централизации и " "совместного использования учетных записей пользователей." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:65 msgid "How to set FreeBSD up to act as an LDAP server or client" msgstr "Как настроить FreeBSD в качестве сервера или клиента LDAP" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:66 msgid "How to set up automatic network settings using DHCP." msgstr "Как настроить автоматические параметры сети с использованием DHCP." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:67 msgid "How to set up a Domain Name Server (DNS)." msgstr "Как настроить сервер доменных имен (DNS)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:68 msgid "How to set up the Apache HTTP Server." msgstr "Как настроить веб-сервер Apache HTTP." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:69 msgid "How to set up a File Transfer Protocol (FTP) server." msgstr "Как настроить сервер протокола передачи файлов (FTP)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:70 msgid "" "How to set up a file and print server for Windows(R) clients using Samba." msgstr "" "Как настроить файловый и печатный сервер для клиентов Windows(R) с " "использованием Samba." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:71 msgid "" "How to synchronize the time and date, and set up a time server using the " "Network Time Protocol (NTP)." msgstr "" "Как синхронизировать время и дату, а также настроить сервер времени с " "использованием протокола Network Time Protocol (NTP)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:72 msgid "How to set up iSCSI." msgstr "Как настроить iSCSI." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:74 msgid "This chapter assumes a basic knowledge of:" msgstr "Эта глава предполагает базовые знания о:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:76 msgid "[.filename]#/etc/rc# scripts." msgstr "Скриптах [.filename]#/etc/rc#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:77 msgid "Network terminology." msgstr "Сетевой терминологии." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:78 msgid "" "Installation of additional third-party software (crossref:ports[ports," "Installing Applications: Packages and Ports])." msgstr "" "Установке дополнительного стороннего программного обеспечения (crossref:" "ports[ports,Установка приложений: Пакеты и Порты])." #. 
type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:80 #, no-wrap msgid "The inetd Super-Server" msgstr "Суперсервер inetd" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:86 msgid "" "The man:inetd[8] daemon is sometimes referred to as a Super-Server because " "it manages connections for many services. Instead of starting multiple " "applications, only the inetd service needs to be started. When a connection " "is received for a service that is managed by inetd, it determines which " "program the connection is destined for, spawns a process for that program, " "and delegates the program a socket. Using inetd for services that are not " "heavily used can reduce system load, when compared to running each daemon " "individually in stand-alone mode." msgstr "" "Демон man:inetd[8] иногда называют суперсервером, потому что он управляет " "соединениями для многих служб. Вместо запуска множества приложений, " "достаточно запустить только службу inetd. Когда поступает соединение для " "службы, управляемой inetd, он определяет, какой программе предназначено " "соединение, создает процесс для этой программы и делегирует программе сокет. " "Использование inetd для служб, которые не используются интенсивно, может " "снизить нагрузку на систему по сравнению с запуском каждого демона отдельно " "в автономном режиме." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:88 msgid "" "Primarily, inetd is used to spawn other daemons, but several trivial " "protocols are handled internally, such as chargen, auth, time, echo, " "discard, and daytime." msgstr "" "Прежде всего, inetd используется для запуска других демонов, но несколько " "простых протоколов обрабатываются внутри него, таких как chargen, auth, " "time, echo, discard и daytime." #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:90 msgid "This section covers the basics of configuring inetd." msgstr "Этот раздел охватывает основы настройки inetd." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:92 #, no-wrap msgid "Configuration File" msgstr "Файл конфигурации" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:98 msgid "" "Configuration of inetd is done by editing [.filename]#/etc/inetd.conf#. " "Each line of this configuration file represents an application which can be " "started by inetd. By default, every line starts with a comment (`+#+`), " "meaning that inetd is not listening for any applications. To configure " "inetd to listen for an application's connections, remove the `+#+` at the " "beginning of the line for that application." msgstr "" "Настройка inetd выполняется путем редактирования [.filename]#/etc/inetd." "conf#. Каждая строка этого файла конфигурации представляет приложение, " "которое может быть запущено inetd. По умолчанию каждая строка начинается с " "комментария (`+#+`), что означает, что inetd не ожидает подключений для " "каких-либо приложений. Чтобы настроить inetd на ожидание подключений для " "приложения, удалите `+#+` в начале соответствующей строки." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:100 msgid "" "After saving the edits, configure inetd to start at system boot by editing [." "filename]#/etc/rc.conf#:" msgstr "" "После сохранения изменений настройте inetd для запуска при загрузке системы, " "отредактировав [.filename]#/etc/rc.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:104 #, no-wrap msgid "inetd_enable=\"YES\"\n" msgstr "inetd_enable=\"YES\"\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:107 msgid "" "To start inetd now, so that it listens for the configured service, type:" msgstr "" "Чтобы запустить inetd сейчас, чтобы он начал прослушивать настроенную " "службу, введите:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:111 #, no-wrap msgid "# service inetd start\n" msgstr "# service inetd start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:114 msgid "" "Once inetd is started, it needs to be notified whenever a modification is " "made to [.filename]#/etc/inetd.conf#:" msgstr "" "После запуска inetd необходимо уведомлять его о каждом изменении в файле [." "filename]#/etc/inetd.conf#:" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:116 #, no-wrap msgid "Reloading the inetd Configuration File" msgstr "Перезагрузка конфигурационного файла inetd" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:123 #, no-wrap msgid "# service inetd reload\n" msgstr "# service inetd reload\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:129 msgid "" "Typically, the default entry for an application does not need to be edited " "beyond removing the `+#+`. In some situations, it may be appropriate to " "edit the default entry." msgstr "" "Обычно запись по умолчанию для приложения не требует редактирования, кроме " "удаления `+#+`. В некоторых ситуациях может быть целесообразно изменить " "запись по умолчанию." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:131 msgid "As an example, this is the default entry for man:ftpd[8] over IPv4:" msgstr "В качестве примера, это стандартная запись для man:ftpd[8] по IPv4:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:135 #, no-wrap msgid "ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l\n" msgstr "ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:138 msgid "The seven columns in an entry are as follows:" msgstr "Семь столбцов в записи следующие:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:148 #, no-wrap msgid "" "service-name\n" "socket-type\n" "protocol\n" "{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]\n" "user[:group][/login-class]\n" "server-program\n" "server-program-arguments\n" msgstr "" "service-name\n" "socket-type\n" "protocol\n" "{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]\n" "user[:group][/login-class]\n" "server-program\n" "server-program-arguments\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:151 msgid "where:" msgstr "где:" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:152 #, no-wrap msgid "service-name" msgstr "service-name" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:157 msgid "" "The service name of the daemon to start. It must correspond to a service " "listed in [.filename]#/etc/services#. This determines which port inetd " "listens on for incoming connections to that service. When using a custom " "service, it must first be added to [.filename]#/etc/services#." msgstr "" "Имя службы демона для запуска. Оно должно соответствовать службе, указанной " "в [.filename]#/etc/services#. Это определяет, на каком порту inetd ожидает " "входящие соединения для этой службы. При использовании пользовательской " "службы она сначала должна быть добавлена в [.filename]#/etc/services#." #. 
type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:158 #, no-wrap msgid "socket-type" msgstr "socket-type" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:161 msgid "" "Either `stream`, `dgram`, `raw`, or `seqpacket`. Use `stream` for TCP " "connections and `dgram` for UDP services." msgstr "" "Либо `stream`, `dgram`, `raw` или `seqpacket`. Используйте `stream` для TCP-" "соединений и `dgram` для UDP-служб." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:162 #, no-wrap msgid "protocol" msgstr "protocol" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:164 msgid "Use one of the following protocol names:" msgstr "Используйте одно из следующих названий протоколов:" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:169 #, no-wrap msgid "Protocol Name" msgstr "Имя протокола" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:172 #, no-wrap msgid "Explanation" msgstr "Объяснение" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:173 #, no-wrap msgid "tcp or tcp4" msgstr "tcp или tcp4" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:175 #, no-wrap msgid "TCP IPv4" msgstr "TCP IPv4" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:176 #, no-wrap msgid "udp or udp4" msgstr "udp или udp4" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:178 #, no-wrap msgid "UDP IPv4" msgstr "UDP IPv4" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:179 #, no-wrap msgid "tcp6" msgstr "tcp6" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:181 #, no-wrap msgid "TCP IPv6" msgstr "TCP IPv6" #.
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:182 #, no-wrap msgid "udp6" msgstr "udp6" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:184 #, no-wrap msgid "UDP IPv6" msgstr "UDP IPv6" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:185 #, no-wrap msgid "tcp46" msgstr "tcp46" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:187 #, no-wrap msgid "Both TCP IPv4 and IPv6" msgstr "Как TCP IPv4, так и IPv6" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:188 #, no-wrap msgid "udp46" msgstr "udp46" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:189 #, no-wrap msgid "Both UDP IPv4 and IPv6" msgstr "Как UDP IPv4, так и IPv6" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:194 msgid "" "{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-" "ip]]]:: In this field, `wait` or `nowait` must be specified. `max-child`, " "`max-connections-per-ip-per-minute` and `max-child-per-ip` are optional." msgstr "" "{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-" "ip]]]:: В этом поле необходимо указать `wait` или `nowait`. Параметры `max-" "child`, `max-connections-per-ip-per-minute` и `max-child-per-ip` являются " "необязательными." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:198 msgid "" "`wait|nowait` indicates whether or not the service is able to handle its own " "socket. `dgram` socket types must use `wait` while `stream` daemons, which " "are usually multi-threaded, should use `nowait`. `wait` usually hands off " "multiple sockets to a single daemon, while `nowait` spawns a child daemon " "for each new socket." msgstr "" "`wait|nowait` указывает, способна ли служба обрабатывать свой собственный " "сокет. 
Типы сокетов `dgram` должны использовать `wait`, в то время как для " "демонов `stream`, которые обычно многопоточные, следует использовать " "`nowait`. `wait` обычно передаёт несколько сокетов одному демону, тогда как " "`nowait` создаёт дочерний демон для каждого нового сокета." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:202 msgid "" "The maximum number of child daemons inetd may spawn is set by `max-child`. " "For example, to limit ten instances of the daemon, place a `/10` after " "`nowait`. Specifying `/0` allows an unlimited number of children." msgstr "" "Максимальное количество дочерних демонов, которые может породить inetd, " "задаётся параметром `max-child`. Например, чтобы ограничить число " "экземпляров демона десятью, укажите `/10` после `nowait`. Указание `/0` " "позволяет создавать неограниченное количество дочерних процессов." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:208 msgid "" "`max-connections-per-ip-per-minute` limits the number of connections from " "any particular IP address per minute. Once the limit is reached, further " "connections from this IP address will be dropped until the end of the " "minute. For example, a value of `/10` would limit any particular IP address " "to ten connection attempts per minute. `max-child-per-ip` limits the number " "of child processes that can be started on behalf on any single IP address at " "any moment. These options can limit excessive resource consumption and help " "to prevent Denial of Service attacks." msgstr "" "`max-connections-per-ip-per-minute` ограничивает количество соединений с " "любого конкретного IP-адреса в минуту. Как только лимит достигнут, " "последующие соединения с этого IP-адреса будут отбрасываться до конца " "минуты. Например, значение `/10` ограничивает любой конкретный IP-адрес " "десятью попытками соединения в минуту. 
`max-child-per-ip` ограничивает " "количество дочерних процессов, которые могут быть запущены от имени любого " "отдельного IP-адреса в любой момент времени. Эти опции позволяют ограничить " "чрезмерное потребление ресурсов и помогают предотвратить атаки типа \"Отказ " "в обслуживании\"." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:210 msgid "An example can be seen in the default settings for man:fingerd[8]:" msgstr "Пример можно увидеть в настройках по умолчанию для man:fingerd[8]:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:214 #, no-wrap msgid "finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s\n" msgstr "finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s\n" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:216 #, no-wrap msgid "user" msgstr "user" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:219 msgid "" "The username the daemon will run as. Daemons typically run as `root`, " "`daemon`, or `nobody`." msgstr "" "Имя пользователя, от имени которого будет работать демон. Демоны обычно " "работают от имени `root`, `daemon` или `nobody`." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:220 #, no-wrap msgid "server-program" msgstr "server-program" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:223 msgid "" "The full path to the daemon. If the daemon is a service provided by inetd " "internally, use `internal`." msgstr "" "Полный путь к демону. Если демон является службой, предоставляемой inetd " "внутренне, используйте `internal`." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:224 #, no-wrap msgid "server-program-arguments" msgstr "server-program-arguments" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:227 msgid "" "Used to specify any command arguments to be passed to the daemon on " "invocation. If the daemon is an internal service, use `internal`." msgstr "" "Используется для указания любых аргументов командной строки, передаваемых " "демону при его запуске. Если демон является внутренней службой, используйте " "`internal`." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:229 #, no-wrap msgid "Command-Line Options" msgstr "Параметры командной строки" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:234 msgid "" "Like most server daemons, inetd has a number of options that can be used to " "modify its behavior. By default, inetd is started with `-wW -C 60`. These " "options enable TCP wrappers for all services, including internal services, " "and prevent any IP address from requesting any service more than 60 times " "per minute." msgstr "" "Как и большинство серверных демонов, inetd имеет ряд опций, которые можно " "использовать для изменения его поведения. По умолчанию inetd запускается с " "параметрами `-wW -C 60`. Эти опции включают TCP wrappers для всех сервисов, " "включая внутренние, и предотвращают запросы любого IP-адреса к любому " "сервису чаще 60 раз в минуту." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:237 msgid "" "To change the default options which are passed to inetd, add an entry for " "`inetd_flags` in [.filename]#/etc/rc.conf#. If inetd is already running, " "restart it with `service inetd restart`." msgstr "" "Для изменения параметров по умолчанию, передаваемых inetd, добавьте запись " "`inetd_flags` в файл [.filename]#/etc/rc.conf#. Если inetd уже запущен, " "перезапустите его командой `service inetd restart`." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:239 msgid "The available rate limiting options are:" msgstr "Доступные параметры ограничения частоты запросов:" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:240 #, no-wrap msgid "-c maximum" msgstr "-c maximum" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:243 msgid "" "Specify the default maximum number of simultaneous invocations of each " "service, where the default is unlimited. May be overridden on a per-service " "basis by using `max-child` in [.filename]#/etc/inetd.conf#." msgstr "" "Укажите используемое по умолчанию максимальное количество одновременных " "вызовов каждой службы; если параметр не задан, количество не ограничено. " "Может быть переопределено для каждой службы отдельно с помощью параметра " "`max-child` в [.filename]#/etc/inetd.conf#." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:244 #, no-wrap msgid "-C rate" msgstr "-C rate" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:247 msgid "" "Specify the default maximum number of times a service can be invoked from a " "single IP address per minute. May be overridden on a per-service basis by " "using `max-connections-per-ip-per-minute` in [.filename]#/etc/inetd.conf#." msgstr "" "Укажите максимальное количество вызовов службы с одного IP-адреса в минуту " "по умолчанию. Это значение может быть переопределено для отдельной службы с " "помощью параметра `max-connections-per-ip-per-minute` в файле [.filename]#/" "etc/inetd.conf#." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:248 #, no-wrap msgid "-R rate" msgstr "-R rate" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:251 msgid "" "Specify the maximum number of times a service can be invoked in one minute, " "where the default is `256`. A rate of `0` allows an unlimited number." msgstr "" "Укажите максимальное количество вызовов службы в течение одной минуты; " "значение по умолчанию — `256`. Значение `0` снимает ограничение." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:252 #, no-wrap msgid "-s maximum" msgstr "-s maximum" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:255 msgid "" "Specify the maximum number of times a service can be invoked from a single " "IP address at any one time, where the default is unlimited. May be " "overridden on a per-service basis by using `max-child-per-ip` in [." "filename]#/etc/inetd.conf#." msgstr "" "Укажите максимальное количество одновременных вызовов службы с одного " "IP-адреса; по умолчанию количество не ограничено. Может быть переопределено " "для каждой службы отдельно с помощью параметра `max-child-per-ip` в [." "filename]#/etc/inetd.conf#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:257 msgid "" "Additional options are available. Refer to man:inetd[8] for the full list of " "options." msgstr "" "Доступны дополнительные параметры. Полный список параметров смотрите в man:" "inetd[8]." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:259 #, no-wrap msgid "Security Considerations" msgstr "Безопасность" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:265 msgid "" "Many of the daemons which can be managed by inetd are not security-" "conscious. Some daemons, such as fingerd, can provide information that may " "be useful to an attacker. 
Only enable the services which are needed and " "monitor the system for excessive connection attempts. `max-connections-per-" "ip-per-minute`, `max-child` and `max-child-per-ip` can be used to limit such " "attacks." msgstr "" "Многие демоны, которыми может управлять inetd, не обладают достаточной " "защитой. Некоторые демоны, такие как fingerd, могут предоставлять " "информацию, полезную для злоумышленника. Включайте только необходимые службы " "и отслеживайте систему на предмет чрезмерных попыток подключения. Параметры " "`max-connections-per-ip-per-minute`, `max-child` и `max-child-per-ip` могут " "быть использованы для ограничения подобных атак." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:268 msgid "" "By default, TCP wrappers are enabled. Consult man:hosts_access[5] for more " "information on placing TCP restrictions on various inetd invoked daemons." msgstr "" "По умолчанию TCP wrappers включены. Дополнительную информацию о наложении " "TCP-ограничений на различные демоны, запускаемые через inetd, можно найти в " "man:hosts_access[5]." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:270 #, no-wrap msgid "Network File System (NFS)" msgstr "Сетевая файловая система (NFS — Network File System)" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:274 msgid "" "FreeBSD supports the Network File System (NFS), which allows a server to " "share directories and files with clients over a network. With NFS, users " "and programs can access files on remote systems as if they were stored " "locally." msgstr "" "FreeBSD поддерживает Network File System (NFS), что позволяет серверу " "делиться каталогами и файлами с клиентами по сети. С помощью NFS " "пользователи и программы могут обращаться к файлам на удалённых системах " "так, как если бы они хранились локально." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:277 msgid "NFS has many practical uses. Some of the more common uses include:" msgstr "" "NFS имеет множество практических применений. Некоторые из наиболее " "распространённых вариантов использования включают:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:279 msgid "" "Data that would otherwise be duplicated on each client can be kept in a " "single location and accessed by clients on the network." msgstr "" "Данные, которые в противном случае дублировались бы на каждом клиенте, могут " "храниться в одном месте и быть доступными для клиентов в сети." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:280 msgid "" "Several clients may need access to the [.filename]#/usr/ports/distfiles# " "directory. Sharing that directory allows for quick access to the source " "files without having to download them to each client." msgstr "" "Несколько клиентов могут нуждаться в доступе к каталогу [.filename]#/usr/" "ports/distfiles#. Общий доступ к этому каталогу позволяет быстро получить " "исходные файлы без необходимости загрузки их на каждый клиент." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:281 msgid "" "On large networks, it is often more convenient to configure a central NFS " "server on which all user home directories are stored. Users can log into a " "client anywhere on the network and have access to their home directories." msgstr "" "В крупных сетях часто удобнее настроить центральный NFS-сервер, на котором " "хранятся все домашние каталоги пользователей. Пользователи могут входить в " "систему с любого клиента в сети и получать доступ к своим домашним каталогам." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:282 msgid "" "Administration of NFS exports is simplified. 
For example, there is only one " "file system where security or backup policies must be set." msgstr "" "Управление экспортом NFS упрощено. Например, существует только одна файловая " "система, в которой необходимо настраивать политики безопасности или " "резервного копирования." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:283 msgid "" "Removable media storage devices can be used by other machines on the " "network. This reduces the number of devices throughout the network and " "provides a centralized location to manage their security. It is often more " "convenient to install software on multiple machines from a centralized " "installation media." msgstr "" "Съемные устройства хранения данных могут использоваться другими компьютерами " "в сети. Это уменьшает количество устройств в сети и обеспечивает " "централизованное управление их безопасностью. Часто бывает удобнее " "устанавливать программное обеспечение на несколько компьютеров с " "централизованного носителя для установки." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:287 msgid "" "NFS consists of a server and one or more clients. The client remotely " "accesses the data that is stored on the server machine. In order for this " "to function properly, a few processes have to be configured and running." msgstr "" "NFS состоит из сервера и одного или нескольких клиентов. Клиент удалённо " "получает доступ к данным, хранящимся на машине сервера. Для корректной " "работы необходимо настроить и запустить несколько процессов." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:289 msgid "These daemons must be running on the server:" msgstr "Эти демоны должны быть запущены на сервере:" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:294 #, no-wrap msgid "Daemon" msgstr "Демон" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:297 #: documentation/content/en/books/handbook/network-servers/_index.adoc:559 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1008 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1027 #, no-wrap msgid "Description" msgstr "Описание" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:298 #, no-wrap msgid "nfsd" msgstr "nfsd" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:300 #, no-wrap msgid "The NFS daemon which services requests from NFS clients." msgstr "Демон NFS, обслуживающий запросы от клиентов NFS." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:301 #, no-wrap msgid "mountd" msgstr "mountd" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:303 #, no-wrap msgid "The NFS mount daemon which carries out requests received from nfsd." msgstr "Демон монтирования NFS, который выполняет запросы, полученные от nfsd." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:304 #, no-wrap msgid "rpcbind" msgstr "rpcbind" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:305 #, no-wrap msgid "This daemon allows NFS clients to discover which port the NFS server is using." msgstr "Этот демон позволяет клиентам NFS определять, какой порт использует сервер NFS." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:308 msgid "" "Running man:nfsiod[8] on the client can improve performance, but is not " "required." msgstr "" "Запуск man:nfsiod[8] на клиенте может повысить производительность, но не " "является обязательным." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:310 #, no-wrap msgid "Configuring the Server" msgstr "Настройка сервера" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:316 msgid "" "The file systems which the NFS server will share are specified in [." "filename]#/etc/exports#. Each line in this file specifies a file system to " "be exported, which clients have access to that file system, and any access " "options. When adding entries to this file, each exported file system, its " "properties, and allowed hosts must occur on a single line. If no clients " "are listed in the entry, then any client on the network can mount that file " "system." msgstr "" "Файловые системы, которые сервер NFS будет предоставлять в общий доступ, " "указаны в [.filename]#/etc/exports#. Каждая строка в этом файле определяет " "файловую систему для экспорта, клиентов, которые имеют доступ к этой " "файловой системе, и любые параметры доступа. При добавлении записей в этот " "файл каждая экспортируемая файловая система, её свойства и разрешённые хосты " "должны быть указаны в одной строке. Если в записи не указаны клиенты, то " "любой клиент в сети может подключить эту файловую систему." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:321 msgid "" "The following [.filename]#/etc/exports# entries demonstrate how to export " "file systems. The examples can be modified to match the file systems and " "client names on the reader's network. There are many options that can be " "used in this file, but only a few will be mentioned here. See man:" "exports[5] for the full list of options." msgstr "" "Следующие записи в [.filename]#/etc/exports# демонстрируют, как " "экспортировать файловые системы. Примеры могут быть изменены в соответствии " "с файловыми системами и именами клиентов в сети читателя. В этом файле можно " "использовать множество опций, но здесь упомянуты лишь некоторые. Полный " "список опций смотрите в man:exports[5]." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:323 msgid "" "This example shows how to export [.filename]#/media# to three hosts named " "_alpha_, _bravo_, and _charlie_:" msgstr "" "В этом примере показано, как экспортировать [.filename]#/media# на три хоста " "с именами _alpha_, _bravo_ и _charlie_:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:327 #, no-wrap msgid "/media -ro alpha bravo charlie\n" msgstr "/media -ro alpha bravo charlie\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:332 msgid "" "The `-ro` flag makes the file system read-only, preventing clients from " "making any changes to the exported file system. This example assumes that " "the host names are either in DNS or in [.filename]#/etc/hosts#. Refer to " "man:hosts[5] if the network does not have a DNS server." msgstr "" "Флаг `-ro` делает файловую систему доступной только для чтения, предотвращая " "внесение клиентами изменений в экспортированную файловую систему. В этом " "примере предполагается, что имена хостов находятся либо в DNS, либо в [." "filename]#/etc/hosts#. Обратитесь к man:hosts[5], если в сети нет DNS-" "сервера." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:337 msgid "" "The next example exports [.filename]#/home# to three clients by IP address. " "This can be useful for networks without DNS or [.filename]#/etc/hosts# " "entries. The `-alldirs` flag allows subdirectories to be mount points. In " "other words, it will not automatically mount the subdirectories, but will " "permit the client to mount the directories that are required as needed." msgstr "" "Следующий пример экспортирует [.filename]#/home# трём клиентам по IP-адресу. " "Это может быть полезно для сетей без DNS или записей в [.filename]#/etc/" "hosts#. Флаг `-alldirs` позволяет подкаталогам быть точками монтирования. 
" "Другими словами, он не будет автоматически монтировать подкаталоги, но " "разрешит клиенту монтировать необходимые каталоги по мере надобности." #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:341 #, no-wrap msgid "/usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4\n" msgstr "/usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:346 msgid "" "This next example exports [.filename]#/a# so that two clients from different " "domains may access that file system. The `-maproot=root` allows `root` on " "the remote system to write data on the exported file system as `root`. If `-" "maproot=root` is not specified, the client's `root` user will be mapped to " "the server's `nobody` account and will be subject to the access limitations " "defined for `nobody`." msgstr "" "Следующий пример экспортирует [.filename]#/a#, чтобы два клиента из разных " "доменов могли получить доступ к этой файловой системе. Параметр `-" "maproot=root` позволяет пользователю `root` на удалённой системе записывать " "данные в экспортированную файловую систему как `root`. Если параметр `-" "maproot=root` не указан, пользователь `root` на клиенте будет отображён на " "учётную запись `nobody` на сервере и будет ограничен правами доступа, " "определёнными для `nobody`." #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:350 #, no-wrap msgid "/a -maproot=root host.example.com box.example.org\n" msgstr "/a -maproot=root host.example.com box.example.org\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:354 msgid "" "A client can only be specified once per file system. For example, if [." "filename]#/usr# is a single file system, these entries would be invalid as " "both entries specify the same host:" msgstr "" "Клиент может быть указан только один раз для каждой файловой системы. 
" "Например, если [.filename]#/usr# представляет собой одну файловую систему, " "следующие записи будут недопустимыми, так как обе указывают на один и тот же " "узел:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:360 #, no-wrap msgid "" "# Invalid when /usr is one file system\n" "/usr/src client\n" "/usr/ports client\n" msgstr "" "# Invalid when /usr is one file system\n" "/usr/src client\n" "/usr/ports client\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:363 msgid "The correct format for this situation is to use one entry:" msgstr "Правильный формат для данной ситуации — использовать одну запись:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:367 #, no-wrap msgid "/usr/src /usr/ports client\n" msgstr "/usr/src /usr/ports client\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:370 msgid "" "The following is an example of a valid export list, where [.filename]#/usr# " "and [.filename]#/exports# are local file systems:" msgstr "" "Ниже приведён пример корректного списка экспорта, где [.filename]#/usr# и [." "filename]#/exports# являются локальными файловыми системами:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:381 #, no-wrap msgid "" "# Export src and ports to client01 and client02, but only\n" "# client01 has root privileges on it\n" "/usr/src /usr/ports -maproot=root client01\n" "/usr/src /usr/ports client02\n" "# The client machines have root and can mount anywhere\n" "# on /exports. 
Anyone in the world can mount /exports/obj read-only\n" "/exports -alldirs -maproot=root client01 client02\n" "/exports/obj -ro\n" msgstr "" "# Export src and ports to client01 and client02, but only\n" "# client01 has root privileges on it\n" "/usr/src /usr/ports -maproot=root client01\n" "/usr/src /usr/ports client02\n" "# The client machines have root and can mount anywhere\n" "# on /exports. Anyone in the world can mount /exports/obj read-only\n" "/exports -alldirs -maproot=root client01 client02\n" "/exports/obj -ro\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:384 msgid "" "To enable the processes required by the NFS server at boot time, add these " "options to [.filename]#/etc/rc.conf#:" msgstr "" "Чтобы включить процессы, необходимые для работы сервера NFS при загрузке, " "добавьте следующие параметры в [.filename]#/etc/rc.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:390 #, no-wrap msgid "" "rpcbind_enable=\"YES\"\n" "nfs_server_enable=\"YES\"\n" "mountd_enable=\"YES\"\n" msgstr "" "rpcbind_enable=\"YES\"\n" "nfs_server_enable=\"YES\"\n" "mountd_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:393 msgid "The server can be started now by running this command:" msgstr "Сервер можно запустить, выполнив следующую команду:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:397 #, no-wrap msgid "# service nfsd start\n" msgstr "# service nfsd start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:402 msgid "" "Whenever the NFS server is started, mountd also starts automatically. " "However, mountd only reads [.filename]#/etc/exports# when it is started. 
To " "make subsequent [.filename]#/etc/exports# edits take effect immediately, " "force mountd to reread it:" msgstr "" "Всякий раз, когда запускается сервер NFS, также автоматически запускается " "mountd. Однако mountd читает [.filename]#/etc/exports# только при запуске. " "Чтобы последующие изменения в [.filename]#/etc/exports# вступили в силу " "немедленно, заставьте mountd перечитать его:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:406 #, no-wrap msgid "# service mountd reload\n" msgstr "# service mountd reload\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:409 msgid "" "Refer to man:zfs-share[8] for a description of exporting ZFS datasets via " "NFS using the `sharenfs` ZFS property instead of the man:exports[5] file." msgstr "" "Обратитесь к man:zfs-share[8] для описания экспорта наборов данных ZFS через " "NFS с использованием свойства ZFS `sharenfs` вместо файла man:exports[5]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:411 msgid "Refer to man:nfsv4[4] for a description of an NFS Version 4 setup." msgstr "Обратитесь к man:nfsv4[4] для описания настройки NFS версии 4." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:412 #, no-wrap msgid "Configuring the Client" msgstr "Настройка клиента" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:415 msgid "" "To enable NFS clients, set this option in each client's [.filename]#/etc/rc." "conf#:" msgstr "" "Чтобы включить клиенты NFS, установите эту опцию в файле [.filename]#/etc/rc." "conf# каждого клиента:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:419 #, no-wrap msgid "nfs_client_enable=\"YES\"\n" msgstr "nfs_client_enable=\"YES\"\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:422 msgid "Then, run this command on each NFS client:" msgstr "Затем выполните эту команду на каждом клиенте NFS:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:426 #, no-wrap msgid "# service nfsclient start\n" msgstr "# service nfsclient start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:431 msgid "" "The client now has everything it needs to mount a remote file system. In " "these examples, the server's name is `server` and the client's name is " "`client`. To mount [.filename]#/home# on `server` to the [.filename]#/mnt# " "mount point on `client`:" msgstr "" "Клиент теперь имеет всё необходимое для монтирования удалённой файловой " "системы. В этих примерах имя сервера — `server`, а имя клиента — `client`. " "Чтобы смонтировать [.filename]#/home# с сервера `server` в точку " "монтирования [.filename]#/mnt# на клиенте `client`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:435 #, no-wrap msgid "# mount server:/home /mnt\n" msgstr "# mount server:/home /mnt\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:438 msgid "" "The files and directories in [.filename]#/home# will now be available on " "`client`, in the [.filename]#/mnt# directory." msgstr "" "Файлы и каталоги в [.filename]#/home# теперь будут доступны на `client`, в " "каталоге [.filename]#/mnt#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:440 msgid "" "To mount a remote file system each time the client boots, add it to [." "filename]#/etc/fstab#:" msgstr "" "Для монтирования удаленной файловой системы при каждой загрузке клиента " "добавьте её в [.filename]#/etc/fstab#:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:444 #, no-wrap msgid "server:/home\t/mnt\tnfs\trw\t0\t0\n" msgstr "server:/home\t/mnt\tnfs\trw\t0\t0\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:447 msgid "Refer to man:fstab[5] for a description of all available options." msgstr "Обратитесь к man:fstab[5] для описания всех доступных опций." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:448 #, no-wrap msgid "Locking" msgstr "Блокировка" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:452 msgid "" "Some applications require file locking to operate correctly. To enable " "locking, execute the following command on both the client and server:" msgstr "" "Некоторые приложения требуют блокировки файлов для корректной работы. Чтобы " "включить блокировку, выполните следующую команду как на клиенте, так и на " "сервере:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:456 #, no-wrap msgid "# sysrc rpc_lockd_enable=\"YES\"\n" msgstr "# sysrc rpc_lockd_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:459 msgid "Then start the man:rpc.lockd[8] service:" msgstr "Затем запустите службу man:rpc.lockd[8]:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:463 #, no-wrap msgid "# service lockd start\n" msgstr "# service lockd start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:467 msgid "" "If locking is not required on the server, the NFS client can be configured " "to lock locally by including `-L` when running mount. Refer to man:" "mount_nfs[8] for further details." 
msgstr "" "Если блокировка не требуется на сервере, клиент NFS можно настроить для " "локальной блокировки, добавив параметр `-L` при выполнении команды mount. " "Дополнительные сведения см. в man:mount_nfs[8]." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:469 #, no-wrap msgid "Automating Mounts with man:autofs[5]" msgstr "Автоматизация монтирования с помощью man:autofs[5]" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:476 msgid "" "The man:autofs[5] automount facility is supported starting with FreeBSD 10.1-" "RELEASE. To use the automounter functionality in older versions of FreeBSD, " "use man:amd[8] instead. This chapter only describes the man:autofs[5] " "automounter." msgstr "" "Автомонтирование man:autofs[5] поддерживается начиная с FreeBSD 10.1-" "RELEASE. Для использования функциональности автомонтирования в более старых " "версиях FreeBSD используйте man:amd[8]. В этой главе описывается только " "автомонтирование man:autofs[5]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:482 msgid "" "The man:autofs[5] facility is a common name for several components that, " "together, allow for automatic mounting of remote and local filesystems " "whenever a file or directory within that file system is accessed. It " "consists of the kernel component, man:autofs[5], and several userspace " "applications: man:automount[8], man:automountd[8] and man:autounmountd[8]. " "It serves as an alternative for man:amd[8] from previous FreeBSD releases. " "amd is still provided for backward compatibility purposes, as the two use " "different map formats; the one used by autofs is the same as with other SVR4 " "automounters, such as the ones in Solaris, MacOS X, and Linux." 
msgstr "" "Средство man:autofs[5] — это общее название для нескольких компонентов, " "которые вместе позволяют автоматически монтировать удалённые и локальные " "файловые системы при обращении к файлу или каталогу внутри этих файловых " "систем. Оно состоит из компонента ядра man:autofs[5] и нескольких " "пользовательских приложений: man:automount[8], man:automountd[8] и man:" "autounmountd[8]. Оно служит альтернативой man:amd[8] из предыдущих " "выпусков FreeBSD. amd по-прежнему предоставляется для обратной " "совместимости, так как эти утилиты используют разные форматы карт; формат, " "используемый autofs, совпадает с форматом других автомонтировщиков SVR4, " "таких как в Solaris, MacOS X и Linux." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:484 msgid "" "The man:autofs[5] virtual filesystem is mounted on specified mountpoints by " "man:automount[8], usually invoked during boot." msgstr "" "Виртуальная файловая система man:autofs[5] монтируется на указанные точки " "монтирования с помощью man:automount[8], который обычно запускается во время " "загрузки." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:488 msgid "" "Whenever a process attempts to access a file within the man:autofs[5] " "mountpoint, the kernel will notify man:automountd[8] daemon and pause the " "triggering process. The man:automountd[8] daemon will handle kernel " "requests by finding the proper map and mounting the filesystem according to " "it, then signal the kernel to release blocked process. The man:" "autounmountd[8] daemon automatically unmounts automounted filesystems after " "some time, unless they are still being used." msgstr "" "Всякий раз, когда процесс пытается получить доступ к файлу в точке " "монтирования man:autofs[5], ядро уведомляет демон man:automountd[8] и " "приостанавливает вызвавший процесс.
Демон man:automountd[8] обрабатывает " "запросы ядра, находя соответствующую карту и монтируя файловую систему в " "соответствии с ней, после чего сигнализирует ядру о разблокировке процесса. " "Демон man:autounmountd[8] автоматически размонтирует автомонтируемые " "файловые системы по истечении некоторого времени, если они больше не " "используются." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:491 msgid "" "The primary autofs configuration file is [.filename]#/etc/auto_master#. It " "assigns individual maps to top-level mounts. For an explanation of [." "filename]#auto_master# and the map syntax, refer to man:auto_master[5]." msgstr "" "Основной файл конфигурации autofs — это [.filename]#/etc/auto_master#. Он " "связывает отдельные карты с корневыми точками монтирования. Для объяснения " "синтаксиса [.filename]#auto_master# и карт обратитесь к man:auto_master[5]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:495 msgid "" "There is a special automounter map mounted on [.filename]#/net#. When a " "file is accessed within this directory, man:autofs[5] looks up the " "corresponding remote mount and automatically mounts it. For instance, an " "attempt to access a file within [.filename]#/net/foobar/usr# would tell man:" "automountd[8] to mount the [.filename]#/usr# export from the host `foobar`." msgstr "" "Существует специальная карта автомонтирования, смонтированная в [.filename]#/" "net#. При обращении к файлу в этом каталоге, man:autofs[5] ищет " "соответствующую удалённую точку монтирования и автоматически монтирует её. " "Например, попытка доступа к файлу в [.filename]#/net/foobar/usr# приведёт к " "тому, что man:automountd[8] смонтирует экспорт [.filename]#/usr# с хоста " "`foobar`." #. 
type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:496 #, no-wrap msgid "Mounting an Export with man:autofs[5]" msgstr "Подключение экспорта с помощью man:autofs[5]" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:500 msgid "" "In this example, `showmount -e` shows the exported file systems that can be " "mounted from the NFS server, `foobar`:" msgstr "" "В этом примере `showmount -e` показывает экспортированные файловые системы, " "которые могут быть подключены с NFS-сервера `foobar`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:508 #, no-wrap msgid "" "% showmount -e foobar\n" "Exports list on foobar:\n" "/usr 10.10.10.0\n" "/a 10.10.10.0\n" "% cd /net/foobar/usr\n" msgstr "" "% showmount -e foobar\n" "Exports list on foobar:\n" "/usr 10.10.10.0\n" "/a 10.10.10.0\n" "% cd /net/foobar/usr\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:515 msgid "" "The output from `showmount` shows [.filename]#/usr# as an export. When " "changing directories to [.filename]#/host/foobar/usr#, man:automountd[8] " "intercepts the request and attempts to resolve the hostname `foobar`. If " "successful, man:automountd[8] automatically mounts the source export." msgstr "" "Результат выполнения `showmount` показывает, что [.filename]#/usr# " "экспортируется. При переходе в каталог [.filename]#/host/foobar/usr#, man:" "automountd[8] перехватывает запрос и пытается разрешить имя хоста `foobar`. " "В случае успеха man:automountd[8] автоматически монтирует исходный экспорт." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:517 msgid "" "To enable man:autofs[5] at boot time, add this line to [.filename]#/etc/rc." "conf#:" msgstr "" "Чтобы включить man:autofs[5] при загрузке, добавьте следующую строку в [." "filename]#/etc/rc.conf#:" #. 
type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:521 #, no-wrap msgid "autofs_enable=\"YES\"\n" msgstr "autofs_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:524 msgid "Then man:autofs[5] can be started by running:" msgstr "Затем man:autofs[5] может быть запущен выполнением:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:530 #, no-wrap msgid "" "# service automount start\n" "# service automountd start\n" "# service autounmountd start\n" msgstr "" "# service automount start\n" "# service automountd start\n" "# service autounmountd start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:534 msgid "" "The man:autofs[5] map format is the same as in other operating systems. " "Information about this format from other sources can be useful, like the " "http://web.archive.org/web/20160813071113/http://images.apple.com/business/" "docs/Autofs.pdf[Mac OS X document]." msgstr "" "Формат карты man:autofs[5] такой же, как и в других операционных системах. " "Информация об этом формате из других источников может быть полезной, " "например, из http://web.archive.org/web/20160813071113/http://images.apple." "com/business/docs/Autofs.pdf[документации Mac OS X]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:536 msgid "" "Consult the man:automount[8], man:automountd[8], man:autounmountd[8], and " "man:auto_master[5] manual pages for more information." msgstr "" "Обратитесь к справочным страницам man:automount[8], man:automountd[8], man:" "autounmountd[8] и man:auto_master[5] для получения дополнительной информации." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:538 #, no-wrap msgid "Network Information System (NIS)" msgstr "Сетевая информационная система (NIS)" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:543 msgid "" "Network Information System (NIS) is designed to centralize administration of " "UNIX(R)-like systems such as Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, " "OpenBSD, and FreeBSD. NIS was originally known as Yellow Pages but the name " "was changed due to trademark issues. This is the reason why NIS commands " "begin with `yp`." msgstr "" "Сетевая информационная система (NIS — Network Information System) " "предназначена для централизованного администрирования UNIX(R)-подобных " "систем, таких как Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD и " "FreeBSD. Изначально NIS была известна как Yellow Pages, но название было " "изменено из-за проблем с товарными знаками. Именно поэтому команды NIS " "начинаются с `yp`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:546 msgid "" "NIS is a Remote Procedure Call (RPC)-based client/server system that allows " "a group of machines within an NIS domain to share a common set of " "configuration files. This permits a system administrator to set up NIS " "client systems with only minimal configuration data and to add, remove, or " "modify configuration data from a single location." msgstr "" "NIS — это клиент-серверная система на основе удалённых вызовов процедур " "(RPC), которая позволяет группе машин в домене NIS использовать общий набор " "конфигурационных файлов. Это позволяет системному администратору настраивать " "клиентские системы NIS с минимальным объёмом конфигурационных данных, а " "также добавлять, удалять или изменять конфигурационные данные из единого " "места." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:548 msgid "FreeBSD uses version 2 of the NIS protocol." msgstr "FreeBSD использует вторую версию протокола NIS." #. 
type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:549 #, no-wrap msgid "NIS Terms and Processes" msgstr "Термины и процессы NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:552 msgid "Table 28.1 summarizes the terms and important processes used by NIS:" msgstr "Таблица 28.1 обобщает термины и важные процессы, используемые NIS:" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:553 #, no-wrap msgid "NIS Terminology" msgstr "Терминология NIS" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:557 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1896 #, no-wrap msgid "Term" msgstr "Термин" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:560 #, no-wrap msgid "NIS domain name" msgstr "Имя домена NIS" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:562 #, no-wrap msgid "NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS." msgstr "Серверы и клиенты NIS используют общее имя домена NIS. Как правило, это имя не связано с DNS." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:563 #, no-wrap msgid "man:rpcbind[8]" msgstr "man:rpcbind[8]" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:565 #, no-wrap msgid "This service enables RPC and must be running in order to run an NIS server or act as an NIS client." msgstr "Эта служба включает RPC и должна работать для запуска сервера NIS или работы в качестве клиента NIS." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:566 #, no-wrap msgid "man:ypbind[8]" msgstr "man:ypbind[8]" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:568 #, no-wrap msgid "This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server. It is the core of client/server communication in an NIS environment. If this service is not running on a client machine, it will not be able to access the NIS server." msgstr "Эта служба связывает клиент NIS с его сервером NIS. Она принимает имя домена NIS и использует RPC для подключения к серверу. Это основа клиент-серверного взаимодействия в среде NIS. Если эта служба не запущена на клиентской машине, она не сможет получить доступ к серверу NIS." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:569 #, no-wrap msgid "man:ypserv[8]" msgstr "man:ypserv[8]" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:571 #, no-wrap msgid "This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so hopefully, there is a slave server to take over. Some non-FreeBSD clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients." msgstr "Это процесс сервера NIS. Если эта служба перестанет работать, сервер больше не сможет отвечать на запросы NIS, поэтому остаётся надеяться, что в сети есть подчинённый сервер, который возьмёт его функции на себя. Некоторые клиенты, не относящиеся к FreeBSD, не будут пытаться переподключиться к подчинённому серверу, и процесс ypbind на таких клиентах, возможно, потребуется перезапустить." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:572 #, no-wrap msgid "man:rpc.yppasswdd[8]" msgstr "man:rpc.yppasswdd[8]" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:573 #, no-wrap msgid "This process only runs on NIS master servers.
This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to login to the NIS master server and change their passwords there." msgstr "Этот процесс работает только на основных серверах NIS. Данный демон позволяет клиентам NIS изменять свои пароли NIS. Если этот демон не запущен, пользователям придётся входить на основной сервер NIS и изменять пароли там." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:575 #, no-wrap msgid "Machine Types" msgstr "Типы машин" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:578 msgid "There are three types of hosts in an NIS environment:" msgstr "В среде NIS существует три типа хостов:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:580 msgid "NIS master server" msgstr "Основной сервер NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:584 msgid "" "This server acts as a central repository for host configuration information " "and maintains the authoritative copy of the files used by all of the NIS " "clients. The [.filename]#passwd#, [.filename]#group#, and other various " "files used by NIS clients are stored on the master server. While it is " "possible for one machine to be an NIS master server for more than one NIS " "domain, this type of configuration will not be covered in this chapter as it " "assumes a relatively small-scale NIS environment." msgstr "" "Этот сервер выступает в роли центрального хранилища информации о " "конфигурации хостов и содержит эталонные копии файлов, используемых всеми " "клиентами NIS. Файлы [.filename]#passwd#, [.filename]#group# и другие, " "используемые клиентами NIS, хранятся на основном сервере.
Хотя одна машина " "может быть основным сервером NIS сразу для нескольких доменов NIS, такая " "конфигурация не рассматривается в этой главе, так как предполагается " "относительно небольшая среда NIS." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:585 msgid "NIS slave servers" msgstr "Подчинённые серверы NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:588 msgid "" "NIS slave servers maintain copies of the NIS master's data files in order to " "provide redundancy. Slave servers also help to balance the load of the " "master server as NIS clients always attach to the NIS server which responds " "first." msgstr "" "Подчинённые серверы NIS хранят копии файлов данных основного сервера NIS для " "обеспечения избыточности. Подчинённые серверы также помогают распределить " "нагрузку основного сервера, так как клиенты NIS всегда подключаются к тому " "серверу NIS, который отвечает первым." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:589 msgid "NIS clients" msgstr "Клиенты NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:591 msgid "NIS clients authenticate against the NIS server during log on." msgstr "" "Клиенты NIS проходят аутентификацию на сервере NIS при входе в систему." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:595 msgid "" "Information in many files can be shared using NIS. The [.filename]#master." "passwd#, [.filename]#group#, and [.filename]#hosts# files are commonly " "shared via NIS. Whenever a process on a client needs information that would " "normally be found in these files locally, it makes a query to the NIS server " "that it is bound to instead." msgstr "" "Информация из многих файлов может быть совместно использована с помощью NIS.
" "Файлы [.filename]#master.passwd#, [.filename]#group# и [.filename]#hosts# " "часто распространяются через NIS. Когда процессу на клиенте требуется " "информация, которая обычно находится в этих файлах локально, он отправляет " "запрос к связанному с ним NIS-серверу." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:596 #, no-wrap msgid "Planning Considerations" msgstr "Планирование и подготовка" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:602 msgid "" "This section describes a sample NIS environment which consists of 15 FreeBSD " "machines with no centralized point of administration. Each machine has its " "own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These " "files are kept in sync with each other only through manual intervention. " "Currently, when a user is added to the lab, the process must be repeated on " "all 15 machines." msgstr "" "В этом разделе описывается пример среды NIS, состоящей из 15 машин FreeBSD " "без централизованной точки администрирования. На каждой машине есть свои " "файлы [.filename]#/etc/passwd# и [.filename]#/etc/master.passwd#. Эти файлы " "синхронизируются между собой только вручную. В настоящее время, когда в " "лабораторию добавляется новый пользователь, этот процесс необходимо " "повторять на всех 15 машинах." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:604 msgid "The configuration of the lab will be as follows:" msgstr "Конфигурация лаборатории будет следующей:" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:609 #, no-wrap msgid "Machine name" msgstr "Имя машины" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:610 #, no-wrap msgid "IP address" msgstr "IP-адрес" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:613 #, no-wrap msgid "Machine role" msgstr "Роль машины" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:614 #, no-wrap msgid "`ellington`" msgstr "`ellington`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:615 #, no-wrap msgid "`10.0.0.2`" msgstr "`10.0.0.2`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:617 #, no-wrap msgid "NIS master" msgstr "Основной сервер NIS" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:618 #, no-wrap msgid "`coltrane`" msgstr "`coltrane`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:619 #, no-wrap msgid "`10.0.0.3`" msgstr "`10.0.0.3`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:621 #, no-wrap msgid "NIS slave" msgstr "Подчиненный сервер NIS" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:622 #, no-wrap msgid "`basie`" msgstr "`basie`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:623 #, no-wrap msgid "`10.0.0.4`" msgstr "`10.0.0.4`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:625 #, no-wrap msgid "Faculty workstation" msgstr "Факультетская рабочая станция" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:626 #, no-wrap msgid "`bird`" msgstr "`bird`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:627 #, no-wrap msgid "`10.0.0.5`" msgstr "`10.0.0.5`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:629 #, no-wrap msgid "Client machine" msgstr "Клиентская машина" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:630 #, no-wrap msgid "`cli[1-11]`" msgstr "`cli[1-11]`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:631 #, no-wrap msgid "`10.0.0.[6-17]`" msgstr "`10.0.0.[6-17]`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:632 #, no-wrap msgid "Other client machines" msgstr "Другие клиентские машины" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:636 msgid "" "If this is the first time an NIS scheme is being developed, it should be " "thoroughly planned ahead of time. Regardless of network size, several " "decisions need to be made as part of the planning process." msgstr "" "Если схема NIS разрабатывается впервые, её следует тщательно спланировать " "заранее. Независимо от размера сети, в процессе планирования необходимо " "принять несколько решений." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:637 #, no-wrap msgid "Choosing a NIS Domain Name" msgstr "Выбор имени домена NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:642 msgid "" "When a client broadcasts its requests for info, it includes the name of the " "NIS domain that it is part of. This is how multiple servers on one network " "can tell which server should answer which request. Think of the NIS domain " "name as the name for a group of hosts." msgstr "" "Когда клиент рассылает широковещательные запросы на получение информации, он " "включает имя домена NIS, к которому принадлежит. Таким образом, несколько " "серверов в одной сети могут определить, какой сервер должен отвечать на " "конкретный запрос. Имя домена NIS можно рассматривать как имя группы " "хостов." #.
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:648 msgid "" "Some organizations choose to use their Internet domain name for their NIS " "domain name. This is not recommended as it can cause confusion when trying " "to debug network problems. The NIS domain name should be unique within the " "network and it is helpful if it describes the group of machines it " "represents. For example, the Art department at Acme Inc. might be in the " "\"acme-art\" NIS domain. This example will use the domain name `test-" "domain`." msgstr "" "Некоторые организации предпочитают использовать имя своего интернет-домена " "в качестве имени домена NIS. Это не рекомендуется, так как может вызвать " "путаницу при попытках отладки сетевых проблем. Имя домена NIS должно быть " "уникальным в пределах сети, и полезно, если оно описывает группу машин, " "которую представляет. Например, художественный отдел компании Acme Inc. " "может находиться в домене NIS \"acme-art\". В этом примере будет " "использоваться имя домена `test-domain`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:651 msgid "" "However, some non-FreeBSD operating systems require the NIS domain name to " "be the same as the Internet domain name. If one or more machines on the " "network have this restriction, the Internet domain name _must_ be used as " "the NIS domain name." msgstr "" "Однако некоторые операционные системы, отличные от FreeBSD, требуют, чтобы " "имя домена NIS совпадало с именем интернет-домена. Если одна или несколько " "машин в сети имеют это ограничение, _необходимо_ использовать имя интернет-" "домена в качестве имени домена NIS." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:652 #, no-wrap msgid "Physical Server Requirements" msgstr "Требования к физическому серверу" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:659 msgid "" "There are several things to keep in mind when choosing a machine to use as a " "NIS server. Since NIS clients depend upon the availability of the server, " "choose a machine that is not rebooted frequently. The NIS server should " "ideally be a stand alone machine whose sole purpose is to be an NIS server. " "If the network is not heavily used, it is acceptable to put the NIS server " "on a machine running other services. However, if the NIS server becomes " "unavailable, it will adversely affect all NIS clients." msgstr "" "Есть несколько моментов, которые следует учитывать при выборе машины для " "использования в качестве сервера NIS. Поскольку клиенты NIS зависят от " "доступности сервера, следует выбрать машину, которая не перезагружается " "часто. Идеально, чтобы сервер NIS был отдельной машиной, единственной целью " "которой является быть сервером NIS. Если сеть не сильно загружена, допустимо " "разместить сервер NIS на машине, где выполняются другие службы. Однако, если " "сервер NIS станет недоступен, это негативно скажется на всех клиентах NIS." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:660 #, no-wrap msgid "Configuring the NIS Master Server" msgstr "Настройка основного сервера NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:667 msgid "" "The canonical copies of all NIS files are stored on the master server. The " "databases used to store the information are called NIS maps. In FreeBSD, " "these maps are stored in [.filename]#/var/yp/[domainname]# where [." "filename]#[domainname]# is the name of the NIS domain. Since multiple " "domains are supported, it is possible to have several directories, one for " "each domain. Each domain will have its own independent set of maps." msgstr "" "Канонические копии всех NIS-файлов хранятся на основном сервере. 
Базы " "данных, используемые для хранения информации, называются NIS-картами. В " "FreeBSD эти карты хранятся в [.filename]#/var/yp/[domainname]#, где [." "filename]#[domainname]# — это имя NIS-домена. Поскольку поддерживается " "несколько доменов, возможно наличие нескольких каталогов, по одному для " "каждого домена. Каждый домен будет иметь свой независимый набор карт." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:670 msgid "" "NIS master and slave servers handle all NIS requests through man:ypserv[8]. " "This daemon is responsible for receiving incoming requests from NIS clients, " "translating the requested domain and map name to a path to the corresponding " "database file, and transmitting data from the database back to the client." msgstr "" "Основные и подчинённые серверы NIS обрабатывают все запросы NIS через man:" "ypserv[8]. Этот демон отвечает за приём входящих запросов от клиентов NIS, " "преобразование запрошенного домена и имени карты в путь к соответствующему " "файлу базы данных и передачу данных из базы обратно клиенту." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:673 msgid "" "Setting up a master NIS server can be relatively straight forward, depending " "on environmental needs. Since FreeBSD provides built-in NIS support, it " "only needs to be enabled by adding the following lines to [.filename]#/etc/" "rc.conf#:" msgstr "" "Настройка основного NIS-сервера может быть относительно простой, в " "зависимости от потребностей окружения. Поскольку FreeBSD предоставляет " "встроенную поддержку NIS, её достаточно включить, добавив следующие строки в " "[.filename]#/etc/rc.conf#:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:679 #, no-wrap msgid "" "nisdomainname=\"test-domain\"\t<.>\n" "nis_server_enable=\"YES\"\t\t<.>\n" "nis_yppasswdd_enable=\"YES\"\t<.>\n" msgstr "" "nisdomainname=\"test-domain\"\t<.>\n" "nis_server_enable=\"YES\"\t\t<.>\n" "nis_yppasswdd_enable=\"YES\"\t<.>\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:682 msgid "This line sets the NIS domain name to `test-domain`." msgstr "Эта строка устанавливает имя домена NIS в `test-domain`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:683 msgid "" "This automates the start up of the NIS server processes when the system " "boots." msgstr "Это автоматизирует запуск процессов сервера NIS при загрузке системы." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:684 msgid "" "This enables the man:rpc.yppasswdd[8] daemon so that users can change their " "NIS password from a client machine." msgstr "" "Это включает демон man:rpc.yppasswdd[8], позволяющий пользователям изменять " "свой NIS-пароль с клиентской машины." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:689 msgid "" "Care must be taken in a multi-server domain where the server machines are " "also NIS clients. It is generally a good idea to force the servers to bind " "to themselves rather than allowing them to broadcast bind requests and " "possibly become bound to each other. Strange failure modes can result if " "one server goes down and others are dependent upon it. Eventually, all the " "clients will time out and attempt to bind to other servers, but the delay " "involved can be considerable and the failure mode is still present since the " "servers might bind to each other all over again." msgstr "" "В многосерверном домене, где серверные машины также являются клиентами NIS, " "необходимо соблюдать осторожность. 
Обычно рекомендуется принудительно " "заставлять серверы привязываться к самим себе, а не разрешать им рассылать " "запросы на привязку и потенциально привязываться друг к другу. Могут " "возникнуть странные режимы сбоев, если один сервер выйдет из строя, а другие " "будут зависеть от него. В конечном итоге все клиенты превысят время ожидания " "и попытаются привязаться к другим серверам, но задержка может быть " "значительной, а режим сбоя сохранится, поскольку серверы могут снова " "привязаться друг к другу." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:691 msgid "" "A server that is also a client can be forced to bind to a particular server " "by adding these additional lines to [.filename]#/etc/rc.conf#:" msgstr "" "Сервер, который также является клиентом, может быть принудительно привязан к " "определённому серверу путём добавления следующих строк в [.filename]#/etc/rc." "conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:696 #, no-wrap msgid "" "nis_client_enable=\"YES\"\t\t\t\t<.>\n" "nis_client_flags=\"-S test-domain,server\"\t<.>\n" msgstr "" "nis_client_enable=\"YES\"\t\t\t\t<.>\n" "nis_client_flags=\"-S test-domain,server\"\t<.>\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:699 msgid "This enables running client stuff as well." msgstr "Это также включает запуск клиентской части NIS." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:700 msgid "This line sets the NIS domain name to `test-domain` and bind to itself." msgstr "" "Эта строка устанавливает имя домена NIS в `test-domain` и привязывает сервер " "к самому себе." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:703 msgid "" "After saving the edits, type `/etc/netstart` to restart the network and " "apply the values defined in [.filename]#/etc/rc.conf#. 
Before initializing " "the NIS maps, start man:ypserv[8]:" msgstr "" "После сохранения изменений введите `/etc/netstart`, чтобы перезапустить сеть " "и применить значения, указанные в [.filename]#/etc/rc.conf#. Перед " "инициализацией карт NIS запустите man:ypserv[8]:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:707 #, no-wrap msgid "# service ypserv start\n" msgstr "# service ypserv start\n" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:710 #, no-wrap msgid "Initializing the NIS Maps" msgstr "Инициализация карт NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:715 msgid "" "NIS maps are generated from the configuration files in [.filename]#/etc# on " "the NIS master, with one exception: [.filename]#/etc/master.passwd#. This " "is to prevent the propagation of passwords to all the servers in the NIS " "domain. Therefore, before the NIS maps are initialized, configure the " "primary password files:" msgstr "" "NIS-карты создаются из конфигурационных файлов в [.filename]#/etc# на NIS-" "мастере, за исключением одного: [.filename]#/etc/master.passwd#. Это сделано " "для предотвращения распространения паролей на все серверы в NIS-домене. " "Поэтому перед инициализацией NIS-карт необходимо настроить основные файлы " "паролей:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:721 #, no-wrap msgid "" "# cp /etc/master.passwd /var/yp/master.passwd\n" "# cd /var/yp\n" "# vi master.passwd\n" msgstr "" "# cp /etc/master.passwd /var/yp/master.passwd\n" "# cd /var/yp\n" "# vi master.passwd\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:724 msgid "" "It is advisable to remove all entries for system accounts as well as any " "user accounts that do not need to be propagated to the NIS clients, such as " "the `root` and any other administrative accounts." msgstr "" "Рекомендуется удалить все записи системных учетных записей, а также любые " "пользовательские учетные записи, которые не нужно распространять на клиенты " "NIS, такие как `root` и другие административные учетные записи." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:728 msgid "" "Ensure that the [.filename]#/var/yp/master.passwd# is neither group or world " "readable by setting its permissions to `600`." msgstr "" "Убедитесь, что файл [.filename]#/var/yp/master.passwd# недоступен для " "чтения группе или всем, установив его права доступа на `600`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:733 msgid "" "After completing this task, initialize the NIS maps. FreeBSD includes the " "man:ypinit[8] script to do this. When generating maps for the master " "server, include `-m` and specify the NIS domain name:" msgstr "" "После завершения этой задачи инициализируйте карты NIS. Для этого FreeBSD " "включает скрипт man:ypinit[8]. При создании карт для основного сервера " "укажите `-m` и задайте имя домена NIS:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:754 #, no-wrap msgid "" "ellington# ypinit -m test-domain\n" "Server Type: MASTER Domain: test-domain\n" "Creating an YP server will require that you answer a few questions.\n" "Questions will all be asked at the beginning of the procedure.\n" "Do you want this procedure to quit on non-fatal errors? 
[y/n: n] n\n" "Ok, please remember to go back and redo manually whatever fails.\n" "If not, something might not work.\n" "At this point, we have to construct a list of this domains YP servers.\n" "rod.darktech.org is already known as master server.\n" "Please continue to add any slave servers, one per line. When you are\n" "done with the list, type a .\n" "master server : ellington\n" "next host to add: coltrane\n" "next host to add: ^D\n" "The current list of NIS servers looks like this:\n" "ellington\n" "coltrane\n" "Is this correct? [y/n: y] y\n" msgstr "" "ellington# ypinit -m test-domain\n" "Server Type: MASTER Domain: test-domain\n" "Creating an YP server will require that you answer a few questions.\n" "Questions will all be asked at the beginning of the procedure.\n" "Do you want this procedure to quit on non-fatal errors? [y/n: n] n\n" "Ok, please remember to go back and redo manually whatever fails.\n" "If not, something might not work.\n" "At this point, we have to construct a list of this domains YP servers.\n" "rod.darktech.org is already known as master server.\n" "Please continue to add any slave servers, one per line. When you are\n" "done with the list, type a .\n" "master server : ellington\n" "next host to add: coltrane\n" "next host to add: ^D\n" "The current list of NIS servers looks like this:\n" "ellington\n" "coltrane\n" "Is this correct? [y/n: y] y\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:756 #, no-wrap msgid "[..output from map generation..]\n" msgstr "[..output from map generation..]\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:759 #, no-wrap msgid "" "NIS Map update completed.\n" "ellington has been setup as an YP master server without any errors.\n" msgstr "" "NIS Map update completed.\n" "ellington has been setup as an YP master server without any errors.\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:764 msgid "" "This will create [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/" "Makefile.dist#. By default, this file assumes that the environment has a " "single NIS server with only FreeBSD clients. Since `test-domain` has a " "slave server, edit this line in [.filename]#/var/yp/Makefile# so that it " "begins with a comment (`+#+`):" msgstr "" "Это создаст файл [.filename]#/var/yp/Makefile# на основе [.filename]#/var/yp/" "Makefile.dist#. По умолчанию этот файл предполагает, что в окружении есть " "единственный NIS-сервер только с клиентами FreeBSD. Поскольку у `test-" "domain` есть подчиненный сервер, отредактируйте эту строку в [.filename]#/" "var/yp/Makefile#, чтобы она начиналась с комментария (`+#+`):" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:768 #, no-wrap msgid "NOPUSH = \"True\"\n" msgstr "NOPUSH = \"True\"\n" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:771 #, no-wrap msgid "Adding New Users" msgstr "Добавление новых пользователей" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:776 msgid "" "Every time a new user is created, the user account must be added to the " "master NIS server and the NIS maps rebuilt. Until this occurs, the new user " "will not be able to login anywhere except on the NIS master. For example, " "to add the new user `jsmith` to the `test-domain` domain, run these commands " "on the master server:" msgstr "" "Каждый раз при создании нового пользователя учетная запись должна быть " "добавлена на основной NIS-сервер, а NIS-карты должны быть перестроены. До " "этого новый пользователь не сможет войти в систему нигде, кроме основного NIS-" "сервера. Например, чтобы добавить нового пользователя `jsmith` в домен `test-" "domain`, выполните следующие команды на основном сервере:" #. 
type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:782 #, no-wrap msgid "" "# pw useradd jsmith\n" "# cd /var/yp\n" "# make test-domain\n" msgstr "" "# pw useradd jsmith\n" "# cd /var/yp\n" "# make test-domain\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:785 msgid "" "The user could also be added using `adduser jsmith` instead of `pw useradd " "smith`." msgstr "" "Пользователь также может быть добавлен с помощью `adduser jsmith` вместо `pw " "useradd smith`." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:786 #, no-wrap msgid "Setting up a NIS Slave Server" msgstr "Настройка подчиненного сервера NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:792 msgid "" "To set up an NIS slave server, log on to the slave server and edit [." "filename]#/etc/rc.conf# as for the master server. Do not generate any NIS " "maps, as these already exist on the master server. When running `ypinit` on " "the slave server, use `-s` (for slave) instead of `-m` (for master). This " "option requires the name of the NIS master in addition to the domain name, " "as seen in this example:" msgstr "" "Для настройки подчиненного сервера NIS войдите на подчиненный сервер и " "отредактируйте [.filename]#/etc/rc.conf#, как для основного сервера. Не " "генерируйте карты NIS, так как они уже существуют на основном сервере. При " "запуске `ypinit` на подчиненном сервере используйте `-s` (для подчиненного) " "вместо `-m` (для основного). Эта опция требует указания имени основного " "сервера NIS в дополнение к имени домена, как показано в этом примере:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:796 #, no-wrap msgid "coltrane# ypinit -s ellington test-domain\n" msgstr "coltrane# ypinit -s ellington test-domain\n" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:798 #, no-wrap msgid "Server Type: SLAVE Domain: test-domain Master: ellington\n" msgstr "Server Type: SLAVE Domain: test-domain Master: ellington\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:801 #, no-wrap msgid "" "Creating an YP server will require that you answer a few questions.\n" "Questions will all be asked at the beginning of the procedure.\n" msgstr "" "Creating an YP server will require that you answer a few questions.\n" "Questions will all be asked at the beginning of the procedure.\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:803 #, no-wrap msgid "Do you want this procedure to quit on non-fatal errors? [y/n: n] n\n" msgstr "Do you want this procedure to quit on non-fatal errors? [y/n: n] n\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:848 #, no-wrap msgid "" "Ok, please remember to go back and redo manually whatever fails.\n" "If not, something might not work.\n" "There will be no further questions. 
The remainder of the procedure\n" "should take a few minutes, to copy the databases from ellington.\n" "Transferring netgroup...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netgroup.byuser...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netgroup.byhost...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring master.passwd.byuid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring passwd.byuid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring passwd.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring group.bygid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring group.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring services.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring rpc.bynumber...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring rpc.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring protocols.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring master.passwd.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring networks.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring networks.byaddr...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netid.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring hosts.byaddr...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring protocols.bynumber...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring ypservers...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring hosts.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" msgstr "" "Ok, please remember to go back and redo manually whatever fails.\n" "If not, something might not work.\n" "There will be no further questions. 
The remainder of the procedure\n" "should take a few minutes, to copy the databases from ellington.\n" "Transferring netgroup...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netgroup.byuser...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netgroup.byhost...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring master.passwd.byuid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring passwd.byuid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring passwd.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring group.bygid...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring group.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring services.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring rpc.bynumber...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring rpc.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring protocols.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring master.passwd.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring networks.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring networks.byaddr...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring netid.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring hosts.byaddr...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring protocols.bynumber...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring ypservers...\n" "ypxfr: Exiting: Map successfully transferred\n" "Transferring hosts.byname...\n" "ypxfr: Exiting: Map successfully transferred\n" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:851 #, no-wrap msgid "" "coltrane has been setup as an YP slave server without any errors.\n" "Remember to update map ypservers on ellington.\n" msgstr "" "coltrane has been setup as an YP slave server without any errors.\n" "Remember to update map ypservers on ellington.\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:855 msgid "" "This will generate a directory on the slave server called [.filename]#/var/" "yp/test-domain# which contains copies of the NIS master server's maps. " "Adding these [.filename]#/etc/crontab# entries on each slave server will " "force the slaves to sync their maps with the maps on the master server:" msgstr "" "Это создаст каталог на подчиненном сервере с именем [.filename]#/var/yp/test-" "domain#, который содержит копии карт основного сервера NIS. Добавление этих " "записей в [.filename]#/etc/crontab# на каждом подчиненном сервере заставит " "их синхронизировать свои карты с картами на основном сервере:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:860 #, no-wrap msgid "" "20 * * * * root /usr/libexec/ypxfr passwd.byname\n" "21 * * * * root /usr/libexec/ypxfr passwd.byuid\n" msgstr "" "20 * * * * root /usr/libexec/ypxfr passwd.byname\n" "21 * * * * root /usr/libexec/ypxfr passwd.byuid\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:865 msgid "" "These entries are not mandatory because the master server automatically " "attempts to push any map changes to its slaves. However, since clients may " "depend upon the slave server to provide correct password information, it is " "recommended to force frequent password map updates. This is especially " "important on busy networks where map updates might not always complete." 
msgstr "" "Эти записи не являются обязательными, поскольку основной сервер " "автоматически пытается передать любые изменения карт своим подчинённым " "серверам. Однако, поскольку клиенты могут зависеть от подчинённого сервера " "для предоставления корректной информации о паролях, рекомендуется " "принудительно выполнять частые обновления карт паролей. Это особенно важно в " "загруженных сетях, где обновления карт могут не всегда завершаться." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:867 msgid "" "To finish the configuration, run `/etc/netstart` on the slave server in " "order to start the NIS services." msgstr "" "Для завершения настройки выполните `/etc/netstart` на подчинённом сервере, " "чтобы запустить службы NIS." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:868 #, no-wrap msgid "Setting Up an NIS Client" msgstr "Настройка клиента NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:877 msgid "" "An NIS client binds to an NIS server using man:ypbind[8]. This daemon " "broadcasts RPC requests on the local network. These requests specify the " "domain name configured on the client. If an NIS server in the same domain " "receives one of the broadcasts, it will respond to ypbind, which will record " "the server's address. If there are several servers available, the client " "will use the address of the first server to respond and will direct all of " "its NIS requests to that server. The client will automatically ping the " "server on a regular basis to make sure it is still available. If it fails " "to receive a reply within a reasonable amount of time, ypbind will mark the " "domain as unbound and begin broadcasting again in the hopes of locating " "another server." msgstr "" "Клиент NIS связывается с сервером NIS с помощью man:ypbind[8]. Этот демон " "рассылает RPC-запросы в локальной сети. 
Эти запросы указывают доменное имя, " "настроенное на клиенте. Если NIS-сервер в том же домене получает один из " "таких запросов, он отвечает, и ypbind записывает адрес сервера. Если " "доступно несколько серверов, клиент будет использовать адрес первого " "ответившего сервера и направлять все свои NIS-запросы к нему. Клиент " "автоматически отправляет ping-запросы серверу через регулярные промежутки " "времени, чтобы убедиться, что он всё ещё доступен. Если ответ не получен в " "разумные сроки, ypbind пометит домен как несвязанный и снова начнёт рассылку " "запросов в надежде найти другой сервер." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:879 msgid "To configure a FreeBSD machine to be an NIS client:" msgstr "Для настройки машины FreeBSD в качестве клиента NIS:" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:883 msgid "" "Edit [.filename]#/etc/rc.conf# and add the following lines in order to set " "the NIS domain name and start man:ypbind[8] during network startup:" msgstr "" "Отредактируйте файл [.filename]#/etc/rc.conf# и добавьте следующие строки, " "чтобы установить имя домена NIS и запустить man:ypbind[8] при старте сети:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:888 #, no-wrap msgid "" "nisdomainname=\"test-domain\"\n" "nis_client_enable=\"YES\"\n" msgstr "" "nisdomainname=\"test-domain\"\n" "nis_client_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:891 msgid "" "To import all possible password entries from the NIS server, use `vipw` to " "remove all user accounts except one from [.filename]#/etc/master.passwd#. " "When removing the accounts, keep in mind that at least one local account " "should remain and this account should be a member of `wheel`. 
If there is a " "problem with NIS, this local account can be used to log in remotely, become " "the superuser, and fix the problem. Before saving the edits, add the " "following line to the end of the file:" msgstr "" "Для импорта всех возможных записей паролей с сервера NIS используйте " "`vipw`, чтобы удалить все учетные записи пользователей, кроме одной, из [." "filename]#/etc/master.passwd#. При удалении учетных записей учитывайте, что " "хотя бы одна локальная учетная запись должна остаться, и эта учетная запись " "должна быть членом группы `wheel`. Если возникнут проблемы с NIS, эту " "локальную учетную запись можно использовать для удаленного входа, получения " "прав суперпользователя и устранения проблемы. Перед сохранением изменений " "добавьте следующую строку в конец файла:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:895 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1116 #, no-wrap msgid "+:::::::::\n" msgstr "+:::::::::\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:901 msgid "" "This line configures the client to provide anyone with a valid account in " "the NIS server's password maps an account on the client. There are many " "ways to configure the NIS client by modifying this line. One method is " "described in crossref:network-servers[network-netgroups, Using Netgroups]. " "For more detailed reading, refer to the book `Managing NFS and NIS`, " "published by O'Reilly Media." msgstr "" "Эта строка настраивает клиент так, чтобы любой пользователь, имеющий " "действительную учётную запись в картах паролей сервера NIS, получал учётную " "запись на клиенте. Существует множество способов настройки NIS-клиента путём " "изменения этой строки. Один из методов описан в crossref:network-" "servers[network-netgroups, Использование групп сети]. 
Для более подробного " "ознакомления обратитесь к книге `Managing NFS and NIS`, опубликованной " "O'Reilly Media." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:902 msgid "" "To import all possible group entries from the NIS server, add this line to [." "filename]#/etc/group#:" msgstr "" "Для импорта всех возможных записей групп с сервера NIS добавьте следующую " "строку в [.filename]#/etc/group#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:906 #, no-wrap msgid "+:*::\n" msgstr "+:*::\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:910 msgid "" "To start the NIS client immediately, execute the following commands as the " "superuser:" msgstr "" "Для немедленного запуска клиента NIS выполните следующие команды от имени " "суперпользователя:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:915 #, no-wrap msgid "" "# /etc/netstart\n" "# service ypbind start\n" msgstr "" "# /etc/netstart\n" "# service ypbind start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:918 msgid "" "After completing these steps, running `ypcat passwd` on the client should " "show the server's [.filename]#passwd# map." msgstr "" "После выполнения этих шагов, выполнение команды `ypcat passwd` на клиенте " "должно отобразить карту [.filename]#passwd# сервера." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:919 #, no-wrap msgid "NIS Security" msgstr "Безопасность NIS" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:927 msgid "" "Since RPC is a broadcast-based service, any system running ypbind within the " "same domain can retrieve the contents of the NIS maps. 
To prevent " "unauthorized transactions, man:ypserv[8] supports a feature called " "\"securenets\" which can be used to restrict access to a given set of " "hosts. By default, this information is stored in [.filename]#/var/yp/" "securenets#, unless man:ypserv[8] is started with `-p` and an alternate " "path. This file contains entries that consist of a network specification " "and a network mask separated by white space. Lines starting with `+\"#\"+` " "are considered to be comments. A sample [.filename]#securenets# might look " "like this:" msgstr "" "Поскольку RPC — это широковещательный сервис, любая система, запускающая " "ypbind в том же домене, может получить содержимое NIS-карт. Чтобы " "предотвратить несанкционированные операции, man:ypserv[8] поддерживает " "функцию под названием \"securenets\", которая может использоваться для " "ограничения доступа к определённому набору хостов. По умолчанию эта " "информация хранится в [.filename]#/var/yp/securenets#, если только man:" "ypserv[8] не запущен с ключом `-p` и альтернативным путём. Этот файл " "содержит записи, состоящие из спецификации сети и сетевой маски, разделённых " "пробелами. Строки, начинающиеся с `+\"#\"+`, считаются комментариями. Пример " "файла [.filename]#securenets# может выглядеть так:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:939 #, no-wrap msgid "" "# allow connections from local host -- mandatory\n" "127.0.0.1 255.255.255.255\n" "# allow connections from any host\n" "# on the 192.168.128.0 network\n" "192.168.128.0 255.255.255.0\n" "# allow connections from any host\n" "# between 10.0.0.0 to 10.0.15.255\n" "# this includes the machines in the testlab\n" "10.0.0.0 255.255.240.0\n" msgstr "" "# allow connections from local host -- mandatory\n" "127.0.0.1 255.255.255.255\n" "# allow connections from any host\n" "# on the 192.168.128.0 network\n" "192.168.128.0 255.255.255.0\n" "# allow connections from any host\n" "# between 10.0.0.0 to 10.0.15.255\n" "# this includes the machines in the testlab\n" "10.0.0.0 255.255.240.0\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:944 msgid "" "If man:ypserv[8] receives a request from an address that matches one of " "these rules, it will process the request normally. If the address fails to " "match a rule, the request will be ignored and a warning message will be " "logged. If the [.filename]#securenets# does not exist, `ypserv` will allow " "connections from any host." msgstr "" "Если man:ypserv[8] получает запрос от адреса, соответствующего одному из " "этих правил, он обработает запрос как обычно. Если адрес не соответствует ни " "одному правилу, запрос будет проигнорирован и в журнал будет записано " "предупреждение. Если файл [.filename]#securenets# не существует, `ypserv` " "разрешит соединения с любого хоста." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:948 msgid "" "crossref:security[tcpwrappers,\"TCP Wrapper\"] is an alternate mechanism for " "providing access control instead of [.filename]#securenets#. While either " "access control mechanism adds some security, they are both vulnerable to " "\"IP spoofing\" attacks. 
All NIS-related traffic should be blocked at the " "firewall." msgstr "" "crossref:security[tcpwrappers,\"TCP Wrapper\"] — это альтернативный механизм " "контроля доступа вместо [.filename]#securenets#. Хотя оба механизма контроля " "доступа добавляют некоторый уровень безопасности, они оба уязвимы к атакам " "\"подмены IP\". Весь трафик, связанный с NIS, должен блокироваться на " "межсетевом экране." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:952 msgid "" "Servers using [.filename]#securenets# may fail to serve legitimate NIS " "clients with archaic TCP/IP implementations. Some of these implementations " "set all host bits to zero when doing broadcasts or fail to observe the " "subnet mask when calculating the broadcast address. While some of these " "problems can be fixed by changing the client configuration, other problems " "may force the retirement of these client systems or the abandonment of [." "filename]#securenets#." msgstr "" "Серверы, использующие [.filename]#securenets#, могут не обслуживать " "легитимных клиентов NIS с устаревшими реализациями TCP/IP. Некоторые из этих " "реализаций устанавливают все биты хоста в ноль при выполнении " "широковещательных запросов или не учитывают маску подсети при вычислении " "широковещательного адреса. Хотя некоторые из этих проблем можно устранить, " "изменив конфигурацию клиента, другие проблемы могут потребовать вывода из " "эксплуатации этих клиентских систем или отказа от [.filename]#securenets#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:956 msgid "" "The use of TCP Wrapper increases the latency of the NIS server. The " "additional delay may be long enough to cause timeouts in client programs, " "especially in busy networks with slow NIS servers. If one or more clients " "suffer from latency, convert those clients into NIS slave servers and force " "them to bind to themselves." 
msgstr "" "Использование TCP Wrapper увеличивает задержку сервера NIS. Дополнительная " "задержка может быть достаточно длительной, чтобы вызвать таймауты в " "клиентских программах, особенно в загруженных сетях с медленными серверами " "NIS. Если один или несколько клиентов страдают от задержек, преобразуйте " "этих клиентов в подчинённые серверы NIS и заставьте их привязываться к самим " "себе." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:957 #, no-wrap msgid "Barring Some Users" msgstr "Запрет доступа некоторым пользователям" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:962 msgid "" "In this example, the `basie` system is a faculty workstation within the NIS " "domain. The [.filename]#passwd# map on the master NIS server contains " "accounts for both faculty and students. This section demonstrates how to " "allow faculty logins on this system while refusing student logins." msgstr "" "В этом примере система `basie` является рабочей станцией преподавателя в " "домене NIS. Файл [.filename]#passwd# на главном сервере NIS содержит учетные " "записи как преподавателей, так и студентов. В этом разделе показано, как " "разрешить вход преподавателей в эту систему, запретив вход студентам." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:966 msgid "" "To prevent specified users from logging on to a system, even if they are " "present in the NIS database, use `vipw` to add `-_username_` with the " "correct number of colons towards the end of [.filename]#/etc/master.passwd# " "on the client, where _username_ is the username of a user to bar from " "logging in. The line with the blocked user must be before the `+` line that " "allows NIS users. 
In this example, `bill` is barred from logging on to " "`basie`:" msgstr "" "Чтобы предотвратить вход определенных пользователей в систему, даже если они " "присутствуют в базе данных NIS, используйте `vipw` для добавления `-" "_имя_пользователя_` с правильным количеством двоеточий в конце файла [." "filename]#/etc/master.passwd# на клиенте, где _имя_пользователя_ — это имя " "пользователя, которому запрещен вход. Строка с заблокированным пользователем " "должна находиться перед строкой `+`, которая разрешает вход пользователям " "NIS. В этом примере пользователю `bill` запрещен вход на `basie`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:987 #, no-wrap msgid "" "basie# cat /etc/master.passwd\n" "root:[password]:0:0::0:0:The super-user:/root:/bin/csh\n" "toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh\n" "daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin\n" "operator:*:2:5::0:0:System &:/:/usr/sbin/nologin\n" "bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin\n" "tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin\n" "kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin\n" "games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin\n" "news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin\n" "man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/usr/sbin/nologin\n" "bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin\n" "uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico\n" "xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin\n" "pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin\n" "nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin\n" "-bill:::::::::\n" "+:::::::::\n" msgstr "" "basie# cat /etc/master.passwd\n" "root:[password]:0:0::0:0:The super-user:/root:/bin/csh\n" "toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh\n" "daemon:*:1:1::0:0:Owner of many system 
processes:/root:/usr/sbin/nologin\n" "operator:*:2:5::0:0:System &:/:/usr/sbin/nologin\n" "bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin\n" "tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin\n" "kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin\n" "games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin\n" "news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin\n" "man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/usr/sbin/nologin\n" "bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin\n" "uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico\n" "xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin\n" "pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin\n" "nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin\n" "-bill:::::::::\n" "+:::::::::\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:989 #, no-wrap msgid "basie#\n" msgstr "basie#\n" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:993 #, no-wrap msgid "Using Netgroups" msgstr "Использование сетевых групп" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:996 msgid "" "Barring specified users from logging on to individual systems becomes " "unscaleable on larger networks and quickly loses the main benefit of NIS: " "_centralized_ administration." msgstr "" "Запрет указанным пользователям возможности входа в отдельные системы " "становится неэффективным и не масштабируемым в крупных сетях и быстро лишает " "NIS основного преимущества: _централизованного_ администрирования." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:999 msgid "" "Netgroups were developed to handle large, complex networks with hundreds of " "users and machines. 
Their use is comparable to UNIX(R) groups, where the " "main difference is the lack of a numeric ID and the ability to define a " "netgroup by including both user accounts and other netgroups." msgstr "" "Сетевые группы были разработаны для управления большими и сложными сетями с " "сотнями пользователей и машин. Их использование аналогично группам в " "UNIX(R), с той основной разницей, что отсутствует числовой идентификатор и " "есть возможность определять сетевую группу, включая как учётные записи " "пользователей, так и другие сетевые группы." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1001 msgid "" "To expand on the example used in this chapter, the NIS domain will be " "extended to add the users and systems shown in Tables 28.2 and 28.3:" msgstr "" "Для дополнения примера, используемого в этой главе, домен NIS будет расширен " "за счет пользователей и систем, показанных в таблицах 28.2 и 28.3:" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:1002 #, no-wrap msgid "Additional Users" msgstr "Дополнительные пользователи" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1006 #, no-wrap msgid "User Name(s)" msgstr "Имена пользователей" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1009 #, no-wrap msgid "`alpha`, `beta`" msgstr "`alpha`, `beta`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1011 #, no-wrap msgid "IT department employees" msgstr "Сотрудники IT-отдела" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1012 #, no-wrap msgid "`charlie`, `delta`" msgstr "`charlie`, `delta`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1014 #, no-wrap msgid "IT department apprentices" msgstr "Стажеры IT-отдела" #. 
type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1015 #, no-wrap msgid "`echo`, `foxtrott`, `golf`, ..." msgstr "`echo`, `foxtrott`, `golf`, ..." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1017 #, no-wrap msgid "employees" msgstr "Сотрудники" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1018 #, no-wrap msgid "`able`, `baker`, ..." msgstr "`able`, `baker`, ..." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1019 #, no-wrap msgid "interns" msgstr "Интерны" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:1021 #, no-wrap msgid "Additional Systems" msgstr "Дополнительные системы" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1025 #, no-wrap msgid "Machine Name(s)" msgstr "Имена машин" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1028 #, no-wrap msgid "`war`, `death`, `famine`, `pollution`" msgstr "`war`, `death`, `famine`, `pollution`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1030 #, no-wrap msgid "Only IT employees are allowed to log onto these servers." msgstr "Только сотрудники IT имеют право входить на эти серверы." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1031 #, no-wrap msgid "`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`" msgstr "`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1033 #, no-wrap msgid "All members of the IT department are allowed to login onto these servers." msgstr "Все сотрудники IT-отдела имеют право входить на эти серверы." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1034 #, no-wrap msgid "`one`, `two`, `three`, `four`, ..." 
msgstr "`one`, `two`, `three`, `four`, ..." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1036 #, no-wrap msgid "Ordinary workstations used by employees." msgstr "Обычные рабочие станции, используемые сотрудниками." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1037 #, no-wrap msgid "`trashcan`" msgstr "`trashcan`" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1038 #, no-wrap msgid "A very old machine without any critical data. Even interns are allowed to use this system." msgstr "Очень старая машина без каких-либо важных данных. Даже интернам разрешено использовать эту систему." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1044 msgid "" "When using netgroups to configure this scenario, each user is assigned to " "one or more netgroups and logins are then allowed or forbidden for all " "members of the netgroup. When adding a new machine, login restrictions must " "be defined for all netgroups. When a new user is added, the account must be " "added to one or more netgroups. If the NIS setup is planned carefully, only " "one central configuration file needs modification to grant or deny access to " "machines." msgstr "" "При использовании сетевых групп для настройки этого сценария каждый " "пользователь назначается в одну или несколько сетевых групп, а затем вход " "разрешается или запрещается для всех членов сетевой группы. При добавлении " "новой машины необходимо определить ограничения входа для всех сетевых групп. " "Когда добавляется новый пользователь, его учётная запись должна быть " "добавлена в одну или несколько сетевых групп. Если настройка NIS выполнена " "тщательно, для предоставления или запрета доступа к машинам потребуется " "изменить только один центральный файл конфигурации." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1048 msgid "" "The first step is the initialization of the NIS `netgroup` map. In FreeBSD, " "this map is not created by default. On the NIS master server, use an editor " "to create a map named [.filename]#/var/yp/netgroup#." msgstr "" "Первым шагом является инициализация карты NIS `netgroup`. В FreeBSD эта " "карта не создается по умолчанию. На главном сервере NIS используйте редактор " "для создания карты с именем [.filename]#/var/yp/netgroup#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1050 msgid "" "This example creates four netgroups to represent IT employees, IT " "apprentices, employees, and interns:" msgstr "" "Этот пример создает четыре сетевые группы для представления сотрудников IT, " "стажеров IT, сотрудников и интернов:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1058 #, no-wrap msgid "" "IT_EMP (,alpha,test-domain) (,beta,test-domain)\n" "IT_APP (,charlie,test-domain) (,delta,test-domain)\n" "USERS (,echo,test-domain) (,foxtrott,test-domain) \\\n" " (,golf,test-domain)\n" "INTERNS (,able,test-domain) (,baker,test-domain)\n" msgstr "" "IT_EMP (,alpha,test-domain) (,beta,test-domain)\n" "IT_APP (,charlie,test-domain) (,delta,test-domain)\n" "USERS (,echo,test-domain) (,foxtrott,test-domain) \\\n" " (,golf,test-domain)\n" "INTERNS (,able,test-domain) (,baker,test-domain)\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1063 msgid "" "Each entry configures a netgroup. The first column in an entry is the name " "of the netgroup. Each set of parentheses represents either a group of one " "or more users or the name of another netgroup. When specifying a user, the " "three comma-delimited fields inside each group represent:" msgstr "" "Каждая запись настраивает сетевую группу. 
Первый столбец в записи — это " "название сетевой группы. Каждый набор скобок представляет либо группу из " "одного или нескольких пользователей, либо имя другой сетевой группы. При " "указании пользователя три поля, разделённые запятыми, внутри каждой группы " "означают:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1065 msgid "" "The name of the host(s) where the other fields representing the user are " "valid. If a hostname is not specified, the entry is valid on all hosts." msgstr "" "Имя хоста(ов), на котором другие поля, представляющие пользователя, " "действительны. Если имя хоста не указано, запись действительна на всех " "хостах." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1066 msgid "The name of the account that belongs to this netgroup." msgstr "Имя учетной записи, принадлежащей этой сетевой группе." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1067 msgid "" "The NIS domain for the account. Accounts may be imported from other NIS " "domains into a netgroup." msgstr "" "NIS-домен для учетной записи. Учетные записи могут быть импортированы из " "других NIS-доменов в сетевую группу." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1071 msgid "" "If a group contains multiple users, separate each user with whitespace. " "Additionally, each field may contain wildcards. See man:netgroup[5] for " "details." msgstr "" "Если группа содержит нескольких пользователей, разделяйте каждого " "пользователя пробелом. Кроме того, каждое поле может содержать символы " "подстановки. Подробности см. в man:netgroup[5]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1074 msgid "" "Netgroup names longer than 8 characters should not be used. 
The names are " "case sensitive and using capital letters for netgroup names is an easy way " "to distinguish between user, machine and netgroup names." msgstr "" "Имена сетевых групп длиннее 8 символов не должны использоваться. Имена " "чувствительны к регистру, и использование заглавных букв для имён сетевых " "групп — это простой способ отличить имена пользователей, машин и сетевых " "групп." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1077 msgid "" "Some non-FreeBSD NIS clients cannot handle netgroups containing more than 15 " "entries. This limit may be circumvented by creating several sub-netgroups " "with 15 users or fewer and a real netgroup consisting of the sub-netgroups, " "as seen in this example:" msgstr "" "Некоторые клиенты NIS, не относящиеся к FreeBSD, не могут обрабатывать " "сетевые группы, содержащие более 15 записей. Это ограничение можно обойти, " "создав несколько подгрупп с 15 или менее пользователями и настоящую сетевую " "группу, состоящую из этих подгрупп, как показано в этом примере:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1084 #, no-wrap msgid "" "BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...]\n" "BIGGRP2 (,joe16,domain) (,joe17,domain) [...]\n" "BIGGRP3 (,joe31,domain) (,joe32,domain)\n" "BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3\n" msgstr "" "BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...]\n" "BIGGRP2 (,joe16,domain) (,joe17,domain) [...]\n" "BIGGRP3 (,joe31,domain) (,joe32,domain)\n" "BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1087 msgid "" "Repeat this process if more than 225 (15 times 15) users exist within a " "single netgroup." msgstr "" "Повторите этот процесс, если в одной сетевой группе существует более 225 (15 " "умножить на 15) пользователей." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1089 msgid "To activate and distribute the new NIS map:" msgstr "Для активации и распространения новой NIS-карты:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1094 #, no-wrap msgid "" "ellington# cd /var/yp\n" "ellington# make\n" msgstr "" "ellington# cd /var/yp\n" "ellington# make\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1098 msgid "" "This will generate the three NIS maps [.filename]#netgroup#, [." "filename]#netgroup.byhost# and [.filename]#netgroup.byuser#. Use the map " "key option of man:ypcat[1] to check if the new NIS maps are available:" msgstr "" "Это создаст три карты NIS [.filename]#netgroup#, [.filename]#netgroup." "byhost# и [.filename]#netgroup.byuser#. Используйте опцию ключа карты в man:" "ypcat[1], чтобы проверить доступность новых карт NIS:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1104 #, no-wrap msgid "" "ellington% ypcat -k netgroup\n" "ellington% ypcat -k netgroup.byhost\n" "ellington% ypcat -k netgroup.byuser\n" msgstr "" "ellington% ypcat -k netgroup\n" "ellington% ypcat -k netgroup.byhost\n" "ellington% ypcat -k netgroup.byuser\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1109 msgid "" "The output of the first command should resemble the contents of [.filename]#/" "var/yp/netgroup#. The second command only produces output if host-specific " "netgroups were created. The third command is used to get the list of " "netgroups for a user." msgstr "" "Вывод первой команды должен напоминать содержимое файла [.filename]#/var/yp/" "netgroup#. Вторая команда выводит результат только в случае создания " "специфичных для хоста сетевых групп. Третья команда используется для " "получения списка сетевых групп для пользователя." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1112 msgid "" "To configure a client, use man:vipw[8] to specify the name of the netgroup. " "For example, on the server named `war`, replace this line:" msgstr "" "Для настройки клиента используйте man:vipw[8], чтобы указать имя сетевой " "группы. Например, на сервере с именем `war` замените эту строку:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1119 msgid "with" msgstr "строкой" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1123 #, no-wrap msgid "+@IT_EMP:::::::::\n" msgstr "+@IT_EMP:::::::::\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1126 msgid "" "This specifies that only the users defined in the netgroup `IT_EMP` will be " "imported into this system's password database and only those users are " "allowed to login to this system." msgstr "" "Эта строка указывает, что только пользователи, определённые в сетевой группе " "`IT_EMP`, будут импортированы в базу данных паролей этой системы, и только " "этим пользователям разрешён вход в систему." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1131 msgid "" "This configuration also applies to the `~` function of the shell and all " "routines which convert between user names and numerical user IDs. In other " "words, `cd ~_user_` will not work, `ls -l` will show the numerical ID " "instead of the username, and `find . -user joe -print` will fail with the " "message `No such user`. To fix this, import all user entries without " "allowing them to login into the servers. This can be achieved by adding an " "extra line:" msgstr "" "Эта конфигурация также применяется к функции `~` оболочки и всем процедурам, " "которые выполняют преобразование между именами пользователей и числовыми " "идентификаторами. 
Другими словами, `cd ~_user_` не будет работать, `ls -l` " "покажет числовой ID вместо имени пользователя, а `find . -user joe -print` " "завершится с сообщением `No such user`. Чтобы исправить это, импортируйте " "все записи пользователей, не разрешая им вход на серверы. Это можно достичь, " "добавив дополнительную строку:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1135 #, no-wrap msgid "+:::::::::/usr/sbin/nologin\n" msgstr "+:::::::::/usr/sbin/nologin\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1138 msgid "" "This line configures the client to import all entries but to replace the " "shell in those entries with [.filename]#/usr/sbin/nologin#." msgstr "" "Эта строка настраивает клиент на импорт всех записей, но с заменой оболочки " "в этих записях на [.filename]#/usr/sbin/nologin#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1141 msgid "" "Make sure that extra line is placed _after_ `+@IT_EMP:::::::::`. Otherwise, " "all user accounts imported from NIS will have [.filename]#/usr/sbin/nologin# " "as their login shell and no one will be able to login to the system." msgstr "" "Убедитесь, что дополнительная строка добавлена _после_ `+@IT_EMP:::::::::`. " "В противном случае у всех пользовательских учётных записей, импортированных " "из NIS, будет указана оболочка входа [.filename]#/usr/sbin/nologin#, и никто " "не сможет войти в систему." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1143 msgid "" "To configure the less important servers, replace the old `+:::::::::` on the " "servers with these lines:" msgstr "" "Для настройки менее важных серверов замените старые `+:::::::::` на серверах " "следующими строками:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1149 #, no-wrap msgid "" "+@IT_EMP:::::::::\n" "+@IT_APP:::::::::\n" "+:::::::::/usr/sbin/nologin\n" msgstr "" "+@IT_EMP:::::::::\n" "+@IT_APP:::::::::\n" "+:::::::::/usr/sbin/nologin\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1152 msgid "The corresponding lines for the workstations would be:" msgstr "Соответствующие строки для рабочих станций будут:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1158 #, no-wrap msgid "" "+@IT_EMP:::::::::\n" "+@USERS:::::::::\n" "+:::::::::/usr/sbin/nologin\n" msgstr "" "+@IT_EMP:::::::::\n" "+@USERS:::::::::\n" "+:::::::::/usr/sbin/nologin\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1165 msgid "" "NIS supports the creation of netgroups from other netgroups which can be " "useful if the policy regarding user access changes. One possibility is the " "creation of role-based netgroups. For example, one might create a netgroup " "called `BIGSRV` to define the login restrictions for the important servers, " "another netgroup called `SMALLSRV` for the less important servers, and a " "third netgroup called `USERBOX` for the workstations. Each of these " "netgroups contains the netgroups that are allowed to login onto these " "machines. The new entries for the NIS`netgroup` map would look like this:" msgstr "" "NIS поддерживает создание сетевых групп из других сетевых групп, что может " "быть полезно при изменении политики доступа пользователей. Одна из " "возможностей — создание ролевых сетевых групп. Например, можно создать " "сетевую группу с именем `BIGSRV` для определения ограничений входа на важные " "серверы, другую сетевую группу `SMALLSRV` для менее важных серверов и третью " "сетевую группу `USERBOX` для рабочих станций. Каждая из этих сетевых групп " "содержит сетевые группы, которым разрешено входить на эти машины. 
Новые записи для карты NIS `netgroup` будут " "выглядеть так:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1171 #, no-wrap msgid "" "BIGSRV IT_EMP IT_APP\n" "SMALLSRV IT_EMP IT_APP ITINTERN\n" "USERBOX IT_EMP ITINTERN USERS\n" msgstr "" "BIGSRV IT_EMP IT_APP\n" "SMALLSRV IT_EMP IT_APP ITINTERN\n" "USERBOX IT_EMP ITINTERN USERS\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1176 msgid "" "This method of defining login restrictions works reasonably well when it is " "possible to define groups of machines with identical restrictions. " "Unfortunately, this is the exception and not the rule. Most of the time, " "the ability to define login restrictions on a per-machine basis is required." msgstr "" "Этот метод определения ограничений входа работает достаточно хорошо, когда " "можно определить группы машин с одинаковыми ограничениями. К сожалению, это " "скорее исключение, чем правило. В большинстве случаев требуется возможность " "определять ограничения входа для каждой машины отдельно." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1181 msgid "" "Machine-specific netgroup definitions are another possibility to deal with " "the policy changes. In this scenario, the [.filename]#/etc/master.passwd# " "of each system contains two lines starting with \"+\". The first line adds " "a netgroup with the accounts allowed to login onto this machine and the " "second line adds all other accounts with [.filename]#/usr/sbin/nologin# as " "shell. It is recommended to use the \"ALL-CAPS\" version of the hostname as " "the name of the netgroup:" msgstr "" "Определения машинозависимых сетевых групп — ещё один способ справиться с " "изменениями политики. В этом сценарии файл [.filename]#/etc/master.passwd# " "на каждой системе содержит две строки, начинающиеся с \"+\". 
Первая строка " "добавляет сетевую группу с учётными записями, которым разрешён вход на эту " "машину, а вторая строка добавляет все остальные учётные записи с оболочкой [." "filename]#/usr/sbin/nologin#. Рекомендуется использовать в качестве имени " "сетевой группы имя хоста, записанное \"ЗАГЛАВНЫМИ БУКВАМИ\":" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1186 #, no-wrap msgid "" "+@BOXNAME:::::::::\n" "+:::::::::/usr/sbin/nologin\n" msgstr "" "+@BOXNAME:::::::::\n" "+:::::::::/usr/sbin/nologin\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1191 msgid "" "Once this task is completed on all the machines, there is no longer a need " "to modify the local versions of [.filename]#/etc/master.passwd# ever again. " "All further changes can be handled by modifying the NIS map. Here is an " "example of a possible `netgroup` map for this scenario:" msgstr "" "После выполнения этой задачи на всех машинах больше не требуется изменять " "локальные версии файла [.filename]#/etc/master.passwd#. Все дальнейшие " "изменения можно выполнять, редактируя карту NIS. Вот пример возможной карты " "`netgroup` для данного сценария:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1229 #, no-wrap msgid "" "# Define groups of users first\n" "IT_EMP (,alpha,test-domain) (,beta,test-domain)\n" "IT_APP (,charlie,test-domain) (,delta,test-domain)\n" "DEPT1 (,echo,test-domain) (,foxtrott,test-domain)\n" "DEPT2 (,golf,test-domain) (,hotel,test-domain)\n" "DEPT3 (,india,test-domain) (,juliet,test-domain)\n" "ITINTERN (,kilo,test-domain) (,lima,test-domain)\n" "D_INTERNS (,able,test-domain) (,baker,test-domain)\n" "#\n" "# Now, define some groups based on roles\n" "USERS DEPT1 DEPT2 DEPT3\n" "BIGSRV IT_EMP IT_APP\n" "SMALLSRV IT_EMP IT_APP ITINTERN\n" "USERBOX IT_EMP ITINTERN USERS\n" "#\n" "# And a groups for a special tasks\n" "# Allow echo and golf to access our anti-virus-machine\n" "SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain)\n" "#\n" "# machine-based netgroups\n" "# Our main servers\n" "WAR BIGSRV\n" "FAMINE BIGSRV\n" "# User india needs access to this server\n" "POLLUTION BIGSRV (,india,test-domain)\n" "#\n" "# This one is really important and needs more access restrictions\n" "DEATH IT_EMP\n" "#\n" "# The anti-virus-machine mentioned above\n" "ONE SECURITY\n" "#\n" "# Restrict a machine to a single user\n" "TWO (,hotel,test-domain)\n" "# [...more groups to follow]\n" msgstr "" "# Define groups of users first\n" "IT_EMP (,alpha,test-domain) (,beta,test-domain)\n" "IT_APP (,charlie,test-domain) (,delta,test-domain)\n" "DEPT1 (,echo,test-domain) (,foxtrott,test-domain)\n" "DEPT2 (,golf,test-domain) (,hotel,test-domain)\n" "DEPT3 (,india,test-domain) (,juliet,test-domain)\n" "ITINTERN (,kilo,test-domain) (,lima,test-domain)\n" "D_INTERNS (,able,test-domain) (,baker,test-domain)\n" "#\n" "# Now, define some groups based on roles\n" "USERS DEPT1 DEPT2 DEPT3\n" "BIGSRV IT_EMP IT_APP\n" "SMALLSRV IT_EMP IT_APP ITINTERN\n" "USERBOX IT_EMP ITINTERN USERS\n" "#\n" "# And a groups for a special tasks\n" "# Allow echo and golf to access our anti-virus-machine\n" "SECURITY 
IT_EMP (,echo,test-domain) (,golf,test-domain)\n" "#\n" "# machine-based netgroups\n" "# Our main servers\n" "WAR BIGSRV\n" "FAMINE BIGSRV\n" "# User india needs access to this server\n" "POLLUTION BIGSRV (,india,test-domain)\n" "#\n" "# This one is really important and needs more access restrictions\n" "DEATH IT_EMP\n" "#\n" "# The anti-virus-machine mentioned above\n" "ONE SECURITY\n" "#\n" "# Restrict a machine to a single user\n" "TWO (,hotel,test-domain)\n" "# [...more groups to follow]\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1233 msgid "" "It may not always be advisable to use machine-based netgroups. When " "deploying a couple of dozen or hundreds of systems, role-based netgroups " "instead of machine-based netgroups may be used to keep the size of the NIS " "map within reasonable limits." msgstr "" "Не всегда целесообразно использовать сетевые группы, привязанные к машинам. " "При развертывании нескольких десятков или сотен систем можно использовать " "ролевые сетевые группы вместо машинных, чтобы размер карты NIS оставался в " "разумных пределах." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1234 #, no-wrap msgid "Password Formats" msgstr "Форматы паролей" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1239 msgid "" "NIS requires that all hosts within an NIS domain use the same format for " "encrypting passwords. If users have trouble authenticating on an NIS " "client, it may be due to a differing password format. In a heterogeneous " "network, the format must be supported by all operating systems, where DES is " "the lowest common standard." msgstr "" "NIS требует, чтобы все хосты в домене NIS использовали одинаковый формат " "шифрования паролей. Если у пользователей возникают проблемы с " "аутентификацией на клиенте NIS, это может быть связано с разным форматом " "паролей. 
В гетерогенной сети формат должен поддерживаться всеми " "операционными системами, где DES является минимальным общим стандартом." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1241 msgid "" "To check which format a server or client is using, look at this section of [." "filename]#/etc/login.conf#:" msgstr "" "Чтобы проверить, какой формат использует сервер или клиент, посмотрите на " "этот раздел в [.filename]#/etc/login.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1248 #, no-wrap msgid "" "default:\\\n" "\t:passwd_format=des:\\\n" "\t[Further entries elided]\n" msgstr "" "default:\\\n" "\t:passwd_format=des:\\\n" "\t[Further entries elided]\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1253 msgid "" "In this example, the system is using the DES format for password hashing. " "Other possible values include `blf` for Blowfish, `md5` for MD5, `sha256` " "and `sha512` for SHA-256 and SHA-512 respectively. For more information and " "the up to date list of what is available on the system, consult the man:" "crypt[3] manpage." msgstr "" "В этом примере система использует формат DES для хеширования паролей. Другие " "возможные значения включают `blf` для Blowfish, `md5` для MD5, `sha256` и " "`sha512` для SHA-256 и SHA-512 соответственно. Для получения дополнительной " "информации и актуального списка доступных вариантов на вашей системе " "обратитесь к man:crypt[3]." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1255 msgid "" "If the format on a host needs to be edited to match the one being used in " "the NIS domain, the login capability database must be rebuilt after saving " "the change:" msgstr "" "Если формат на хосте необходимо изменить, чтобы он соответствовал формату, " "используемому в домене NIS, базу данных возможностей входа необходимо " "перестроить после сохранения изменений:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1259 #, no-wrap msgid "# cap_mkdb /etc/login.conf\n" msgstr "# cap_mkdb /etc/login.conf\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1264 msgid "" "The format of passwords for existing user accounts will not be updated until " "each user changes their password _after_ the login capability database is " "rebuilt." msgstr "" "Формат паролей для существующих учётных записей не будет обновлён, пока " "каждый пользователь не изменит свой пароль _после_ перестроения базы данных " "возможностей входа." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:1267 #, no-wrap msgid "Lightweight Directory Access Protocol (LDAP)" msgstr "Протокол LDAP" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1273 msgid "" "The Lightweight Directory Access Protocol (LDAP) is an application layer " "protocol used to access, modify, and authenticate objects using a " "distributed directory information service. Think of it as a phone or record " "book which stores several levels of hierarchical, homogeneous information. " "It is used in Active Directory and OpenLDAP networks and allows users to " "access to several levels of internal information utilizing a single " "account. 
For example, email authentication, pulling employee contact " "information, and internal website authentication might all make use of a " "single user account in the LDAP server's record base." msgstr "" "Протокол LDAP (Lightweight Directory Access Protocol) — это протокол уровня " "приложений, используемый для доступа, изменения и аутентификации объектов с " "помощью распределённой службы каталогов. Его можно сравнить с телефонной " "книгой или архивом, который хранит несколько уровней иерархической " "однородной информации. Он применяется в сетях Active Directory и OpenLDAP, " "позволяя пользователям получать доступ к различным уровням внутренней " "информации с использованием одной учётной записи. Например, аутентификация " "электронной почты, получение контактных данных сотрудников и аутентификация " "на внутренних веб-сайтах могут осуществляться с помощью одной учётной записи " "в базе данных LDAP-сервера." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1276 msgid "" "This section provides a quick start guide for configuring an LDAP server on " "a FreeBSD system. It assumes that the administrator already has a design " "plan which includes the type of information to store, what that information " "will be used for, which users should have access to that information, and " "how to secure this information from unauthorized access." msgstr "" "В этом разделе представлено краткое руководство по настройке сервера LDAP в " "системе FreeBSD. Предполагается, что администратор уже имеет продуманный " "план, включающий тип хранимой информации, её назначение, перечень " "пользователей с доступом к этой информации и способы защиты от " "несанкционированного доступа." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1277 #, no-wrap msgid "LDAP Terminology and Structure" msgstr "Терминология и структура LDAP" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1283 msgid "" "LDAP uses several terms which should be understood before starting the " "configuration. All directory entries consist of a group of _attributes_. " "Each of these attribute sets contains a unique identifier known as a " "_Distinguished Name_ (DN) which is normally built from several other " "attributes such as the common or _Relative Distinguished Name_ (RDN). " "Similar to how directories have absolute and relative paths, consider a DN " "as an absolute path and the RDN as the relative path." msgstr "" "LDAP использует несколько терминов, которые следует понять перед началом " "настройки. Все записи каталога состоят из группы _атрибутов_. Каждый из этих " "наборов атрибутов содержит уникальный идентификатор, известный как " "_Отличительное имя_ (DN — Distinguished Name), который обычно строится из " "нескольких других атрибутов, таких как общее имя или _Относительное " "отличительное имя_ (RDN — Relative Distinguished Name). Подобно тому, как " "каталоги имеют абсолютные и относительные пути, можно рассматривать DN как " "абсолютный путь, а RDN — как относительный путь." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1286 msgid "" "An example LDAP entry looks like the following. This example searches for " "the entry for the specified user account (`uid`), organizational unit " "(`ou`), and organization (`o`):" msgstr "" "Пример записи LDAP выглядит следующим образом. В этом примере выполняется " "поиск записи для указанной учетной записи пользователя (`uid`), " "организационного подразделения (`ou`) и организации (`o`):" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1297 #, no-wrap msgid "" "% ldapsearch -xb \"uid=trhodes,ou=users,o=example.com\"\n" "# extended LDIF\n" "#\n" "# LDAPv3\n" "# base with scope subtree\n" "# filter: (objectclass=*)\n" "# requesting: ALL\n" "#\n" msgstr "" "% ldapsearch -xb \"uid=trhodes,ou=users,o=example.com\"\n" "# extended LDIF\n" "#\n" "# LDAPv3\n" "# base with scope subtree\n" "# filter: (objectclass=*)\n" "# requesting: ALL\n" "#\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1304 #, no-wrap msgid "" "# trhodes, users, example.com\n" "dn: uid=trhodes,ou=users,o=example.com\n" "mail: trhodes@example.com\n" "cn: Tom Rhodes\n" "uid: trhodes\n" "telephoneNumber: (123) 456-7890\n" msgstr "" "# trhodes, users, example.com\n" "dn: uid=trhodes,ou=users,o=example.com\n" "mail: trhodes@example.com\n" "cn: Tom Rhodes\n" "uid: trhodes\n" "telephoneNumber: (123) 456-7890\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1308 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1582 #, no-wrap msgid "" "# search result\n" "search: 2\n" "result: 0 Success\n" msgstr "" "# search result\n" "search: 2\n" "result: 0 Success\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1311 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1585 #, no-wrap msgid "" "# numResponses: 2\n" "# numEntries: 1\n" msgstr "" "# numResponses: 2\n" "# numEntries: 1\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1315 msgid "" "This example entry shows the values for the `dn`, `mail`, `cn`, `uid`, and " "`telephoneNumber` attributes. The cn attribute is the RDN." msgstr "" "Этот пример записи показывает значения атрибутов `dn`, `mail`, `cn`, `uid` и " "`telephoneNumber`. Атрибут `cn` является RDN." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1317 msgid "" "More information about LDAP and its terminology can be found at http://www." "openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/" "intro.html]." msgstr "" "Дополнительная информация о LDAP и его терминологии доступна по адресу " "http://www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/" "admin24/intro.html]." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1319 #, no-wrap msgid "Configuring an LDAP Server" msgstr "Настройка сервера LDAP" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1323 msgid "" "FreeBSD does not provide a built-in LDAP server. Begin the configuration by " "installing package:net/openldap-server[] package or port:" msgstr "" "FreeBSD не предоставляет встроенный LDAP-сервер. Начните настройку с " "установки пакета package:net/openldap-server[] или порта:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1327 #, no-wrap msgid "# pkg install openldap-server\n" msgstr "# pkg install openldap-server\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1332 msgid "" "There is a large set of default options enabled in the extref:{linux-users}" "[package, software]. Review them by running `pkg info openldap-server`. If " "they are not sufficient (for example if SQL support is needed), please " "consider recompiling the port using the appropriate crossref:ports[ports-" "using,framework]." msgstr "" "В extref:{linux-users}[пакете, software] включен большой набор параметров " "по умолчанию. Их можно просмотреть, выполнив команду `pkg info openldap-" "server`. Если их недостаточно (например, требуется поддержка SQL), " "рекомендуется перекомпилировать порт с использованием соответствующего " "crossref:ports[ports-using,фреймворка]." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1335 msgid "" "The installation creates the directory [.filename]#/var/db/openldap-data# to " "hold the data. The directory to store the certificates must be created:" msgstr "" "Установка создает каталог [.filename]#/var/db/openldap-data# для хранения " "данных. Необходимо создать каталог для хранения сертификатов:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1339 #, no-wrap msgid "# mkdir /usr/local/etc/openldap/private\n" msgstr "# mkdir /usr/local/etc/openldap/private\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1346 msgid "" "The next phase is to configure the Certificate Authority. The following " "commands must be executed from [.filename]#/usr/local/etc/openldap/" "private#. This is important as the file permissions need to be restrictive " "and users should not have access to these files. More detailed information " "about certificates and their parameters can be found in crossref:" "security[openssl,\"OpenSSL\"]. To create the Certificate Authority, start " "with this command and follow the prompts:" msgstr "" "Следующий этап — настройка Центра Сертификации. Следующие команды должны " "быть выполнены из каталога [.filename]#/usr/local/etc/openldap/private#. Это " "важно, так как права доступа к файлам должны быть строгими, и пользователи " "не должны иметь доступ к этим файлам. Более подробную информацию о " "сертификатах и их параметрах можно найти в crossref:security[openssl," "\"OpenSSL\"]. Чтобы создать Центр Сертификации, начните с этой команды и " "следуйте инструкциям:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1350 #, no-wrap msgid "# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt\n" msgstr "# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1355 msgid "" "The entries for the prompts may be generic _except_ for the `Common Name`. " "This entry must be _different_ than the system hostname. If this will be a " "self signed certificate, prefix the hostname with `CA` for Certificate " "Authority." msgstr "" "Записи для запросов могут быть любыми, _за исключением_ `Common Name`. Эта " "запись должна _отличаться_ от имени хоста системы. Если это будет " "самоподписанный сертификат, добавьте к имени хоста префикс `CA` — как Центр " "Сертификации." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1358 msgid "" "The next task is to create a certificate signing request and a private key. " "Input this command and follow the prompts:" msgstr "" "Следующая задача — создать запрос на подпись сертификата и закрытый ключ. " "Введите эту команду и следуйте инструкциям:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1362 #, no-wrap msgid "# openssl req -days 365 -nodes -new -keyout server.key -out server.csr\n" msgstr "# openssl req -days 365 -nodes -new -keyout server.key -out server.csr\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1366 msgid "" "During the certificate generation process, be sure to correctly set the " "`Common Name` attribute. The Certificate Signing Request must be signed " "with the Certificate Authority in order to be used as a valid certificate:" msgstr "" "В процессе генерации сертификата обязательно правильно укажите атрибут " "`Common Name`. Запрос на подпись сертификата (Certificate Signing Request) " "должен быть подписан Центром сертификации, чтобы использоваться в качестве " "действительного сертификата:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1370 #, no-wrap msgid "# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial\n" msgstr "# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1373 msgid "" "The final part of the certificate generation process is to generate and sign " "the client certificates:" msgstr "" "Заключительная часть процесса генерации сертификатов — создание и подписание " "клиентских сертификатов:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1378 #, no-wrap msgid "" "# openssl req -days 365 -nodes -new -keyout client.key -out client.csr\n" "# openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key\n" msgstr "" "# openssl req -days 365 -nodes -new -keyout client.key -out client.csr\n" "# openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1382 msgid "" "Remember to use the same `Common Name` attribute when prompted. When " "finished, ensure that a total of eight (8) new files have been generated " "through the proceeding commands." msgstr "" "Помните, что нужно использовать тот же атрибут `Common Name` при запросе. По " "завершении убедитесь, что в результате выполнения команд было создано в " "общей сложности восемь (8) новых файлов." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1385 msgid "" "The daemon running the OpenLDAP server is [.filename]#slapd#. Its " "configuration is performed through [.filename]#slapd.ldif#: the old [." "filename]#slapd.conf# has been deprecated by OpenLDAP." 
msgstr "" "Демон, запускающий сервер OpenLDAP, называется [.filename]#slapd#. Его " "настройка выполняется через файл [.filename]#slapd.ldif#: старый файл [." "filename]#slapd.conf# больше не используется в OpenLDAP." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1392 msgid "" "http://www.openldap.org/doc/admin24/slapdconf2.html[Configuration examples] " "for [.filename]#slapd.ldif# are available and can also be found in [." "filename]#/usr/local/etc/openldap/slapd.ldif.sample#. Options are " "documented in slapd-config(5). Each section of [.filename]#slapd.ldif#, " "like all the other LDAP attribute sets, is uniquely identified through a " "DN. Be sure that no blank lines are left between the `dn:` statement and " "the desired end of the section. In the following example, TLS will be used " "to implement a secure channel. The first section represents the global " "configuration:" msgstr "" "Доступны http://www.openldap.org/doc/admin24/slapdconf2.html[примеры " "конфигурации] для [.filename]#slapd.ldif#; их также можно найти в " "[.filename]#/usr/local/etc/openldap/slapd.ldif.sample#. Параметры описаны в " "slapd-config(5). Каждый раздел [.filename]#slapd.ldif#, как и " "все другие наборы атрибутов LDAP, однозначно идентифицируется через DN. " "Убедитесь, что между строкой `dn:` и желаемым концом раздела нет пустых " "строк. В следующем примере TLS будет использоваться для организации " "защищённого канала. Первый раздел представляет глобальную конфигурацию:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1414 #, no-wrap msgid "" "#\n" "# See slapd-config(5) for details on configuration options.\n" "# This file should NOT be world readable.\n" "#\n" "dn: cn=config\n" "objectClass: olcGlobal\n" "cn: config\n" "#\n" "#\n" "# Define global ACLs to disable default read access.\n" "#\n" "olcArgsFile: /var/run/openldap/slapd.args\n" "olcPidFile: /var/run/openldap/slapd.pid\n" "olcTLSCertificateFile: /usr/local/etc/openldap/server.crt\n" "olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key\n" "olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt\n" "#olcTLSCipherSuite: HIGH\n" "olcTLSProtocolMin: 3.1\n" "olcTLSVerifyClient: never\n" msgstr "" "#\n" "# See slapd-config(5) for details on configuration options.\n" "# This file should NOT be world readable.\n" "#\n" "dn: cn=config\n" "objectClass: olcGlobal\n" "cn: config\n" "#\n" "#\n" "# Define global ACLs to disable default read access.\n" "#\n" "olcArgsFile: /var/run/openldap/slapd.args\n" "olcPidFile: /var/run/openldap/slapd.pid\n" "olcTLSCertificateFile: /usr/local/etc/openldap/server.crt\n" "olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key\n" "olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt\n" "#olcTLSCipherSuite: HIGH\n" "olcTLSProtocolMin: 3.1\n" "olcTLSVerifyClient: never\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1419 msgid "" "The Certificate Authority, server certificate and server private key files " "must be specified here. It is recommended to let the clients choose the " "security cipher and omit option `olcTLSCipherSuite` (incompatible with TLS " "clients other than [.filename]#openssl#). Option `olcTLSProtocolMin` lets " "the server require a minimum security level: it is recommended. While " "verification is mandatory for the server, it is not for the client: " "`olcTLSVerifyClient: never`." 
msgstr "" "Здесь необходимо указать файлы Центра сертификации, сертификата сервера и " "закрытого ключа сервера. Рекомендуется позволить клиентам выбирать алгоритм " "шифрования и опустить опцию `olcTLSCipherSuite` (несовместимо с TLS-" "клиентами, кроме [.filename]#openssl#). Опция `olcTLSProtocolMin` позволяет " "серверу требовать минимальный уровень безопасности: это рекомендуется. Хотя " "проверка обязательна для сервера, для клиента она не требуется: " "`olcTLSVerifyClient: never`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1421 msgid "" "The second section is about the backend modules and can be configured as " "follows:" msgstr "" "Второй раздел посвящен серверным модулям и может быть настроен следующим " "образом:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1437 #, no-wrap msgid "" "#\n" "# Load dynamic backend modules:\n" "#\n" "dn: cn=module,cn=config\n" "objectClass: olcModuleList\n" "cn: module\n" "olcModulepath:\t/usr/local/libexec/openldap\n" "olcModuleload:\tback_mdb.la\n" "#olcModuleload:\tback_bdb.la\n" "#olcModuleload:\tback_hdb.la\n" "#olcModuleload:\tback_ldap.la\n" "#olcModuleload:\tback_passwd.la\n" "#olcModuleload:\tback_shell.la\n" msgstr "" "#\n" "# Load dynamic backend modules:\n" "#\n" "dn: cn=module,cn=config\n" "objectClass: olcModuleList\n" "cn: module\n" "olcModulepath:\t/usr/local/libexec/openldap\n" "olcModuleload:\tback_mdb.la\n" "#olcModuleload:\tback_bdb.la\n" "#olcModuleload:\tback_hdb.la\n" "#olcModuleload:\tback_ldap.la\n" "#olcModuleload:\tback_passwd.la\n" "#olcModuleload:\tback_shell.la\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1440 msgid "" "The third section is devoted to load the needed `ldif` schemas to be used by " "the databases: they are essential." 
msgstr "" "Третий раздел посвящён загрузке нужных схем `ldif`, используемых " "базами данных: эти схемы обязательны." #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1446 #, no-wrap msgid "" "dn: cn=schema,cn=config\n" "objectClass: olcSchemaConfig\n" "cn: schema\n" msgstr "" "dn: cn=schema,cn=config\n" "objectClass: olcSchemaConfig\n" "cn: schema\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1451 #, no-wrap msgid "" "include: file:///usr/local/etc/openldap/schema/core.ldif\n" "include: file:///usr/local/etc/openldap/schema/cosine.ldif\n" "include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif\n" "include: file:///usr/local/etc/openldap/schema/nis.ldif\n" msgstr "" "include: file:///usr/local/etc/openldap/schema/core.ldif\n" "include: file:///usr/local/etc/openldap/schema/cosine.ldif\n" "include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif\n" "include: file:///usr/local/etc/openldap/schema/nis.ldif\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1454 msgid "Next, the frontend configuration section:" msgstr "" "Далее следует раздел конфигурации фронтенда (уровень взаимодействия с " "клиентами):" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1488 #, no-wrap msgid "" "# Frontend settings\n" "#\n" "dn: olcDatabase={-1}frontend,cn=config\n" "objectClass: olcDatabaseConfig\n" "objectClass: olcFrontendConfig\n" "olcDatabase: {-1}frontend\n" "olcAccess: to * by * read\n" "#\n" "# Sample global access control policy:\n" "#\tRoot DSE: allow anyone to read it\n" "#\tSubschema (sub)entry DSE: allow anyone to read it\n" "#\tOther DSEs:\n" "#\t\tAllow self write access\n" "#\t\tAllow authenticated users read access\n" "#\t\tAllow anonymous users to authenticate\n" "#\n" "#olcAccess: to dn.base=\"\" by * read\n" "#olcAccess: to dn.base=\"cn=Subschema\" by * read\n" "#olcAccess: to *\n" "#\tby self write\n" "#\tby users read\n" "#\tby anonymous auth\n" "#\n" "# if no access controls are present, the default policy\n" "# allows anyone and everyone to read anything but restricts\n" "# updates to rootdn. (e.g., \"access to * by * read\")\n" "#\n" "# rootdn can always read and write EVERYTHING!\n" "#\n" "olcPasswordHash: {SSHA}\n" "# {SSHA} is already the default for olcPasswordHash\n" msgstr "" "# Frontend settings\n" "#\n" "dn: olcDatabase={-1}frontend,cn=config\n" "objectClass: olcDatabaseConfig\n" "objectClass: olcFrontendConfig\n" "olcDatabase: {-1}frontend\n" "olcAccess: to * by * read\n" "#\n" "# Sample global access control policy:\n" "#\tRoot DSE: allow anyone to read it\n" "#\tSubschema (sub)entry DSE: allow anyone to read it\n" "#\tOther DSEs:\n" "#\t\tAllow self write access\n" "#\t\tAllow authenticated users read access\n" "#\t\tAllow anonymous users to authenticate\n" "#\n" "#olcAccess: to dn.base=\"\" by * read\n" "#olcAccess: to dn.base=\"cn=Subschema\" by * read\n" "#olcAccess: to *\n" "#\tby self write\n" "#\tby users read\n" "#\tby anonymous auth\n" "#\n" "# if no access controls are present, the default policy\n" "# allows anyone and everyone to read anything but restricts\n" "# updates to rootdn. 
(e.g., \"access to * by * read\")\n" "#\n" "# rootdn can always read and write EVERYTHING!\n" "#\n" "olcPasswordHash: {SSHA}\n" "# {SSHA} is already the default for olcPasswordHash\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1491 msgid "" "Another section is devoted to the _configuration backend_, the only way to " "later access the OpenLDAP server configuration is as a global super-user." msgstr "" "Еще один раздел посвящен _бэкенду конфигурации_ — единственному способу " "последующего доступа к конфигурации сервера OpenLDAP, который доступен " "только глобальному суперпользователю." #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1499 #, no-wrap msgid "" "dn: olcDatabase={0}config,cn=config\n" "objectClass: olcDatabaseConfig\n" "olcDatabase: {0}config\n" "olcAccess: to * by * none\n" "olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U\n" msgstr "" "dn: olcDatabase={0}config,cn=config\n" "objectClass: olcDatabaseConfig\n" "olcDatabase: {0}config\n" "olcAccess: to * by * none\n" "olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1504 msgid "" "The default administrator username is `cn=config`. Type [." "filename]#slappasswd# in a shell, choose a password and use its hash in " "`olcRootPW`. If this option is not specified now, before [.filename]#slapd." "ldif# is imported, no one will be later able to modify the _global " "configuration_ section." msgstr "" "Имя администратора по умолчанию — `cn=config`. Введите [." "filename]#slappasswd# в оболочке, выберите пароль и используйте его хеш в " "`olcRootPW`. Если этот параметр не указан сейчас, до импорта [." "filename]#slapd.ldif#, никто не сможет впоследствии изменить раздел " "_глобальной конфигурации_." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1506 msgid "The last section is about the database backend:" msgstr "" "Последний раздел посвящен бэкенду базы данных (уровню хранения данных):" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1530 #, no-wrap msgid "" "#######################################################################\n" "# LMDB database definitions\n" "#######################################################################\n" "#\n" "dn: olcDatabase=mdb,cn=config\n" "objectClass: olcDatabaseConfig\n" "objectClass: olcMdbConfig\n" "olcDatabase: mdb\n" "olcDbMaxSize: 1073741824\n" "olcSuffix: dc=domain,dc=example\n" "olcRootDN: cn=mdbadmin,dc=domain,dc=example\n" "# Cleartext passwords, especially for the rootdn, should\n" "# be avoided. See slappasswd(8) and slapd-config(5) for details.\n" "# Use of strong authentication encouraged.\n" "olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+\n" "# The database directory MUST exist prior to running slapd AND\n" "# should only be accessible by the slapd and slap tools.\n" "# Mode 700 recommended.\n" "olcDbDirectory:\t/var/db/openldap-data\n" "# Indices to maintain\n" "olcDbIndex: objectClass eq\n" msgstr "" "#######################################################################\n" "# LMDB database definitions\n" "#######################################################################\n" "#\n" "dn: olcDatabase=mdb,cn=config\n" "objectClass: olcDatabaseConfig\n" "objectClass: olcMdbConfig\n" "olcDatabase: mdb\n" "olcDbMaxSize: 1073741824\n" "olcSuffix: dc=domain,dc=example\n" "olcRootDN: cn=mdbadmin,dc=domain,dc=example\n" "# Cleartext passwords, especially for the rootdn, should\n" "# be avoided. 
See slappasswd(8) and slapd-config(5) for details.\n" "# Use of strong authentication encouraged.\n" "olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+\n" "# The database directory MUST exist prior to running slapd AND\n" "# should only be accessible by the slapd and slap tools.\n" "# Mode 700 recommended.\n" "olcDbDirectory:\t/var/db/openldap-data\n" "# Indices to maintain\n" "olcDbIndex: objectClass eq\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1535 msgid "" "This database hosts the _actual contents_ of the LDAP directory. Types " "other than `mdb` are available. Its super-user, not to be confused with the " "global one, is configured here: a (possibly custom) username in `olcRootDN` " "and the password hash in `olcRootPW`; [.filename]#slappasswd# can be used as " "before." msgstr "" "Эта база данных содержит _фактическое содержимое_ каталога LDAP. Доступны " "типы, отличные от `mdb`. Суперпользователь (не путать с глобальным) " "настраивается здесь: (возможно, пользовательское) имя пользователя в " "`olcRootDN` и хэш пароля в `olcRootPW`; [.filename]#slappasswd# можно " "использовать, как и раньше." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1538 msgid "" "This http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/" "data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;" "hb=HEAD[repository] contains four examples of [.filename]#slapd.ldif#. To " "convert an existing [.filename]#slapd.conf# into [.filename]#slapd.ldif#, " "refer to http://www.openldap.org/doc/admin24/slapdconf2.html[this page] " "(please note that this may introduce some unuseful options)." msgstr "" "Этот http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/" "data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;" "hb=HEAD[репозиторий] содержит четыре примера файла [.filename]#slapd.ldif#. 
" "Для преобразования существующего [.filename]#slapd.conf# в [.filename]#slapd." "ldif# обратитесь к http://www.openldap.org/doc/admin24/slapdconf2.html[этой " "странице] (обратите внимание, что это может добавить некоторые бесполезные " "опции)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1541 msgid "" "When the configuration is completed, [.filename]#slapd.ldif# must be placed " "in an empty directory. It is recommended to create it as:" msgstr "" "После завершения настройки файл [.filename]#slapd.ldif# должен быть " "скопирован в пустой каталог. Рекомендуется создать его следующим образом:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1545 #, no-wrap msgid "# mkdir /usr/local/etc/openldap/slapd.d/\n" msgstr "# mkdir /usr/local/etc/openldap/slapd.d/\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1548 msgid "Import the configuration database:" msgstr "Импорт базы данных конфигурации:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1552 #, no-wrap msgid "# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif\n" msgstr "# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1555 msgid "Start the [.filename]#slapd# daemon:" msgstr "Запустите демон [.filename]#slapd#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1559 #, no-wrap msgid "# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/\n" msgstr "# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1563 msgid "" "Option `-d` can be used for debugging, as specified in slapd(8). To verify " "that the server is running and working:" msgstr "" "Опция `-d` может использоваться для отладки, как указано в slapd(8). Чтобы " "проверить, что сервер запущен и работает:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1574 #, no-wrap msgid "" "# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts\n" "# extended LDIF\n" "#\n" "# LDAPv3\n" "# base <> with scope baseObject\n" "# filter: (objectclass=*)\n" "# requesting: namingContexts\n" "#\n" msgstr "" "# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts\n" "# extended LDIF\n" "#\n" "# LDAPv3\n" "# base <> with scope baseObject\n" "# filter: (objectclass=*)\n" "# requesting: namingContexts\n" "#\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1578 #, no-wrap msgid "" "#\n" "dn:\n" "namingContexts: dc=domain,dc=example\n" msgstr "" "#\n" "dn:\n" "namingContexts: dc=domain,dc=example\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1590 msgid "" "The server must still be trusted. If that has never been done before, " "follow these instructions. Install the OpenSSL package or port:" msgstr "" "Сервер по-прежнему должен быть доверенным. Если это никогда не делалось " "ранее, следуйте этим инструкциям. Установите пакет или порт OpenSSL:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1594 #, no-wrap msgid "# pkg install openssl\n" msgstr "# pkg install openssl\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1597 msgid "" "From the directory where [.filename]#ca.crt# is stored (in this example, [." 
"filename]#/usr/local/etc/openldap#), run:" msgstr "" "Из каталога, где находится [.filename]#ca.crt# (в данном примере, [." "filename]#/usr/local/etc/openldap#), выполните:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1601 #, no-wrap msgid "# c_rehash .\n" msgstr "# c_rehash .\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1605 msgid "" "Both the CA and the server certificate are now correctly recognized in their " "respective roles. To verify this, run this command from the [." "filename]#server.crt# directory:" msgstr "" "И сертификат центра сертификации, и сертификат сервера теперь правильно " "распознаются в своих соответствующих ролях. Чтобы проверить это, выполните " "следующую команду из каталога, где находится [.filename]#server.crt#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1609 #, no-wrap msgid "# openssl verify -verbose -CApath . server.crt\n" msgstr "# openssl verify -verbose -CApath . server.crt\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1613 msgid "" "If [.filename]#slapd# was running, restart it. As stated in [.filename]#/" "usr/local/etc/rc.d/slapd#, to properly run [.filename]#slapd# at boot the " "following lines must be added to [.filename]#/etc/rc.conf#:" msgstr "" "Если [.filename]#slapd# был запущен, перезапустите его. Как указано в [." "filename]#/usr/local/etc/rc.d/slapd#, для корректного запуска [." "filename]#slapd# при загрузке следующие строки должны быть добавлены в [." "filename]#/etc/rc.conf#:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1621 #, no-wrap msgid "" "slapd_enable=\"YES\"\n" "slapd_flags='-h \"ldapi://%2fvar%2frun%2fopenldap%2fldapi/\n" "ldap://0.0.0.0/\"'\n" "slapd_sockets=\"/var/run/openldap/ldapi\"\n" "slapd_cn_config=\"YES\"\n" msgstr "" "slapd_enable=\"YES\"\n" "slapd_flags='-h \"ldapi://%2fvar%2frun%2fopenldap%2fldapi/\n" "ldap://0.0.0.0/\"'\n" "slapd_sockets=\"/var/run/openldap/ldapi\"\n" "slapd_cn_config=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1625 msgid "" "[.filename]#slapd# does not provide debugging at boot. Check [.filename]#/" "var/log/debug.log#, [.filename]#dmesg -a# and [.filename]#/var/log/messages# " "for this purpose." msgstr "" "[.filename]#slapd# не предоставляет отладку при загрузке. Для этой цели " "проверьте [.filename]#/var/log/debug.log#, [.filename]#dmesg -a# и [." "filename]#/var/log/messages#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1628 msgid "" "The following example adds the group `team` and the user `john` to the " "`domain.example` LDAP database, which is still empty. First, create the " "file [.filename]#domain.ldif#:" msgstr "" "Следующий пример добавляет группу `team` и пользователя `john` в базу данных " "LDAP `domain.example`, которая пока пуста. Сначала создайте файл [." "filename]#domain.ldif#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1637 #, no-wrap msgid "" "# cat domain.ldif\n" "dn: dc=domain,dc=example\n" "objectClass: dcObject\n" "objectClass: organization\n" "o: domain.example\n" "dc: domain\n" msgstr "" "# cat domain.ldif\n" "dn: dc=domain,dc=example\n" "objectClass: dcObject\n" "objectClass: organization\n" "o: domain.example\n" "dc: domain\n" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1642 #, no-wrap msgid "" "dn: ou=groups,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: organizationalunit\n" "ou: groups\n" msgstr "" "dn: ou=groups,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: organizationalunit\n" "ou: groups\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1647 #, no-wrap msgid "" "dn: ou=users,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: organizationalunit\n" "ou: users\n" msgstr "" "dn: ou=users,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: organizationalunit\n" "ou: users\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1653 #, no-wrap msgid "" "dn: cn=team,ou=groups,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: posixGroup\n" "cn: team\n" "gidNumber: 10001\n" msgstr "" "dn: cn=team,ou=groups,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: posixGroup\n" "cn: team\n" "gidNumber: 10001\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1666 #, no-wrap msgid "" "dn: uid=john,ou=users,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: account\n" "objectClass: posixAccount\n" "objectClass: shadowAccount\n" "cn: John McUser\n" "uid: john\n" "uidNumber: 10001\n" "gidNumber: 10001\n" "homeDirectory: /home/john/\n" "loginShell: /usr/bin/bash\n" "userPassword: secret\n" msgstr "" "dn: uid=john,ou=users,dc=domain,dc=example\n" "objectClass: top\n" "objectClass: account\n" "objectClass: posixAccount\n" "objectClass: shadowAccount\n" "cn: John McUser\n" "uid: john\n" "uidNumber: 10001\n" "gidNumber: 10001\n" "homeDirectory: /home/john/\n" "loginShell: /usr/bin/bash\n" "userPassword: secret\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1672 msgid "" "See the OpenLDAP documentation for more details. 
Use [." "filename]#slappasswd# to replace the plain text password `secret` with a " "hash in `userPassword`. The path specified as `loginShell` must exist in " "all the systems where `john` is allowed to login. Finally, use the `mdb` " "administrator to modify the database:" msgstr "" "См. документацию OpenLDAP для получения более подробной информации. " "Используйте [.filename]#slappasswd# для замены пароля в открытом тексте " "`secret` на хеш в `userPassword`. Путь, указанный как `loginShell`, должен " "существовать во всех системах, где `john` имеет право входить. Наконец, " "используйте администратора `mdb` для изменения базы данных:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1676 #, no-wrap msgid "# ldapadd -W -D \"cn=mdbadmin,dc=domain,dc=example\" -f domain.ldif\n" msgstr "# ldapadd -W -D \"cn=mdbadmin,dc=domain,dc=example\" -f domain.ldif\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1681 msgid "" "Modifications to the _global configuration_ section can only be performed by " "the global super-user. For example, assume that the option " "`olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` was initially specified and must now " "be deleted. First, create a file that contains the following:" msgstr "" "Изменения в разделе _глобальной конфигурации_ могут выполняться только " "глобальным суперпользователем. Например, предположим, что изначально была " "указана опция `olcTLSCipherSuite: HIGH:MEDIUM:SSLv3`, которую теперь " "необходимо удалить. Сначала создайте файл, содержащий следующее:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1688 #, no-wrap msgid "" "# cat global_mod\n" "dn: cn=config\n" "changetype: modify\n" "delete: olcTLSCipherSuite\n" msgstr "" "# cat global_mod\n" "dn: cn=config\n" "changetype: modify\n" "delete: olcTLSCipherSuite\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1691 msgid "Then, apply the modifications:" msgstr "Затем примените изменения:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1695 #, no-wrap msgid "# ldapmodify -f global_mod -x -D \"cn=config\" -W\n" msgstr "# ldapmodify -f global_mod -x -D \"cn=config\" -W\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1700 msgid "" "When asked, provide the password chosen in the _configuration backend_ " "section. The username is not required: here, `cn=config` represents the DN " "of the database section to be modified. Alternatively, use `ldapmodify` to " "delete a single line of the database, `ldapdelete` to delete a whole entry." msgstr "" "При запросе введите пароль, выбранный в разделе _бэкенда конфигурации_. Имя " "пользователя не требуется: здесь `cn=config` представляет DN раздела базы " "данных, который нужно изменить. Альтернативно, используйте `ldapmodify` для " "удаления отдельной строки базы данных или `ldapdelete` для удаления всей " "записи." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1702 msgid "" "If something goes wrong, or if the global super-user cannot access the " "configuration backend, it is possible to delete and re-write the whole " "configuration:" msgstr "" "Если что-то пойдет не так или если глобальный суперпользователь не сможет " "получить доступ к бэкенду конфигурации, можно удалить и перезаписать всю " "конфигурацию:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1706 #, no-wrap msgid "# rm -rf /usr/local/etc/openldap/slapd.d/\n" msgstr "# rm -rf /usr/local/etc/openldap/slapd.d/\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1710 msgid "" "[.filename]#slapd.ldif# can then be edited and imported again.
Please, " "follow this procedure only when no other solution is available." msgstr "" "[.filename]#slapd.ldif# затем можно отредактировать и снова импортировать. " "Пожалуйста, следуйте этой процедуре только в том случае, если нет другого " "доступного решения." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1713 msgid "" "This is the configuration of the server only. The same machine can also " "host an LDAP client, with its own separate configuration." msgstr "" "Это конфигурация только сервера. На той же машине также может быть размещен " "LDAP-клиент с собственной отдельной конфигурацией." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:1715 #, no-wrap msgid "Dynamic Host Configuration Protocol (DHCP)" msgstr "Протокол динамической конфигурации узла (DHCP)" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1722 msgid "" "The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to " "a network in order to be assigned the necessary addressing information for " "communication on that network. FreeBSD includes the OpenBSD version of " "`dhclient` which is used by the client to obtain the addressing " "information. FreeBSD does not install a DHCP server, but several servers " "are available in the FreeBSD Ports Collection. The DHCP protocol is fully " "described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Informational " "resources are also available at http://www.isc.org/downloads/dhcp/[isc.org/" "downloads/dhcp/]." msgstr "" "Протокол динамической конфигурации узла (DHCP —Dynamic Host Configuration " "Protocol) позволяет системе подключаться к сети для получения необходимой " "адресной информации для общения в этой сети. FreeBSD включает версию " "`dhclient` от OpenBSD, которая используется клиентом для получения адресной " "информации. 
FreeBSD не устанавливает сервер DHCP, но несколько серверов " "доступны в коллекции портов FreeBSD. Протокол DHCP полностью описан в http://" "www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Информационные ресурсы также " "доступны на http://www.isc.org/downloads/dhcp/[isc.org/downloads/dhcp/]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1725 msgid "" "This section describes how to use the built-in DHCP client. It then " "describes how to install and configure a DHCP server." msgstr "" "Этот раздел описывает, как использовать встроенный DHCP-клиент. Затем он " "описывает, как установить и настроить DHCP-сервер." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1731 msgid "" "In FreeBSD, the man:bpf[4] device is needed by both the DHCP server and DHCP " "client. This device is included in the [.filename]#GENERIC# kernel that is " "installed with FreeBSD. Users who prefer to create a custom kernel need to " "keep this device if DHCP is used." msgstr "" "В FreeBSD устройство man:bpf[4] необходимо как для сервера DHCP, так и для " "клиента DHCP. Это устройство включено в ядро [.filename]#GENERIC#, которое " "устанавливается с FreeBSD. Пользователям, предпочитающим создавать " "собственное ядро, необходимо оставить это устройство, если используется DHCP." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1733 msgid "" "It should be noted that [.filename]#bpf# also allows privileged users to run " "network packet sniffers on that system." msgstr "" "Следует отметить, что [.filename]#bpf# также позволяет привилегированным " "пользователям запускать анализаторы сетевых пакетов в этой системе." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1736 #, no-wrap msgid "Configuring a DHCP Client" msgstr "Настройка клиента DHCP" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1740 msgid "" "DHCP client support is included in the FreeBSD installer, making it easy to " "configure a newly installed system to automatically receive its networking " "addressing information from an existing DHCP server. Refer to crossref:" "bsdinstall[bsdinstall-post,\"Accounts, Time Zone, Services and Hardening\"] " "for examples of network configuration." msgstr "" "Поддержка DHCP-клиента включена в установщик FreeBSD, что позволяет легко " "настроить новую систему для автоматического получения сетевой адресации от " "существующего DHCP-сервера. Примеры настройки сети можно найти в разделе " "crossref:bsdinstall[bsdinstall-post,\"Учетные записи, Часовая зона, Службы и " "Защита\"]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1748 msgid "" "When `dhclient` is executed on the client machine, it begins broadcasting " "requests for configuration information. By default, these requests use UDP " "port 68. The server replies on UDP port 67, giving the client an IP address " "and other relevant network information such as a subnet mask, default " "gateway, and DNS server addresses. This information is in the form of a " "DHCP \"lease\" and is valid for a configurable time. This allows stale IP " "addresses for clients no longer connected to the network to automatically be " "reused. DHCP clients can obtain a great deal of information from the " "server. An exhaustive list may be found in man:dhcp-options[5]." msgstr "" "Когда `dhclient` выполняется на клиентской машине, он начинает транслировать " "запросы на получение конфигурационной информации. По умолчанию эти запросы " "используют UDP-порт 68. Сервер отвечает на UDP-порту 67, предоставляя " "клиенту IP-адрес и другую соответствующую сетевую информацию, такую как " "маска подсети, шлюз по умолчанию и адреса DNS-серверов. 
Эта информация " "предоставляется в форме \"аренды\" DHCP и действительна в течение " "настраиваемого времени. Это позволяет автоматически повторно использовать " "устаревшие IP-адреса для клиентов, которые больше не подключены к сети. " "Клиенты DHCP могут получить от сервера большое количество информации. Полный " "список можно найти в man:dhcp-options[5]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1751 msgid "" "By default, when a FreeBSD system boots, its DHCP client runs in the " "background, or _asynchronously_. Other startup scripts continue to run " "while the DHCP process completes, which speeds up system startup." msgstr "" "По умолчанию, при загрузке системы FreeBSD её DHCP-клиент работает в фоновом " "режиме или _асинхронно_. Другие скрипты запуска продолжают выполняться, пока " "завершается процесс DHCP, что ускоряет загрузку системы." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1756 msgid "" "Background DHCP works well when the DHCP server responds quickly to the " "client's requests. However, DHCP may take a long time to complete on some " "systems. If network services attempt to run before DHCP has assigned the " "network addressing information, they will fail. Using DHCP in _synchronous_ " "mode prevents this problem as it pauses startup until the DHCP configuration " "has completed." msgstr "" "DHCP в фоновом режиме работает хорошо, когда сервер DHCP быстро отвечает на " "запросы клиента. Однако на некоторых системах выполнение DHCP может занять " "много времени. Если сетевые службы пытаются запуститься до того, как DHCP " "назначит информацию о сетевой адресации, они завершатся с ошибкой. " "Использование DHCP в _синхронном_ режиме предотвращает эту проблему, " "приостанавливая запуск до завершения настройки DHCP." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1758 msgid "" "This line in [.filename]#/etc/rc.conf# is used to configure background or " "asynchronous mode:" msgstr "" "Эта строка в [.filename]#/etc/rc.conf# используется для настройки фонового " "или асинхронного режима:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1762 #, no-wrap msgid "ifconfig_fxp0=\"DHCP\"\n" msgstr "ifconfig_fxp0=\"DHCP\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1766 msgid "" "This line may already exist if the system was configured to use DHCP during " "installation. Replace the _fxp0_ shown in these examples with the name of " "the interface to be dynamically configured, as described in crossref:" "config[config-network-setup,“Setting Up Network Interface Cards”]." msgstr "" "Эта строка может уже существовать, если система была настроена на " "использование DHCP во время установки. Замените _fxp0_, указанный в этих " "примерах, на имя интерфейса, который нужно настроить динамически, как " "описано в crossref:config[config-network-setup,\"Настройка сетевых " "интерфейсов\"]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1768 msgid "" "To instead configure the system to use synchronous mode, and to pause during " "startup while DHCP completes, use \"`SYNCDHCP`\":" msgstr "" "Для настройки системы на использование синхронного режима с приостановкой во " "время запуска до завершения DHCP используйте \"`SYNCDHCP`\":" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1772 #, no-wrap msgid "ifconfig_fxp0=\"SYNCDHCP\"\n" msgstr "ifconfig_fxp0=\"SYNCDHCP\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1776 msgid "" "Additional client options are available. Search for `dhclient` in man:rc.
"conf[5] for details." msgstr "" "Есть еще несколько опций клиента. Подробности смотрите в man:rc.conf[5], " "выполнив поиск по `dhclient`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1778 msgid "The DHCP client uses the following files:" msgstr "Клиент DHCP использует следующие файлы:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1780 msgid "[.filename]#/etc/dhclient.conf#" msgstr "[.filename]#/etc/dhclient.conf#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1784 msgid "" "The configuration file used by `dhclient`. Typically, this file contains " "only comments as the defaults are suitable for most clients. This " "configuration file is described in man:dhclient.conf[5]." msgstr "" "Файл конфигурации, используемый `dhclient`. Обычно этот файл содержит только " "комментарии, так как значения по умолчанию подходят для большинства " "клиентов. Этот конфигурационный файл описан в man:dhclient.conf[5]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1785 msgid "[.filename]#/sbin/dhclient#" msgstr "[.filename]#/sbin/dhclient#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1787 msgid "" "More information about the command itself can be found in man:dhclient[8]." msgstr "" "Дополнительную информацию о самой команде можно найти в man:dhclient[8]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1788 msgid "[.filename]#/sbin/dhclient-script#" msgstr "[.filename]#/sbin/dhclient-script#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1791 msgid "" "The FreeBSD-specific DHCP client configuration script. It is described in " "man:dhclient-script[8], but should not need any user modification to " "function properly." 
msgstr "" "Специфичный для FreeBSD скрипт конфигурации DHCP-клиента. Он описан в man:" "dhclient-script[8], но для правильной работы не требует изменений со стороны " "пользователя." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1792 msgid "[.filename]#/var/db/dhclient.leases.interface#" msgstr "[.filename]#/var/db/dhclient.leases.interface#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1794 msgid "" "The DHCP client keeps a database of valid leases in this file, which is " "written as a log and is described in man:dhclient.leases[5]." msgstr "" "Клиент DHCP сохраняет базу данных действительных аренд в этом файле, который " "записывается как журнал и описывается в man:dhclient.leases[5]." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1797 #, no-wrap msgid "Installing and Configuring a DHCP Server" msgstr "Установка и настройка сервера DHCP" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1801 msgid "" "This section demonstrates how to configure a FreeBSD system to act as a DHCP " "server using the Internet Systems Consortium (ISC) implementation of the " "DHCP server. This implementation and its documentation can be installed " "using the package:net/isc-dhcp44-server[] package or port." msgstr "" "В этом разделе показано, как настроить систему FreeBSD в качестве DHCP-" "сервера с использованием реализации DHCP-сервера от Консорциума Интернет-" "систем (ISC —Internet Systems Consortium). Эту реализацию и её документацию " "можно установить с помощью пакета package:net/isc-dhcp44-server[] или порта." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1804 msgid "" "The installation of package:net/isc-dhcp44-server[] installs a sample " "configuration file. 
Copy [.filename]#/usr/local/etc/dhcpd.conf.example# to " "[.filename]#/usr/local/etc/dhcpd.conf# and make any edits to this new file." msgstr "" "Установка пакета package:net/isc-dhcp44-server[] включает образец файла " "конфигурации. Скопируйте [.filename]#/usr/local/etc/dhcpd.conf.example# в [." "filename]#/usr/local/etc/dhcpd.conf# и внесите необходимые изменения в этот " "новый файл." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1807 msgid "" "The configuration file is comprised of declarations for subnets and hosts " "which define the information that is provided to DHCP clients. For example, " "these lines configure the following:" msgstr "" "Файл конфигурации состоит из объявлений для подсетей и хостов, которые " "определяют информацию, предоставляемую клиентам DHCP. Например, следующие " "строки настраивают следующее:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1813 #, no-wrap msgid "" "option domain-name \"example.org\";<.>\n" "option domain-name-servers ns1.example.org;<.>\n" "option subnet-mask 255.255.255.0;<.>\n" msgstr "" "option domain-name \"example.org\";<.>\n" "option domain-name-servers ns1.example.org;<.>\n" "option subnet-mask 255.255.255.0;<.>\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1817 #, no-wrap msgid "" "default-lease-time 600;<.>\n" "max-lease-time 72400;<.>\n" "ddns-update-style none;<.>\n" msgstr "" "default-lease-time 600;<.>\n" "max-lease-time 72400;<.>\n" "ddns-update-style none;<.>\n" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1822 #, no-wrap msgid "" "subnet 10.254.239.0 netmask 255.255.255.224 {\n" " range 10.254.239.10 10.254.239.20;<.>\n" " option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.>\n" "}\n" msgstr "" "subnet 10.254.239.0 netmask 255.255.255.224 {\n" " range 10.254.239.10 10.254.239.20;<.>\n" " option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.>\n" "}\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1827 #, no-wrap msgid "" "host fantasia {\n" " hardware ethernet 08:00:07:26:c0:a5;<.>\n" " fixed-address fantasia.fugue.com;<.>\n" "}\n" msgstr "" "host fantasia {\n" " hardware ethernet 08:00:07:26:c0:a5;<.>\n" " fixed-address fantasia.fugue.com;<.>\n" "}\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1830 msgid "" "This option specifies the default search domain that will be provided to " "clients. Refer to man:resolv.conf[5] for more information." msgstr "" "Этот параметр задаёт домен поиска по умолчанию, который будет " "предоставляться клиентам. Дополнительную информацию можно найти в man:resolv." "conf[5]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1831 msgid "" "This option specifies a comma separated list of DNS servers that the client " "should use. They can be listed by their Fully Qualified Domain Names (FQDN), " "as seen in the example, or by their IP addresses." msgstr "" "Эта опция определяет разделённый запятыми список DNS-серверов, которые " "должен использовать клиент. Они могут быть указаны по их Полным Доменным " "Именам (FQDN), как показано в примере, или по их IP-адресам." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1832 msgid "The subnet mask that will be provided to clients." msgstr "Маска подсети, которая будет предоставлена клиентам." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1833 msgid "" "The default lease expiry time in seconds. A client can be configured to " "override this value." msgstr "" "Время истечения аренды по умолчанию в секундах. Клиент может быть настроен " "для переопределения этого значения." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1834 msgid "" "The maximum allowed length of time, in seconds, for a lease. Should a client " "request a longer lease, a lease will still be issued, but it will only be " "valid for `max-lease-time`." msgstr "" "Максимально допустимая продолжительность аренды в секундах. Если клиент " "запросит аренду на более длительный срок, аренда всё равно будет выдана, но " "будет действительна только в течение `max-lease-time`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1835 msgid "" "The default of `none` disables dynamic DNS updates. Changing this to " "`interim` configures the DHCP server to update a DNS server whenever it " "hands out a lease so that the DNS server knows which IP addresses are " "associated with which computers in the network. Do not change the default " "setting unless the DNS server has been configured to support dynamic DNS." msgstr "" "Значение по умолчанию `none` отключает динамические обновления DNS. " "Изменение этого параметра на `interim` настраивает DHCP-сервер на обновление " "DNS-сервера при каждой выдаче аренды, чтобы DNS-сервер знал, какие IP-адреса " "связаны с какими компьютерами в сети. Не изменяйте значение по умолчанию, " "если DNS-сервер не настроен для поддержки динамического DNS." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1836 msgid "" "This line creates a pool of available IP addresses which are reserved for " "allocation to DHCP clients. 
The range of addresses must be valid for the " "network or subnet specified in the previous line." msgstr "" "Эта строка создает пул доступных IP-адресов, зарезервированных для выделения " "клиентам DHCP. Диапазон адресов должен быть действительным для сети или " "подсети, указанной в предыдущей строке." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1837 msgid "" "Declares the default gateway that is valid for the network or subnet " "specified before the opening `{` bracket." msgstr "" "Объявляет шлюз по умолчанию, действительный для сети или подсети, указанной " "перед открывающей скобкой `{`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1838 msgid "" "Specifies the hardware MAC address of a client so that the DHCP server can " "recognize the client when it makes a request." msgstr "" "Указывает аппаратный MAC-адрес клиента, чтобы DHCP-сервер мог распознать " "клиента при его запросе." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1839 msgid "" "Specifies that this host should always be given the same IP address. Using " "the hostname is correct, since the DHCP server will resolve the hostname " "before returning the lease information." msgstr "" "Указывает, что данный узел всегда должен получать один и тот же IP-адрес. " "Использование имени узла корректно, так как DHCP-сервер разрешит имя узла " "перед возвратом информации об аренде." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1842 msgid "" "This configuration file supports many more options. Refer to dhcpd.conf(5), " "installed with the server, for details and examples." msgstr "" "Этот файл конфигурации поддерживает гораздо больше опций. Подробности и " "примеры смотрите в dhcpd.conf(5), который устанавливается вместе с сервером." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1844 msgid "" "Once the configuration of [.filename]#dhcpd.conf# is complete, enable the " "DHCP server in [.filename]#/etc/rc.conf#:" msgstr "" "После завершения настройки [.filename]#dhcpd.conf# включите DHCP-сервер в [." "filename]#/etc/rc.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1849 #, no-wrap msgid "" "dhcpd_enable=\"YES\"\n" "dhcpd_ifaces=\"dc0\"\n" msgstr "" "dhcpd_enable=\"YES\"\n" "dhcpd_ifaces=\"dc0\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1852 msgid "" "Replace the `dc0` with the interface (or interfaces, separated by " "whitespace) that the DHCP server should listen on for DHCP client requests." msgstr "" "Замените `dc0` на интерфейс (или интерфейсы, разделенные пробелами), на " "котором DHCP-сервер должен ожидать запросы от DHCP-клиентов." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1854 msgid "Start the server by issuing the following command:" msgstr "Запустите сервер, выполнив следующую команду:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1858 #, no-wrap msgid "# service isc-dhcpd start\n" msgstr "# service isc-dhcpd start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1861 msgid "" "Any future changes to the configuration of the server will require the dhcpd " "service to be stopped and then started using man:service[8]." msgstr "" "Любые последующие изменения конфигурации сервера потребуют остановки и " "повторного запуска службы dhcpd с помощью man:service[8]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1864 msgid "" "The DHCP server uses the following files. Note that the manual pages are " "installed with the server software." 
msgstr "" "Сервер DHCP использует следующие файлы. Обратите внимание, что страницы " "руководства устанавливаются вместе с серверным программным обеспечением." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1866 msgid "[.filename]#/usr/local/sbin/dhcpd#" msgstr "[.filename]#/usr/local/sbin/dhcpd#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1868 msgid "More information about the dhcpd server can be found in dhcpd(8)." msgstr "Дополнительную информацию о сервере dhcpd можно найти в dhcpd(8)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1869 msgid "[.filename]#/usr/local/etc/dhcpd.conf#" msgstr "[.filename]#/usr/local/etc/dhcpd.conf#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1872 msgid "" "The server configuration file needs to contain all the information that " "should be provided to clients, along with information regarding the " "operation of the server. This configuration file is described in dhcpd." "conf(5)." msgstr "" "Файл конфигурации сервера должен содержать всю информацию, которую " "необходимо предоставить клиентам, а также сведения о работе сервера. Этот " "конфигурационный файл описан в dhcpd.conf(5)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1873 msgid "[.filename]#/var/db/dhcpd.leases#" msgstr "[.filename]#/var/db/dhcpd.leases#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1876 msgid "" "The DHCP server keeps a database of leases it has issued in this file, which " "is written as a log. Refer to dhcpd.leases(5), which gives a slightly " "longer description." msgstr "" "Сервер DHCP ведет базу данных выданных аренд в этом файле, который " "записывается в виде журнала. Обратитесь к dhcpd.leases(5), где приводится " "несколько более подробное описание." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1877 msgid "[.filename]#/usr/local/sbin/dhcrelay#" msgstr "[.filename]#/usr/local/sbin/dhcrelay#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1881 msgid "" "This daemon is used in advanced environments where one DHCP server forwards " "a request from a client to another DHCP server on a separate network. If " "this functionality is required, install the package:net/isc-dhcp44-relay[] " "package or port. The installation includes dhcrelay(8) which provides more " "detail." msgstr "" "Этот демон используется в сложных средах, где один DHCP-сервер " "перенаправляет запрос от клиента другому DHCP-серверу в отдельной сети. Если " "требуется такая функциональность, установите пакет package:net/isc-dhcp44-" "relay[] или соответствующий порт. Установка включает dhcrelay(8), где " "приведены более подробные сведения." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:1884 #, no-wrap msgid "Domain Name System (DNS)" msgstr "Система доменных имен (DNS)" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1889 msgid "" "Domain Name System (DNS) is the protocol through which domain names are " "mapped to IP addresses, and vice versa. DNS is coordinated across the " "Internet through a somewhat complex system of authoritative root, Top Level " "Domain (TLD), and other smaller-scale name servers, which host and cache " "individual domain information. It is not necessary to run a name server to " "perform DNS lookups on a system." msgstr "" "Система доменных имен (DNS) — это протокол, который сопоставляет доменные " "имена с IP-адресами и наоборот. 
DNS координируется в масштабах Интернета " "через довольно сложную систему авторитетных корневых серверов, серверов " "доменов верхнего уровня (TLD — Top Level Domain) и других менее масштабных " "серверов имен, которые хранят и кэшируют информацию об отдельных доменах. " "Для выполнения DNS-запросов в системе не обязательно запускать сервер имен." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1891 msgid "The following table describes some of the terms associated with DNS:" msgstr "Следующая таблица описывает некоторые термины, связанные с DNS:" #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:1892 #, no-wrap msgid "DNS Terminology" msgstr "Терминология DNS" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1898 #, no-wrap msgid "Definition" msgstr "Определение" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1899 #, no-wrap msgid "Forward DNS" msgstr "Прямая запись DNS (Forward DNS)" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1901 #, no-wrap msgid "Mapping of hostnames to IP addresses." msgstr "Сопоставление имен хостов с IP-адресами." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1902 #, no-wrap msgid "Origin" msgstr "Зона ответственности (Origin)" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1904 #, no-wrap msgid "Refers to the domain covered in a particular zone file." msgstr "Относится к домену, охватываемому определенным файлом зоны." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1905 #, no-wrap msgid "Resolver" msgstr "Резолвер (Resolver)" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1907 #, no-wrap msgid "A system process through which a machine queries a name server for zone information." 
msgstr "Системный процесс, с помощью которого машина запрашивает у сервера имен информацию о зоне." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1908 #, no-wrap msgid "Reverse DNS" msgstr "Обратная запись DNS (Reverse DNS)" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1910 #, no-wrap msgid "Mapping of IP addresses to hostnames." msgstr "Сопоставление IP-адресов с именами хостов." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1911 #, no-wrap msgid "Root zone" msgstr "Корневая зона (Root zone)" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1913 #, no-wrap msgid "The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory." msgstr "Начало иерархии зон Интернета. Все зоны находятся под корневой зоной, аналогично тому, как все файлы в файловой системе находятся под корневым каталогом." #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1914 #, no-wrap msgid "Zone" msgstr "Зона (Zone)" #. type: Table #: documentation/content/en/books/handbook/network-servers/_index.adoc:1915 #, no-wrap msgid "An individual domain, subdomain, or portion of the DNS administered by the same authority." msgstr "Отдельный домен, поддомен или часть DNS, управляемые одной организацией." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1918 msgid "Examples of zones:" msgstr "Примеры зон:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1920 msgid "`.` is how the root zone is usually referred to in documentation." msgstr "`.` — так обычно обозначается корневая зона в документации." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1921 msgid "`org.` is a Top Level Domain (TLD) under the root zone." msgstr "`org.` — это домен верхнего уровня (TLD) в корневой зоне." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1922 msgid "`example.org.` is a zone under the `org.`TLD." msgstr "`example.org.` — это зона под доменом `org.`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1923 msgid "" "`1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall " "under the `192.168.1.*`IP address space." msgstr "" "`1.168.192.in-addr.arpa` — это зона, содержащая ссылки на все IP-адреса, " "входящие в адресное пространство `192.168.1.*`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1927 msgid "" "As one can see, the more specific part of a hostname appears to its left. " "For example, `example.org.` is more specific than `org.`, as `org.` is more " "specific than the root zone. The layout of each part of a hostname is much " "like a file system: the [.filename]#/dev# directory falls within the root, " "and so on." msgstr "" "Как видно, более специфичная часть имени хоста расположена слева. Например, " "`example.org.` более специфично, чем `org.`, а `org.` более специфично, чем " "корневая зона. Структура каждой части имени хоста во многом напоминает " "файловую систему: каталог [.filename]#/dev# находится в корне и так далее." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1928 #, no-wrap msgid "Reasons to Run a Name Server" msgstr "Причины для запуска сервера имен" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1931 msgid "" "Name servers generally come in two forms: authoritative name servers, and " "caching (also known as resolving) name servers." 
msgstr "" "Серверы имен обычно бывают двух видов: авторитетные DNS-серверы и кэширующие " "DNS-серверы (также известные как резолвинг-серверы)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1933 msgid "An authoritative name server is needed when:" msgstr "Авторитетный сервер имен необходим, когда:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1935 msgid "" "One wants to serve DNS information to the world, replying authoritatively to " "queries." msgstr "" "Хочется предоставлять DNS-информацию для всего мира, отвечая на запросы " "авторитетно." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1936 msgid "" "A domain, such as `example.org`, is registered and IP addresses need to be " "assigned to hostnames under it." msgstr "" "Домен, например `example.org`, зарегистрирован, и IP-адреса должны быть " "назначены именам хостов в нём." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1937 msgid "An IP address block requires reverse DNS entries (IP to hostname)." msgstr "Блок IP-адресов требует обратных DNS-записей (IP к имени хоста)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1938 msgid "A backup or second name server, called a slave, will reply to queries." msgstr "" "Резервный или вторичный сервер имен, называемый подчиненным (slave), будет " "отвечать на запросы." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1940 msgid "A caching name server is needed when:" msgstr "Кэширующий сервер имен необходим, когда:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1942 msgid "" "A local DNS server may cache and respond more quickly than querying an " "outside name server." 
msgstr "" "Локальный DNS-сервер может кэшировать и отвечать быстрее, чем запрос к " "внешнему серверу имен." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1946 msgid "" "When one queries for `www.FreeBSD.org`, the resolver usually queries the " "uplink ISP's name server, and retrieves the reply. With a local, caching " "DNS server, the query only has to be made once to the outside world by the " "caching DNS server. Additional queries will not have to go outside the " "local network, since the information is cached locally." msgstr "" "Когда кто-нибудь запрашивает информацию о `www.FreeBSD.org`, резолвер обычно " "обращается к серверу имен провайдера и получает ответ. При использовании " "локального кэширующего DNS-сервера запрос во внешний мир выполняется только " "один раз этим сервером. Дополнительные запросы не будут выходить за пределы " "локальной сети, так как информация сохраняется в локальном кэше." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:1947 #, no-wrap msgid "DNS Server Configuration" msgstr "Конфигурация DNS-сервера" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1952 msgid "" "Unbound is provided in the FreeBSD base system. By default, it will provide " "DNS resolution to the local machine only. While the base system package can " "be configured to provide resolution services beyond the local machine, it is " "recommended that such requirements be addressed by installing Unbound from " "the FreeBSD Ports Collection." msgstr "" "Unbound включён в базовую систему FreeBSD. По умолчанию он предоставляет " "разрешение DNS только для локальной машины. Хотя пакет базовой системы можно " "настроить для предоставления служб разрешения за пределами локальной машины, " "рекомендуется удовлетворять такие требования путём установки Unbound из " "коллекции портов FreeBSD." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1954 msgid "To enable Unbound, add the following to [.filename]#/etc/rc.conf#:" msgstr "" "Чтобы включить Unbound, добавьте следующую строку в [.filename]#/etc/rc." "conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1958 #, no-wrap msgid "local_unbound_enable=\"YES\"\n" msgstr "local_unbound_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1961 msgid "" "Any existing nameservers in [.filename]#/etc/resolv.conf# will be configured " "as forwarders in the new Unbound configuration." msgstr "" "Любые существующие серверы имен в [.filename]#/etc/resolv.conf# будут " "настроены как серверы пересылки в новой конфигурации Unbound." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1967 msgid "" "If any of the listed nameservers do not support DNSSEC, local DNS resolution " "will fail. Be sure to test each nameserver and remove any that fail the " "test. The following command will show the trust tree or a failure for a " "nameserver running on `192.168.1.1`:" msgstr "" "Если какой-либо из перечисленных серверов имен не поддерживает DNSSEC, " "локальное разрешение DNS завершится неудачей. Обязательно протестируйте " "каждый сервер имен и удалите те, которые не прошли проверку. Следующая " "команда покажет дерево доверия или ошибку для сервера имен, работающего на " "`192.168.1.1`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1971 #, no-wrap msgid "% drill -S FreeBSD.org @192.168.1.1\n" msgstr "% drill -S FreeBSD.org @192.168.1.1\n" #. 
type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1975 msgid "Once each nameserver is confirmed to support DNSSEC, start Unbound:" msgstr "" "После подтверждения поддержки DNSSEC каждым сервером имен, запустите Unbound:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1979 #, no-wrap msgid "# service local_unbound onestart\n" msgstr "# service local_unbound onestart\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:1983 msgid "" "This will take care of updating [.filename]#/etc/resolv.conf# so that " "queries for DNSSEC secured domains will now work. For example, run the " "following to validate the FreeBSD.org DNSSEC trust tree:" msgstr "" "Это обеспечит обновление [.filename]#/etc/resolv.conf#, чтобы запросы к " "доменам, защищённым DNSSEC, теперь работали. Например, выполните следующую " "команду для проверки дерева доверия DNSSEC FreeBSD.org:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:1989 #, no-wrap msgid "" "% drill -S FreeBSD.org\n" ";; Number of trusted keys: 1\n" ";; Chasing: freebsd.org. A\n" msgstr "" "% drill -S FreeBSD.org\n" ";; Number of trusted keys: 1\n" ";; Chasing: freebsd.org. A\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2005 #, no-wrap msgid "" "DNSSEC Trust tree:\n" "freebsd.org. (A)\n" "|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)\n" " |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)\n" " |---freebsd.org. (DS keytag: 32659 digest type: 2)\n" " |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)\n" " |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)\n" " |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)\n" " |---org. (DS keytag: 21366 digest type: 1)\n" " | |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)\n" " | |---. 
(DNSKEY keytag: 19036 alg: 8 flags: 257)\n" " |---org. (DS keytag: 21366 digest type: 2)\n" " |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)\n" " |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)\n" ";; Chase successful\n" msgstr "" "DNSSEC Trust tree:\n" "freebsd.org. (A)\n" "|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)\n" " |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)\n" " |---freebsd.org. (DS keytag: 32659 digest type: 2)\n" " |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)\n" " |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)\n" " |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)\n" " |---org. (DS keytag: 21366 digest type: 1)\n" " | |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)\n" " | |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)\n" " |---org. (DS keytag: 21366 digest type: 2)\n" " |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)\n" " |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)\n" ";; Chase successful\n" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2007 #, no-wrap msgid "Authoritative Name Server Configuration" msgstr "Конфигурация авторитетного сервера имен" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2010 msgid "" "FreeBSD does not provide authoritative name server software in the base " "system. Users are encouraged to install third party applications, like " "package:dns/nsd[] or package:dns/bind918[] package or port." msgstr "" "FreeBSD не предоставляет программное обеспечение авторитетного сервера имен " "в базовой системе. Пользователям рекомендуется устанавливать сторонние " "приложения, такие как package:dns/nsd[] или package:dns/bind918[] из пакетов " "или портов." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2012 #, no-wrap msgid "Zero-configuration Networking (mDNS/DNS-SD)" msgstr "Сетевое взаимодействие без настройки (mDNS/DNS-SD)" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2016 msgid "" "https://en.wikipedia.org/wiki/Zero-configuration_networking[Zero-" "configuration networking] (sometimes referred to as _Zeroconf_) is a set of " "technologies, which simplify network configuration. The main parts of " "Zeroconf are:" msgstr "" "https://en.wikipedia.org/wiki/Zero-configuration_networking[Сетевое " "взаимодействие без настройки] (иногда называемое _Zeroconf_) — это набор " "технологий, упрощающих настройку сети. Главные составные части Zeroconf:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2018 msgid "" "Link-Local Addressing providing automatic assignment of numeric network " "addresses." msgstr "" "Линк-локальная адресация, предоставляющая автоматическое назначение числовых " "сетевых адресов." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2019 msgid "" "Multicast DNS (_mDNS_) providing automatic distribution and resolution of " "hostnames." msgstr "" "Мультикаст DNS (_mDNS_), обеспечивающий автоматическое распространение и " "разрешение имён хостов." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2020 msgid "" "DNS-Based Service Discovery (_DNS-SD_) providing automatic discovery of " "service instances." msgstr "" "DNS-Based Service Discovery (_DNS-SD_), обеспечивающий автоматическое " "обнаружение экземпляров сервисов." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2021 #, no-wrap msgid "Configuring and Starting Avahi" msgstr "Настройка и запуск Avahi" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2025 msgid "" "One of the popular implementations of zeroconf is https://avahi.org/" "[Avahi]. 
Avahi can be installed and configured with the following commands:" msgstr "" "Одной из популярных реализаций zeroconf является https://avahi.org/[Avahi]. " "Avahi можно установить и настроить с помощью следующих команд:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2034 #, no-wrap msgid "" "# pkg install avahi-app nss_mdns\n" "# grep -q '^hosts:.*\\' /etc/nsswitch.conf || sed -i \"\" 's/^hosts: .*/& mdns/' /etc/nsswitch.conf\n" "# service dbus enable\n" "# service avahi-daemon enable\n" "# service dbus start\n" "# service avahi-daemon start\n" msgstr "" "# pkg install avahi-app nss_mdns\n" "# grep -q '^hosts:.*\\' /etc/nsswitch.conf || sed -i \"\" 's/^hosts: .*/& mdns/' /etc/nsswitch.conf\n" "# service dbus enable\n" "# service avahi-daemon enable\n" "# service dbus start\n" "# service avahi-daemon start\n" #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2037 #, no-wrap msgid "Apache HTTP Server" msgstr "HTTP сервер Apache" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2041 msgid "" "The open source Apache HTTP Server is the most widely used web server. " "FreeBSD does not install this web server by default, but it can be installed " "from the package:www/apache24[] package or port." msgstr "" "Веб-сервер с открытым исходным кодом Apache является наиболее широко " "используемым веб-сервером. FreeBSD не устанавливает этот веб-сервер по " "умолчанию, но его можно установить из пакета package:www/apache24[] или " "порта." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2044 msgid "" "This section summarizes how to configure and start version 2._x_ of the " "Apache HTTP Server on FreeBSD. For more detailed information about Apache 2." "X and its configuration directives, refer to http://httpd.apache.org/[httpd." "apache.org]." 
msgstr "" "В этом разделе приведена сводка по настройке и запуску версии 2._x_ HTTP-" "сервера Apache на FreeBSD. Более подробная информация об Apache 2.X и его " "директивах конфигурации доступна по ссылке http://httpd.apache.org/[httpd." "apache.org]." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2045 #, no-wrap msgid "Configuring and Starting Apache" msgstr "Настройка и запуск Apache" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2050 msgid "" "In FreeBSD, the main Apache HTTP Server configuration file is installed as [." "filename]#/usr/local/etc/apache2x/httpd.conf#, where _x_ represents the " "version number. This ASCII text file begins comment lines with a `+#+`. " "The most frequently modified directives are:" msgstr "" "В FreeBSD основной файл конфигурации сервера Apache HTTP Server " "устанавливается как [.filename]#/usr/local/etc/apache2x/httpd.conf#, где _x_ " "обозначает номер версии. В этом текстовом файле ASCII строки комментариев " "начинаются с символа `+#+`. Наиболее часто изменяемые " "директивы:" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2051 #, no-wrap msgid "`ServerRoot \"/usr/local\"`" msgstr "`ServerRoot \"/usr/local\"`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2054 msgid "" "Specifies the default directory hierarchy for the Apache installation. " "Binaries are stored in the [.filename]#bin# and [.filename]#sbin# " "subdirectories of the server root and configuration files are stored in the " "[.filename]#etc/apache2x# subdirectory." msgstr "" "Указывает иерархию каталогов по умолчанию для установки Apache. Исполняемые " "файлы хранятся в подкаталогах [.filename]#bin# и [.filename]#sbin# корня " "сервера, а конфигурационные файлы — в подкаталоге [.filename]#etc/apache2x#." #. 
type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2055 #, no-wrap msgid "`ServerAdmin \\you@example.com`" msgstr "`ServerAdmin \\you@example.com`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2058 msgid "" "Change this to the email address to receive problems with the server. This " "address also appears on some server-generated pages, such as error documents." msgstr "" "Замените это на адрес электронной почты для получения сообщений о проблемах " "с сервером. Этот адрес также появляется на некоторых страницах, " "сгенерированных сервером, таких как документы об ошибках." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2059 #, no-wrap msgid "`ServerName www.example.com:80`" msgstr "`ServerName www.example.com:80`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2064 msgid "" "Allows an administrator to set a hostname which is sent back to clients for " "the server. For example, `www` can be used instead of the actual hostname. " "If the system does not have a registered DNS name, enter its IP address " "instead. If the server will listen on an alternate port, change `80` to " "the alternate port number." msgstr "" "Позволяет администратору установить имя хоста, которое отправляется клиентам " "сервера. Например, можно использовать `www` вместо фактического имени хоста. " "Если у системы нет зарегистрированного DNS-имени, введите её IP-адрес. Если " "сервер будет прослушивать альтернативный порт, замените `80` на номер " "альтернативного порта." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2065 #, no-wrap msgid "`DocumentRoot \"/usr/local/www/apache2_x_/data\"`" msgstr "`DocumentRoot \"/usr/local/www/apache2_x_/data\"`" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2068 msgid "" "The directory where documents will be served from. By default, all requests " "are taken from this directory, but symbolic links and aliases may be used to " "point to other locations." msgstr "" "Каталог, из которого будут обслуживаться документы. По умолчанию все запросы " "обрабатываются из этого каталога, но символические ссылки и псевдонимы могут " "использоваться для указания на другие расположения." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2072 msgid "" "It is always a good idea to make a backup copy of the default Apache " "configuration file before making changes. When the configuration of Apache " "is complete, save the file and verify the configuration using `apachectl`. " "Running `apachectl configtest` should return `Syntax OK`." msgstr "" "Всегда рекомендуется создать резервную копию конфигурационного файла Apache " "по умолчанию перед внесением изменений. После завершения настройки Apache " "сохраните файл и проверьте конфигурацию с помощью `apachectl`. Запуск " "команды `apachectl configtest` должен вернуть `Syntax OK`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2074 msgid "" "To launch Apache at system startup, add the following line to [.filename]#/" "etc/rc.conf#:" msgstr "" "Чтобы запускать Apache при загрузке системы, добавьте следующую строку в [." "filename]#/etc/rc.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2078 #, no-wrap msgid "apache24_enable=\"YES\"\n" msgstr "apache24_enable=\"YES\"\n" #. 
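Taken together, the directives discussed above amount to a short, commonly edited portion of [.filename]#httpd.conf#. The following sketch gathers them in one place; the hostname and administrator address are hypothetical placeholders, not values prescribed by this Handbook:

```apacheconf
# Minimal sketch of the most frequently modified httpd.conf directives.
# example.com and admin@example.com are hypothetical placeholders.
ServerRoot "/usr/local"
ServerAdmin admin@example.com
ServerName www.example.com:80
DocumentRoot "/usr/local/www/apache24/data"
```

After editing, run `apachectl configtest` as described above before restarting the service.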
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2081 msgid "" "If Apache should be started with non-default options, the following line may " "be added to [.filename]#/etc/rc.conf# to specify the needed flags:" msgstr "" "Если Apache должен запускаться с нестандартными параметрами, следующую " "строку можно добавить в [.filename]#/etc/rc.conf# для указания необходимых " "флагов:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2085 #, no-wrap msgid "apache24_flags=\"\"\n" msgstr "apache24_flags=\"\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2088 msgid "If apachectl does not report configuration errors, start `httpd` now:" msgstr "" "Если apachectl не сообщает об ошибках конфигурации, то запустите `httpd`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2092 #, no-wrap msgid "# service apache24 start\n" msgstr "# service apache24 start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2096 msgid "" "The `httpd` service can be tested by entering `http://_localhost_` in a web " "browser, replacing _localhost_ with the fully-qualified domain name of the " "machine running `httpd`. The default web page that is displayed is [." "filename]#/usr/local/www/apache24/data/index.html#." msgstr "" "Службу `httpd` можно проверить, введя `http://_localhost_` в веб-браузере, " "заменив _localhost_ на полное доменное имя машины, на которой работает " "`httpd`. Отображаемая веб-страница по умолчанию находится в [.filename]#/" "usr/local/www/apache24/data/index.html#." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2098 msgid "" "The Apache configuration can be tested for errors after making subsequent " "configuration changes while `httpd` is running using the following command:" msgstr "" "Проверить конфигурацию Apache на наличие ошибок после внесения последующих " "изменений в конфигурацию во время работы `httpd` можно с помощью следующей " "команды:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2102 #, no-wrap msgid "# service apache24 configtest\n" msgstr "# service apache24 configtest\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2107 msgid "" "It is important to note that `configtest` is not an man:rc[8] standard, and " "should not be expected to work for all startup scripts." msgstr "" "Важно отметить, что `configtest` не является стандартом man:rc[8], и не " "следует ожидать, что он будет работать для всех стартовых скриптов." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2109 #, no-wrap msgid "Virtual Hosting" msgstr "Виртуальный хостинг" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2115 msgid "" "Virtual hosting allows multiple websites to run on one Apache server. The " "virtual hosts can be _IP-based_ or _name-based_. IP-based virtual hosting " "uses a different IP address for each website. Name-based virtual hosting " "uses the client's HTTP/1.1 headers to figure out the hostname, which allows " "the websites to share the same IP address." msgstr "" "Виртуальный хостинг позволяет запускать несколько веб-сайтов на одном " "сервере Apache. Виртуальные хосты могут быть _IP-ориентированными_ или _имя-" "ориентированными_. IP-ориентированный виртуальный хостинг использует разные " "IP-адреса для каждого сайта. 
Имя-ориентированный виртуальный хостинг " "использует заголовки HTTP/1.1 клиента для определения имени хоста, что " "позволяет сайтам использовать один и тот же IP-адрес." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2118 msgid "" "To set up Apache to use name-based virtual hosting, add a `VirtualHost` block " "for each website. For example, for the webserver named `www.domain.tld` " "with a virtual domain of `www.someotherdomain.tld`, add the following " "entries to [.filename]#httpd.conf#:" msgstr "" "Для настройки Apache с использованием виртуального хоста на основе имен " "добавьте блок `VirtualHost` для каждого веб-сайта. Например, для веб-сервера " "с именем `www.domain.tld` и виртуальным доменом `www.someotherdomain.tld` " "добавьте следующие записи в [.filename]#httpd.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2125 #, no-wrap msgid "" "<VirtualHost *>\n" " ServerName www.domain.tld\n" " DocumentRoot /www/domain.tld\n" "</VirtualHost>\n" msgstr "" "<VirtualHost *>\n" " ServerName www.domain.tld\n" " DocumentRoot /www/domain.tld\n" "</VirtualHost>\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2130 #, no-wrap msgid "" "<VirtualHost *>\n" " ServerName www.someotherdomain.tld\n" " DocumentRoot /www/someotherdomain.tld\n" "</VirtualHost>\n" msgstr "" "<VirtualHost *>\n" " ServerName www.someotherdomain.tld\n" " DocumentRoot /www/someotherdomain.tld\n" "</VirtualHost>\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2133 msgid "" "For each virtual host, replace the values for `ServerName` and " "`DocumentRoot` with the values to be used." msgstr "" "Для каждого виртуального хоста замените значения `ServerName` и " "`DocumentRoot` на те, которые должны использоваться." #. 
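Beyond `ServerName` and `DocumentRoot`, a name-based virtual host often also sets an alias and per-site log files. A sketch of a fuller entry for Apache 2.4, with hypothetical domain names and log paths:

```apacheconf
# Sketch of a more complete name-based virtual host; the domain names
# and log file locations are hypothetical examples.
<VirtualHost *:80>
    ServerName www.someotherdomain.tld
    ServerAlias someotherdomain.tld
    DocumentRoot /www/someotherdomain.tld
    ErrorLog /var/log/someotherdomain.tld-error.log
    CustomLog /var/log/someotherdomain.tld-access.log combined
</VirtualHost>
```

Separate `ErrorLog` and `CustomLog` files make it easier to audit each site independently when several virtual hosts share one server.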
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2135 msgid "" "For more information about setting up virtual hosts, consult the official " "Apache documentation at: http://httpd.apache.org/docs/vhosts/[http://httpd." "apache.org/docs/vhosts/]." msgstr "" "Для получения дополнительной информации о настройке виртуальных хостов " "обратитесь к официальной документации Apache по адресу: http://httpd.apache." "org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/]." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2136 #, no-wrap msgid "Apache Modules" msgstr "Модули Apache" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2140 msgid "" "Apache uses modules to augment the functionality provided by the basic " "server. Refer to http://httpd.apache.org/docs/current/mod/[http://httpd." "apache.org/docs/current/mod/] for a complete listing of and the " "configuration details for the available modules." msgstr "" "Apache использует модули для расширения функциональности, предоставляемой " "базовым сервером. Обратитесь к http://httpd.apache.org/docs/current/mod/" "[http://httpd.apache.org/docs/current/mod/] для получения полного списка и " "деталей настройки доступных модулей." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2145 msgid "" "In FreeBSD, some modules can be compiled with the package:www/apache24[] " "port. Type `make config` within [.filename]#/usr/ports/www/apache24# to see " "which modules are available and which are enabled by default. If the module " "is not compiled with the port, the FreeBSD Ports Collection provides an easy " "way to install many modules. This section describes three of the most " "commonly used modules." msgstr "" "В FreeBSD некоторые модули могут быть скомпилированы с портом package:www/" "apache24[]. 
Введите `make config` в [.filename]#/usr/ports/www/apache24#, " "чтобы увидеть, какие модули доступны и какие включены по умолчанию. Если " "модуль не скомпилирован с портом, коллекция портов FreeBSD предоставляет " "простой способ установки многих модулей. В этом разделе описаны три наиболее " "часто используемых модуля." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2146 #, no-wrap msgid "SSL support" msgstr "Поддержка SSL" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2153 msgid "" "At one point, support for SSL inside of Apache required a secondary module " "called [.filename]#mod_ssl#. This is no longer the case and the default " "install of Apache comes with SSL built into the web server. An example of " "how to enable support for SSL websites is available in the installed file, [." "filename]#httpd-ssl.conf# inside of the [.filename]#/usr/local/etc/apache24/" "extra# directory. Inside this directory is also a sample file named [." "filename]#ssl.conf-sample#. It is recommended that both files be evaluated " "to properly set up secure websites in the Apache web server." msgstr "" "В прошлом поддержка SSL в Apache требовала дополнительного модуля под " "названием [.filename]#mod_ssl#. Сейчас это не так, и стандартная установка " "Apache включает SSL в веб-сервер. Пример настройки поддержки SSL-сайтов " "доступен в установленном файле [.filename]#httpd-ssl.conf# внутри каталога [." "filename]#/usr/local/etc/apache24/extra#. В этом же каталоге также находится " "пример файла с именем [.filename]#ssl.conf-sample#. Рекомендуется изучить " "оба файла для правильной настройки защищённых сайтов в веб-сервере Apache." #. 
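While evaluating those files, a temporary self-signed certificate can be useful for testing an SSL configuration before certificates from a certificate authority are in place. A sketch using openssl(1); the hostname in the subject and the output paths are hypothetical, and browsers will warn about self-signed certificates, as this section notes:

```shell
# Generate a throwaway self-signed certificate and key for testing only.
# The CN and the /tmp output paths are hypothetical examples.
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
    -subj "/CN=www.example.com" \
    -keyout /tmp/server.key -out /tmp/server.crt
# Inspect the subject of the resulting certificate
openssl x509 -noout -subject -in /tmp/server.crt
```

For production sites, obtain certificates from a certificate authority instead, as described later in this section.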
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2155 msgid "" "After the configuration of SSL is complete, the following line must be " "uncommented in the main [.filename]#httpd.conf# to activate the changes on " "the next restart or reload of Apache:" msgstr "" "После настройки SSL необходимо раскомментировать следующую строку в основном " "файле [.filename]#httpd.conf#, чтобы активировать изменения при следующем " "перезапуске или перезагрузке Apache:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2159 #, no-wrap msgid "#Include etc/apache24/extra/httpd-ssl.conf\n" msgstr "#Include etc/apache24/extra/httpd-ssl.conf\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2166 msgid "" "SSL version two and version three have known vulnerability issues. It is " "highly recommended TLS version 1.2 and 1.3 be enabled in place of the older " "SSL options. This can be accomplished by setting the following options in " "the [.filename]#ssl.conf#:" msgstr "" "Версии SSL два и три имеют известные уязвимости. Настоятельно рекомендуется " "использовать версии TLS 1.2 и 1.3 вместо старых вариантов SSL. Это можно " "сделать, установив следующие параметры в [.filename]#ssl.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2173 #, no-wrap msgid "" "SSLProtocol all -SSLv3 -SSLv2 +TLSv1.2 +TLSv1.3\n" "SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1\n" msgstr "" "SSLProtocol all -SSLv3 -SSLv2 +TLSv1.2 +TLSv1.3\n" "SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1\n" #. 
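The uncommenting can be done in an editor or mechanically with sed(1). A portable sketch, demonstrated on a scratch file rather than the live configuration; on a real system the target would be /usr/local/etc/apache24/httpd.conf, edited in place only after making a backup:

```shell
# Demonstrated on a scratch copy; do not run blindly against a live system.
printf '#Include etc/apache24/extra/httpd-ssl.conf\n' > /tmp/httpd.conf.sample
# Strip the leading '#' from the Include line
sed 's|^#\(Include etc/apache24/extra/httpd-ssl.conf\)|\1|' \
    /tmp/httpd.conf.sample > /tmp/httpd.conf.new
cat /tmp/httpd.conf.new
# prints: Include etc/apache24/extra/httpd-ssl.conf
```

Whichever method is used, verify the result with `apachectl configtest` before restarting Apache.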
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2176 msgid "" "To complete the configuration of SSL in the web server, uncomment the " "following line to ensure that the configuration will be pulled into Apache " "during restart or reload:" msgstr "" "Для завершения настройки SSL в веб-сервере раскомментируйте следующую " "строку, чтобы убедиться, что конфигурация будет загружена в Apache при " "перезапуске или обновлении:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2181 #, no-wrap msgid "" "# Secure (SSL/TLS) connections\n" "Include etc/apache24/extra/httpd-ssl.conf\n" msgstr "" "# Secure (SSL/TLS) connections\n" "Include etc/apache24/extra/httpd-ssl.conf\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2184 msgid "" "The following lines must also be uncommented in the [.filename]#httpd.conf# " "to fully support SSL in Apache:" msgstr "" "Следующие строки также должны быть раскомментированы в файле [." "filename]#httpd.conf# для полной поддержки SSL в Apache:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2190 #, no-wrap msgid "" "LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so\n" "LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so\n" "LoadModule ssl_module libexec/apache24/mod_ssl.so\n" msgstr "" "LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so\n" "LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so\n" "LoadModule ssl_module libexec/apache24/mod_ssl.so\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2194 msgid "" "The next step is to work with a certificate authority to have the " "appropriate certificates installed on the system. This will set up a chain " "of trust for the site and prevent any warnings of self-signed certificates." 
msgstr "" "Следующий шаг — работа с центром сертификации для установки соответствующих " "сертификатов в системе. Это создаст цепочку доверия для сайта и предотвратит " "появление предупреждений о самоподписанных сертификатах." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2195 #, no-wrap msgid "[.filename]#mod_perl#" msgstr "[.filename]#mod_perl#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2199 msgid "" "The [.filename]#mod_perl# module makes it possible to write Apache modules " "in Perl. In addition, the persistent interpreter embedded in the server " "avoids the overhead of starting an external interpreter and the penalty of " "Perl start-up time." msgstr "" "Модуль [.filename]#mod_perl# позволяет писать модули Apache на Perl. Кроме " "того, встроенный в сервер постоянный интерпретатор избегает накладных " "расходов на запуск внешнего интерпретатора и задержек при старте Perl." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2202 msgid "" "The [.filename]#mod_perl# can be installed using the package:www/mod_perl2[] " "package or port. Documentation for using this module can be found at http://" "perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/index." "html]." msgstr "" "[.filename]#mod_perl# можно установить с помощью пакета package:www/" "mod_perl2[] или порта. Документация по использованию этого модуля доступна " "по адресу http://perl.apache.org/docs/2.0/index.html[http://perl.apache.org/" "docs/2.0/index.html]." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2203 #, no-wrap msgid "[.filename]#mod_php#" msgstr "[.filename]#mod_php#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2207 msgid "" "_PHP: Hypertext Preprocessor_ (PHP) is a general-purpose scripting language " "that is especially suited for web development. 
Capable of being embedded " "into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of " "allowing web developers to write dynamically generated webpages quickly." msgstr "" "_PHP: Препроцессор Гипертекста_ (PHP) — это язык программирования общего " "назначения, который особенно хорошо подходит для веб-разработки. Способный " "встраиваться в HTML, его синтаксис основан на C, Java(TM) и Perl, что " "позволяет веб-разработчикам быстро создавать динамически генерируемые веб-" "страницы." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2209 msgid "" "Support for PHP for Apache, and any other feature written in the language, " "can be added by installing the appropriate port." msgstr "" "Поддержка PHP для Apache и любых других функций, написанных на этом языке, " "может быть добавлена путем установки соответствующего порта." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2211 msgid "For all supported versions, search the package database using `pkg`:" msgstr "" "Для всех поддерживаемых версий выполните поиск в базе данных пакетов с " "помощью `pkg`:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2215 #, no-wrap msgid "# pkg search php\n" msgstr "# pkg search php\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2220 msgid "" "A list will be displayed including the versions and additional features they " "provide. The components are completely modular, meaning features are " "enabled by installing the appropriate port. To install PHP version 7.4 for " "Apache, issue the following command:" msgstr "" "Будет отображен список, включающий версии и дополнительные возможности, " "которые они предоставляют. Компоненты полностью модульные, что означает, что " "функции включаются путем установки соответствующего порта. 
Чтобы установить " "PHP версии 7.4 для Apache, выполните следующую команду:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2224 #, no-wrap msgid "# pkg install mod_php74\n" msgstr "# pkg install mod_php74\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2227 msgid "" "If any dependency packages need to be installed, they will be installed as " "well." msgstr "" "Если необходимо установить какие-либо зависимости, они также будут " "установлены." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2230 msgid "" "By default, PHP will not be enabled. The following lines will need to be " "added to the Apache configuration file located in [.filename]#/usr/local/etc/" "apache24# to make it active:" msgstr "" "По умолчанию PHP не будет включен. Следующие строки необходимо добавить в " "конфигурационный файл Apache, расположенный в [.filename]#/usr/local/etc/" "apache24#, чтобы активировать его:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2239 #, no-wrap msgid "" "<FilesMatch \"\\.php$\">\n" " SetHandler application/x-httpd-php\n" "</FilesMatch>\n" "<FilesMatch \"\\.phps$\">\n" " SetHandler application/x-httpd-php-source\n" "</FilesMatch>\n" msgstr "" "<FilesMatch \"\\.php$\">\n" " SetHandler application/x-httpd-php\n" "</FilesMatch>\n" "<FilesMatch \"\\.phps$\">\n" " SetHandler application/x-httpd-php-source\n" "</FilesMatch>\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2242 msgid "" "In addition, the `DirectoryIndex` in the configuration file will also need " "to be updated and Apache will either need to be restarted or reloaded for " "the changes to take effect." msgstr "" "В дополнение, параметр `DirectoryIndex` в конфигурационном файле также " "потребуется обновить, и Apache нужно будет перезапустить или перезагрузить, " "чтобы изменения вступили в силу." #. 
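For the `DirectoryIndex` update mentioned above, the usual change is to list [.filename]#index.php# ahead of the existing [.filename]#index.html# so that PHP front pages are served first. A sketch of the adjusted directive:

```apacheconf
# Serve index.php as the directory index, falling back to index.html
<IfModule dir_module>
    DirectoryIndex index.php index.html
</IfModule>
```

After saving the change, restart or reload Apache as described above for it to take effect.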
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2245 msgid "" "Support for many of the PHP features may also be installed by using `pkg`. " "For example, to install support for XML or SSL, install their respective " "ports:" msgstr "" "Поддержка многих функций PHP также может быть установлена с помощью `pkg`. " "Например, для установки поддержки XML или SSL установите соответствующие " "порты:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2249 #, no-wrap msgid "# pkg install php74-xml php74-openssl\n" msgstr "# pkg install php74-xml php74-openssl\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2252 msgid "" "As before, the Apache configuration will need to be reloaded for the changes " "to take effect, even in cases where it was just a module install." msgstr "" "Как и ранее, для вступления изменений в силу необходимо перезагрузить " "конфигурацию Apache, даже в случаях, когда была просто установка модуля." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2254 msgid "" "To perform a graceful restart to reload the configuration, issue the " "following command:" msgstr "" "Для выполнения плавного перезапуска с целью перезагрузки конфигурации " "выполните следующую команду:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2258 #, no-wrap msgid "# apachectl graceful\n" msgstr "# apachectl graceful\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2262 msgid "" "Once the install is complete, there are two methods of obtaining the " "installed PHP support modules and the environmental information of the " "build. 
The first is to install the full PHP binary and run the command " "to gain the information:" msgstr "" "После завершения установки есть два способа получить установленные модули " "поддержки PHP и информацию о среде сборки. Первый — установить полный " "бинарный файл PHP и выполнить команду для получения информации:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2266 #, no-wrap msgid "# pkg install php74\n" msgstr "# pkg install php74\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2272 #, no-wrap msgid "# php -i | less\n" msgstr "# php -i | less\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2275 msgid "" "It is necessary to pass the output to a pager, such as `more` or `less`, to " "more easily digest the amount of output." msgstr "" "Необходимо передать вывод в постраничный просмотрщик, например, `more` или " "`less`, чтобы упростить восприятие большого объема вывода." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2279 msgid "" "Finally, to make any changes to the global configuration of PHP there is a " "well documented file installed into [.filename]#/usr/local/etc/php.ini#. At " "the time of install, this file will not exist because there are two versions " "to choose from, one is [.filename]#php.ini-development# and the other is [." "filename]#php.ini-production#. These are starting points to assist " "administrators in their deployment." msgstr "" "Наконец, для внесения изменений в глобальную конфигурацию PHP существует " "хорошо документированный файл, установленный в [.filename]#/usr/local/etc/" "php.ini#. На момент установки этот файл не будет существовать, так как есть " "две версии на выбор: [.filename]#php.ini-development# и [.filename]#php.ini-" "production#. Они представляют собой отправные точки, помогающие " "администраторам в развертывании." #. 
type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2280 #, no-wrap msgid "HTTP2 Support" msgstr "Поддержка HTTP2" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2285 msgid "" "Apache support for the HTTP2 protocol is included by default when installing " "the port with `pkg`. The new version of HTTP includes many improvements " "over the previous version, including utilizing a single connection to a " "website, reducing overall roundtrips of TCP connections. Also, packet " "header data is compressed and HTTP2 requires encryption by default." msgstr "" "Поддержка протокола HTTP2 в Apache включена по умолчанию при установке порта " "с помощью `pkg`. Новая версия HTTP содержит множество улучшений по сравнению " "с предыдущей версией, включая использование одного соединения с веб-сайтом, " "что сокращает общее количество циклов TCP-соединений. Кроме того, данные " "заголовков пакетов сжимаются, а HTTP2 по умолчанию требует шифрования." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2288 msgid "" "When Apache is configured to only use HTTP2, web browsers will require " "secure, encrypted HTTPS connections. When Apache is configured to use both " "versions, HTTP1.1 will be considered a fall back option if any issues arise " "during the connection." msgstr "" "Когда Apache настроен на использование только HTTP2, веб-браузеры будут " "требовать безопасное, зашифрованное HTTPS-соединение. Если Apache настроен " "на использование обеих версий, HTTP1.1 будет считаться резервным вариантом " "при возникновении проблем с соединением." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2291 msgid "" "While this change does require administrators to make changes, they are " "positive and equate to a more secure Internet for everyone. The changes are " "only required for sites not currently implementing SSL and TLS." 
msgstr "" "Хотя это изменение требует от администраторов внесения изменений, они " "положительные и способствуют более безопасному Интернету для всех. Изменения " "требуются только для сайтов, которые в настоящее время не используют SSL и " "TLS." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2296 msgid "" "This configuration depends on the previous sections, including TLS support. " "It is recommended those instructions be followed before continuing with this " "configuration." msgstr "" "Эта конфигурация зависит от предыдущих разделов, включая поддержку TLS. " "Рекомендуется выполнить эти инструкции перед продолжением с данной " "конфигурацией." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2299 msgid "" "Start the process by enabling the http2 module by uncommenting the line in [." "filename]#/usr/local/etc/apache24/httpd.conf# and replace the mpm_prefork " "module with mpm_event as the former does not support HTTP2." msgstr "" "Начните процесс, включив модуль http2, раскомментировав строку в [." "filename]#/usr/local/etc/apache24/httpd.conf#, и замените модуль mpm_prefork " "на mpm_event, так как первый не поддерживает HTTP2." #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2304 #, no-wrap msgid "" "LoadModule http2_module libexec/apache24/mod_http2.so\n" "LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so\n" msgstr "" "LoadModule http2_module libexec/apache24/mod_http2.so\n" "LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2312 msgid "" "There is a separate [.filename]#mod_http2# port that is available. It " "exists to deliver security and bug fixes quicker than the module installed " "with the bundled [.filename]#apache24# port. It is not required for HTTP2 " "support but is available. 
When installed, the [.filename]#mod_h2.so# should " "be used in place of [.filename]#mod_http2.so# in the Apache configuration." msgstr "" "Существует отдельный порт [.filename]#mod_http2#, который доступен. Он " "предназначен для более быстрого получения исправлений безопасности и ошибок " "по сравнению с модулем, установленным через встроенный порт [." "filename]#apache24#. Он не обязателен для поддержки HTTP2, но доступен. При " "установке следует использовать [.filename]#mod_h2.so# вместо [." "filename]#mod_http2.so# в конфигурации Apache." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2316 msgid "" "There are two methods to implement HTTP2 in Apache; one way is globally for " "all sites and each VirtualHost running on the system. To enable HTTP2 " "globally, add the following line under the ServerName directive:" msgstr "" "Существует два метода реализации HTTP2 в Apache; один способ — глобально для " "всех сайтов и каждого VirtualHost, работающего в системе. Чтобы включить " "HTTP2 глобально, добавьте следующую строку под директивой ServerName:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2320 #, no-wrap msgid "Protocols h2 http/1.1\n" msgstr "Protocols h2 http/1.1\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2325 msgid "" "To enable HTTP2 over plaintext, use `h2 h2c http/1.1` in the [.filename]#httpd." "conf#." msgstr "" "Для включения HTTP2 в незашифрованном виде используйте `h2 h2c http/1.1` в " "файле [.filename]#httpd.conf#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2329 msgid "" "Having the h2c here will allow plaintext HTTP2 data to pass on the system " "but is not recommended. In addition, using the http/1.1 here will allow " "fallback to the HTTP1.1 version of the protocol should it be needed by the " "system."
msgstr "" "Наличие здесь h2c позволит передавать незашифрованные данные HTTP2 в " "системе, но это не рекомендуется. Кроме того, использование здесь http/1.1 " "позволит системе вернуться к версии протокола HTTP1.1, если это потребуется." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2331 msgid "" "To enable HTTP2 for individual VirtualHosts, add the same line within the " "VirtualHost directive in either [.filename]#httpd.conf# or [.filename]#httpd-" "ssl.conf#." msgstr "" "Для включения HTTP2 для отдельных VirtualHosts добавьте ту же строку в " "директиву VirtualHost в файле [.filename]#httpd.conf# или [.filename]#httpd-" "ssl.conf#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2333 msgid "" "Reload the configuration using the `apachectl`[parameter]#reload# command " "and test the configuration either by using either of the following methods " "after visiting one of the hosted pages:" msgstr "" "Перезагрузите конфигурацию с помощью команды `apachectl`[parameter]#reload# " "и проверьте конфигурацию каким-либо из следующих способов после посещения " "одной из страниц на сервере:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2337 #, no-wrap msgid "# grep \"HTTP/2.0\" /var/log/httpd-access.log\n" msgstr "# grep \"HTTP/2.0\" /var/log/httpd-access.log\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2340 msgid "This should return something similar to the following:" msgstr "Это должно вернуть что-то похожее на следующее:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2347 #, no-wrap msgid "" "192.168.1.205 - - [18/Oct/2020:18:34:36 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.2.205 - - [18/Oct/2020:19:19:57 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.0.205 - - [18/Oct/2020:19:20:52 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.2.205 - - [18/Oct/2020:19:23:10 -0400] \"GET / HTTP/2.0\" 304 -\n" msgstr "" "192.168.1.205 - - [18/Oct/2020:18:34:36 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.2.205 - - [18/Oct/2020:19:19:57 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.0.205 - - [18/Oct/2020:19:20:52 -0400] \"GET / HTTP/2.0\" 304 -\n" "192.0.2.205 - - [18/Oct/2020:19:23:10 -0400] \"GET / HTTP/2.0\" 304 -\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2350 msgid "" "The other method is using the web browser's built in site debugger or " "`tcpdump`; however, using either method is beyond the scope of this document." msgstr "" "Другой способ — использование встроенного в веб-браузер отладчика сайтов или " "`tcpdump`; однако использование любого из этих методов выходит за рамки " "данного документа." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2353 msgid "" "Support for HTTP2 reverse proxy connections by using the [." "filename]#mod_proxy_http2.so# module. When configuring the ProxyPass or " "RewriteRules [P] statements, they should use h2:// for the connection." msgstr "" "Поддержка обратных прокси-соединений HTTP2 с использованием модуля [." "filename]#mod_proxy_http2.so#. При настройке директив ProxyPass или " "RewriteRules с флагом [P] следует использовать h2:// для соединения." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2354 #, no-wrap msgid "Dynamic Websites" msgstr "Динамические веб-сайты" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2358 msgid "" "In addition to mod_perl and mod_php, other languages are available for " "creating dynamic web content. These include Django and Ruby on Rails." msgstr "" "В дополнение к mod_perl и mod_php, доступны другие языки для создания " "динамического веб-содержимого. Среди них Django и Ruby on Rails." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2359 #, no-wrap msgid "Django" msgstr "Django" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2365 msgid "" "Django is a BSD-licensed framework designed to allow developers to write " "high performance, elegant web applications quickly. It provides an object-" "relational mapper so that data types are developed as Python objects. A " "rich dynamic database-access API is provided for those objects without the " "developer ever having to write SQL. It also provides an extensible template " "system so that the logic of the application is separated from the HTML " "presentation." msgstr "" "Django — это фреймворк под лицензией BSD, разработанный для того, чтобы " "позволить разработчикам быстро создавать высокопроизводительные и элегантные " "веб-приложения. Он предоставляет объектно-реляционный преобразователь (ORM), " "позволяющий разрабатывать типы данных как объекты Python. Для этих объектов " "предоставляется богатый динамический API доступа к базе данных, без " "необходимости написания разработчиком кода на SQL. Также имеется расширяемая " "система шаблонов, чтобы логика приложения была отделена от HTML-" "представления." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2369 msgid "" "Django depends on [.filename]#mod_python#, and an SQL database engine. In " "FreeBSD, the package:www/py-django[] port automatically installs [." 
"filename]#mod_python# and supports the PostgreSQL, MySQL, or SQLite " "databases, with the default being SQLite. To change the database engine, " "type `make config` within [.filename]#/usr/ports/www/py-django#, then " "install the port." msgstr "" "Django зависит от [.filename]#mod_python# и движка SQL-базы данных. В " "FreeBSD порт package:www/py-django[] автоматически устанавливает [." "filename]#mod_python# и поддерживает базы данных PostgreSQL, MySQL или " "SQLite, по умолчанию используется SQLite. Чтобы изменить движок базы данных, " "введите `make config` в [.filename]#/usr/ports/www/py-django#, затем " "установите порт." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2372 msgid "" "Once Django is installed, the application will need a project directory " "along with the Apache configuration in order to use the embedded Python " "interpreter. This interpreter is used to call the application for specific " "URLs on the site." msgstr "" "После установки Django приложению понадобится каталог проекта вместе с " "конфигурацией Apache для использования встроенного интерпретатора Python. " "Этот интерпретатор используется для вызова приложения при обращении к " "определённым URL на сайте." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2374 msgid "" "To configure Apache to pass requests for certain URLs to the web " "application, add the following to [.filename]#httpd.conf#, specifying the " "full path to the project directory:" msgstr "" "Для настройки Apache для передачи запросов определенных URL веб-приложению " "добавьте следующее в [.filename]#httpd.conf#, указав полный путь к каталогу " "проекта:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2385 #, no-wrap msgid "" "\n" " SetHandler python-program\n" " PythonPath \"['/dir/to/the/django/packages/'] + sys.path\"\n" " PythonHandler django.core.handlers.modpython\n" " SetEnv DJANGO_SETTINGS_MODULE mysite.settings\n" " PythonAutoReload On\n" " PythonDebug On\n" "\n" msgstr "" "\n" " SetHandler python-program\n" " PythonPath \"['/dir/to/the/django/packages/'] + sys.path\"\n" " PythonHandler django.core.handlers.modpython\n" " SetEnv DJANGO_SETTINGS_MODULE mysite.settings\n" " PythonAutoReload On\n" " PythonDebug On\n" "\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2388 msgid "" "Refer to https://docs.djangoproject.com[https://docs.djangoproject.com] for " "more information on how to use Django." msgstr "" "Обратитесь к https://docs.djangoproject.com[https://docs.djangoproject.com] " "для получения дополнительной информации о том, как использовать Django." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2389 #, no-wrap msgid "Ruby on Rails" msgstr "Ruby on Rails" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2394 msgid "" "Ruby on Rails is another open source web framework that provides a full " "development stack. It is optimized to make web developers more productive " "and capable of writing powerful applications quickly. On FreeBSD, it can be " "installed using the package:www/rubygem-rails[] package or port." msgstr "" "Ruby on Rails — это еще один фреймворк с открытым исходным кодом для веб-" "разработки, предоставляющий полный стек разработки. Он оптимизирован для " "повышения продуктивности веб-разработчиков и позволяет быстро создавать " "мощные приложения. В FreeBSD его можно установить с помощью пакета package:" "www/rubygem-rails[] или порта." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2396 msgid "" "Refer to http://guides.rubyonrails.org[http://guides.rubyonrails.org] for " "more information on how to use Ruby on Rails." msgstr "" "Обратитесь к http://guides.rubyonrails.org[http://guides.rubyonrails.org] " "для получения дополнительной информации о том, как использовать Ruby on " "Rails." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2398 #, no-wrap msgid "File Transfer Protocol (FTP)" msgstr "Протокол передачи файлов (FTP)" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2402 msgid "" "The File Transfer Protocol (FTP) provides users with a simple way to " "transfer files to and from an FTP server. FreeBSD includes FTP server " "software, ftpd, in the base system." msgstr "" "Протокол передачи файлов (FTP — File Transfer Protocol) предоставляет " "пользователям простой способ передачи файлов на FTP-сервер и с него. В " "базовую систему FreeBSD включено программное обеспечение FTP-сервера — ftpd." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2406 msgid "" "FreeBSD provides several configuration files for controlling access to the " "FTP server. This section summarizes these files. Refer to man:ftpd[8] for " "more details about the built-in FTP server." msgstr "" "В FreeBSD предусмотрено несколько файлов конфигурации для управления " "доступом к FTP-серверу. В этом разделе приводится их краткое описание. " "Подробнее о встроенном FTP-сервере можно узнать в man:ftpd[8]." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2407 #, no-wrap msgid "Configuration" msgstr "Конфигурация" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2413 msgid "" "The most important configuration step is deciding which accounts will be " "allowed access to the FTP server. 
A FreeBSD system has a number of system " "accounts which should not be allowed FTP access. The list of users " "disallowed any FTP access can be found in [.filename]#/etc/ftpusers#. By " "default, it includes system accounts. Additional users that should not be " "allowed access to FTP can be added." msgstr "" "Самый важный этап настройки — определение учётных записей, которым будет " "разрешён доступ к FTP-серверу. В системе FreeBSD существует ряд системных " "учётных записей, которым не следует разрешать доступ по FTP. Список " "пользователей, которым запрещён любой доступ по FTP, можно найти в [." "filename]#/etc/ftpusers#. По умолчанию в него включены системные учётные " "записи. Можно добавить дополнительных пользователей, которым не следует " "разрешать доступ к FTP." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2417 msgid "" "In some cases it may be desirable to restrict the access of some users " "without preventing them completely from using FTP. This can be accomplished " "be creating [.filename]#/etc/ftpchroot# as described in man:ftpchroot[5]. " "This file lists users and groups subject to FTP access restrictions." msgstr "" "В некоторых случаях может быть желательно ограничить доступ некоторых " "пользователей, не запрещая им полностью использовать FTP. Это можно сделать, " "создав файл [.filename]#/etc/ftpchroot#, как описано в man:ftpchroot[5]. В " "этом файле перечислены пользователи и группы, подлежащие ограничениям " "доступа к FTP." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2422 msgid "" "To enable anonymous FTP access to the server, create a user named `ftp` on " "the FreeBSD system. Users will then be able to log on to the FTP server " "with a username of `ftp` or `anonymous`. When prompted for the password, " "any input will be accepted, but by convention, an email address should be " "used as the password. 
The FTP server will call man:chroot[2] when an " "anonymous user logs in, to restrict access to only the home directory of the " "`ftp` user." msgstr "" "Для обеспечения анонимного доступа по FTP к серверу создайте пользователя с " "именем `ftp` в системе FreeBSD. Пользователи смогут входить на FTP-сервер " "под именем `ftp` или `anonymous`. При запросе пароля будет принят любой " "ввод, но по соглашению в качестве пароля следует использовать адрес " "электронной почты. FTP-сервер вызовет man:chroot[2] при входе анонимного " "пользователя, чтобы ограничить доступ только домашним каталогом пользователя " "`ftp`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2427 msgid "" "There are two text files that can be created to specify welcome messages to " "be displayed to FTP clients. The contents of [.filename]#/etc/ftpwelcome# " "will be displayed to users before they reach the login prompt. After a " "successful login, the contents of [.filename]#/etc/ftpmotd# will be " "displayed. Note that the path to this file is relative to the login " "environment, so the contents of [.filename]#~ftp/etc/ftpmotd# would be " "displayed for anonymous users." msgstr "" "Существуют два текстовых файла, которые можно создать для отображения " "приветственных сообщений клиентам FTP. Содержимое файла [.filename]#/etc/" "ftpwelcome# будет показано пользователям до появления запроса на вход. После " "успешного входа будет отображено содержимое файла [.filename]#/etc/ftpmotd#. " "Обратите внимание, что путь к этому файлу указывается относительно окружения " "входа, поэтому для анонимных пользователей будет отображаться содержимое " "файла [.filename]#~ftp/etc/ftpmotd#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2429 msgid "" "Once the FTP server has been configured, set the appropriate variable in [." 
"filename]#/etc/rc.conf# to start the service during boot:" msgstr "" "После настройки FTP-сервера установите соответствующую переменную в [." "filename]#/etc/rc.conf#, чтобы служба запускалась при загрузке:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2433 #, no-wrap msgid "ftpd_enable=\"YES\"\n" msgstr "ftpd_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2436 msgid "To start the service now:" msgstr "Чтобы запустить службу сейчас:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2440 #, no-wrap msgid "# service ftpd start\n" msgstr "# service ftpd start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2443 msgid "Test the connection to the FTP server by typing:" msgstr "Протестируйте подключение к FTP-серверу, набрав:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2447 #, no-wrap msgid "% ftp localhost\n" msgstr "% ftp localhost\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2452 msgid "" "The ftpd daemon uses man:syslog[3] to log messages. By default, the system " "log daemon will write messages related to FTP in [.filename]#/var/log/" "xferlog#. The location of the FTP log can be modified by changing the " "following line in [.filename]#/etc/syslog.conf#:" msgstr "" "Демон ftpd использует man:syslog[3] для записи сообщений. По умолчанию, " "демон системного журнала записывает сообщения, связанные с FTP, в [." "filename]#/var/log/xferlog#. Местоположение журнала FTP может быть изменено " "путём редактирования следующей строки в [.filename]#/etc/syslog.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2456 #, no-wrap msgid "ftp.info /var/log/xferlog\n" msgstr "ftp.info /var/log/xferlog\n" #. 
type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2465 msgid "" "Be aware of the potential problems involved with running an anonymous FTP " "server. In particular, think twice about allowing anonymous users to upload " "files. It may turn out that the FTP site becomes a forum for the trade of " "unlicensed commercial software or worse. If anonymous FTP uploads are " "required, then verify the permissions so that these files cannot be read by " "other anonymous users until they have been reviewed by an administrator." msgstr "" "Имейте в виду потенциальные проблемы, связанные с запуском анонимного FTP-" "сервера. В частности, хорошо подумайте, прежде чем разрешать анонимным " "пользователям загружать файлы. Может оказаться, что FTP-сайт станет " "площадкой для обмена нелицензионным коммерческим программным обеспечением " "или даже чем-то хуже. Если загрузка файлов анонимными пользователями " "необходима, убедитесь в правильности настроек прав доступа, чтобы эти файлы " "не могли быть прочитаны другими анонимными пользователями до тех пор, пока " "их не проверит администратор." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2468 #, no-wrap msgid "File and Print Services for Microsoft(R) Windows(R) Clients (Samba)" msgstr "Службы файлов и печати для клиентов Microsoft(R) Windows(R) (Samba)" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2475 msgid "" "Samba is a popular open source software package that provides file and print " "services using the SMB/CIFS protocol. This protocol is built into " "Microsoft(R) Windows(R) systems. It can be added to non-Microsoft(R) " "Windows(R) systems by installing the Samba client libraries. The protocol " "allows clients to access shared data and printers. These shares can be " "mapped as a local disk drive and shared printers can be used as if they were " "local printers."
msgstr "" "Samba — это популярный пакет открытого программного обеспечения, " "предоставляющий службы файлов и печати с использованием протокола SMB/" "CIFS. Этот протокол встроен в системы Microsoft(R) Windows(R). Он может быть " "добавлен в системы, отличные от Microsoft(R) Windows(R), путем установки " "клиентских библиотек Samba. Протокол позволяет клиентам получать доступ к " "общим данным и принтерам. Эти ресурсы могут быть подключены как локальный " "диск, а общие принтеры могут использоваться так, как если бы они были " "локальными." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2478 msgid "" "On FreeBSD, the Samba client libraries can be installed using the package:" "net/samba416[] port or package. The client provides the ability for a " "FreeBSD system to access SMB/CIFS shares in a Microsoft(R) Windows(R) " "network." msgstr "" "На FreeBSD клиентские библиотеки Samba могут быть установлены с помощью " "порта или пакета package:net/samba416[]. Клиент предоставляет возможность " "системе FreeBSD получать доступ к общим ресурсам SMB/CIFS в сети " "Microsoft(R) Windows(R)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2481 msgid "" "A FreeBSD system can also be configured to act as a Samba server by " "installing the same package:net/samba416[] port or package. This allows the " "administrator to create SMB/CIFS shares on the FreeBSD system which can be " "accessed by clients running Microsoft(R) Windows(R) or the Samba client " "libraries." msgstr "" "Система FreeBSD также может быть настроена в качестве сервера Samba путем " "установки порта или пакета package:net/samba416[]. Это позволяет " "администратору создавать общие ресурсы SMB/CIFS на системе FreeBSD, к " "которым могут обращаться клиенты под управлением Microsoft(R) Windows(R) или " "использующие клиентские библиотеки Samba." #. 
type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2482 #, no-wrap msgid "Server Configuration" msgstr "Конфигурация сервера" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2486 msgid "" "Samba is configured in [.filename]#/usr/local/etc/smb4.conf#. This file " "must be created before Samba can be used." msgstr "" "Samba настраивается в файле [.filename]#/usr/local/etc/smb4.conf#. Этот файл " "должен быть создан до начала использования Samba." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2489 msgid "" "A simple [.filename]#smb4.conf# to share directories and printers with " "Windows(R) clients in a workgroup is shown here. For more complex setups " "involving LDAP or Active Directory, it is easier to use man:samba-tool[8] to " "create the initial [.filename]#smb4.conf#." msgstr "" "Простой пример [.filename]#smb4.conf# для общего доступа к каталогам и " "принтерам с клиентами Windows(R) в рабочей группе показан ниже. Для более " "сложных настроек, включающих LDAP или Active Directory, проще использовать " "man:samba-tool[8] для создания начального [.filename]#smb4.conf#." #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2499 #, no-wrap msgid "" "[global]\n" "workgroup = WORKGROUP\n" "server string = Samba Server Version %v\n" "netbios name = ExampleMachine\n" "wins support = Yes\n" "security = user\n" "passdb backend = tdbsam\n" msgstr "" "[global]\n" "workgroup = WORKGROUP\n" "server string = Samba Server Version %v\n" "netbios name = ExampleMachine\n" "wins support = Yes\n" "security = user\n" "passdb backend = tdbsam\n" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2511 #, no-wrap msgid "" "# Example: share /usr/src accessible only to 'developer' user\n" "[src]\n" "path = /usr/src\n" "valid users = developer\n" "writable = yes\n" "browsable = yes\n" "read only = no\n" "guest ok = no\n" "public = no\n" "create mask = 0666\n" "directory mask = 0755\n" msgstr "" "# Example: share /usr/src accessible only to 'developer' user\n" "[src]\n" "path = /usr/src\n" "valid users = developer\n" "writable = yes\n" "browsable = yes\n" "read only = no\n" "guest ok = no\n" "public = no\n" "create mask = 0666\n" "directory mask = 0755\n" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2514 #, no-wrap msgid "Global Settings" msgstr "Глобальные настройки" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2517 msgid "" "Settings that describe the network are added in [.filename]#/usr/local/etc/" "smb4.conf#:" msgstr "" "Настройки, описывающие сеть, добавляются в [.filename]#/usr/local/etc/smb4." "conf#:" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2518 #, no-wrap msgid "`workgroup`" msgstr "`workgroup`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2520 msgid "The name of the workgroup to be served." msgstr "Имя рабочей группы, которая будет обслуживаться." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2521 #, no-wrap msgid "`netbios name`" msgstr "`netbios name`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2524 msgid "" "The NetBIOS name by which a Samba server is known. By default, it is the " "same as the first component of the host's DNS name." msgstr "" "Имя NetBIOS, под которым известен сервер Samba. По умолчанию оно совпадает с " "первой частью DNS-имени хоста." #. 
type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2525 #, no-wrap msgid "`server string`" msgstr "`server string`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2527 msgid "" "The string that will be displayed in the output of `net view` and some other " "networking tools that seek to display descriptive text about the server." msgstr "" "Строка, которая будет отображаться в выводе команды `net view` и некоторых " "других сетевых инструментов, предназначенных для отображения описательного " "текста о сервере." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2528 #, no-wrap msgid "`wins support`" msgstr "`wins support`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2531 msgid "" "Whether Samba will act as a WINS server. Do not enable support for WINS on " "more than one server on the network." msgstr "" "Будет ли Samba выступать в качестве сервера WINS. Не следует включать " "поддержку WINS более чем на одном сервере в сети." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2533 #, no-wrap msgid "Security Settings" msgstr "Настройки безопасности" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2537 msgid "" "The most important settings in [.filename]#/usr/local/etc/smb4.conf# are the " "security model and the backend password format. These directives control " "the options:" msgstr "" "Важнейшие настройки в [.filename]#/usr/local/etc/smb4.conf# — это модель " "безопасности и формат хранения паролей. Эти параметры управляются следующими " "директивами:" #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2538 #, no-wrap msgid "`security`" msgstr "`security`" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2541 msgid "" "If the clients use usernames that are the same as their usernames on the " "FreeBSD machine, user level security should be used. `security = user` is " "the default security policy and it requires clients to first log on before " "they can access shared resources." msgstr "" "Если клиенты используют имена пользователей, совпадающие с их именами на " "машине FreeBSD, следует использовать уровень безопасности пользователя. " "`security = user` — это политика безопасности по умолчанию, которая требует " "от клиентов сначала войти в систему, прежде чем они смогут получить доступ к " "общим ресурсам." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2543 msgid "" "Refer to man:smb.conf[5] to learn about other supported settings for the " "`security` option." msgstr "" "Обратитесь к man:smb.conf[5], чтобы узнать о других поддерживаемых " "настройках для опции `security`." #. type: Labeled list #: documentation/content/en/books/handbook/network-servers/_index.adoc:2544 #, no-wrap msgid "`passdb backend`" msgstr "`passdb backend`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2550 msgid "" "Samba has several different backend authentication models. Clients may be " "authenticated with LDAP, NIS+, an SQL database, or a modified password " "file. The recommended authentication method, `tdbsam`, is ideal for simple " "networks and is covered here. For larger or more complex networks, " "`ldapsam` is recommended. `smbpasswd` was the former default and is now " "obsolete." msgstr "" "Samba поддерживает несколько различных механизмов (backend) аутентификации. " "Клиенты могут быть аутентифицированы с помощью LDAP, NIS+, SQL-базы " "данных или модифицированного файла паролей. Рекомендуемый метод " "аутентификации `tdbsam` идеально подходит для простых сетей, и мы его " "рассмотрим здесь. 
Для более крупных или сложных сетей рекомендуется " "`ldapsam`. `smbpasswd` был прежним методом по умолчанию и теперь устарел." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2551 #, no-wrap msgid "Samba Users" msgstr "Пользователи Samba" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2555 msgid "" "FreeBSD user accounts must be mapped to the `SambaSAMAccount` database for " "Windows(R) clients to access the share. Map existing FreeBSD user accounts " "using man:pdbedit[8]:" msgstr "" "Пользовательские учетные записи FreeBSD должны быть сопоставлены с базой " "данных `SambaSAMAccount` для доступа клиентов Windows(R) к общему ресурсу. " "Сопоставьте существующие учетные записи FreeBSD с помощью man:pdbedit[8]:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2559 #, no-wrap msgid "# pdbedit -a -u username\n" msgstr "# pdbedit -a -u username\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2563 msgid "" "This section has only mentioned the most commonly used settings. Refer to " "the https://wiki.samba.org[Official Samba Wiki] for additional information " "about the available configuration options." msgstr "" "В этом разделе упомянуты только наиболее часто используемые настройки. " "Дополнительную информацию о доступных параметрах конфигурации можно найти на " "https://wiki.samba.org[Официальном вики Samba]." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2564 #, no-wrap msgid "Starting Samba" msgstr "Запуск Samba" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2567 msgid "" "To enable Samba at boot time, add the following line to [.filename]#/etc/rc." "conf#:" msgstr "" "Чтобы включить Samba при загрузке, добавьте следующую строку в [.filename]#/" "etc/rc.conf#:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2571 #, no-wrap msgid "samba_server_enable=\"YES\"\n" msgstr "samba_server_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2574 msgid "To start Samba now:" msgstr "Чтобы сейчас запустить Samba:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2581 #, no-wrap msgid "" "# service samba_server start\n" "Performing sanity check on Samba configuration: OK\n" "Starting nmbd.\n" "Starting smbd.\n" msgstr "" "# service samba_server start\n" "Performing sanity check on Samba configuration: OK\n" "Starting nmbd.\n" "Starting smbd.\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2586 msgid "" "Samba consists of three separate daemons. Both the nmbd and smbd daemons " "are started by `samba_enable`. If winbind name resolution is also required, " "set:" msgstr "" "Samba состоит из трёх отдельных демонов. Оба демона nmbd и smbd запускаются " "параметром `samba_enable`. Если также требуется разрешение имён через " "winbind, укажите:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2590 #, no-wrap msgid "winbindd_enable=\"YES\"\n" msgstr "winbindd_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2593 msgid "Samba can be stopped at any time by typing:" msgstr "Samba можно остановить в любой момент, набрав:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2597 #, no-wrap msgid "# service samba_server stop\n" msgstr "# service samba_server stop\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2601 msgid "" "Samba is a complex software suite with functionality that allows broad " "integration with Microsoft(R) Windows(R) networks. 
For more information " "about functionality beyond the basic configuration described here, refer to " "https://www.samba.org[https://www.samba.org]." msgstr "" "Samba — это комплексный программный комплект, функциональность которого " "обеспечивает широкую интеграцию с сетями Microsoft(R) Windows(R). Для " "получения дополнительной информации о возможностях, выходящих за рамки " "базовой конфигурации, описанной здесь, обратитесь к https://www.samba." "org[https://www.samba.org]." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2603 #, no-wrap msgid "Clock Synchronization with NTP" msgstr "Синхронизация времени с помощью NTP" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2609 msgid "" "Over time, a computer's clock is prone to drift. This is problematic as " "many network services require the computers on a network to share the same " "accurate time. Accurate time is also needed to ensure that file timestamps " "stay consistent. The Network Time Protocol (NTP) is one way to provide " "clock accuracy in a network." msgstr "" "Со временем часы компьютера могут отставать или спешить. Это создаёт " "проблемы, так как многие сетевые службы требуют, чтобы компьютеры в сети " "использовали одинаковое точное время. Точное время также необходимо для " "обеспечения согласованности временных меток файлов. Протокол сетевого " "времени (NTP — Network Time Protocol) — это один из способов обеспечить " "точность часов в сети." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2611 msgid "" "FreeBSD includes man:ntpd[8] which can be configured to query other NTP " "servers to synchronize the clock on that machine or to provide time services " "to other computers in the network." 
msgstr "" "FreeBSD включает man:ntpd[8], который можно настроить для запроса к другим " "серверам NTP с целью синхронизации часов на этом компьютере или для " "предоставления сервиса времени другим компьютерам в сети." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2614 msgid "" "This section describes how to configure ntpd on FreeBSD. Further " "documentation can be found in [.filename]#/usr/share/doc/ntp/# in HTML " "format." msgstr "" "В этом разделе описывается, как настроить ntpd в FreeBSD. Дополнительная " "документация доступна в [.filename]#/usr/share/doc/ntp/# в формате HTML." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2615 #, no-wrap msgid "NTP Configuration" msgstr "Конфигурация NTP" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2619 msgid "" "On FreeBSD, the built-in ntpd can be used to synchronize a system's clock. " "ntpd is configured using man:rc.conf[5] variables and [.filename]#/etc/ntp." "conf#, as detailed in the following sections." msgstr "" "На FreeBSD встроенный ntpd может использоваться для синхронизации системных " "часов. Настройка ntpd осуществляется с помощью переменных man:rc.conf[5] и " "файла [.filename]#/etc/ntp.conf#, как подробно описано в следующих разделах." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2622 msgid "" "ntpd communicates with its network peers using UDP packets. Any firewalls " "between the machine and its NTP peers must be configured to allow UDP " "packets in and out on port 123." msgstr "" "ntpd взаимодействует с сетевыми узлами с помощью UDP-пакетов. Любые " "межсетевые экраны между вашей машиной и её NTP-узлами должны быть настроены " "так, чтобы разрешать входящие и исходящие UDP-пакеты через порт 123." #. 
type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2623 #, no-wrap msgid "The [.filename]#/etc/ntp.conf# file" msgstr "Файл [.filename]#/etc/ntp.conf#" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2633 msgid "" "ntpd reads [.filename]#/etc/ntp.conf# to determine which NTP servers to " "query. Choosing several NTP servers is recommended in case one of the " "servers becomes unreachable or its clock proves unreliable. As ntpd " "receives responses, it favors reliable servers over the less reliable ones. " "The servers which are queried can be local to the network, provided by an " "ISP, or selected from an http://support.ntp.org/bin/view/Servers/" "WebHome[ online list of publicly accessible NTP servers]. When choosing a " "public NTP server, select one that is geographically close and review its " "usage policy. The `pool` configuration keyword selects one or more servers " "from a pool of servers. An http://support.ntp.org/bin/view/Servers/" "NTPPoolServers[ online list of publicly accessible NTP pools] is available, " "organized by geographic area. In addition, FreeBSD provides a project-" "sponsored pool, `0.freebsd.pool.ntp.org`." msgstr "" "ntpd читает файл [.filename]#/etc/ntp.conf#, чтобы определить, к каким " "серверам NTP обращаться. Рекомендуется выбирать несколько серверов NTP на " "случай, если один из серверов станет недоступен или его часы окажутся " "ненадёжными. По мере получения ответов ntpd отдаёт предпочтение более " "надёжным серверам перед менее надёжными. Запрашиваемые серверы могут быть " "локальными в сети, предоставляться ISP или выбираться из http://support.ntp." "org/bin/view/Servers/WebHome[онлайн-списка общедоступных серверов NTP]. При " "выборе общедоступного сервера NTP следует выбирать сервер, географически " "близкий к вам, и ознакомиться с его политикой использования. 
Ключевое слово " "`pool` в конфигурации выбирает один или несколько серверов из пула серверов. " "Доступен http://support.ntp.org/bin/view/Servers/NTPPoolServers[онлайн-" "список общедоступных пулов NTP], организованный по географическим регионам. " "Кроме того, FreeBSD предоставляет спонсируемый проектом пул `0.freebsd.pool." "ntp.org`." #. type: Block title #: documentation/content/en/books/handbook/network-servers/_index.adoc:2634 #, no-wrap msgid "Sample [.filename]#/etc/ntp.conf#" msgstr "Пример [.filename]#/etc/ntp.conf#" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2639 msgid "" "This is a simple example of an [.filename]#ntp.conf# file. It can safely be " "used as-is; it contains the recommended `restrict` options for operation on " "a publicly-accessible network connection." msgstr "" "Вот простой пример файла [.filename]#ntp.conf#. Его можно безопасно " "использовать в таком виде; он содержит рекомендуемые параметры `restrict` " "для работы в общедоступном сетевом подключении." #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2647 #, no-wrap msgid "" "# Disallow ntpq control/query access. Allow peers to be added only\n" "# based on pool and server statements in this file.\n" "restrict default limited kod nomodify notrap noquery nopeer\n" "restrict source limited kod nomodify notrap noquery\n" msgstr "" "# Disallow ntpq control/query access. Allow peers to be added only\n" "# based on pool and server statements in this file.\n" "restrict default limited kod nomodify notrap noquery nopeer\n" "restrict source limited kod nomodify notrap noquery\n" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2651 #, no-wrap msgid "" "# Allow unrestricted access from localhost for queries and control.\n" "restrict 127.0.0.1\n" "restrict ::1\n" msgstr "" "# Allow unrestricted access from localhost for queries and control.\n" "restrict 127.0.0.1\n" "restrict ::1\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2654 #, no-wrap msgid "" "# Add a specific server.\n" "server ntplocal.example.com iburst\n" msgstr "" "# Add a specific server.\n" "server ntplocal.example.com iburst\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2658 #, no-wrap msgid "" "# Add FreeBSD pool servers until 3-6 good servers are available.\n" "tos minclock 3 maxclock 6\n" "pool 0.freebsd.pool.ntp.org iburst\n" msgstr "" "# Add FreeBSD pool servers until 3-6 good servers are available.\n" "tos minclock 3 maxclock 6\n" "pool 0.freebsd.pool.ntp.org iburst\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2661 #, no-wrap msgid "" "# Use a local leap-seconds file.\n" "leapfile \"/var/db/ntpd.leap-seconds.list\"\n" msgstr "" "# Use a local leap-seconds file.\n" "leapfile \"/var/db/ntpd.leap-seconds.list\"\n" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2667 msgid "" "The format of this file is described in man:ntp.conf[5]. The descriptions " "below provide a quick overview of just the keywords used in the sample file " "above." msgstr "" "Формат этого файла описан в man:ntp.conf[5]. Приведённые ниже описания дают " "краткий обзор только ключевых слов, использованных в примере файла выше." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2673 msgid "" "By default, an NTP server is accessible to any network host. 
The `restrict` " "keyword controls which systems can access the server. Multiple `restrict` " "entries are supported, each one refining the restrictions given in previous " "statements. The values shown in the example grant the local system full " "query and control access, while allowing remote systems only the ability to " "query the time. For more details, refer to the `Access Control Support` " "subsection of man:ntp.conf[5]." msgstr "" "По умолчанию сервер NTP доступен для любого узла сети. Ключевое слово " "`restrict` управляет тем, какие системы могут обращаться к серверу. " "Поддерживается несколько записей `restrict`, каждая из которых уточняет " "ограничения, заданные в предыдущих утверждениях. Значения, указанные в " "примере, предоставляют локальной системе полный доступ для запросов и " "управления, в то время как удалённые системы могут только запрашивать " "время. Для получения дополнительной информации обратитесь к подразделу " "`Access Control Support` в man:ntp.conf[5]." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2679 msgid "" "The `server` keyword specifies a single server to query. The file can " "contain multiple server keywords, with one server listed on each line. The " "`pool` keyword specifies a pool of servers. ntpd will add one or more " "servers from this pool as needed to reach the number of peers specified " "using the `tos minclock` value. The `iburst` keyword directs ntpd to " "perform a burst of eight quick packet exchanges with a server when contact " "is first established, to help quickly synchronize system time." msgstr "" "Ключевое слово `server` указывает отдельный сервер для запросов. Файл может " "содержать несколько ключевых слов `server`, по одному серверу на каждой " "строке. Ключевое слово `pool` определяет пул серверов. 
ntpd добавит один или " "несколько серверов из этого пула по мере необходимости, чтобы достичь " "количества узлов, указанного с помощью значения `tos minclock`. Ключевое " "слово `iburst` предписывает ntpd выполнить серию из восьми быстрых обменов " "пакетами с сервером при первом установлении соединения, чтобы быстро " "синхронизировать системное время." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2683 msgid "" "The `leapfile` keyword specifies the location of a file containing " "information about leap seconds. The file is updated automatically by man:" "periodic[8]. The file location specified by this keyword must match the " "location set in the `ntp_db_leapfile` variable in [.filename]#/etc/rc.conf#." msgstr "" "Ключевое слово `leapfile` указывает расположение файла, содержащего " "информацию о високосных секундах. Этот файл автоматически обновляется с " "помощью man:periodic[8]. Указанное расположение файла должно соответствовать " "значению переменной `ntp_db_leapfile` в файле [.filename]#/etc/rc.conf#." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2684 #, no-wrap msgid "NTP entries in [.filename]#/etc/rc.conf#" msgstr "Записи NTP в [.filename]#/etc/rc.conf#" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2688 msgid "" "Set `ntpd_enable=YES` to start ntpd at boot time. Once `ntpd_enable=YES` " "has been added to [.filename]#/etc/rc.conf#, ntpd can be started immediately " "without rebooting the system by typing:" msgstr "" "Установите `ntpd_enable=YES` для запуска ntpd при загрузке. После добавления " "`ntpd_enable=YES` в [.filename]#/etc/rc.conf#, ntpd можно немедленно " "запустить без перезагрузки системы, введя:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2692 #, no-wrap msgid "# service ntpd start\n" msgstr "# service ntpd start\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2696 msgid "" "Only `ntpd_enable` must be set to use ntpd. The [.filename]#rc.conf# " "variables listed below may also be set as needed." msgstr "" "Для использования ntpd необходимо установить только `ntpd_enable`. При " "необходимости также могут быть заданы перечисленные ниже переменные [." "filename]#rc.conf#." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2700 msgid "" "Set `ntpd_sync_on_start=YES` to allow ntpd to step the clock any amount, one " "time at startup. Normally ntpd will log an error message and exit if the " "clock is off by more than 1000 seconds. This option is especially useful on " "systems without a battery-backed realtime clock." msgstr "" "Установите `ntpd_sync_on_start=YES`, чтобы разрешить ntpd однократно " "корректировать время при запуске на любую величину. Обычно ntpd записывает " "сообщение об ошибке и завершает работу, если расхождение времени превышает " "1000 секунд. Эта опция особенно полезна для систем без аккумуляторного " "резервного питания часов реального времени." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2702 msgid "" "Set `ntpd_oomprotect=YES` to protect the ntpd daemon from being killed by " "the system attempting to recover from an Out Of Memory (OOM) condition." msgstr "" "Установите `ntpd_oomprotect=YES`, чтобы защитить демон ntpd от завершения " "системой при попытке восстановиться после состояния нехватки памяти (OOM)." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2704 msgid "" "Set `ntpd_config=` to the location of an alternate [.filename]#ntp.conf# " "file." msgstr "" "Установите в `ntpd_config=` расположение альтернативного файла [." "filename]#ntp.conf#." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2706 msgid "" "Set `ntpd_flags=` to contain any other ntpd flags as needed, but avoid using " "these flags which are managed internally by [.filename]#/etc/rc.d/ntpd#:" msgstr "" "Установите `ntpd_flags=` с любыми другими флагами ntpd по необходимости, но " "избегайте использования тех флагов, которые устанавливаются внутри файла [." "filename]#/etc/rc.d/ntpd#:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2708 msgid "`-p` (pid file location)" msgstr "`-p` (расположение pid-файла)" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2709 msgid "`-c` (set `ntpd_config=` instead)" msgstr "`-c` (вместо этого установите `ntpd_config=` )" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2711 #, no-wrap msgid "ntpd and the unprivileged `ntpd` user" msgstr "ntpd и непривилегированный пользователь `ntpd`" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2718 msgid "" "ntpd on FreeBSD can start and run as an unprivileged user. Doing so " "requires the man:mac_ntpd[4] policy module. The [.filename]#/etc/rc.d/ntpd# " "startup script first examines the NTP configuration. If possible, it loads " "the `mac_ntpd` module, then starts ntpd as unprivileged user `ntpd` (user id " "123). To avoid problems with file and directory access, the startup script " "will not automatically start ntpd as `ntpd` when the configuration contains " "any file-related options." msgstr "" "ntpd в FreeBSD может запускаться и работать как непривилегированный " "пользователь. Для этого требуется модуль политики man:mac_ntpd[4]. Скрипт " "запуска [.filename]#/etc/rc.d/ntpd# сначала проверяет конфигурацию NTP. 
Если "возможно, он загружает модуль `mac_ntpd`, а затем запускает ntpd как " "непривилегированный пользователь `ntpd` (идентификатор пользователя 123). " "Чтобы избежать проблем с доступом к файлам и каталогам, скрипт запуска не " "будет автоматически запускать ntpd как `ntpd`, если конфигурация содержит " "какие-либо параметры, связанные с файлами." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2720 msgid "" "The presence of any of the following in `ntpd_flags` requires manual " "configuration as described below to run as the `ntpd` user:" msgstr "" "Присутствие любого из следующих параметров в `ntpd_flags` требует ручной " "настройки, как описано ниже, для запуска от пользователя `ntpd`:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2722 msgid "-f or --driftfile" msgstr "-f или --driftfile" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2723 msgid "-i or --jaildir" msgstr "-i или --jaildir" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2724 msgid "-k or --keyfile" msgstr "-k или --keyfile" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2725 msgid "-l or --logfile" msgstr "-l или --logfile" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2726 msgid "-s or --statsdir" msgstr "-s или --statsdir" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2728 msgid "" "The presence of any of the following keywords in [.filename]#ntp.conf# " "requires manual configuration as described below to run as the `ntpd` user:" msgstr "" "Наличие любого из следующих ключевых слов в [.filename]#ntp.conf# требует " "ручной настройки, как описано ниже, для запуска от пользователя `ntpd`:" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2730 msgid "crypto" msgstr "crypto" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2731 msgid "driftfile" msgstr "driftfile" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2732 msgid "key" msgstr "key" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2733 msgid "logdir" msgstr "logdir" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2734 msgid "statsdir" msgstr "statsdir" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2736 msgid "To manually configure ntpd to run as user `ntpd`:" msgstr "" "Для ручной настройки ntpd для запуска от пользователя `ntpd` необходимо:" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2738 msgid "" "Ensure that the `ntpd` user has access to all the files and directories " "specified in the configuration." msgstr "" "Убедитесь, что пользователь `ntpd` имеет доступ ко всем файлам и каталогам, " "указанным в конфигурации." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2739 msgid "" "Arrange for the `mac_ntpd` module to be loaded or compiled into the kernel. " "See man:mac_ntpd[4] for details." msgstr "" "Обеспечьте загрузку или компиляцию модуля `mac_ntpd` в ядро. Подробности см. " "в man:mac_ntpd[4]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2740 msgid "Set `ntpd_user=\"ntpd\"` in [.filename]#/etc/rc.conf#" msgstr "Установите `ntpd_user=\"ntpd\"` в [.filename]#/etc/rc.conf#" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2741 #, no-wrap msgid "Using NTP with a PPP Connection" msgstr "Использование NTP с PPP-подключением" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2746 msgid "" "ntpd does not need a permanent connection to the Internet to function " "properly. However, if a PPP connection is configured to dial out on demand, " "NTP traffic should be prevented from triggering a dial out or keeping the " "connection alive. This can be configured with `filter` directives in [." "filename]#/etc/ppp/ppp.conf#. For example:" msgstr "" "ntpd не требует постоянного подключения к Интернету для корректной работы. " "Однако, если PPP-соединение настроено на дозвон по требованию, следует " "предотвратить инициацию дозвона или поддержание соединения из-за трафика " "NTP. Это можно настроить с помощью директив `filter` в [.filename]#/etc/ppp/" "ppp.conf#. Например:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2757 #, no-wrap msgid "" "set filter dial 0 deny udp src eq 123\n" "# Prevent NTP traffic from initiating dial out\n" "set filter dial 1 permit 0 0\n" "set filter alive 0 deny udp src eq 123\n" "# Prevent incoming NTP traffic from keeping the connection open\n" "set filter alive 1 deny udp dst eq 123\n" "# Prevent outgoing NTP traffic from keeping the connection open\n" "set filter alive 2 permit 0/0 0/0\n" msgstr "" "set filter dial 0 deny udp src eq 123\n" "# Prevent NTP traffic from initiating dial out\n" "set filter dial 1 permit 0 0\n" "set filter alive 0 deny udp src eq 123\n" "# Prevent incoming NTP traffic from keeping the connection open\n" "set filter alive 1 deny udp dst eq 123\n" "# Prevent outgoing NTP traffic from keeping the connection open\n" "set filter alive 2 permit 0/0 0/0\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2760 msgid "" "For more details, refer to the `PACKET FILTERING` section in man:ppp[8] and " "the examples in [.filename]#/usr/share/examples/ppp/#." 
msgstr "" "Для получения более подробной информации обратитесь к разделу `PACKET " "FILTERING` в man:ppp[8] и примерам в [.filename]#/usr/share/examples/ppp/#." #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2764 msgid "" "Some Internet access providers block low-numbered ports, preventing NTP from " "functioning since replies never reach the machine." msgstr "" "Некоторые интернет-провайдеры блокируют порты с низкими номерами, что мешает " "работе NTP, так как ответы никогда не достигают машины." #. type: Title == #: documentation/content/en/books/handbook/network-servers/_index.adoc:2767 #, no-wrap msgid "iSCSI Initiator and Target Configuration" msgstr "Настройка инициатора и цели iSCSI" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2771 msgid "" "iSCSI is a way to share storage over a network. Unlike NFS, which works at " "the file system level, iSCSI works at the block device level." msgstr "" "iSCSI — это способ совместного использования хранилища по сети. В отличие от " "NFS, который работает на уровне файловой системы, iSCSI работает на уровне " "блочного устройства." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2775 msgid "" "In iSCSI terminology, the system that shares the storage is known as the " "_target_. The storage can be a physical disk, or an area representing " "multiple disks or a portion of a physical disk. For example, if the disk(s) " "are formatted with ZFS, a zvol can be created to use as the iSCSI storage." msgstr "" "В терминологии iSCSI система, предоставляющая хранилище, называется _целью_. " "Хранилище может быть физическим диском, областью, представляющей несколько " "дисков, или частью физического диска. Например, если диск(и) отформатированы " "с использованием ZFS, можно создать zvol для использования в качестве " "хранилища iSCSI." #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2779 msgid "" "The clients which access the iSCSI storage are called _initiators_. To " "initiators, the storage available through iSCSI appears as a raw, " "unformatted disk known as a LUN. Device nodes for the disk appear in [." "filename]#/dev/# and the device must be separately formatted and mounted." msgstr "" "Клиенты, которые обращаются к хранилищу iSCSI, называются _инициаторами_. " "Для инициаторов хранилище, доступное через iSCSI, отображается как " "неформатированный диск, известный как LUN (логический номер устройства). " "Узлы устройств для диска появляются в [.filename]#/dev/#, и устройство " "должно быть отдельно отформатировано и смонтировано." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2782 msgid "" "FreeBSD provides a native, kernel-based iSCSI target and initiator. This " "section describes how to configure a FreeBSD system as a target or an " "initiator." msgstr "" "FreeBSD предоставляет встроенную поддержку iSCSI целевой системы и " "инициатора на уровне ядра. В этом разделе описывается, как настроить систему " "FreeBSD в качестве целевой системы или инициатора." #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2784 #, no-wrap msgid "Configuring an iSCSI Target" msgstr "Настройка цели iSCSI" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2787 msgid "" "To configure an iSCSI target, create the [.filename]#/etc/ctl.conf# " "configuration file, add a line to [.filename]#/etc/rc.conf# to make sure the " "man:ctld[8] daemon is automatically started at boot, and then start the " "daemon." msgstr "" "Для настройки цели iSCSI создайте конфигурационный файл [.filename]#/etc/ctl." 
"conf#, добавьте строку в [.filename]#/etc/rc.conf#, чтобы убедиться, что " "демон man:ctld[8] автоматически запускается при загрузке, а затем запустите " "демон." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2790 msgid "" "The following is an example of a simple [.filename]#/etc/ctl.conf# " "configuration file. Refer to man:ctl.conf[5] for a complete description of " "this file's available options." msgstr "" "Вот пример простого файла конфигурации [.filename]#/etc/ctl.conf#. Полное " "описание доступных опций этого файла можно найти в man:ctl.conf[5]." #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2798 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2876 #, no-wrap msgid "" "portal-group pg0 {\n" "\tdiscovery-auth-group no-authentication\n" "\tlisten 0.0.0.0\n" "\tlisten [::]\n" "}\n" msgstr "" "portal-group pg0 {\n" "\tdiscovery-auth-group no-authentication\n" "\tlisten 0.0.0.0\n" "\tlisten [::]\n" "}\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2802 #, no-wrap msgid "" "target iqn.2012-06.com.example:target0 {\n" "\tauth-group no-authentication\n" "\tportal-group pg0\n" msgstr "" "target iqn.2012-06.com.example:target0 {\n" "\tauth-group no-authentication\n" "\tportal-group pg0\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2808 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2906 #, no-wrap msgid "" "\tlun 0 {\n" "\t\tpath /data/target0-0\n" "\t\tsize 4G\n" "\t}\n" "}\n" msgstr "" "\tlun 0 {\n" "\t\tpath /data/target0-0\n" "\t\tsize 4G\n" "\t}\n" "}\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2814 msgid "" "The first entry defines the `pg0` portal group. Portal groups define which " "network addresses the man:ctld[8] daemon will listen on. 
The `discovery-" "auth-group no-authentication` entry indicates that any initiator is allowed " "to perform iSCSI target discovery without authentication. Lines three and " "four configure man:ctld[8] to listen on all IPv4 (`listen 0.0.0.0`) and IPv6 " "(`listen [::]`) addresses on the default port of 3260." msgstr "" "Первая запись определяет группу порталов `pg0`. Группы порталов определяют, " "на каких сетевых адресах будет слушать демон man:ctld[8]. Запись `discovery-" "auth-group no-authentication` указывает, что любой инициатор может выполнять " "обнаружение целей iSCSI без аутентификации. Третья и четвёртая строки " "настраивают man:ctld[8] для прослушивания всех IPv4-адресов (`listen " "0.0.0.0`) и IPv6-адресов (`listen [::]`) на стандартном порту 3260." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2817 msgid "" "It is not necessary to define a portal group as there is a built-in portal " "group called `default`. In this case, the difference between `default` and " "`pg0` is that with `default`, target discovery is always denied, while with " "`pg0`, it is always allowed." msgstr "" "Нет необходимости определять группу порталов, так как существует встроенная " "группа порталов с именем `default`. В этом случае разница между `default` и " "`pg0` заключается в том, что для `default` обнаружение целей всегда " "запрещено, а для `pg0` — всегда разрешено." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2825 msgid "" "The second entry defines a single target. Target has two possible meanings: " "a machine serving iSCSI or a named group of LUNs. This example uses the " "latter meaning, where `iqn.2012-06.com.example:target0` is the target name. " "This target name is suitable for testing purposes. For actual use, change " "`com.example` to the real domain name, reversed. 
The `2012-06` represents " "the year and month of acquiring control of that domain name, and `target0` " "can be any value. Any number of targets can be defined in this " "configuration file." msgstr "" "Вторая запись определяет одну цель. У цели есть два возможных значения: " "машина, обслуживающая iSCSI, или именованная группа LUN. В этом примере " "используется второе значение, где `iqn.2012-06.com.example:target0` — это " "имя цели. Это имя цели подходит для тестирования. Для реального " "использования замените `com.example` на настоящий домен, записанный в " "обратном порядке. `2012-06` представляет год и месяц получения контроля над " "этим доменом, а `target0` может быть любым значением. В этом файле " "конфигурации можно определить любое количество целей." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2827 msgid "" "The `auth-group no-authentication` line allows all initiators to connect to " "the specified target and `portal-group pg0` makes the target reachable " "through the `pg0` portal group." msgstr "" "Строка `auth-group no-authentication` разрешает всем инициаторам " "подключаться к указанной цели, а `portal-group pg0` делает цель доступной " "через группу порталов `pg0`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2835 msgid "" "The next section defines the LUN. To the initiator, each LUN will be " "visible as a separate disk device. Multiple LUNs can be defined for each " "target. Each LUN is identified by a number, where LUN 0 is mandatory. The " "`path /data/target0-0` line defines the full path to a file or zvol backing " "the LUN. That path must exist before starting man:ctld[8]. The second line " "is optional and specifies the size of the LUN." msgstr "" "Следующий раздел определяет LUN. Для инициатора каждый LUN будет виден как " "отдельное дисковое устройство. Для каждой цели можно определить несколько " "LUN. 
Каждый LUN идентифицируется числом, где LUN 0 является обязательным. " "Строка `path /data/target0-0` определяет полный путь к файлу или zvol, " "который используется для LUN. Этот путь должен существовать до запуска man:" "ctld[8]. Вторая строка необязательна и указывает размер LUN." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2837 msgid "" "Next, to make sure the man:ctld[8] daemon is started at boot, add this line " "to [.filename]#/etc/rc.conf#:" msgstr "" "Далее, чтобы убедиться, что демон man:ctld[8] запускается при загрузке, " "добавьте эту строку в [.filename]#/etc/rc.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2841 #, no-wrap msgid "ctld_enable=\"YES\"\n" msgstr "ctld_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2844 msgid "To start man:ctld[8] now, run this command:" msgstr "Чтобы запустить man:ctld[8] сейчас, выполните следующую команду:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2848 #, no-wrap msgid "# service ctld start\n" msgstr "# service ctld start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2852 msgid "" "As the man:ctld[8] daemon is started, it reads [.filename]#/etc/ctl.conf#. " "If this file is edited after the daemon starts, use this command so that the " "changes take effect immediately:" msgstr "" "Поскольку демон man:ctld[8] запускается, он читает файл [.filename]#/etc/ctl." "conf#. Если этот файл был изменён после запуска демона, используйте " "следующую команду, чтобы изменения вступили в силу немедленно:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2856 #, no-wrap msgid "# service ctld reload\n" msgstr "# service ctld reload\n" #. 
type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2859 #, no-wrap msgid "Authentication" msgstr "Аутентификация" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2863 msgid "" "The previous example is inherently insecure as it uses no authentication, " "granting anyone full access to all targets. To require a username and " "password to access targets, modify the configuration as follows:" msgstr "" "Предыдущий пример изначально небезопасен, так как не использует " "аутентификацию, предоставляя любому полный доступ ко всем целям. Чтобы " "потребовать имя пользователя и пароль для доступа к целям, измените " "конфигурацию следующим образом:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2870 #, no-wrap msgid "" "auth-group ag0 {\n" "\tchap username1 secretsecret\n" "\tchap username2 anothersecret\n" "}\n" msgstr "" "auth-group ag0 {\n" "\tchap username1 secretsecret\n" "\tchap username2 anothersecret\n" "}\n" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2885 #, no-wrap msgid "" "target iqn.2012-06.com.example:target0 {\n" "\tauth-group ag0\n" "\tportal-group pg0\n" "\tlun 0 {\n" "\t\tpath /data/target0-0\n" "\t\tsize 4G\n" "\t}\n" "}\n" msgstr "" "target iqn.2012-06.com.example:target0 {\n" "\tauth-group ag0\n" "\tportal-group pg0\n" "\tlun 0 {\n" "\t\tpath /data/target0-0\n" "\t\tsize 4G\n" "\t}\n" "}\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2891 msgid "" "The `auth-group` section defines username and password pairs. An initiator " "trying to connect to `iqn.2012-06.com.example:target0` must first specify a " "defined username and secret. However, target discovery is still permitted " "without authentication. 
To require target discovery authentication, set " "`discovery-auth-group` to a defined `auth-group` name instead of `no-" "authentication`." msgstr "" "Раздел `auth-group` определяет пары имени пользователя и пароля. Инициатор, " "пытающийся подключиться к `iqn.2012-06.com.example:target0`, должен сначала " "указать определённое имя пользователя и секрет. Однако обнаружение цели по-" "прежнему разрешено без аутентификации. Чтобы потребовать аутентификацию при " "обнаружении цели, установите `discovery-auth-group` в определённое имя `auth-" "group` вместо `no-authentication`." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2894 msgid "" "It is common to define a single exported target for every initiator. As a " "shorthand for the syntax above, the username and password can be specified " "directly in the target entry:" msgstr "" "Обычно определяют одну экспортируемую цель для каждого инициатора. В " "качестве сокращения для синтаксиса выше, имя пользователя и пароль могут " "быть указаны непосредственно в записи цели:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2900 #, no-wrap msgid "" "target iqn.2012-06.com.example:target0 {\n" "\tportal-group pg0\n" "\tchap username1 secretsecret\n" msgstr "" "target iqn.2012-06.com.example:target0 {\n" "\tportal-group pg0\n" "\tchap username1 secretsecret\n" #. type: Title === #: documentation/content/en/books/handbook/network-servers/_index.adoc:2910 #, no-wrap msgid "Configuring an iSCSI Initiator" msgstr "Настройка инициатора iSCSI" #. type: delimited block = 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2916 msgid "" "The iSCSI initiator described in this section is supported starting with " "FreeBSD 10.0-RELEASE. To use the iSCSI initiator available in older " "versions, refer to man:iscontrol[8]."
msgstr "" "Описанный в этом разделе инициатор iSCSI поддерживается начиная с FreeBSD " "10.0-RELEASE. Для использования инициатора iSCSI, доступного в более старых " "версиях, обратитесь к man:iscontrol[8]." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2921 msgid "" "The iSCSI initiator requires that the man:iscsid[8] daemon is running. This " "daemon does not use a configuration file. To start it automatically at " "boot, add this line to [.filename]#/etc/rc.conf#:" msgstr "" "Инициатору iSCSI требуется, чтобы демон man:iscsid[8] был запущен. Этот " "демон не использует файл конфигурации. Для его автоматического запуска при " "загрузке добавьте следующую строку в [.filename]#/etc/rc.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2925 #, no-wrap msgid "iscsid_enable=\"YES\"\n" msgstr "iscsid_enable=\"YES\"\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2928 msgid "To start man:iscsid[8] now, run this command:" msgstr "Чтобы сейчас запустить man:iscsid[8], выполните следующую команду:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2932 #, no-wrap msgid "# service iscsid start\n" msgstr "# service iscsid start\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2936 msgid "" "Connecting to a target can be done with or without an [.filename]#/etc/iscsi." "conf# configuration file. This section demonstrates both types of " "connections." msgstr "" "Подключение к цели может быть выполнено с файлом конфигурации [.filename]#/" "etc/iscsi.conf# или без него. В этом разделе показаны оба типа подключений." #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:2937 #, no-wrap msgid "Connecting to a Target Without a Configuration File" msgstr "Подключение к цели без файла конфигурации" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2940 msgid "" "To connect an initiator to a single target, specify the IP address of the " "portal and the name of the target:" msgstr "" "Для подключения инициатора к одному целевому устройству укажите IP-адрес " "портала и имя целевого устройства:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2944 #, no-wrap msgid "# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0\n" msgstr "# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2948 msgid "" "To verify if the connection succeeded, run `iscsictl` without any " "arguments. The output should look similar to this:" msgstr "" "Для проверки успешности соединения выполните команду `iscsictl` без " "аргументов. Вывод должен выглядеть примерно так:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2953 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0\n" msgstr "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2957 msgid "" "In this example, the iSCSI session was successfully established, with [." "filename]#/dev/da0# representing the attached LUN. If the `iqn.2012-06.com." "example:target0` target exports more than one LUN, multiple device nodes " "will be shown in that section of the output:" msgstr "" "В этом примере сеанс iSCSI был успешно установлен, где [.filename]#/dev/da0# " "представляет подключённый LUN. Если цель `iqn.2012-06.com.example:target0` " "экспортирует более одного LUN, в соответствующем разделе вывода будет " "показано несколько устройств:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2961 #, no-wrap msgid "Connected: da0 da1 da2.\n" msgstr "Connected: da0 da1 da2.\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2965 msgid "" "Any errors will be reported in the output, as well as the system logs. For " "example, this message usually means that the man:iscsid[8] daemon is not " "running:" msgstr "" "Любые ошибки будут отображены в выводе, а также в системных журналах. " "Например, это сообщение обычно означает, что демон man:iscsid[8] не запущен:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2970 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8)\n" msgstr "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8)\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2973 msgid "" "The following message suggests a networking problem, such as a wrong IP " "address or port:" msgstr "" "Следующее сообщение указывает на проблему с сетью, например, неверный IP-" "адрес или порт:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2978 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.11 Connection refused\n" msgstr "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.11 Connection refused\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2981 msgid "This message means that the specified target name is wrong:" msgstr "Это сообщение означает, что указано неправильное имя цели:" #. type: delimited block . 
4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2986 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Not found\n" msgstr "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Not found\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2989 msgid "This message means that the target requires authentication:" msgstr "Это сообщение означает, что цель требует аутентификации:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:2994 #, no-wrap msgid "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed\n" msgstr "" "Target name Target portal State\n" "iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:2997 msgid "To specify a CHAP username and secret, use this syntax:" msgstr "" "Чтобы указать имя пользователя CHAP и секрет, используйте следующий " "синтаксис:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3001 #, no-wrap msgid "# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret\n" msgstr "# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret\n" #. type: Title ==== #: documentation/content/en/books/handbook/network-servers/_index.adoc:3004 #, no-wrap msgid "Connecting to a Target with a Configuration File" msgstr "Подключение к цели с использованием файла конфигурации" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3007 msgid "" "To connect using a configuration file, create [.filename]#/etc/iscsi.conf# " "with contents like this:" msgstr "" "Для подключения с использованием файла конфигурации создайте файл [." 
"filename]#/etc/iscsi.conf# с содержимым, подобным этому:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3017 #, no-wrap msgid "" "t0 {\n" "\tTargetAddress = 10.10.10.10\n" "\tTargetName = iqn.2012-06.com.example:target0\n" "\tAuthMethod = CHAP\n" "\tchapIName = user\n" "\tchapSecret = secretsecret\n" "}\n" msgstr "" "t0 {\n" "\tTargetAddress = 10.10.10.10\n" "\tTargetName = iqn.2012-06.com.example:target0\n" "\tAuthMethod = CHAP\n" "\tchapIName = user\n" "\tchapSecret = secretsecret\n" "}\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3024 msgid "" "The `t0` specifies a nickname for the configuration file section. It will " "be used by the initiator to specify which configuration to use. The other " "lines specify the parameters to use during connection. The `TargetAddress` " "and `TargetName` are mandatory, whereas the other options are optional. In " "this example, the CHAP username and secret are shown." msgstr "" "`t0` задаёт псевдоним для раздела конфигурационного файла. Он будет " "использоваться инициатором для указания, какую конфигурацию применять. " "Остальные строки определяют параметры, используемые при подключении. " "`TargetAddress` и `TargetName` являются обязательными, тогда как остальные " "параметры — опциональными. В этом примере показаны имя пользователя CHAP и " "секретный ключ." #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3026 msgid "To connect to the defined target, specify the nickname:" msgstr "Для подключения к указанной цели укажите псевдоним:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3030 #, no-wrap msgid "# iscsictl -An t0\n" msgstr "# iscsictl -An t0\n" #. 
type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3033 msgid "" "Alternately, to connect to all targets defined in the configuration file, " "use:" msgstr "" "Или для подключения ко всем целям, определенным в файле конфигурации, " "используйте:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3037 #, no-wrap msgid "# iscsictl -Aa\n" msgstr "# iscsictl -Aa\n" #. type: Plain text #: documentation/content/en/books/handbook/network-servers/_index.adoc:3040 msgid "" "To make the initiator automatically connect to all targets in [.filename]#/" "etc/iscsi.conf#, add the following to [.filename]#/etc/rc.conf#:" msgstr "" "Чтобы инициатор автоматически подключался ко всем целям в [.filename]#/etc/" "iscsi.conf#, добавьте следующее в [.filename]#/etc/rc.conf#:" #. type: delimited block . 4 #: documentation/content/en/books/handbook/network-servers/_index.adoc:3045 #, no-wrap msgid "" "iscsictl_enable=\"YES\"\n" "iscsictl_flags=\"-Aa\"\n" msgstr "" "iscsictl_enable=\"YES\"\n" "iscsictl_flags=\"-Aa\"\n" #~ msgid "automatic discovery of service instances (DNS-SD)." #~ msgstr "автоматического обнаружения экземпляров служб (DNS-SD)." diff --git a/documentation/content/zh-cn/books/handbook/mac/_index.adoc b/documentation/content/zh-cn/books/handbook/mac/_index.adoc index 5f9c900f26..9439f34519 100644 --- a/documentation/content/zh-cn/books/handbook/mac/_index.adoc +++ b/documentation/content/zh-cn/books/handbook/mac/_index.adoc @@ -1,949 +1,947 @@ --- title: 第 17 章 强制访问控制 part: 部分 III. 
系统管理 prev: books/handbook/jails next: books/handbook/audit showBookMenu: true weight: 21 params: path: "/books/handbook/mac/" --- [[mac]] = 强制访问控制 :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 17 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/mac/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[mac-synopsis]] == 概要 FreeBSD 5.X 在 POSIX(R).1e 草案的基础上引入了 TrustedBSD 项目提供的新的安全性扩展。 新安全机制中最重要的两个, 是文件系统访问控制列表 (ACL) 和强制访问控制 (MAC) 机制。 强制访问控制允许加载新的访问控制模块, 并借此实施新的安全策略, 其中一部分为一个很小的系统子集提供保护并加强特定的服务, 其他的则对所有的主体和客体提供全面的标签式安全保护。 定义中有关强制的部分源于如下事实, 控制的实现由管理员和系统作出, 而不像自主访问控制 (DAC, FreeBSD 中的标准文件以及 System V IPC 权限) 那样是按照用户意愿进行的。 本章将集中讲述强制访问控制框架 (MAC 框架) 以及一套用以实施多种安全策略的插件式的安全策略模块。 阅读本章之后, 您将了解: * 目前 FreeBSD 中具有哪些 MAC 安全策略模块, 以及与之相关的机制。 * MAC 安全策略模块将实施何种策略, 以及标签式与非标签式策略之间的差异。 * 如何高效地配置系统令使其使用 MAC 框架。 * 如何配置 MAC 框架所提供的不同的安全策略模块。 * 如何用 MAC 框架构建更为安全的环境, 并举例说明。 * 如何测试 MAC 配置以确保正确构建了框架。 阅读本章之前, 您应该: * 了解 UNIX(R) 和 FreeBSD 的基础 (crossref:basics[basics,UNIX 基础])。 * 熟悉内核配置/编译 (crossref:kernelconfig[kernelconfig,配置FreeBSD的内核]) 的基础。 * 对安全及其如何与 FreeBSD 相配合有些了解; (crossref:security[security,安全])。 [WARNING] ==== 对本章信息的不当使用可能导致丧失系统访问权, 激怒用户, 或者无法访问 X11 提供的特性。 更重要的是, MAC 不能用于彻底保护一个系统。 MAC 框架仅用于增强现有安全策略; 如果没有健全的安全条例以及定期的安全检查, 系统将永远不会绝对安全。 此外还需要注意的是, 本章中所包含的例子仅仅是例子。 我们并不建议在一个生产用系统上进行这些特别的设置。 实施各种安全策略模块需要谨慎的考虑与测试, 因为那些并不完全理解所有机制如何工作的人, 可能会发现需要对整个系统中很多的文件或目录进行重新配置。 
==== === 未涉及的内容 本章涵盖了与 MAC 框架有关的诸多方面的安全问题; 而新的 MAC 安全策略模块的开发成果则不会涉及。 MAC 框架中所包含的一部分安全策略模块, 具有一些用于测试及新模块开发的特定属性, 其中包括 man:mac_test[4]、 man:mac_stub[4] 以及 man:mac_none[4]。 关于这些安全策略模块及其提供的众多机制的详细信息,请参阅联机手册中的内容。 [[mac-inline-glossary]] == 本章出现的重要术语 在阅读本章之前, 有些关键术语需要解释, 希望能藉此扫清可能出现的疑惑, 并避免在文中对新术语、 新信息进行生硬的介绍。 * _区间_(compartment): (译注: _区间_ 这一术语, 在一些文献中也称做类别 (category)。 此外, 在其它一些翻译文献中, 该术语也翻译为 "象限"。) 指一组被划分或隔离的程序和数据, 其中, 用户被明确地赋予了访问特定系统组件的权限。 同时, 区间也能够表达分组, 例如工作组、 部门、 项目, 或话题。 可以通过使用区间来实施 need-to-know 安全策略。 * _高水位线_(high water mark): 高水位线策略是一种允许提高安全级别, 以期访问更高级别的信息的安全策略。 在多数情况下, 当进程结束时, 又会回到原先的安全级别。 目前, FreeBSD MAC 框架尚未提供这样的策略, 在这里介绍其定义主要是希望给您一个完整的概念。 * _完整性_(integrity): 作为一个关键概念, 完整性是数据可信性的一种程度。 若数据的完整性提高, 则数据的可信性相应提高。 * _标签_(label): 标签是一种可应用于文件、 目录或系统其他客体的安全属性, 它也可以被认为是一种机密性印鉴。 当一个文件被施以标签时, 其标签会描述这一文件的安全参数, 并只允许拥有相似安全性设置的文件、 用户、 资源等访问该文件。 标签值的涵义及解释取决于相应的策略配置: 某些策略会将标签当作对某一客体的完整性和保密性的表述, 而其它一些策略则会用标签保存访问规则。 * _程度_(level): 对某种安全属性加强或削弱的设定。 若程度增加, 其安全性也相应增加。 * _低水位线_(low water mark): 低水位线策略允许降低安全级别, 以访问安全性较差的信息。 多数情况下, 在进程结束时, 又会回到原先的安全级别。 目前在 FreeBSD 中唯一实现这一安全策略的是 man:mac_lomac[4]。 * _多重标签_(multilabel): `multilabel` 属性是一个文件系统选项。 该选项可在单用户模式下通过 man:tunefs[8] 程序进行设置。 可以在引导时使用的 man:fstab[5] 文件中, 也可在创建新文件系统时进行配置。 该选项将允许管理员对不同客体施以不同的 MAC 标签。 该选项仅适用于支持标签的安全策略模块。 * _客体_(object): 客体或系统客体是一种实体, 信息随 _主体_ 的导向在客体内部流动。 客体包括目录、 文件、 区段、 显示器、 键盘、 存储器、 磁存储器、 打印机及其它数据存储/转移设备。 基本上, 客体就是指数据容器或系统资源。 对 _客体_ 的访问实际上意味着对数据的访问。 * _策略_(policy): 一套用以规定如何达成目标的规则。 _策略_ 一般用以描述如何对特定客体进行操作。 本章将在__安全策略__的范畴内讨论__策略__, 一套用以控制数据和信息流并规定其访问者的规则,就是其中一例。 * _敏感性_(sensitivity): 通常在讨论 MLS 时使用。 敏感性程度曾被用来描述数据应该有何等的重要或机密。 若敏感性程度增加, 则保密的重要性或数据的机密性相应增强。 * _单一标签_(single label): 整个文件系统使用一个标签对数据流实施访问控制, 叫做单一标签。 当文件系统使用此设置时, 即无论何时当 `多重标签` 选项未被设定时, 所有文件都将遵守相同标签设定。 * _主体_(subject): 主体就是引起信息在两个 _客体_ 间流动的任意活动实体, 比如用户, 用户进程(译注:原文为 processor), 系统进程等。 在 FreeBSD 中, 主体几乎总是代表用户活跃在某一进程中的一个线程。 [[mac-initial]] == 关于 MAC 的说明 在掌握了所有新术语之后, 我们从整体上来考虑 MAC 是如何加强系统安全性的。 MAC 框架提供的众多安全策略模块可以用来保护网络及文件系统, 也可以禁止用户访问某些特定的端口、 套接字及其它客体。 
将策略模块组合在一起以构建一个拥有多层次安全性的环境, 也许是其最佳的使用方式, 这可以通过一次性加载多个安全策略模块来实现。 在多层次安全环境中, 多重策略模块可以有效地控制安全性, 这一点与强化型 (hardening) 策略, 即那种通常只强化系统中用于特定目的的元素的策略是不同的。 相比之下, 多重策略的唯一不足是需要系统管理员先期设置好参数, 如多重文件系统安全标志、 每一位用户的网络访问权限等等。 与采用框架方式实现的长期效果相比, 这些不足之处是微不足道的。 例如, 让系统具有为特定配置挑选必需的策略的能力, 有助于降低性能开销。 而减少对无用策略的支持, 不仅可以提高系统的整体性能, 而且提供了更灵活的选择空间。 好的实施方案中应该考虑到整体的安全性要求, 并有效地利用框架所提供的众多安全策略模块。 这样一个使用 MAC 特性的系统, 至少要保证不允许用户任意更改安全属性; 所有的用户实用工具、 程序以及脚本, 必须在所选安全策略模块提供的访问规则的约束下工作; 并且系统管理员应掌握 MAC 访问规则的一切控制权。 细心选择正确的安全策略模块是系统管理员专有的职责。 某些环境也许需要限制网络的访问控制权, 在这种情况下, 使用 man:mac_portacl[4]、 man:mac_ifoff[4] 乃至 man:mac_biba[4] 安全策略模块都会是不错的开始; 在其他情况下, 系统客体也许需要严格的机密性, 像 man:mac_bsdextended[4] 和 man:mac_mls[4] 这样的安全策略模块就是为此而设。 对安全策略模块的决定可依据网络配置进行, 也许只有特定的用户才应该被允许使用由 man:ssh[1] 提供的程序以访问网络或互联网, man:mac_portacl[4] 安全策略模块应该成为这种情况下的选择。 但对文件系统又该作些什么呢? 是由特定的用户或群组来确定某些目录的访问权限, 抑或是将特定客体设为保密以限制用户或组件访问特定文件? 在文件系统的例子中, 也许访问客体的权限对某些用户是保密的, 但对其他则不是。 比如, 一个庞大的开发团队, 也许会被分成许多由几人组成的小组, A 项目中的开发人员可能不被允许访问 B 项目开发人员创作的客体, 但同时他们还需要访问由 C 项目开发人员创作的客体, 这正符合上述情形。 使用由 MAC 框架提供的不同策略, 用户就可以被分成这种小组, 然后被赋予适当区域的访问权, 由此, 我们就不用担心信息泄漏的问题了。 因此, 每一种安全策略模块都有其处理系统整体安全问题的独特方法。 对安全策略模块的选择应在对安全策略深思熟虑的基础之上进行。 很多情况下, 整体安全策略需要重新修正并在系统上实施。 理解 MAC 框架提供的不同安全策略模块会帮助管理员就其面临的情形选择最佳的策略模块。 FreeBSD 的默认内核并不包含 MAC 框架选项, 因此, 在尝试使用本章中的例子或信息之前, 您应该添加以下内核选项: [.programlisting] .... options MAC .... 
此外, 内核还需要重新编译并且重新安装。 [CAUTION] ==== 尽管有关 MAC 的许多联机手册中都声明它们可以被编译到内核中, 但对这些策略模块的使用仍可能导致锁死系统的网络及其他功能。 使用 MAC 就像使用防火墙一样, 因此必须要小心防止将系统完全锁死。 在使用 MAC 时, 应该考虑是否能够回退到之前的配置, 在远程进行配置更应加倍小心。 ==== [[mac-understandlabel]] == 理解 MAC 标签 MAC 标签是一种安全属性, 它可以被应用于整个系统中的主体和客体。 配置标签时, 用户必须能够确切理解其所进行的操作。 客体所具有的属性取决于被加载的策略模块, 不同策略模块解释其属性的方式也差别很大。 由于缺乏理解或无法了解其间联系而导致的配置不当, 会引起意想不到的, 也许是不愿看到的系统异常。 客体上的安全标签是由安全策略模块决定的安全访问控制的一部分。 在某些策略模块中, 标签本身所包含的所有信息足以使其作出决策, 而在其它一些安全策略模块中, 标签则可能被作为一个庞大规则体系的一部分进行处理。 举例来说, 在文件上设定 `biba/low` 标签, 意味着此标签隶属 Biba 策略模块, 其值为 "low"。 某些在 FreeBSD 中支持标签特性的策略会提供三个预定义的标签, 分别是 low、 high 及 equal 标签。 尽管这些标签在不同安全策略模块中会对访问控制采取不同措施, 但有一点是可以肯定的, 那就是 low 标签表示最低限度的设定, equal 标签会将主体或客体设定为被禁用的或不受影响的, high 标签则会应用 Biba 及 MLS 安全策略模块中允许的最高级别的设定。 在单一标签文件系统的环境中, 同一客体上只会应用一个标签, 于是, 一套访问权限将被应用于整个系统, 这也是很多环境所全部需要的。 另一些应用场景中, 我们需要将多重标签应用于文件系统的客体或主体, 如此一来, 就需要使用 man:tunefs[8] 的 `multilabel` 选项。 在使用 Biba 和 MLS 时可以配置数值标签, 以标示分级控制中的层级程度。 数值的程度可以用来划分或将信息按组分类, 从而只允许同程度或更高程度的组对其进行访问。 多数情况下, 管理员将仅对整个文件系统设定单一标签。 __等一下, 这看起来很像 DAC! 但我认为 MAC 确实只将控制权赋予了管理员。 __此句话依然是正确的。 在某种程度上, `root` 是实施控制的用户, 他配置安全策略模块以使用户们被分配到适当的类别/访问 levels 中。 唉, 很多安全策略模块同样可以限制 `root` 用户。 对于客体的基本控制可能会下放给群组, 但 `root` 用户随时可以废除或更改这些设定。 这就是如 Biba 及 MLS 这样一些安全策略模块所包含的 hierarchal/clearance 模型。 === 配置标签 实际上, 有关标签式安全策略模块配置的各种问题都是用基础系统组件实现的。 这些命令为客体和主体配置以及配置的实施和验证提供了一个简便的接口。 所有的配置都应该通过 man:setfmac[8] 及 man:setpmac[8] 组件实施。 `setfmac` 命令是用来对系统客体设置 MAC 标签的, 而 `setpmac` 则是用来对系统主体设置标签的。 例如: [source,shell] .... # setfmac biba/high test .... 若以上命令不发生错误则会直接返回命令提示符, 只有当发生错误时, 这些命令才会给出提示, 这和 man:chmod[1] 和 man:chown[8] 命令类似。 某些情况下, 以上命令产生的错误可能是 `Permission denied`, 一般在受限客体上设置或修改设置时会产生此错误。 系统管理员可使用以下命令解决此问题: [source,shell] .... # setfmac biba/high test Permission denied # setpmac biba/low setfmac biba/high test # getfmac test test: biba/high .... 
如上所示, 通过 `setpmac` 对被调用的进程赋予不同的标签, 以覆盖安全策略模块的设置。 `getpmac` 组件通常用于当前运行的进程, 如 sendmail: 尽管其使用进程编号来替代命令, 其逻辑是相同的。 如果用户试图对其无法访问的文件进行操作, 根据所加载的安全策略模块的规则, 函数 `mac_set_link` 将会给出 `Operation not permitted` 的错误提示。

==== 一般标签类型

man:mac_biba[4]、 man:mac_mls[4] 及 man:mac_lomac[4] 策略模块提供了设定简单标签的功能, 其值应该是 high、 equal 及 low 之一。 以下是对这些标签功能的简单描述:

* `low` 标签被认为是主体或客体所具有的最低层次的标签设定。 对主体或客体采用此设定, 将阻止其访问标签为 high 的客体或主体。
* `equal` 标签只能被用于不希望受策略控制的客体上。
* `high` 标签对客体或主体采用可能的最高设定。

至于每个策略模块, 每种设定都会产生不同的信息流指令。 阅读联机手册中相关的章节将进一步阐明这些一般标签配置的特点。

===== 标签高级配置

如下所示, 用于 `比较方式:区间+区间` (`comparison:compartment+compartment`) 的标签等级数:

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....

其含义为: "Biba 策略标签"/"等级 10" :"区间 2、 3及6": ("等级5 ...")

本例中, 第一个等级将被认为是 "有效区间" 的 "有效等级", 第二个等级是低级等级, 最后一个则是高级等级。 大多数配置中并不使用这些设置, 实际上, 它们是为更高级的配置准备的。 当把它们应用在系统客体上时, 则只会使用当前的等级/区间, 因为它们反映可以实施访问控制的系统中可用的范围, 以及网络接口。

等级和区间, 可以用来在一对主体和客体之间建立一种称为 "支配 (dominance)" 的关系, 这种关系可能是主体支配客体, 客体支配主体, 互不支配或互相支配。 "互相支配" 这种情况会在两个标签相等时发生。 由于 Biba 的信息流特性, 您可以设置一系列区间, "need to know", 这可能发生于项目之间, 而客体也有其对应的区间。 用户可以使用 `su` 和 `setpmac` 来将他们的权限进一步细分, 以便在没有限制的区间里访问客体。

==== 用户和标签设置

用户本身也需要设置标签, 以使其文件和进程能够正确地与系统上定义的安全策略互动, 这是通过使用登录分级在文件 [.filename]#login.conf# 中配置的。 每个使用标签的策略模块都会进行用户分级设定。 以下是一个使用所有策略模块的例子:

[.programlisting]
....
default:\
	:copyright=/etc/COPYRIGHT:\
	:welcome=/etc/motd:\
	:setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
	:path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
	:manpath=/usr/shared/man /usr/local/man:\
	:nologin=/usr/sbin/nologin:\
	:cputime=1h30m:\
	:datasize=8M:\
	:vmemoryuse=100M:\
	:stacksize=2M:\
	:memorylocked=4M:\
	:memoryuse=8M:\
	:filesize=8M:\
	:coredumpsize=8M:\
	:openfiles=24:\
	:maxproc=32:\
	:priority=0:\
	:requirehome:\
	:passwordtime=91d:\
	:umask=022:\
	:ignoretime@:\
	:label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:
....
`label` 选项用以设定用户分级默认标签, 该标签将由 MAC 执行。 用户绝不会被允许更改该值, 因此其从用户的观点看不是可选的。 当然, 在真实情况的配置中, 管理员不会希望启用所有策略模块。 我们建议您在实施以上配置之前阅读本章的其余部分。 [NOTE] ==== 用户也许会在首次登录后更改其标签, 尽管如此, 这仅仅是策略的主观局限性。 上面的例子告诉 Biba 策略, 进程的最小完整性是为5, 最大完整性为15, 默认且有效的标签为10。 进程将以10的完整性运行直至其决定更改标签, 这可能是由于用户使用了 setpmac 命令 (该操作将在登录时被 Biba 限制在一定用户范围之内)。 ==== 在所有情况下, 修改 [.filename]#login.conf# 之后, 都必须使用 `cap_mkdb` 重编译登录分级 capability 数据库, 这在接下来的例子和讨论中就会有所体现。 很多站点可能拥有数目可观的用户需要不同的用户分级, 注意到这点是大有裨益的。 深入来说就是需要事先做好计划, 因为管理起来可能十分困难。 在 FreeBSD 以后的版本中, 将包含一种将用户映射到标签的新方式, 尽管如此, 这也要到 FreeBSD 5.3 之后的某个时间才能实现。 ==== 网络接口和标签设定 也可以在网络接口上配置标签, 以控制进出网络的数据流。 在所有情况下, 策略都会以适应客体的方式运作。 例如, 在 `biba` 中设置为高的用户, 就不能访问标记为低的网络接口。 `maclabel` 可以作为 `ifconfig` 的参数用于设置网络接口的 MAC 标签。 例如: [source,shell] .... # ifconfig bge0 maclabel biba/equal .... 将在 man:bge[4] 接口上设置 `biba/equal` 的 MAC 标签。 当使用类似 `biba/high(low-high)` 这样的标签时, 整个标签应使用引号括起来; 否则将发生错误。 每一个支持标签的策略模块都提供了用于在网络接口上禁用该 MAC 标签的系统控制变量。 将标签设置为 `equal` 的效果与此类似。 请参见 `sysctl` 的输出、 策略模块的联机手册, 或本章接下来的内容, 以了解更进一步的详情。 === 用单一标签还是多重标签? 默认情况下, 系统采用的是 `singlelabel` 选项。 但这对管理员意味着什么呢? 
两种策略之间存在很多的不同之处, 它们在系统安全模型的灵活性方面, 提供了不同的选择。 `singlelabel` 只允许在每个主体或客体上使用一个标签, 如 `biba/high`。 这降低了管理的开销, 但也同时降低了支持标签的策略的灵活性。 许多管理员可能更希望在安全策略中使用 `multilabel`。

`multilabel` 选项允许每一个主体或客体拥有各自独立的 MAC 标签, 以取代标准的、 在整个分区上只强制使用一个标签的 `singlelabel` 选项。 `multilabel` 和 `single` 标签选项只有对实现了标签功能的那些策略, 如 Biba、 Lomac、 MLS 以及 SEBSD 才有意义。

很多情况下是不需要设置 `multilabel` 的。 考虑下列情形和安全模型:

* 使用了 MAC 以及许多混合策略的 FreeBSD web-服务器。
* 这台机器上的整个系统中只需要一个标签, 即 `biba/high`。 此处的文件系统并不需要 `multilabel` 选项, 因为有效的 label 只有一个。
* 因为这台机器将作为 Web 服务器使用, 因此应该以 `biba/low` 运行 Web 服务, 以杜绝向上写。 Biba 策略以及它如何运作将在稍后予以讨论, 因此, 如果您感觉前面的说明难以理解的话, 请继续阅读下面的内容, 再回来阅读这些内容就会有较为清晰的认识了。 服务器可以使用设置为 `biba/low` 的单独的分区, 用于保持其运行环境的状态。 这个例子中还省略了许多内容, 例如, 如何为数据配置访问限制、 参数配置和用户的设置; 它只是为前述的内容提供一个简单的例子。

如果打算使用非标签式策略, 就不需要 `multilabel` 选项了。 这些策略包括 `seeotheruids`、 `portacl` 和 `partition`。

另一个需要注意的事情是, 在分区上使用 `multilabel` 并建立基于 `multilabel` 的安全模型可能会提高系统管理的开销, 因为文件系统中的所有客体都需要指定标签, 这包括目录、 文件, 甚至设备节点。

接下来的命令将在需要使用多个标签的文件系统上设置 `multilabel`。 这一操作只能在单用户模式下完成:

[source,shell]
....
# tunefs -l enable /
....
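若想确认 `multilabel` 标志是否已经生效, 可以使用 `tunefs` 的 `-p` 选项察看文件系统当前的可调参数 (输出的具体措辞可能因 FreeBSD 版本而异, 此处仅作示意):

[source,shell]
....
# tunefs -p /
....

输出中标有 `-l` 的一行即显示 multilabel MAC 标志当前是否启用。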
交换区不需要如此配置。

[NOTE]
====
某些用户可能会在根分区上配置 `multilabel` 标志时遇到困难。 如果发生这样的情况, 请复查本章的 <>。
====

[[mac-planning]]
== 规划安全配置

在实施新技术时, 首先进行规划都是非常好的习惯。 在这段时间, 管理员一般都应 "进行全面的考察", 这至少应包括下列因素:

* 方案实施的必要条件;
* 方案实施的目标;

就实施 MAC 而言, 这包括:

* 如何在目标系统上对信息和资源进行分类。
* 需要限制哪类信息或资源的访问, 以及应采用何种限制。
* 需要使用哪些 MAC 模块来完成这些目标。

尽管重新配置并修改系统资源和安全配置是可行的, 但查找整个系统并修复既存的文件和用户帐号并不是一件轻而易举的事情。 规划有助于完成无问题且有效的可信系统实施。 _事先_ 对采用 MAC 的可信系统, 以及其配置做试运行十分有益, 因为这对实施的成败至关重要。 草率散漫地配置 MAC 通常是导致失败的祸根。

不同的环境可能会有不同的需求。 建立多层次而完备的安全配置, 可以减少系统正式运转之后所需要的微调。 同样地, 接下来的章节将介绍管理员能够使用的各种不同的模块; 描述它们的使用和配置; 除此之外还有一些关于它们最适合的情景的介绍。 例如, web 服务器可能希望使用 man:mac_biba[4] 和 man:mac_bsdextended[4] 策略, 而其他情况下, 例如一台机器上只有少量的本地用户时, man:mac_partition[4] 则是不错的选择。

[[mac-modules]]
== 模块配置

在 MAC 框架中的每个模块, 都可以像前述那样连编入内核, 或作为运行时内核模块加载。 推荐的用法, 是通过在 [.filename]#/boot/loader.conf# 加入适当的设置, 以便在系统启动时的初始化操作过程中加载这些模块。

接下来的一些小节, 将讨论许多 MAC 模块, 并简单介绍它们的功能。 此外, 这一章还将介绍一些具体环境中的用例。 某些模块支持一种称为标签 (labeling) 的用法, 它可以通过使用类似 "允许做这个而不允许做那个" 的标签来实现访问控制。 标签配置文件可以控制允许的文件访问方式、 网络通讯, 以及许多其他权限。 在前一节中, 我们已经展示了文件系统中如何通过 `multilabel` 标志来启用基于文件或分区的访问控制的方法。 单标签配置在整个系统中只强制一个标签的限制, 这也是 `tunefs` 选项为什么是 `multilabel` 的原因。

[[mac-seeotheruids]]
== MAC seeotheruids 模块

模块名: [.filename]#mac_seeotheruids.ko#

对应的内核配置: `options MAC_SEEOTHERUIDS`

引导选项: `mac_seeotheruids_load="YES"`

man:mac_seeotheruids[4] 模块模仿并扩展了 `security.bsd.see_other_uids` 和 `security.bsd.see_other_gids sysctl` 变量。 这一模块并不需要预先配置标签, 它能够透明地与其他模块协同工作。 加载模块之后, 下列 `sysctl` 变量可以用来控制其功能:

* `security.mac.seeotheruids.enabled` 将启用模块的功能, 并使用默认的配置。 这些默认设置将阻止用户看到其他用户的进程和 socket。
* `security.mac.seeotheruids.specificgid_enabled` 将允许特定的组从这一策略中豁免。 要将某些组排除在这一策略之外, 可以用 `security.mac.seeotheruids.specificgid=XXX sysctl` 变量。 前述例子中, _XXX_ 应替换为希望不受限的组 ID 的数值形式。
* `security.mac.seeotheruids.primarygroup_enabled` 可以用来将特定的主要组排除在策略之外。 使用这一变量时, 不能同时设置 `security.mac.seeotheruids.specificgid_enabled`。

[[mac-bsdextended]]
== MAC bsdextended 模块

模块名: [.filename]#mac_bsdextended.ko#

对应的内核配置: `options MAC_BSDEXTENDED`

引导选项: `mac_bsdextended_load="YES"`
man:mac_bsdextended[4] 模块能够强制文件系统防火墙策略。 这一模块的策略提供了标准文件系统权限模型的一种扩展, 使得管理员能够建立一种类似防火墙的规则集, 以保护文件系统层次结构中的文件、 实用程序以及目录。 在尝试访问文件系统客体时, 会遍历规则表, 直至找到匹配的规则, 或到达表尾。 这一行为可以通过修改 man:sysctl[8] 参数 `security.mac.bsdextended.firstmatch_enabled` 来进行设置。 与 FreeBSD 中的其他防火墙设置类似, 也可以建一个文件来配置访问控制策略, 并通过 man:rc.conf[5] 变量的配置在系统引导时加载它。

规则表可以通过 man:ugidfw[8] 工具来输入, 其语法类似 man:ipfw[8]。 此外还可以通过使用 man:libugidfw[3] 库来开发其他的工具。

当使用这一模块时应极其小心; 不正确的使用将导致文件系统的某些部分无法访问。

=== 例子

在加载了 man:mac_bsdextended[4] 模块之后, 下列命令可以用来列出当前的规则配置:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

如希望的那样, 目前还没有定义任何规则。 这意味着一切都还可以访问。 要创建一个阻止所有用户, 而保持 `root` 不受影响的规则, 只需运行下面的命令:

[source,shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

这本身可能是一个很糟糕的主意, 因为它会阻止所有用户执行哪怕最简单的命令, 例如 `ls`。 更富于爱心的规则可能是:

[source,shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

这将阻止任何 `user1` 对 `_user2_` 的主目录的全部访问, 包括目录列表。 `user1` 可以用 `not uid _user2_` 代替。 这将同样的强制访问控制实施在所有用户, 而不是单个用户上。

[NOTE]
====
`root` 用户不会受到这些变动的影响。
====

我们已经给出了 man:mac_bsdextended[4] 模块如何帮助加强文件系统的大致介绍。 要了解更进一步的信息, 请参见 man:mac_bsdextended[4] 和 man:ugidfw[8] 联机手册。

[[mac-ifoff]]
== MAC ifoff 模块

模块名: [.filename]#mac_ifoff.ko#

对应的内核配置: `options MAC_IFOFF`

引导选项: `mac_ifoff_load="YES"`

man:mac_ifoff[4] 模块完全是为了立即禁止网络接口, 以及阻止在系统初启时启用网络接口而设计的。 它不需要在系统中配置任何标签, 也不依赖于其他 MAC 模块。 绝大多数特性都可以通过调整下面的 `sysctl` 来加以控制。

* `security.mac.ifoff.lo_enabled` 表示 启用/禁用 环回接口 (man:lo[4]) 上的全部流量。
* `security.mac.ifoff.bpfrecv_enabled` 表示 启用/禁用 伯克利包过滤器 (man:bpf[4]) 接口上的全部流量。
* `security.mac.ifoff.other_enabled` 将在所有其他接口 启用/禁用 网络。

最为常用的 man:mac_ifoff[4] 用法之一是在不允许引导过程中出现网络流量的环境中监视网络。 另一个建议的用法是撰写一个使用 package:security/aide[] 的脚本, 以便自动地在受保护的目录中发现新的或修改过的文件时切断网络。

[[mac-portacl]]
== MAC portacl 模块

模块名: [.filename]#mac_portacl.ko#

对应的内核配置: `MAC_PORTACL`

引导选项: `mac_portacl_load="YES"`

man:mac_portacl[4] 模块可以用来通过一系列 `sysctl` 变量来限制绑定本地的 TCP 和 UDP 端口。 本质上 man:mac_portacl[4] 使得 非-`root` 用户能够绑定到它所指定的特权端口,
也就是那些编号小于 1024 的端口。 在加载之后, 这个模块将在所有的 socket 上启用 MAC 策略。 可以调整下列一些配置: * `security.mac.portacl.enabled` 将完全 启用/禁用 策略。 * `security.mac.portacl.port_high` 将设置为 man:mac_portacl[4] 所保护的最高端口号。 * `security.mac.portacl.suser_exempt` 如果设置为非零值, 表示将 `root` 用户排除在策略之外。 * `security.mac.portacl.rules` 将指定实际的 mac_portacl 策略; 请参见下文。 实际的 `mac_portacl` 策略, 是在 `security.mac.portacl.rules` sysctl 所指定的一个下列形式的字符串: `rule[,rule,...]` 其中可以给出任意多个规则。 每一个规则的形式都是: `idtype:id:protocol:port`。 这里的 [parameter]#idtype# 参数可以是 `uid` 或 `gid`, 分别表示将 [parameter]#id# 参数解释为用户 id 或组 id。 [parameter]#protocol# 参数可以用来确定希望应用到 TCP 或 UDP 协议上, 方法是把这一参数设置为 `tcp` 或 `udp`。 最后的 [parameter]#port# 参数则给出了所指定的用户或组能够绑定的端口号。 [NOTE] ==== 由于规则集会直接由内核加以解释, 因此只能以数字形式表示用户 ID、 组 ID, 以及端口等参数。 换言之, 您不能使用用户、 组, 或端口服务的名字来指定它们。 ==== 默认情况下, 在 类-UNIX(R) 系统中, 编号小于 1024 的端口只能为特权进程使用或绑定, 也就是那些以 `root` 身份运行的进程。 为了让 man:mac_portacl[4] 能够允许非特权进程绑定低于 1024 的端口, 就必须首先禁用标准的 UNIX(R) 限制。 这可以通过把 man:sysctl[8] 变量 `net.inet.ip.portrange.reservedlow` 和 `net.inet.ip.portrange.reservedhigh` 设置为 0 来实现。 请参见下面的例子, 或 man:mac_portacl[4] 联机手册中的说明, 以了解进一步的信息。 === 例子 下面的例子更好地展示了前面讨论的内容: [source,shell] .... # sysctl security.mac.portacl.port_high=1023 # sysctl net.inet.ip.portrange.reservedlow=0 net.inet.ip.portrange.reservedhigh=0 .... 首先我们需要设置使 man:mac_portacl[4] 管理标准的特权端口, 并禁用普通的 UNIX(R) 绑定限制。 [source,shell] .... # sysctl security.mac.portacl.suser_exempt=1 .... 您的 `root` 用户不应因此策略而失去特权, 因此请把 `security.mac.portacl.suser_exempt` 设置为一个非零的值。 现在您已经成功地配置了 man:mac_portacl[4] 模块, 并使其默认与 类-UNIX(R) 系统一样运行了。 [source,shell] .... # sysctl security.mac.portacl.rules=uid:80:tcp:80 .... 允许 UID 为 80 的用户 (正常情况下, 应该是 `www` 用户) 绑定到 80 端口。 这样 `www` 用户就能够运行 web 服务器, 而不需要使用 `root` 权限了。 [source,shell] .... # sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995 .... 
允许 UID 为 1001 的用户绑定 TCP 端口 110 ("pop3") 和 995 ("pop3s")。 这样用户就能够启动接受来发到 110 和 995 的连接请求的服务了。 [[mac-partition]] == MAC partition (分区) 模块 模块名: [.filename]#mac_partition.ko# 对应的内核配置: `options MAC_PARTITION` 引导选项: `mac_partition_load="YES"` man:mac_partition[4] 策略将把进程基于其 MAC 标签放到特定的 "partitions" (分区) 中。 这是一种特殊类型的 man:jail[8], 但对两者进行比较意义不大。 这个模块应加到 man:loader.conf[5] 文件中, 以便在启动过程中启用这些规则。 绝大多数这一策略的配置是通过 man:setpmac[8] 工具来完成的, 它将在后面介绍。 这个策略可以使用下面的 `sysctl`: * `security.mac.partition.enabled` 将启用强制的 MAC 进程 partitions。 当启用了这个规则时, 用户将只能看到他们自己的, 以及其他与他们同处一个 partition 的进程, 而不能使用能够越过 partition 的工具。 例如, `insecure` class 中的用户, 就无法使用 `top` 命令, 以及其他需要产生新进程的工具。 要设置或删除 partition 标签中的工具, 需要使用 `setpmac`: [source,shell] .... # setpmac partition/13 top .... 这将把 `top` 命令加入到 `insecure` class 中的用户的标签集。 注意, 所有由 `insecure` class 中的用户产生的进程, 仍然会留在 `partition/13` 标签中。 === 例子 下面的命令将显示 partition 标签以及进程列表: [source,shell] .... # ps Zax .... 接下来的这个命令将允许察看其他用户的进程 partition 标签, 以及那个用户正在运行的进程: [source,shell] .... # ps -ZU trhodes .... 
[NOTE] ==== 除非加载了 man:mac_seeotheruids[4] 策略, 否则用户就看不到 `root` 的标签。 ==== 非常手工化的实现, 可能会在 [.filename]#/etc/rc.conf# 中禁用所有的服务, 并用脚本来按不同的标签来启动它们。 [NOTE] ==== 下面的几个策略支持基于所给出的三种标签的完整性设定。 这些选项, 连同它们的限制, 在模块的联机手册中进行了进一步介绍。 ==== [[mac-mls]] == MAC 多级 (Multi-Level) 安全模块 模块名: [.filename]#mac_mls.ko# 对应的内核配置: `options MAC_MLS` 引导选项: `mac_mls_load="YES"` man:mac_mls[4] 策略, 通过严格控制信息流向来控制系统中主体和客体的访问。 在 MLS 环境中, "许可 (clearance)" 级别会在每一个主体或客体标签上进行设置, 连同对应的区间。 由于这些透明度或敏感度可以有六千多个层次, 因此为每一个主体或客体进行配置将是一件让任何系统管理员都感到头疼的任务。 所幸的是, 这个策略中已经包含了三个 "立即可用的" 标签。 这些标签是 `mls/low`、 `mls/equal` 以及 `mls/high`。 由于这些标签已经在联机手册中进行了介绍, 这里只给出简要的说明: * `mls/low` 标签包含了最低配置, 从而允许其他客体支配它。 任何标记为 `mls/low` 的客体将是低透明度的, 从而不允许访问更高级别的信息。 此外, 这个标签也阻止拥有较高透明度的客体向其写入或传递信息。 * `mls/equal` 标签应放到不希望使用这一策略的客体上。 * `mls/high` 标签是允许的最高级别透明度。 指定了这个标签的客体将支配系统中的其他客体; 但是, 它们将不允许向较低级别的客体泄露信息。 MLS 提供了: * 一些非层次分类的层次安全模型; * 固定规则: 不允许向上读, 不允许向下写 (主体可以读取同级或较低级别的客体, 但不能读取高级别的。 类似地, 主体可以向同级或较高级写, 而不能向下写); * 保密 (防止不适当的数据透露); * 系统设计的基础要点, 是在多个敏感级别之间并行地处理数据 (而不泄露秘密的和机密的信息)。 下列 `sysctl` 可以用来配置特殊服务和接口: * `security.mac.mls.enabled` 用来启用/禁用 MLS 策略。 * `security.mac.mls.ptys_equal` 将所有的 man:pty[4] 设备标记为 `mls/equal`。 * `security.mac.mls.revocation_enabled` 可以用来在标签转为较低 grade 时撤销客体访问权。 * `security.mac.mls.max_compartments` 可以用来设置客体的最大区间层次; 基本上, 这也就是系统中所允许的最大区间数。 要管理 MLS 标签, 可以使用 man:setfmac[8] 命令。 要在客体上指定标签, 需要使用下面的命令: [source,shell] .... # setfmac mls/5 test .... 下述命令用于取得文件 [.filename]#test# 上的 MLS 标签: [source,shell] .... # getfmac test ....
以上是对于 MLS 策略提供功能的概要。 另一种做法是在 [.filename]#/etc# 中建立一个主策略文件, 并在其中指定 MLS 策略信息, 作为 `setfmac` 命令的输入。 这种方法, 将在其他策略之后进行介绍。 === 规划托管敏感性 通过使用多级安全策略模块, 管理员可以规划如何控制敏感信息的流向。 默认情况下, 由于其默认的禁止向上读以及向下写的性质, 系统会默认将所有客体置于较低的状态。 这样, 所有的客体都可以访问, 而管理员则可以在配置阶段慢慢地进行提高信息的敏感度这样的修改。 除了前面介绍的三种基本标签选项之外, 管理员还可以根据需要将用户和用户组进行分组, 以阻止它们之间的信息流。 一些人们比较熟悉的信息限界词汇, 如 `机密`、 `秘密`, 以及 `绝密` 可以方便您理解这一概念。 管理员也可以简单地根据项目级别建不同的分组。 无论采用何种分类方法, 在实施限制性的策略之前, 都必须首先想好如何进行规划。 这个安全策略模块最典型的用例是电子商务的 web 服务器, 其上的文件服务保存公司的重要信息以及金融机构的情况。 对于只有两三个用户的个人工作站而言, 则可能不甚适用。 [[mac-biba]] == MAC Biba 模块 模块名: [.filename]#mac_biba.ko# 对应的内核配置: `options MAC_BIBA` 引导选项: `mac_biba_load="YES"` man:mac_biba[4] 模块将加载 MAC Biba 策略。 这个策略与 MLS 策略非常类似, 只是信息流的规则有些相反的地方。 通俗地说, 这就是防止敏感信息向下传播, 而 MLS 策略则是防止敏感信息的向上传播; 因而, 这一节的许多内容都可以同时应用于两种策略。 在 Biba 环境中, "integrity" (完整性) 标签, 将设置在每一个主体或客体上。 这些标签是按照层次级别建立的。 如果客体或主体的级别被提升, 其完整性也随之提升。 被支持的标签是 `biba/low`, `biba/equal` 以及 `biba/high`; 解释如下: * `biba/low` 标签是客体或主体所能拥有的最低完整性级别。 在客体或主体上设置它, 将阻止其在更高级别客体或主体对其进行的写操作, 虽然读仍被允许。 * `biba/equal` 标签只应在那些希望排除在策略之外的客体上设置。 * `biba/high` 允许向较低标签的客体上写, 但不允许读那些客体。 推荐在那些可能影响整个系统完整性的客体上设置这个标签。 Biba 提供了: * 层次式的完整性级别, 并提供了一组非层次式的完整性分类; * 固定规则: 不允许向上写, 不允许向下读 (与 MLS 相反)。 主体可以在它自己和较低的级别写, 但不能向更高级别实施写操作。 类似地, 主体也可以读在其自己的, 或更高级别的客体, 但不能读取较低级别的客体; * 完整性 (防止对数据进行不正确的修改); * 完整性级别 (而不是 MLS 的敏感度级别)。 下列 `sysctl` 可以用于维护 Biba 策略。 * `security.mac.biba.enabled` 可以用来在机器上启用/禁用是否实施 Biba 策略。 * `security.mac.biba.ptys_equal` 可以用来在 man:pty[4] 设备上禁用 Biba 策略。 * `security.mac.biba.revocation_enabled` 将在支配主体发生变化时强制撤销对客体的访问权。 要操作系统客体上的 Biba 策略, 需要使用 `setfmac` 和 `getfmac` 命令: [source,shell] .... # setfmac biba/low test # getfmac test test: biba/low .... 
=== 规划托管完整性 与敏感性不同, 完整性是要确保不受信方不能对信息进行篡改。 这包括了在主体和客体之间传递的信息。 这能够确保用户只能修改甚至访问需要他们的信息。 man:mac_biba[4] 安全策略模块允许管理员指定用户能够看到和执行的文件和程序, 并确保这些文件能够为系统及用户或用户组所信任, 而免受其他威胁。 在最初的规划阶段, 管理员必须做好将用户分成不同的等级、 级别和区域的准备。 在启动前后, 包括数据以及程序和使用工具在内的客体, 用户都会无法访问。 一旦启用了这个策略模块, 系统将默认使用高级别的标签, 而划分用户级别和等级的工作则交由管理员来进行配置。 与前面介绍的级别限界不同, 好的规划方法可能还包括 topic。 例如, 只允许开发人员修改代码库、 使用源代码编译器, 以及其他开发工具, 而其他用户则分入其他类别, 如测试人员、 设计人员, 以及普通用户, 这些用户可能只拥有读这些资料的权限。 通过其自然的安全控制, 完整性级别较低的主体, 就会无法向完整性级别高的主体进行写操作; 而完整性级别较高的主体, 也不能观察或读较低完整性级别的客体。 通过将客体的标签设为最低级, 可以阻止所有主体对其进行的访问操作。 这一安全策略模块预期的应用场合包括受限的 web 服务器、 开发和测试机, 以及源代码库。 而对于个人终端、 作为路由器的计算机, 以及网络防火墙而言, 它的用处就不大了。 [[mac-lomac]] == MAC LOMAC 模块 模块名: [.filename]#mac_lomac.ko# 对应的内核配置: `options MAC_LOMAC` 引导选项: `mac_lomac_load="YES"` 和 MAC Biba 策略不同, man:mac_lomac[4] 策略只允许在降低了完整性级别之后, 才允许在不破坏完整性规则的前提下访问较低完整性级别的客体。 MAC 版本的 Low-watermark 完整性策略不应与较早的 man:lomac[4] 实现相混淆, 除了使用浮动的标签来支持主体通过辅助级别区间降级之外, 其工作方式与 Biba 大体相似。 这一次要的区间以 `[auxgrade]` 的形式出现。 当指定包含辅助级别的 lomac 策略时, 其形式应类似于: `lomac/10[2]` 这里数字二 (2) 就是辅助级别。 MAC LOMAC 策略依赖于系统客体上存在普适的标签, 这样就允许主体从较低完整性级别的客体读取, 并对主体的标签降级, 以防止其在之后写高完整性级别的客体。 这就是前面讨论的 `[auxgrade]` 选项, 因此这个策略能够提供更大的兼容性, 而所需要的初始配置也要比 Biba 少。 === 例子 与 Biba 和 MLS 策略类似; `setfmac` 和 `setpmac` 工具可以用来在系统客体上放置标签: [source,shell] .... # setfmac /usr/home/trhodes lomac/high[low] # getfmac /usr/home/trhodes lomac/high[low] .... 注意, 这里的辅助级别是 `low`, 这一特性只由 MAC LOMAC 策略提供。 [[mac-implementing]] == MAC Jail 中的 Nagios 下面给出了通过多种 MAC 模块, 并正确地配置策略来实现安全环境的例子。 这只是一个测试, 因此不应被看作四海一家的解决之道。 仅仅实现一个策略, 而忽略它不能解决任何问题, 并可能在生产环境中产生灾难性的后果。 在开始这些操作之前, 必须在每一个文件系统上设置 `multilabel` 选项, 这些操作在这一章开始的部分进行了介绍。 不完成这些操作, 将导致错误的结果。 首先, 请确认已经安装了 package:net-mngt/nagios-plugins[]、 package:net-mngt/nagios[], 和 package:www/apache13[] 这些 ports, 并对其进行了配置, 且运转正常。 === 创建一个 insecure (不安全) 用户 Class 首先是在 [.filename]#/etc/login.conf# 文件中加入一个新的用户 class: [.programlisting] .... 
insecure:\ -:copyright=/etc/COPYRIGHT:\ :welcome=/etc/motd:\ :setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\ :path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin :manpath=/usr/shared/man /usr/local/man:\ :nologin=/usr/sbin/nologin:\ :cputime=1h30m:\ :datasize=8M:\ :vmemoryuse=100M:\ :stacksize=2M:\ :memorylocked=4M:\ :memoryuse=8M:\ :filesize=8M:\ :coredumpsize=8M:\ :openfiles=24:\ :maxproc=32:\ :priority=0:\ :requirehome:\ :passwordtime=91d:\ :umask=022:\ :ignoretime@:\ :label=biba/10(10-10): .... 并在 default 用户 class 中加入: [.programlisting] .... :label=biba/high: .... 一旦完成上述操作, 就需要运行下面的命令来重建数据库: [source,shell] .... # cap_mkdb /etc/login.conf .... === 引导配置 现在暂时还不要重新启动, 我们还需要在 [.filename]#/boot/loader.conf# 中增加下面几行, 以便让模块随系统初始化一同加载: [.programlisting] .... mac_biba_load="YES" mac_seeotheruids_load="YES" .... === 配置用户 使用下面的命令将 `root` 设为属于默认的 class: [source,shell] .... # pw usermod root -L default .... 所有非 `root` 或系统的用户, 现在需要一个登录 class。 登录 class 是必须的, 否则这些用户将被禁止使用类似 man:vi[1] 这样的命令。 下面的 `sh` 脚本应能完成这个工作: [source,shell] .... # for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \ # /etc/passwd`; do pw usermod $x -L default; done; .... 将 `nagios` 和 `www` 这两个用户归入不安全 class: [source,shell] .... # pw usermod nagios -L insecure .... [source,shell] .... # pw usermod www -L insecure .... === 创建上下文文件 接下来需要创建一个上下文文件; 您可以把下面的实例放到 [.filename]#/etc/policy.contexts# 中。 [.programlisting] .... # This is the default BIBA policy for this system. # System: /var/run biba/equal /var/run/* biba/equal /dev biba/equal /dev/* biba/equal /var biba/equal /var/spool biba/equal /var/spool/* biba/equal /var/log biba/equal /var/log/* biba/equal /tmp biba/equal /tmp/* biba/equal /var/tmp biba/equal /var/tmp/* biba/equal /var/spool/mqueue biba/equal /var/spool/clientmqueue biba/equal # For Nagios: /usr/local/etc/nagios /usr/local/etc/nagios/* biba/10 /var/spool/nagios biba/10 /var/spool/nagios/* biba/10 # For apache /usr/local/etc/apache biba/10 /usr/local/etc/apache/* biba/10 .... 
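在把上下文文件交给 `setfsmac` 之前, 可以先做一次粗略的格式检查。 下面是一个假设性的 sh 草稿 (文件内容节选自上文示例), 只验证每个非注释行的最后一个字段都是 `biba/` 形式的标签, 并不能代替 man:setfsmac[8] 自身的语法检查:

```shell
#!/bin/sh
# 假设性示例: 粗略检查 policy.contexts 风格文件的每个非注释行
# 是否以 biba/ 标签结尾。
ctx=$(mktemp)
cat > "${ctx}" <<'EOF'
# System:
/var/run        biba/equal
/var/spool/nagios       biba/10
EOF
if awk 'NF && $1 !~ /^#/ && $NF !~ /^biba\// { bad = 1 } END { exit bad }' "${ctx}"; then
    echo "labels OK"    # 所有行都带标签时输出 "labels OK"
fi
rm -f "${ctx}"
```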
这个策略通过在信息流上设置限制来强化安全。 在这个配置中, 包括 `root` 和其他用户在内的用户, 都不允许访问 Nagios。 作为 Nagios 一部分的配置文件和进程, 都是完全独立的, 也称为 jailed。 接下来可以用下面的命令将其读入系统: [source,shell] .... # setfsmac -ef /etc/policy.contexts / .... [NOTE] ==== 随环境不同前述的文件系统布局可能会有所不同; 不过无论如何, 都只能在一个文件系统上运行它。 ==== 在 [.filename]#/etc/mac.conf# 文件中的 main 小节需要进行下面的修改: [.programlisting] .... default_labels file ?biba default_labels ifnet ?biba default_labels process ?biba default_labels socket ?biba .... === 启用网络 在 [.filename]#/boot/loader.conf# 中增加下列内容: [.programlisting] .... security.mac.biba.trust_all_interfaces=1 .... 将下述内容加入 [.filename]#rc.conf# 中的网络接口配置。 如果主 Internet 配置是通过 DHCP 完成的, 则需要在每次系统启动之后手工执行类似的配置: [.programlisting] .... maclabel biba/equal .... === 测试配置 首先要确认 web 服务以及 Nagios 不会随系统的初始化和重启过程而自动启动。 在此之前, 请再次确认 `root` 用户不能访问 Nagios 配置目录中的任何文件。 如果 `root` 能够在 [.filename]#/var/spool/nagios# 中运行 man:ls[1], 则表示配置有误。 如果配置正确的话, 您会收到一条 "permission denied" 错误信息。 如果一切正常, Nagios、 Apache, 以及 Sendmail 就可以按照适应安全策略的方式启动了。 下面的命令将完成此工作: [source,shell] .... # cd /etc/mail && make stop && \ setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \ setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart .... 再次检查是否一切正常。 如果不是的话, 请检查日志文件和错误信息。 此外, 还可以用 man:sysctl[8] 来临时禁用 man:mac_biba[4] 安全策略模块的强制措施, 并象之前那样进行配置和启动服务。 [NOTE] ==== `root` 用户可以放心大胆地修改安全强制措施, 并编辑配置文件。 下面的命令可以对安全策略进行降级, 并启动一个新的 shell: [source,shell] .... # setpmac biba/10 csh .... 要阻止这种情况发生, 就需要配置 man:login.conf[5] 中许可的命令范围了。 如果 man:setpmac[8] 尝试执行超越许可范围的命令, 则会返回一个错误, 而不是执行命令。 在这个例子中, 可以把 root 设为 `biba/high(high-high)`。 ==== [[mac-userlocked]] == User Lock Down 这个例子针对的是一个相对较小的存储系统, 其用户数少于五十。 用户能够在其上登录, 除了存储数据之外, 还可以访问一些其他资源。 在这个场景中, man:mac_bsdextended[4] 可以与 man:mac_seeotheruids[4] 并存, 以达到禁止访问非授权资源, 同时隐藏其他用户的进程的目的。 首先, 在 [.filename]#/boot/loader.conf# 中加入: [.programlisting] .... mac_seeotheruids_load="YES" .... 随后, 可以通过下述 rc.conf 变量来启用 man:mac_bsdextended[4] 安全策略模块: [.programlisting] ....
ugidfw_enable="YES" .... 默认规则保存在 [.filename]#/etc/rc.bsdextended# 中, 并在系统初始化时加载; 但是, 其中的默认项可能需要进行一些改动。 因为这台机器只为获得了授权的用户提供服务, 因此除了最后两项之外, 其它内容都应保持注释的状态。 这两项规则将默认强制加载属于用户的系统客体。 在这台机器上添加需要的用户并重新启动。 出于测试的目的, 请在两个控制台上分别以不同的用户身份登录。 运行 `ps aux` 命令来看看是否能看到其他用户的进程。 此外, 在其他用户的主目录中运行 man:ls[1] 命令, 如果配置正确, 则这个命令会失败。 不要尝试以 `root` 用户的身份进行测试, 除非您已经修改了特定的 `sysctl` 来阻止超级用户的访问。 [NOTE] ==== 在添加新用户时, 他们的 man:mac_bsdextended[4] 规则不会自动出现在规则集表中。 要迅速更新规则集, 只需简单地使用 man:kldunload[8] 和 man:kldload[8] 工具来卸载并重新加载安全策略模块。 ==== [[mac-troubleshoot]] == MAC 框架的故障排除 在开发过程中, 有一些用户报告了正常配置下出现的问题。 其中的一些问题如下所示: === 无法在 [.filename]#/# 上启用 `multilabel` 选项 `multilabel` 标志在根 ([.filename]#/#) 分区上没有保持启用状态! 看起来每五十个用户中就有一个遇到这样的问题, 当然, 在我们的初始配置过程中也出现过这样的问题。 更进一步的观察使得我相信这个所谓的 "bug" 是由于文档中不确切的描述, 或对其产生的误解造成的。 无论它是因为什么引发的, 下面的步骤应该能够解决此问题: [.procedure] ==== . 编辑 [.filename]#/etc/fstab# 并将根分区设置为 `ro`, 表示只读。 . 重新启动并进入单用户模式。 . 在 [.filename]#/# 上运行 `tunefs -l enable` . 重新启动并进入正常的模式。 . 运行 `mount -urw` [.filename]#/# 并把 [.filename]#/etc/fstab# 中的 `ro` 改回 `rw`, 然后再次重新启动。 . 再次检查来自 `mount` 的输出, 以确认根文件系统上正确地设置了 `multilabel`。 ==== === 在 MAC 之后无法启动 X11 了 在使用 MAC 建立安全的环境之后, 就无法启动 X 了! 这可能是由于 MAC `partition` 策略, 或者对某个 MAC 标签策略进行了错误的配置导致的。 要调试这个问题, 请尝试: [.procedure] ==== . 检查错误信息; 如果用户是在 `insecure` class 中, 则 `partition` 策略就可能导致问题。 尝试将用户的 class 重新改为 `default` class, 并使用 `cap_mkdb` 命令重建数据库。 如果这无法解决问题, 则进入第二步。 . 仔细检查标签策略。 确认针对有问题的用户的策略是正确的, 特别是 X11 应用, 以及 [.filename]#/dev# 项。 . 如果这些都无法解决问题, 将出错消息和对您的环境的描述, 发送到 http://www.TrustedBSD.org[TrustedBSD] 网站上的 TrustedBSD 讨论邮件列表, 或者 {freebsd-questions} 邮件列表。 ==== === Error: man:_secure_path[3] cannot stat [.filename]#.login_conf# 当我试图从 `root` 用户切换到系统中的其他用户时, 出现了错误提示 `_secure_path: unable to stat .login_conf`。 这个提示通常在用户拥有高于它将要成为的那个用户的标签设定时出现。 例如, 如果系统上的一个用户 `joe` 拥有默认的 `biba/low` 标签, 而 `root` 用户拥有 `biba/high`, 它也就不能查看 `joe` 的主目录, 无论 `root` 是否使用了 `su` 来成为 `joe`。 这种情况下, Biba 完整性模型, 就不会允许 `root` 查看在较低完整性级别中的客体。 === `root` 用户名被破坏了!
在普通模式, 甚至是单用户模式中, `root` 不被识别。 `whoami` 命令返回了 0 (零) 而 `su` 则提示 `who are you?`。 到底发生了什么? 标签策略被禁用可能会导致这样的问题, 无论是通过 man:sysctl[8] 或是卸载了策略模块。 如果打算禁用策略, 或者临时禁用它, 则登录性能数据库需要重新配置, 在其中删除 `label` 选项。 仔细检查 [.filename]#login.conf# 以确保所有的 `label` 选项都已经删除, 然后使用 `cap_mkdb` 命令来重建数据库。 这种情况也可能在通过策略来限制访问 [.filename]#master.passwd# 文件或对应的那个数据库时发生。 这主要是由于管理员修改受某一 label 限制的文件, 而与系统级的通用策略发生了冲突。 这时, 用户信息将由系统直接读取, 而在文件继承了新的 label 之后则会拒绝访问。 此时, 只需使用 man:sysctl[8] 禁用这一策略, 一切就会恢复正常了。 diff --git a/documentation/content/zh-cn/books/handbook/network-servers/_index.adoc b/documentation/content/zh-cn/books/handbook/network-servers/_index.adoc index 59682f444d..81fd2575f8 100644 --- a/documentation/content/zh-cn/books/handbook/network-servers/_index.adoc +++ b/documentation/content/zh-cn/books/handbook/network-servers/_index.adoc @@ -1,2788 +1,2787 @@ --- title: 第 30 章 网络服务器 part: 部分 IV. 网络通讯 prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 35 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = 网络服务器 :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 30 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == 概要 本章将覆盖某些在 UNIX(R) 系统上常用的网络服务。话题将会涉及 如何安装、配置、测试和维护多种不同类型的网络服务。本章节中将提 供大量配置文件的样例,期望能够对您有所裨益。 在读完本章之后,您将会知道: * 
如何管理 inetd。 * 如何设置运行一个网络文件系统。 * 如何配置一个网络信息服务器以共享用户帐号。 * 如何通过DHCP自动配置网络。 * 如何配置一个域名服务器。 * 如何设置Apache HTTP 服务器。 * 如何设置文件传输(FTP)服务器。 * 如何使用Samba为 Windows(R) 客户端设置文件和打印服务。 * 如何同步时间和日期,以及如何设置使用NTP协议的时间服务器。 * 如何配置标准的日志守护进程, `syslogd`, 接受远程主机的日志。 在阅读此章节之前,您应当: * 理解有关[.filename]##/etc/rc##中脚本的基本知识。 * 熟悉基本网络术语。 * 懂得如何安装额外的第三方软件(crossref:ports[ports,安装应用程序. Packages 和 Ports])。 [[network-inetd]] == inetd"超级服务器" [[network-inetd-overview]] === 总览 man:inetd[8] 有时也被称作 "Internet 超级服务器", 因为它可以为多种服务管理连接。 当 inetd 收到连接时, 它能够确定连接所需的程序, 启动相应的进程, 并把 socket 交给它 (服务 socket 会作为程序的标准输入、 输出和错误输出描述符)。 使用 inetd 来运行那些负载不重的服务有助于降低系统负载, 因为它不需要为每个服务都启动独立的服务程序。 一般说来, inetd 主要用于启动其它服务程序, 但它也有能力直接处理某些简单的服务, 例如 chargen、 auth, 以及 daytime。 这一节将介绍关于如何通过命令行选项, 以及配置文件 [.filename]#/etc/inetd.conf# 来对 inetd 进行配置的一些基础知识。 [[network-inetd-settings]] === 设置 inetd 是通过 man:rc[8] 系统启动的。 `inetd_enable` 选项默认设为 `NO`, 但可以在安装系统时, 由用户根据需要通过 sysinstall 来打开。 将: [.programlisting] .... inetd_enable="YES" .... 或 [.programlisting] .... inetd_enable="NO" .... 写入 [.filename]#/etc/rc.conf# 可以启用或禁用系统启动时 inetd 的自动启动。 命令: [source,shell] .... # /etc/rc.d/inetd rcvar .... 
可以显示目前的设置。 此外, 您还可以通过 `inetd_flags` 参数来向 inetd 传递额外的其它参数。 [[network-inetd-cmdline]] === 命令行选项 与多数服务程序类似, inetd 也提供了为数众多的用以控制其行为的参数。 完整的参数列表如下: `inetd [-d] [-l] [-w] [-W] [-c maximum] [-C rate] [-a address | hostname] [-p filename] [-R rate] [-s maximum] [configuration file]` 这些参数都可以通过 [.filename]#/etc/rc.conf# 的 `inetd_flags` 选项来传给 inetd。 默认情况下, `inetd_flags` 设为 `-wW -C 60`, 这表示希望为 inetd 的服务启用 TCP wrapping, 并阻止来自同一 IP 每分钟超过 60 次的请求。 虽然我们会在下面介绍关于限制连接频率的选项, 但初学的用户可能会很高兴地发现这些参数通常并不需要进行修改。 在收到超大量的连接请求时, 这些选项则有可能会发挥作用。 完整的参数列表, 可以在 man:inetd[8] 联机手册中找到。 -c maximum:: 指定单个服务的最大并发访问数量, 默认为不限。 也可以在此服务的具体配置里面通过``max-child``改掉。 -C rate:: 指定单个服务一分钟内能被单个 IP 地址调用的最大次数, 默认不限。 也可以在此服务的具体配置里面通过``max-connections-per-ip-per-minute`` 改掉。 -R rate:: 指定单个服务一分钟内能被调用的最大次数, 默认为 256。 设为 0 则允许不限次数调用。 -s maximum:: 指定同一 IP 同时请求同一服务时允许的最大值; 默认值为不限制。 您可以通过 `max-child-per-ip` 参数来以服务为单位进行限制。
service-name:: 指明各个服务的服务名。其服务名必须与[.filename]##/etc/services##中列出的一致。 这将决定inetd会监听哪个port。 一旦有新的服务需要添加,必须先在[.filename]##/etc/services##里面添加。 socket-type:: 可以是``stream``、``dgram``、``raw``或者 ``seqpacket``。 ``stream`` 用于基于连接的 TCP 服务;而 ``dgram`` 则用于使用 UDP 协议的服务。 protocol:: 下列之一: + [.informaltable] [cols="1,1", frame="none", options="header"] |=== | 协议 | 说明 |tcp, tcp4 |TCP IPv4 |udp, udp4 |UDP IPv4 |tcp6 |TCP IPv6 |udp6 |UDP IPv6 |tcp46 |Both TCP IPv4 and v6 |udp46 |Both UDP IPv4 and v6 |=== {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]:: `wait|nowait` 指明从inetd 里头调用的服务是否可以自己处理socket. `dgram` socket类型必须使用``wait``, 而stream socket daemons, 由于通常使用多线程方式,应当使用 `nowait`. `wait` 通常把多个 socket 丢给单个服务进程, 而 `nowait` 则 会为每个新的 socket 生成一个子进程。 + `max-child` 选项能够配置 inetd 能为本服务派生出的最大子进程数量。 如果某特定服务需要限定最高10个实例, 把``/10`` 放到``nowait``后头就可以了。 指定 ``/0`` 表示不限制子进程的数量。 + 除了 `max-child` 之外, 还有两个选项可以限制来自同一位置到特定服务的最大连接数。 `max-connections-per-ip-per-minute` 可以限制特定 IP 地址每分钟的总连接数, 例如, 限制任何 IP 地址每分钟最多连接十次。 `max-child-per-ip` 则可以限制为某一 IP 地址在任何时候所启动的子进程数量。 这些选项对于防止针对服务器有意或无意的资源耗竭和拒绝服务 (DoS) 攻击十分有用。 + 这个字段中, 必须指定 `wait` 或 `nowait` 两者之一。 而 `max-child`、 `max-connections-per-ip-per-minute` 和 `max-child-per-ip` 则是可选项。 + 流式多线程服务, 并且不配置任何 `max-child`、 `max-connections-per-ip-per-minute` 或 `max-child-per-ip` 限制时, 其配置为: `nowait`。 + 同一个服务, 但希望将服务启动的数量限制为十个时, 则是: `nowait/10`。 + 同样配置, 限制每个 IP 地址每分钟最多连接二十次, 而同时启动的子进程最多十个, 应写作: `nowait/10/20`。 + 下面是 man:fingerd[8] 服务的默认配置: + [.programlisting] .... finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -s .... 
+ 最后这个例子中, 将子进程数限制为 100 个, 而任意 IP 最多同时建立 5 个连接: `nowait/100/0/5`。 user:: 该开关指定服务将以什么用户身份运行。一般而言,服务运行身份是 ``root``。基于安全目的,可以看到有些服务以 ``daemon``身份,或者是最小特权的 ``nobody``身份运行。 server-program:: 当连接到来时,执行服务程序的全路径。如果服务是由 inetd内置提供的,以``internal``代替。 server-program-arguments:: 当``server-program``调用到时,该开关 的值通过``argv[0]``通过传递给服务而工作。 如果命令行为:``mydaemon -d``,则 ``mydaemon -d``为``server-program-arguments`` 开关的值。同样的,如果服务是由inetd 内置提供的,这里还是 ``internal``。 [[network-inetd-security]] === Security 随安装时所选的模式不同, 许多 inetd 的服务可能已经默认启用。 如果确实不需要某个特定的服务, 则应考虑禁用它。 在 [.filename]#/etc/inetd.conf# 中, 将对应服务的那行前面加上 "#", 然后 <> 就可以了。 某些服务, 例如 fingerd, 可能是完全不需要的, 因为它们提供的信息可能对攻击者有用。 某些服务在设计时是缺少安全意识的, 或者有过长或压根没有连接请求的超时机制。 这使得攻击者能够通过缓慢地对这些服务发起连接, 并耗尽可用的资源。 对于这种情况, 设置 `max-connections-per-ip-per-minute`、 `max-child` 或 `max-child-per-ip` 限制, 来制约服务的行为是个好办法。 默认情况下,TCP wrapping 是打开的。参考 man:hosts_access[5] 手册,以获得更多关于在各种 inetd 调用的服务上设置TCP限制的信息。 [[network-inetd-misc]] === 杂项 daytime、 time、 echo、 discard、 chargen, 以及 auth 都是由 inetd 提供的内建服务。 auth 服务提供了网络身份服务, 它可以配置为提供不同级别的服务, 而其它服务则通常只能简单的打开或关闭。 参考 man:inetd[8] 手册获得更多信息。 [[network-nfs]] == 网络文件系统(NFS) 网络文件系统是FreeBSD支持的文件系统中的一种, 也被称为 NFS。 NFS允许一个系统在网络上与它人共享目录和文件。通过使用NFS,用户和程序可以象访问本地文件 一样访问远端系统上的文件。 以下是NFS最显而易见的好处: * 本地工作站使用更少的磁盘空间,因为通常的数据可以存放在一 台机器上而且可以通过网络访问到。 * 用户不必在每个网络上机器里头都有一个home目录。Home目录 可以被放在NFS服务器上并且在网络上处处可用。 * 诸如软驱,CDROM,和 Zip(R) 之类的存储设备可以在网络上面被别的机器使用。 这可以减少整个网络上的可移动介质设备的数量。 === NFS是如何工作的 NFS 至少包括两个主要的部分: 一台服务器, 以及至少一台客户机, 客户机远程地访问保存在服务器上的数据。 要让这一切运转起来, 需要配置并运行几个程序。 服务器必须运行以下服务: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | 服务 | 描述 |nfsd |NFS,为来自NFS客户端的 请求服务。 |mountd |NFS挂载服务,处理man:nfsd[8]递交过来的请求。 |rpcbind | 此服务允许 NFS 客户程序查询正在被 NFS 服务使用的端口。 |=== 客户端同样运行一些进程,比如 nfsiod。 nfsiod处理来自NFS的请求。 这是可选的,而且可以提高性能,对于普通和正确的操作来说并不是必须的。 参考man:nfsiod[8]手册获得更多信息。 [[network-configuring-nfs]] === 配置NFS NFS的配置过程相对简单。这个过程只需要 对[.filename]##/etc/rc.conf##文件作一些简单修改。 在NFS服务器这端,确认[.filename]##/etc/rc.conf## 文件里头以下开关都配上了: [.programlisting] .... 
rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" .... 只要NFS服务被置为enable,mountd 就能自动运行。 在客户端一侧,确认下面这个开关出现在 [.filename]##/etc/rc.conf##里头: [.programlisting] .... nfs_client_enable="YES" .... [.filename]##/etc/exports##文件指定了哪个文件系统 NFS应该输出(有时被称为"共享")。 [.filename]##/etc/exports##里面每行指定一个输出的文件系统和 哪些机器可以访问该文件系统。在指定机器访问权限的同时,访问选项 开关也可以被指定。有很多开关可以被用在这个文件里头,不过不会在这 里详细谈。您可以通过阅读man:exports[5] 手册来发现这些开关。 以下是一些[.filename]##/etc/exports##的例子: 下面是一个输出文件系统的例子, 不过这种配置与您所处的网络环境及其配置密切相关。 例如, 如果要把 [.filename]#/cdrom# 输出给与服务器域名相同的三台计算机 (因此例子中只有机器名, 而没有给出这些计算机的域名), 或在 [.filename]#/etc/hosts# 文件中进行了这种配置。 `-ro` 标志表示把输出的文件系统置为只读。 由于使用了这个标志, 远程系统在输出的文件系统上就不能写入任何变动了。 [.programlisting] .... /cdrom -ro host1 host2 host3 .... 下面的例子可以输出[.filename]##/home##给三个以IP地址方式表示的主机。 对于在没有配置DNS服务器的私有网络里头,这很有用。 此外, [.filename]##/etc/hosts## 文件也可以用以配置主机名;参看 man:hosts[5] 。 `-alldirs` 标记允许子目录被作为挂载点。 也就是说,客户端可以根据需要挂载需要的目录。 [.programlisting] .... /home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... 下面几行输出 [.filename]#/a# ,以便两个来自不同域的客户端可以访问文件系统。 `-maproot=root` 标记授权远端系统上的 `root` 用户在被输出的文件系统上以``root``身份进行读写。 如果没有特别指定 `-maproot=root` 标记, 则即使用户在远端系统上是 `root` 身份, 也不能修改被输出文件系统上的文件。 [.programlisting] .... /a -maproot=root host.example.com box.example.org .... 为了能够访问到被输出的文件系统,客户端必须被授权。 请确认客户端在您的 [.filename]#/etc/exports# 被列出。 在 [.filename]#/etc/exports# 里头,每一行里面,输出信息和文件系统一一对应。 一个远程主机每次只能对应一个文件系统。而且只能有一个默认入口。比如,假设 [.filename]#/usr# 是独立的文件系统。这个 [.filename]#/etc/exports# 就是无效的: [.programlisting] .... # Invalid when /usr is one file system /usr/src client /usr/ports client .... 一个文件系统,[.filename]#/usr#, 有两行指定输出到同一主机, `client`. 解决这一问题的正确的格式是: [.programlisting] .... /usr/src /usr/ports client .... 在同一文件系统中, 输出到指定客户机的所有目录, 都必须写到同一行上。 没有指定客户机的行会被认为是单一主机。 这限制了你可以怎样输出的文件系统, 但对绝大多数人来说这不是问题。 下面是一个有效输出列表的例子, [.filename]#/usr# 和 [.filename]#/exports# 是本地文件系统: [.programlisting] .... 
# Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... 在修改了 [.filename]#/etc/exports# 文件之后, 就必须让 mountd 服务重新检查它, 以便使修改生效。 一种方法是通过给正在运行的服务程序发送 HUP 信号来完成: [source,shell] .... # kill -HUP `cat /var/run/mountd.pid` .... 或指定适当的参数来运行 `mountd` man:rc[8] 脚本: [source,shell] .... # /etc/rc.d/mountd onereload .... 关于使用 rc 脚本的细节, 请参见 crossref:config[configtuning-rcd,在 FreeBSD 中使用 rc]。 另外, 系统重启动可以让 FreeBSD 把一切都弄好。 尽管如此, 重启不是必须的。 以 `root` 身份执行下面的命令可以搞定一切。 在 NFS 服务器端: [source,shell] .... # rpcbind # nfsd -u -t -n 4 # mountd -r .... 在 NFS 客户端: [source,shell] .... # nfsiod -n 4 .... 现在每件事情都应该就绪,以备挂载一个远端文件系统。 在这些例子里头, 服务器名字将是:`server` ,而客户端的名字将是: `client`。 如果您只打算临时挂载一个远端文件系统或者只是打算作测试配置正确与否, 只要在客户端以 `root` 身份执行下面的命令: [source,shell] .... # mount server:/home /mnt .... 这条命令会把服务端的 [.filename]#/home# 目录挂载到客户端的 [.filename]#/mnt# 上。 如果配置正确,您应该可以进入客户端的 [.filename]#/mnt# 目录并且看到所有服务端的文件。 如果您打算让系统每次在重启动的时候都自动挂载远端的文件系统,把那个文件系统加到 [.filename]#/etc/fstab# 文件里头去。下面是例子: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... man:fstab[5] 手册里有所有可用的开关。 === 锁 某些应用程序 (例如 mutt) 需要文件上锁支持才能正常运行。 在使用 NFS 时, 可以用 rpc.lockd 来支持文件上锁功能。 要启用它, 需要在服务器和客户机的 [.filename]#/etc/rc.conf# 中加入 (假定两端均已配好了 NFS): [.programlisting] .... rpc_lockd_enable="YES" rpc_statd_enable="YES" .... 然后使用下述命令启动该程序: [source,shell] .... # /etc/rc.d/lockd start # /etc/rc.d/statd start .... 
如果并不需要真的在 NFS 客户机和 NFS 服务器间确保上锁的语义, 可以让 NFS 客户机在本地上锁, 方法是使用 man:mount_nfs[8] 时指定 `-L` 参数。 请参见 man:mount_nfs[8] 联机手册以了解更多细节。 === 实际应用 NFS 有很多实际应用。下面是比较常见的一些: * 多个机器共享一台CDROM或者其他设备。这对于在多台机器中安装软件来说更加便宜跟方便。 * 在大型网络中,配置一台中心 NFS 服务器用来放置所有用户的home目录可能会带来便利。 这些目录能被输出到网络以便用户不管在哪台工作站上登录,总能得到相同的home目录。 * 几台机器可以有通用的[.filename]##/usr/ports/distfiles## 目录。 这样的话,当您需要在几台机器上安装port时,您可以无需在每台设备上下载而快速访问源码。 [[network-amd]] === 通过 amd 自动地挂接 man:amd[8] (自动挂接服务) 能够自动地在访问时挂接远程的文件系统。 如果文件系统在一段时间之内没有活动, 则会被 amd 自动卸下。 通过使用 amd, 能够提供一个持久挂接以外的选择, 而后者往往需要列入 [.filename]#/etc/fstab#。 amd 通过将自己以 NFS 服务器的形式, 附加到 [.filename]#/host# 和 [.filename]#/net# 目录上来工作。 当访问这些目录中的文件时, amd 将查找相应的远程挂接点, 并自动地挂接。 [.filename]#/net# 用于挂接远程 IP 地址上导出的文件系统, 而 [.filename]#/host# 则用于挂接远程主机名上的文件系统。 访问 [.filename]#/host/foobar/usr# 中的文件, 相当于告诉 amd 尝试挂接在主机 `foobar` 上导出的 [.filename]#/usr#。 .通过 amd 来挂接导出的文件系统 [example] ==== 您可以通过使用 `showmount` 命令来查看远程主机上导出的文件系统。 例如, 要查看 `foobar` 上导出的文件系统, 可以用: [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /host/foobar/usr .... ==== 如同在前面例子中所看到的, `showmount` 显示了导出的 [.filename]#/usr#。 当进入 [.filename]#/host/foobar/usr# 这个目录时, amd 将尝试解析主机名 `foobar` 并自动地挂接需要的文件系统导出。 amd 可以通过启动脚本来启动, 方法是在 [.filename]#/etc/rc.conf# 中加入: [.programlisting] .... amd_enable="YES" .... 除此之外, 还可以给 amd 通过 `amd_flags` 选项来传递额外的参数。 默认情况下, `amd_flags` 为: [.programlisting] .... amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" .... [.filename]#/etc/amd.map# 文件定义了挂接导出文件系统时所使用的默认选项。 [.filename]#/etc/amd.conf# 文件, 则定义了更多关于 amd 的高级功能选项。 请参考 man:amd[8] 和 man:amd.conf[8] 联机手册, 以了解进一步的情况。 [[network-nfs-integration]] === 与其他系统集成时的常见问题 某些特定的 ISA PC 系统上的以太网适配器上有一些限制, 这些限制可能会导致严重的网络问题, 特别是与 NFS 配合使用时。 这些问题并非 FreeBSD 所特有的, 但 FreeBSD 系统会受到这些问题的影响。 这样的问题, 几乎总是在当 (FreeBSD) PC 系统与高性能的工作站, 例如 Silicon Graphics, Inc., 和 Sun Microsystems, Inc. 
的工作站联网时发生。 NFS 挂接能够正常工作, 而且一些操作也可能成功, 但服务器会很快变得对客户机不太理会, 虽然对其他客户机的请求仍然能够正常处理。 这种情况通常发生在客户端, 无论它是一个 FreeBSD 系统或是终端。 在许多系统上, 一旦发生了这样的问题, 通常没办法正常地关闭客户机。 唯一的办法通常是让终端复位, 因为这一 NFS 状况没有办法被解决。 尽管 "正确的" 解决办法, 是为 FreeBSD 系统配备一块高性能的、 适用的以太网适配器, 然而也有办法绕过问题并得到相对满意的结果。 如果 FreeBSD 系统是 _服务器_, 则在客户机挂接时, 应该指定 `-w=1024`。 如果 FreeBSD 系统是 _客户机_, 则应加入 `-r=1024` 参数。 这些选项可以通过在对应的 [.filename]#fstab# 的第四个字段加入, 以便让客户机能够自动地挂接, 或者通过 man:mount[8] 的 `-o` 参数在手工挂接时指定。 还需要注意的是另一个问题, 有时会被误认为是和上面一样的问题。 这个问题多见于 NFS 服务器和客户机在不同的网络上时。 如果是这种情况, 一定要 _确定_ 您的路由器确实把必需的 UDP 信息路由到了目的地, 否则您将什么也做不了。 下面的例子中, `fastws` 是主机 (接口) 的名字, 它是一台高性能的终端, 而 `freebox` 是另一台主机 (接口) 的名字, 它是一个使用较低性能的以太网适配器的 FreeBSD 系统。 同时, [.filename]#/sharedfs# 将被导出成为 NFS 文件系统 (参见 man:exports[5]), 而 [.filename]#/project# 将是客户机上挂接这一导出文件系统的挂接点。 所有的应用场景中, 请注意附加选项, 例如 `hard` 或 `soft` 以及 `bg` 可能是您的应用所需要的。 关于 FreeBSD 系统 (`freebox`) 作为客户机的示范 [.filename]#/etc/fstab# 文件, 见于 `freebox` 之上: [.programlisting] .... fastws:/sharedfs /project nfs rw,-r=1024 0 0 .... 在 `freebox` 上手工挂接: [source,shell] .... # mount -t nfs -o -r=1024 fastws:/sharedfs /project .... 以 FreeBSD 系统作为服务器的例子, 是 `fastws` 上的 [.filename]#/etc/fstab#: [.programlisting] .... freebox:/sharedfs /project nfs rw,-w=1024 0 0 .... 在 `fastws` 上手工挂接的命令是: [source,shell] .... # mount -t nfs -o -w=1024 freebox:/sharedfs /project .... 几乎所有的 16-位 以太网控制器, 都能够在没有上述读写尺寸限制的情况下正常工作。 对于那些关心到底是什么问题的人, 下面是失败如何发生的解释, 同时这也说明了为什么这是一个无法恢复的问题。 典型情况下, NFS 会使用一个 "块" 为单位进行操作, 其尺寸是 8 K (虽然它可能会将操作分成更小尺寸的分片)。 由于最大的以太网包尺寸大约是 1500 字节, 因此 NFS "块" 会分成多个以太网包, 虽然在更高层的代码看来它仍然是一个完整的单元, 并在接收方重新组装, 作为一个整体来 _确认_。 高性能的工作站, 可以将构成 NFS 单元的包迅速发出, 其节奏会快到标准允许的最大限度。 在容量较小的卡上, 后来的包会冲掉同一单元内的较早的包, 因而整个单元无法被重建或确认。 其结果是, 工作站将超时并重试, 但仍然是完整的 8 K 单元, 这一过程将无休止地重复下去。 如果将单元尺寸限制在以太网包尺寸之下, 我们就能够确保每一个以太网包都能够被独立地接收和确认, 从而避免了上面的死锁情形。 溢出在高性能工作站将数据库投向 PC 系统时仍会发生, 但在更好的网卡上, 能够保证这类溢出不会在每一个 NFS "单元" 上都发生。 当出现溢出时, 被影响的单元被重传, 因而此时有很大的机会它将被正确接收、 重组, 并确认。 [[network-nis]] == 网络信息服务 (NIS/YP) === 它是什么? 
NIS, 表示网络信息服务 (Network Information Services), 最初由 Sun Microsystems 开发, 用于 UNIX(R) (最初是 SunOS(TM)) 系统的集中管理。 目前, 它基本上已经成为了业界标准; 所有主流的类 UNIX(R) 系统 (Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, FreeBSD, 等等) 都支持 NIS。 NIS 也就是人们所熟知的黄页(Yellow Pages), 但由于商标的问题, Sun 将其改名为现在的名字。 旧的术语 (以及 yp), 仍然经常可以看到, 并被广泛使用。 这是一个基于 RPC 的客户机/服务器系统, 它允许在一个 NIS 域中的一组机器共享一系列配置文件。 这样, 系统管理员就可以配置只包含最基本配置数据的 NIS 客户机系统, 并在单点上增加、 删除或修改配置数据。 尽管实现的内部细节截然不同, 这和 Windows NT(R) 域系统非常类似, 以至于可以将两者的基本功能相互类比。 === 您应该知道的术语和进程 有一系列术语和重要的用户进程将在您在 FreeBSD 上实现 NIS 时用到, 无论是在创建 NIS 服务器, 或作为 NIS 客户机: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | 术语 | 说明 |NIS 域名 |NIS 主服务器和所有其客户机 (包括从服务器) 会使用同一 NIS 域名。 和 Windows NT(R) 域名类似, NIS 域名与 DNS 无关。 |rpcbind |必须运行这个程序, 才能够启用 RPC (远程过程调用, NIS 用到的一种网络协议)。 如果没有运行 rpcbind, 则没有办法运行 NIS 服务器, 或作为 NIS 客户机。 |ypbind |"绑定(bind)" NIS 客户机到它的 NIS 服务器上。 这样, 它将从系统中获取 NIS 域名, 并使用 RPC 连接到服务器上。 ypbind 是 NIS 环境中, 客户机-服务器通讯的核心; 如果客户机上的 ypbind 死掉的话, 它将无法访问 NIS 服务器。 |ypserv |只应在 NIS 服务器上运行它; 这是 NIS 的服务器进程。 如果 man:ypserv[8] 死掉的话, 则服务器将不再具有响应 NIS 请求的能力 (此时, 如果有从服务器的话, 则会接管操作)。 有一些 NIS 的实现 (但不是 FreeBSD 的这个) 的客户机上, 如果之前用过一个服务器, 而那台服务器死掉的话, 并不尝试重新连接到另一个服务器。 通常, 发生这种情况时, 唯一的办法就是重新启动服务器进程 (或者, 甚至重新启动服务器) 或客户机上的 ypbind 进程。 |rpc.yppasswdd |另一个只应在 NIS 主服务器上运行的进程; 这是一个服务程序, 其作用是允许 NIS 客户机改变它们的 NIS 口令。 如果没有运行这个服务, 用户将必须登录到 NIS 主服务器上, 并在那里修改口令。 |=== === 它是如何工作的? 
在 NIS 环境中, 有三种类型的主机: 主服务器, 从服务器, 以及客户机。 服务器的作用是充当主机配置信息的中央数据库。 主服务器上保存着这些信息的权威副本, 而从服务器则是保存这些信息的冗余副本。 客户机依赖于服务器向它们提供这些信息。 许多文件的信息可以通过这种方式来共享。 通常情况下, [.filename]#master.passwd#、 [.filename]#group#, 以及 [.filename]#hosts# 是通过 NIS 分发的。 无论什么时候, 如果客户机上的某个进程请求这些本应在本地的文件中的资料的时候, 它都会向所绑定的 NIS 服务器发出请求, 而不使用本地的版本。 ==== 机器类型 * 一台 _NIS 主服务器_。 这台服务器, 和 Windows NT(R) 域控制器类似, 会维护所有 NIS 客户机所使用的文件。 [.filename]#passwd#, [.filename]#group#, 以及许多其他 NIS 客户机所使用的文件, 都被存放到主服务器上。 + [NOTE] ==== 可以将一台 NIS 主服务器用在多个 NIS 域中。 然而, 本书不打算对这种配置进行介绍, 因为这种配置, 通常只出现在小规模的 NIS 环境中。 ==== * _NIS 从服务器_。 这一概念, 与 Windows NT(R) 的备份域控制器类似。 NIS 从服务器, 用于维护 NIS 主服务器的数据文件副本。 NIS 从服务器提供了一种冗余, 这在许多重要的环境中是必需的。 此外, 它也帮助减轻了主服务器的负荷: NIS 客户机总是挂接到最先响应它们的 NIS 服务器上, 而这也包括来自从服务器的响应。 * _NIS 客户机_。 NIS 客户机, 和多数 Windows NT(R) 工作站类似, 通过 NIS 服务器 (或对于 Windows NT(R) 工作站, 则是 Windows NT(R) 域控制器) 来完成登录时的身份验证过程。 === 使用 NIS/YP 这一节将通过实例介绍如何配置 NIS 环境。 ==== 规划 假定您正在管理大学中的一个小型实验室。 在这个实验室中, 有 15 台 FreeBSD 机器, 目前尚没有集中的管理点; 每一台机器上有自己的 [.filename]#/etc/passwd# 和 [.filename]#/etc/master.passwd#。 这些文件通过人工干预的方法来保持与其他机器上版本的同步; 目前, 如果您在实验室中增加一个用户, 将不得不在所有 15 台机器上手工执行 `adduser` 命令。 毋庸置疑, 这一现状必须改变, 因此您决定将整个实验室转为使用 NIS, 并使用两台机器作为服务器。 因此, 实验室的配置应该是这样的: [.informaltable] [cols="1,1,1", frame="none", options="header"] |=== | 机器名 | IP 地址 | 机器的角色 |`ellington` |`10.0.0.2` |NIS 主服务器 |`coltrane` |`10.0.0.3` |NIS 从服务器 |`basie` |`10.0.0.4` |教员工作站 |`bird` |`10.0.0.5` |客户机 |`cli[1-11]` |`10.0.0.[6-17]` |其他客户机 |=== 如果您是首次配置 NIS, 仔细思考如何进行规划就十分重要。 无论您的网络的大小如何, 都必须进行几个决策。 ===== 选择 NIS 域名 这可能不是您过去使用的 "域名(domainname)"。 它的规范的叫法, 应该是 "NIS 域名"。 当客户机广播对此信息的请求时, 它会将 NIS 域的名字作为请求的一部分发出。 这样, 统一网络上的多个服务器, 就能够知道谁应该回应请求。 您可以把 NIS 域名想象成以某种方式相关的一组主机的名字。 一些机构会选择使用它们的 Internet 域名来作为 NIS 域名。 并不推荐这样做, 因为在调试网络问题时, 这可能会导致不必要的困扰。 NIS 域名应该是在您网络上唯一的, 并且有助于了解它所描述的到底是哪一组机器。 例如对于 Acme 公司的美工部门, 可以考虑使用 "acme-art" 这样的 NIS 域名。 在这个例子中, 您使用的域名是 `test-domain`。 然而, 某些操作系统 (最著名的是 SunOS(TM)) 会使用其 NIS 域名作为 Internet 域名。 如果您的网络上存在包含这类限制的机器, 就 _必须_ 使用 Internet 域名来作为您的 NIS 域名。 ===== 服务器的物理要求 选择 NIS 服务器时, 
需要时刻牢记一些东西。 NIS 的一个不太好的特性就是其客户机对于服务器的依赖程度。 如果客户机无法与其 NIS 域的服务器联系, 则这台机器通常会陷于不可用的状态。 缺少用户和组信息, 会使绝大多数系统进入短暂的冻结状态。 基于这样的考虑, 您需要选择一台不经常重新启动, 或用于开发的机器来承担其责任。 如果您的网络不太忙, 也可以使用运行着其他服务的机器来安放 NIS 服务, 只是需要注意, 一旦 NIS 服务器不可用, 则 _所有_ 的 NIS 客户机都会受到影响。 ==== NIS 服务器 所有的 NIS 信息的正规版本, 都被保存在一台单独的称作 NIS 主服务器的机器上。 用于保存这些信息的数据库, 称为 NIS 映射(map)。 在 FreeBSD 中, 这些映射被保存在 [.filename]#/var/yp/[domainname]# 里, 其中 [.filename]#[domainname]# 是提供服务的 NIS 域的名字。 一台 NIS 服务器, 可以同时支持多个域, 因此可以建立很多这样的目录, 所支撑一个域对应一个。 每一个域都会有一组独立的映射。 NIS 主和从服务器, 通过 `ypserv` 服务程序来处理所有的 NIS 请求。 `ypserv` 有责任接收来自 NIS 客户机的请求, 翻译请求的域, 并将名字映射为相关的数据库文件的路径, 然后将来自数据库的数据传回客户机。 ===== 配置 NIS 主服务器 配置主 NIS 服务器相对而言十分的简单, 而其具体步骤则取决于您的需要。 FreeBSD 提供了一步到位的 NIS 支持。 您需要做的全部事情, 只是在 [.filename]#/etc/rc.conf# 中加入一些配置, 其他工作会由 FreeBSD 完成。 [.procedure] ==== [.programlisting] .... nisdomainname="test-domain" .... . 这一行将在网络启动 (例如重新启动) 时, 把 NIS 域名配置为 `test-domain`。 + [.programlisting] .... nis_server_enable="YES" .... . 这将要求 FreeBSD 在网络子系统启动之后立即启动 NIS 服务进程。 + [.programlisting] .... nis_yppasswdd_enable="YES" .... . 这将启用 `rpc.yppasswdd` 服务程序, 如前面提到的, 它允许用户在客户机上修改自己的 NIS 口令。 ==== [NOTE] ==== 随 NIS 配置的不同, 可能还需要增加其他一些项目。 请参见 <> 这一节, 以了解进一步的情况。 ==== 设置好前面这些配置之后, 需要以超级用户身份运行 `/etc/netstart` 命令。 它会根据 [.filename]#/etc/rc.conf# 的设置来配置系统中的其他部分。 最后, 在初始化 NIS 映射之前, 还需要手工启动 ypserv 服务程序: [source,shell] .... # /etc/rc.d/ypserv start .... ===== 初始化 NIS 映射 _NIS 映射_ 是一些数据库文件, 它们位于 [.filename]#/var/yp# 目录中。 这些文件基本上都是根据 NIS 主服务器的 [.filename]#/etc# 目录自动生成的, 唯一的例外是: [.filename]#/etc/master.passwd# 文件。 一般来说, 您会有非常充分的理由不将 `root` 以及其他管理帐号的口令发到所有 NIS 域上的服务器上。 因此, 在开始初始化 NIS 映射之前, 我们应该: [source,shell] .... # cp /etc/master.passwd /var/yp/master.passwd # cd /var/yp # vi master.passwd .... 这里, 删除掉和系统有关的帐号对应的项 (`bin`、 `tty`、 `kmem`、 `games`, 等等), 以及其他不希望被扩散到 NIS 客户机的帐号 (例如 `root` 和任何其他 UID 0 (超级用户) 的帐号)。 [NOTE] ==== 确认 [.filename]#/var/yp/master.passwd# 这个文件是同组用户, 以及其他用户不可读的 (模式 600)! 如果需要的话, 用 `chmod` 命令来改它。 ==== 完成这些工作之后, 就可以初始化 NIS 映射了! 
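上面用 `vi` 手工删除系统帐号的步骤, 也可以借助 man:awk[1] 半自动地完成。 下面是一个假设性的草稿: 它按 UID 过滤 [.filename]#master.passwd# 风格的行 (这里把阈值定为 1000 并排除 65534, 这只是示范值, 请按实际系统的帐号分布调整, 并在事后人工复核结果):

```shell
#!/bin/sh
# 假设性示例: 从 master.passwd 的副本中滤掉 UID 0 的超级用户
# 及 UID < 1000 的系统帐号; 65534 (nobody) 也一并排除。
pw=$(mktemp)
cat > "${pw}" <<'EOF'
root:*:0:0::0:0:Charlie &:/root:/bin/csh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
alice:*:1001:1001::0:0:Alice:/home/alice:/bin/sh
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin
EOF
awk -F: '$3 >= 1000 && $3 != 65534' "${pw}"   # 只会输出 alice 这一行
rm -f "${pw}"
```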
FreeBSD 提供了一个名为 `ypinit` 的脚本来帮助您完成这项工作 (详细信息, 请见其联机手册)。 请注意, 这个脚本在绝大多数 UNIX(R) 操作系统上都可以找到, 但并不是所有操作系统的都提供。 在 Digital UNIX/Compaq Tru64 UNIX 上它的名字是 `ypsetup`。 由于我们正在生成的是 NIS 主服务器的映射, 因此应该使用 `ypinit` 的 `-m` 参数。 如果已经完成了上述步骤, 要生成 NIS 映射, 只需执行: [source,shell] .... ellington# ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If you don't, something might not work. At this point, we have to construct a list of this domains YP servers. rod.darktech.org is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a . master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? [y/n: y] y [..output from map generation..] NIS Map update completed. ellington has been setup as an YP master server without any errors. .... `ypinit` 应该会根据 [.filename]#/var/yp/Makefile.dist# 来创建 [.filename]#/var/yp/Makefile# 文件。 创建完之后, 这个文件会假定您正在操作只有 FreeBSD 机器的单服务器 NIS 环境。 由于 `test-domain` 还有一个从服务器, 您必须编辑 [.filename]#/var/yp/Makefile#: [source,shell] .... ellington# vi /var/yp/Makefile .... 应该能够看到这样一行, 其内容是 [.programlisting] .... NOPUSH = "True" .... (如果还没有注释掉的话)。 ===== 配置 NIS 从服务器 配置 NIS 从服务器, 甚至比配置主服务器还要简单。 登录到从服务器上, 并按照前面的方法, 编辑 [.filename]#/etc/rc.conf# 文件。 唯一的区别是, 在运行 `ypinit` 时需要使用 `-s` 参数。 这里的 `-s` 选项, 同时要求提供 NIS 主服务器的名字, 因此我们的命令行应该是: [source,shell] .... coltrane# ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? 
[y/n: n] n Ok, please remember to go back and redo manually whatever fails. If you don't, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Don't forget to update map ypservers on ellington. .... 
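如果映射很多, 逐行肉眼核对上面这类传输记录容易遗漏失败项。 下面是一个示意脚本 (`check_transfer_log` 是假设的名字, 日志格式以您系统上 ypxfr 的实际输出为准), 用来统计记录中未报告成功的映射数:

```shell
#!/bin/sh
# 示意: 统计 ypxfr 日志里以 "ypxfr:" 开头、 但没有报告
# "Map successfully transferred" 的行数。 结果为 0 表示全部成功。
check_transfer_log() {
    awk '/^ypxfr:/ && !/Map successfully transferred/ { n++ } END { print n + 0 }'
}

# 用两条示范日志演示, 输出应为 0:
check_transfer_log <<'EOF'
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
EOF
```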
现在应该会有一个叫做 [.filename]#/var/yp/test-domain# 的目录。 在这个目录中, 应该保存 NIS 主服务器上的映射的副本。 接下来需要确定这些文件都及时地同步更新了。 在从服务器上, 下面的 [.filename]#/etc/crontab# 项将帮助您确保这一点:

[.programlisting]
....
20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuid
....

这两行将强制从服务器将映射与主服务器同步。 由于主服务器会尝试确保所有其 NIS 映射的变动都知会从服务器, 因此这些项并不是绝对必需的。 不过, 由于其他客户机口令信息的正确性依赖于从服务器, 因此强烈推荐明确指定让系统时常强制更新口令映射。 对于繁忙的网络而言, 这一点尤其重要, 因为有时可能出现映射更新不完全的情况。

现在, 在从服务器上执行 `/etc/netstart`, 就可以启动 NIS 服务了。

==== NIS 客户机

NIS 客户机会通过 `ypbind` 服务程序来与特定的 NIS 服务器建立一种称作绑定的联系。 `ypbind` 会检查系统的默认域 (这是通过 `domainname` 命令来设置的), 并开始在本地网络上广播 RPC 请求。 这些请求会指定 `ypbind` 尝试绑定的域名。 如果已经配置了服务器, 并且这些服务器接到了广播, 它将回应 `ypbind`, 后者则记录服务器的地址。 如果有多个可用的服务器 (例如一个主服务器, 加上多个从服务器), `ypbind` 将使用第一个响应的地址。 从这一时刻开始, 客户机会把所有的 NIS 请求直接发给那个服务器。 `ypbind` 偶尔会 "ping" 服务器以确认其仍然在正常运行。 如果在合理的时间内没有得到响应, 则 `ypbind` 会把域标记为未绑定, 并再次发起广播, 以期找到另一台服务器。

===== 设置 NIS 客户机

配置一台 FreeBSD 机器作为 NIS 客户机是非常简单的。

[.procedure]
====
. 编辑 [.filename]#/etc/rc.conf# 文件, 并在其中加上下面几行, 以设置 NIS 域名, 并在网络启动时启动 `ypbind`:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....
+
. 要从 NIS 服务器导入所有的口令项, 需要从您的 [.filename]#/etc/master.passwd# 文件中删除所有用户, 并使用 `vipw` 在这个文件的最后一行加入:
+
[.programlisting]
....
+:::::::::
....
+
[NOTE]
======
这一行将让 NIS 服务器的口令映射中的帐号能够登录。 也有很多修改这一行来配置 NIS 客户机的办法。 请参见稍后的 <> 以了解进一步的情况。 要了解更多信息, 可以参阅 O'Reilly 的 `Managing NFS and NIS` 这本书。
======
+
[NOTE]
======
您的 [.filename]#/etc/master.passwd# 文件中至少需要保留一个本地帐号 (也就是不通过 NIS 导入的帐号), 而这个帐号应该是 `wheel` 组的成员。 如果 NIS 发生不测, 这个帐号可以用来远程登录, 成为 `root`, 并修正问题。
======
+
. 要从 NIS 服务器上导入组信息, 需要在 [.filename]#/etc/group# 文件末尾加入:
+
[.programlisting]
....
+:*::
....
====

想要立即启动 NIS 客户端, 需要以超级用户身份执行下列命令:

[source,shell]
....
# /etc/netstart
# /etc/rc.d/ypbind start
....
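在按上述步骤用 `vipw` 修改完 [.filename]#/etc/master.passwd# 之后, 可以用类似下面的小函数确认 NIS 通配项确实位于文件末尾 (示意性代码, `check_nis_entry` 是假设的名字; 这里对一个临时构造的示范文件进行演示, 而不是直接读取系统口令文件):

```shell
#!/bin/sh
# 示意: 检查给定口令文件的最后一行是否为 NIS 通配项 "+:::::::::"。
check_nis_entry() {
    if [ "$(tail -n 1 "$1")" = "+:::::::::" ]; then
        echo yes
    else
        echo no
    fi
}

# 构造一个示范文件来演示, 输出应为 yes:
tmp=$(mktemp)
printf '%s\n' 'admin:*:1001:0::0:0:Local admin:/home/admin:/bin/sh' \
              '+:::::::::' > "$tmp"
check_nis_entry "$tmp"
rm -f "$tmp"
```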
完成这些步骤之后, 就应该可以通过运行 `ypcat passwd` 来看到 NIS 服务器的口令映射了。 === NIS 的安全性 基本上, 任何远程用户都可以发起一个 RPC 到 man:ypserv[8] 并获得您的 NIS 映射的内容, 如果远程用户了解您的域名的话。 要避免这类未经授权的访问, man:ypserv[8] 支持一个称为 "securenets" 的特性, 用以将访问限制在一组特定的机器上。 在启动过程中, man:ypserv[8] 会尝试从 [.filename]#/var/yp/securenets# 中加载 securenet 信息。 [NOTE] ==== 这个路径随 `-p` 参数改变。 这个文件包含了一些项, 每一项中包含了一个网络标识和子网掩码, 中间用空格分开。 以 "#" 开头的行会被认为是注释。 示范的 securenets 文件如下所示: ==== [.programlisting] .... # allow connections from local host -- mandatory 127.0.0.1 255.255.255.255 # allow connections from any host # on the 192.168.128.0 network 192.168.128.0 255.255.255.0 # allow connections from any host # between 10.0.0.0 to 10.0.15.255 # this includes the machines in the testlab 10.0.0.0 255.255.240.0 .... 如果 man:ypserv[8] 接到了来自匹配上述任一规则的地址的请求, 则它会正常处理请求。 反之, 则请求将被忽略, 并记录一条警告信息。 如果 [.filename]#/var/yp/securenets# 文件不存在, 则 `ypserv` 会允许来自任意主机的请求。 `ypserv` 程序也支持 Wietse Venema 的 TCP Wrapper 软件包。 这样, 管理员就能够使用 TCP Wrapper 的配置文件来代替 [.filename]#/var/yp/securenets# 完成访问控制。 [NOTE] ==== 尽管这两种访问控制机制都能够提供某种程度的安全, 但是, 和特权端口检查一样, 它们无法避免 "IP 伪造" 攻击。 您的防火墙应该阻止所有与 NIS 有关的访问。 使用 [.filename]#/var/yp/securenets# 的服务器, 可能会无法为某些使用陈旧的 TCP/IP 实现的 NIS 客户机服务。 这些实现可能会在广播时, 将主机位都设置为 0, 或在计算广播地址时忽略子网掩码。 尽管这些问题可以通过修改客户机的配置来解决, 其他一些问题也可能导致不得不淘汰那些客户机系统, 或者不使用 [.filename]#/var/yp/securenets#。 在使用陈旧的 TCP/IP 实现的系统上, 使用 [.filename]#/var/yp/securenets# 是一个非常糟糕的做法, 因为这将导致您的网络上的 NIS 丧失大部分功能。 使用 TCP Wrapper 软件包, 会导致您的 NIS 服务器的响应延迟增加。 而增加的延迟, 则可能会导致客户端程序超时, 特别是在繁忙的网络或者很慢的 NIS 服务器上。 如果您的某个客户机因此而产生一些异常, 则应将这些客户机变为 NIS 从服务器, 并强制其绑定自己。 ==== === 不允许某些用户登录 在我们的实验室中, `basie` 这台机器, 是一台教员专用的工作站。 我们不希望将这台机器拿出 NIS 域, 而主 NIS 服务器上的 [.filename]#passwd# 文件, 则同时包含了教员和学生的帐号。 这时应该怎么做? 有一种办法来禁止特定的用户登录机器, 即使他们身处 NIS 数据库之中。 要完成这一工作, 只需要在客户机的 [.filename]#/etc/master.passwd# 文件中加入一些 `-username` 这样的项, 其中, _username_ 是希望禁止登录的用户名。 一般推荐使用 `vipw` 来完成这个工作, 因为 `vipw` 会对您在 [.filename]#/etc/master.passwd# 文件上所作的修改进行合法性检查, 并在编辑结束时重新构建口令数据库。 例如, 如果希望禁止用户 `bill` 登录 `basie`, 我们应该: [source,shell] .... 
basie# vipw [在末尾加入 -bill, 并退出] vipw: rebuilding the database... vipw: done basie# cat /etc/master.passwd root:[password]:0:0::0:0:The super-user:/root:/bin/csh toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin operator:*:2:5::0:0:System &:/:/sbin/nologin bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin news:*:8:8::0:0:News Subsystem:/:/sbin/nologin man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/sbin/nologin bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin +::::::::: -bill basie# .... [[network-netgroups]] === 使用 Netgroups 前一节介绍的方法, 在您需要为非常少的用户和/或机器进行特殊的规则配置时还算凑合。 在更大的网络上, 您 _一定会_ 忘记禁止某些用户登录到敏感的机器上, 或者, 甚至必须单独地修改每一台机器的配置, 因而丢掉了 NIS 最重要的优越性: _集中式_ 管理。 NIS 开发人员为这个问题提供的解决方案, 被称作 _netgroups_。 它们的作用和语义, 基本上可以等同于 UNIX(R) 文件系统上使用的组。 主要的区别是它们没有数字化的 ID, 以及可以在 netgroup 中同时包含用户和其他 netgroup。 Netgroups 被设计用来处理大的、 复杂的包含数百用户和机器的网络。 一方面, 在您不得不处理这类情形时, 这是一个很有用的东西。 而另一方面, 它的复杂性又使得通过非常简单的例子很难解释 netgroup 到底是什么。 这一节的其余部分的例子将展示这个问题。 假设您在实验室中成功地部署 NIS 引起了上司的兴趣。 您接下来的任务是将 NIS 域扩展, 以覆盖校园中的一些其他的机器。 下面两个表格中包括了新用户和新机器, 及其简要说明。 [.informaltable] [cols="1,1", frame="none", options="header"] |=== | 用户名 | 说明 |`alpha`, `beta` |IT 部门的普通雇员 |`charlie`, `delta` |IT 部门的学徒 |`echo`, `foxtrott`, `golf`, ... |普通雇员 |`able`, `baker`, ... 
|目前的实习生
|===

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| 机器名
| 说明

|`war`, `death`, `famine`, `pollution`
|最重要的服务器。 只有 IT 部门的雇员才允许登录这些机器。

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|不太重要的服务器, 所有 IT 部门的成员, 都可以登录这些机器。

|`one`, `two`, `three`, `four`, ...
|普通工作站。 只有 _真正的_ 雇员才允许登录这些机器。

|`trashcan`
|一台不包含关键数据的旧机器。 即使是实习生, 也允许登录它。
|===

如果您尝试通过一个一个地阻止用户来实现这些限制, 就需要在每一个系统的 [.filename]#passwd# 文件中, 为每一个不允许登录该系统的用户添加对应的 `-user` 行。 如果忘记了任何一个, 就可能会造成问题。 在进行初始配置时, 也许还能正确地完成配置, 但随着日复一日地添加新用户, _总有一天_ 您会忘记为某个新用户添加这样的行。 毕竟, Murphy 是一个乐观的人。

使用 netgroups 来处理这一状况可以带来许多好处。 不需要单独地处理每一个用户; 您可以赋予用户一个或多个 netgroups 身份, 并允许或禁止某一个 netgroup 的所有成员登录。 如果添加了新的机器, 只需要定义 netgroup 的登录限制。 如果增加了新用户, 也只需要将用户加入一个或多个 netgroup。 这些变化是相互独立的: 不再需要 "对每一个用户和机器执行 ......"。 如果您的 NIS 配置经过了谨慎的规划, 就只需要修改一个中央的配置文件, 便能够授予或禁止访问某台机器的权限了。

第一步是初始化 NIS 映射 netgroup。 FreeBSD 的 man:ypinit[8] 默认情况下并不创建这个映射, 但它的 NIS 实现能够在创建这个映射之后立即对其提供支持。 要创建空映射, 简单地输入

[source,shell]
....
ellington# vi /var/yp/netgroup
....

并开始增加内容。 在我们的例子中, 至少需要四个 netgroup: IT 雇员, IT 学徒, 普通雇员和实习生。

[.programlisting]
....
IT_EMP  (,alpha,test-domain)  (,beta,test-domain)
IT_APP  (,charlie,test-domain)  (,delta,test-domain)
USERS  (,echo,test-domain)  (,foxtrott,test-domain) \
  (,golf,test-domain)
INTERNS  (,able,test-domain)  (,baker,test-domain)
....

`IT_EMP`, `IT_APP` 等等, 是 netgroup 的名字。 每一个括号中的组中, 都有一些用户帐号。 组中的三个字段是:

. 在哪些机器上能够使用这些项。 如果不指定主机名, 则项在所有机器上都有效。 如果指定了主机, 则很容易造成混淆。
. 属于这个 netgroup 的帐号。
. 帐号的 NIS 域。 如果您管理着多个 NIS 域, 可以从其他 NIS 域中把帐号导入到您的 netgroup 中。

每一个字段都可以包括通配符。 参见 man:netgroup[5] 了解更多细节。

[NOTE]
====
Netgroup 的名字一般来说不应超过 8 个字符, 特别是当您的 NIS 域中有机器打算运行其它操作系统的时候。 名字是区分大小写的; 使用大写字母作为 netgroup 的名字, 能够让您更容易地区分用户、 机器和 netgroup 的名字。

某些 NIS 客户程序 (FreeBSD 以外的那些) 可能无法处理含有大量项的 netgroup。 例如, 某些早期版本的 SunOS(TM) 会在 netgroup 中包含多于 15 个 _项_ 时出现问题。 要绕过这个问题, 可以创建多个子 netgroup, 每一个中包含少于 15 个用户, 以及一个包含所有子 netgroup 的真正的 netgroup:

[.programlisting]
....
BIGGRP1  (,joe1,domain)  (,joe2,domain)  (,joe3,domain) [...]
BIGGRP2 (,joe16,domain) (,joe17,domain) [...] BIGGRP3 (,joe31,domain) (,joe32,domain) BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3 .... 如果需要超过 225 个用户, 可以继续重复上面的过程。 ==== 激活并分发新的 NIS 映射非常简单: [source,shell] .... ellington# cd /var/yp ellington# make .... 这个操作会生成三个 NIS 映射, 即 [.filename]#netgroup#、 [.filename]#netgroup.byhost# 和 [.filename]#netgroup.byuser#。 用 man:ypcat[1] 可以检查这些 NIS 映射是否可用了: [source,shell] .... ellington% ypcat -k netgroup ellington% ypcat -k netgroup.byhost ellington% ypcat -k netgroup.byuser .... 第一个命令的输出, 应该与 [.filename]#/var/yp/netgroup# 的内容相近。 第二个命令, 如果没有指定本机专有的 netgroup, 则应该没有输出。 第三个命令, 则用于显示某个用户对应的 netgroup 列表。 客户机的设置也很简单。 要配置服务器 `war`, 只需进入 man:vipw[8] 并把 [.programlisting] .... +::::::::: .... 改为 [.programlisting] .... +@IT_EMP::::::::: .... 现在, 只有 netgroup `IT_EMP` 中定义的用户会被导入到 `war` 的口令数据库中, 因此只有这些用户能够登录。 不过, 这个限制也会作用于 shell 的 `~`, 以及所有在用户名和数字用户 ID 之间实施转换的函数的功能。 换言之, `cd ~user` 将不会正常工作, 而 `ls -l` 也将显示数字的 ID 而不是用户名, 并且 `find . -user joe -print` 将失败, 并给出 `No such user` 的错误信息。 要修正这个问题, 您需要导入所有的用户项, 而 _不允许他们登录服务器_。 这可以通过在 [.filename]#/etc/master.passwd# 加入另一行来完成。 这行的内容是: `+:::::::::/sbin/nologin`, 意思是 "导入所有的项, 但导入项的 shell 则替换为 [.filename]#/sbin/nologin#"。 通过在 [.filename]#/etc/master.passwd# 中增加默认值, 可以替换掉 `passwd` 中的任意字段。 [WARNING] ==== 务必确认 `+:::::::::/sbin/nologin` 这一行出现在 `+@IT_EMP:::::::::` 之后。 否则, 所有从 NIS 导入的用户帐号将以 [.filename]#/sbin/nologin# 作为登录 shell。 ==== 完成上面的修改之后, 在 IT 部门有了新员工时, 只需修改一个 NIS 映射就足够了。 您也可以用类似的方法, 在不太重要的服务器上, 把先前本地版本的 [.filename]#/etc/master.passwd# 中的 `+:::::::::` 改为: [.programlisting] .... +@IT_EMP::::::::: +@IT_APP::::::::: +:::::::::/sbin/nologin .... 相关的用于普通工作站的配置则应是: [.programlisting] .... +@IT_EMP::::::::: +@USERS::::::::: +:::::::::/sbin/nologin .... 一切平安无事, 直到数周后, 有一天策略发生了变化: IT 部门也开始招收实习生了。 IT 实习生允许使用普通的终端, 以及不太重要的服务器; 而 IT 学徒, 则可以登录主服务器。 您增加了新的 netgroup `IT_INTERN`, 以及新的 IT 实习生到这个 netgroup 并开始修改每一台机器上的配置...... 
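上面 netgroup 映射中的三元组 `(主机,用户,NIS域)` 可以用一个简短的 awk 脚本来解析。 下面是一个示意性的例子 (`list_users` 是假设的名字; 它只列出某个组 _直接_ 包含的用户, 不递归展开嵌套的 netgroup, 也不处理以 `\` 折行的项):

```shell
#!/bin/sh
# 示意: 打印 netgroup 文件中某个组直接包含的用户名。
list_users() {
    awk -v g="$1" '$1 == g {
        for (i = 2; i <= NF; i++) {
            s = $i
            gsub(/[()]/, "", s)          # 去掉括号
            split(s, f, ",")             # 按逗号拆出三个字段
            if (f[2] != "") print f[2]   # 第二个字段是用户名
        }
    }'
}

# 用两行示范定义演示, 输出应为 alpha 和 beta 各占一行:
list_users IT_EMP <<'EOF'
IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
EOF
```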
老话说得好:"牵一发, 动全身"。 NIS 通过 netgroup 来建立 netgroup 的能力, 正可以避免这样的情形。 一种可能的方法是建立基于角色的 netgroup。 例如, 您可以创建称为 `BIGSRV` 的 netgroup, 用于定义最重要的服务器上的登录限制, 以及另一个成为 `SMALLSRV` 的 netgroup, 用以定义次重要的服务器, 以及第三个, 用于普通工作站的 netgroup `USERBOX`。 这三个 netgroup 中的每一个, 都包含了允许登录到这些机器上的所有 netgroup。 您的 NIS 映射中的新项如下所示: [.programlisting] .... BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS .... 这种定义登录限制的方法, 在您能够将机器分组并加以限制的时候可以工作的相当好。 不幸的是, 这是种例外, 而非常规情况。 多数时候, 需要按机器去定义登录限制。 与机器相关的 netgroup 定义, 是处理上述策略改动的另一种可能的方法。 此时, 每台机器的 [.filename]#/etc/master.passwd# 中, 都包含两个 "+" 开头的行。 第一个用于添加允许登录的 netgroup 帐号, 而第二个则用于增加其它帐号, 并把 shell 设置为 [.filename]#/sbin/nologin#。 使用 "全大写" 的机器名作为 netgroup 名是个好主意。 换言之, 这些行应该类似于: [.programlisting] .... +@BOXNAME::::::::: +:::::::::/sbin/nologin .... 一旦在所有机器上都完成了这样的修改, 就再也不需要修改本地的 [.filename]#/etc/master.passwd# 了。 所有未来的修改都可以在 NIS 映射中进行。 这里是一个例子, 其中展示了在这一应用情景中所需要的 netgroup 映射, 以及其它一些常用的技巧: [.programlisting] .... # Define groups of users first IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) DEPT1 (,echo,test-domain) (,foxtrott,test-domain) DEPT2 (,golf,test-domain) (,hotel,test-domain) DEPT3 (,india,test-domain) (,juliet,test-domain) ITINTERN (,kilo,test-domain) (,lima,test-domain) D_INTERNS (,able,test-domain) (,baker,test-domain) # # Now, define some groups based on roles USERS DEPT1 DEPT2 DEPT3 BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS # # And a groups for a special tasks # Allow echo and golf to access our anti-virus-machine SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain) # # machine-based netgroups # Our main servers WAR BIGSRV FAMINE BIGSRV # User india needs access to this server POLLUTION BIGSRV (,india,test-domain) # # This one is really important and needs more access restrictions DEATH IT_EMP # # The anti-virus-machine mentioned above ONE SECURITY # # Restrict a machine to a single user TWO (,hotel,test-domain) # [...more groups to 
follow]
....

如果您正使用某种数据库来管理帐号, 应该可以使用您的数据库的报告工具来创建映射的第一部分。 这样, 新用户就自动地可以访问这些机器了。

最后的提醒: 使用基于机器的 netgroup 并不总是适用的。 如果正在为学生实验室部署数十台甚至上百台同样的机器, 您应该使用基于角色的 netgroup, 而不是基于机器的 netgroup, 以便把 NIS 映射的尺寸保持在一个合理的范围内。

=== 需要牢记的事项

这里是一些其它在使用 NIS 环境时需要注意的地方。

* 每次需要在实验室中增加新用户时, 必须 _只_ 在 NIS 服务器上加入用户, 而且 _一定要记得重建 NIS 映射_。 如果您忘记了这样做, 新用户将无法登录除 NIS 主服务器之外的任何其它机器。 例如, 如果要在实验室增加新用户 `jsmith`, 我们需要:
+
[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....
+
也可以运行 `adduser jsmith` 而不是 `pw useradd jsmith`。
* _将管理用的帐号排除在 NIS 映射之外_。 一般来说, 您不希望这些管理帐号和口令被扩散到那些包含不应使用它们的用户的机器上。
* _确保 NIS 主和从服务器的安全, 并尽可能减少其停机时间_。 如果有人攻入或简单地关闭这些机器, 则整个实验室的人也就无法登录了。
+
这是集中式管理系统中最薄弱的环节。 如果没有保护好 NIS 服务器, 您就有大批愤怒的用户需要对付了!

=== NIS v1 兼容性

FreeBSD 的 ypserv 提供了某些为 NIS v1 客户机提供服务的支持能力。 FreeBSD 的 NIS 实现只使用 NIS v2 协议, 但其它实现可能会包含 v1 协议, 以提供对旧系统的向下兼容能力。 随这些系统提供的 ypbind 服务将首先尝试绑定 NIS v1 服务器, 即使它们并不真的需要它 (有些甚至可能会一直广播搜索请求, 即使已经从某台 v2 服务器得到了回应也是如此)。 注意, 尽管支持一般的客户机调用, 这个版本的 ypserv 并不能处理 v1 的映射传送请求; 因而, 它就不能与较早的支持 v1 协议的 NIS 服务器配合使用, 无论是作为主服务器还是从服务器。 幸运的是, 现今应该已经没有仍然在用的这样的服务器了。

[[network-nis-server-is-client]]
=== 同时作为 NIS 客户机的 NIS 服务器

在多服务器域的环境中, 如果服务器同时作为 NIS 客户机, 在运行 ypserv 时要特别小心。 一般来说, 强制服务器绑定自己要比允许它们广播绑定请求要好, 因为这种情况下它们可能会相互绑定。 某些怪异的故障, 很可能是由于某一台服务器停机, 而其它服务器都依赖其服务所导致的。 最终, 所有的客户机都会超时并绑定到其它服务器, 但这个延迟可能会相当可观, 而且恢复之后仍然存在再次发生此类问题的隐患。

您可以强制一台机器绑定到特定的服务器, 这是通过 `ypbind` 的 `-S` 参数来完成的。 如果不希望每次启动 NIS 服务器时都手工完成这项工作, 可以在 [.filename]#/etc/rc.conf# 中加入:

[.programlisting]
....
nis_client_enable="YES" # run client stuff as well
nis_client_flags="-S NIS domain,server"
....

参见 man:ypbind[8] 以了解更多情况。

=== 口令格式

在实现 NIS 时, 口令格式的兼容性是最为常见的问题之一。 假如您的 NIS 服务器使用 DES 加密口令, 则它只能支持使用 DES 的客户机。 例如, 如果您的网络上有 Solaris(TM) NIS 客户机, 则几乎肯定需要使用 DES 加密口令。

要检查您的服务器和客户机使用的口令格式, 需要查看 [.filename]#/etc/login.conf#。 如果主机被配置为使用 DES 加密的口令, 则 `default` class 将包含类似这样的项:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....
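要在脚本里快速查看当前配置的口令格式, 可以把 `passwd_format` 的取值从 login.conf 风格的文本中提取出来。 下面是一个示意性的例子 (`get_passwd_format` 是假设的名字; 这里对一段内嵌的示范配置进行演示, 权威的做法仍是直接查看 [.filename]#/etc/login.conf#):

```shell
#!/bin/sh
# 示意: 从 login.conf 风格的文本中取出第一个 passwd_format 的值。
get_passwd_format() {
    sed -n 's/.*:passwd_format=\([a-z0-9]*\).*/\1/p' "$1" | head -n 1
}

# 用一段示范配置演示, 输出应为 des:
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
default:\
    :passwd_format=des:\
    :copyright=/etc/COPYRIGHT:\
EOF
get_passwd_format "$tmp"
rm -f "$tmp"
```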
其他一些可能的 `passwd_format` 包括 `blf` 和 `md5` (分别对应于 Blowfish 和 MD5 加密口令)。

如果修改了 [.filename]#/etc/login.conf#, 就必须重建登录权能数据库, 这是通过以 `root` 身份运行下面的程序来完成的:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
已经在 [.filename]#/etc/master.passwd# 中的口令的格式不会被更新, 直到用户在登录权能数据库重建 _之后_ 首次修改口令为止。
====

接下来, 为了确保所有的口令都按照您选择的格式加密了, 还需要检查 [.filename]#/etc/auth.conf# 中 `crypt_default` 给出的优先选择的口令格式。 要完成此工作, 将您选择的格式放到列表的第一项。 例如, 当使用 DES 加密的口令时, 对应项应为:

[.programlisting]
....
crypt_default = des blf md5
....

在每一台基于 FreeBSD 的 NIS 服务器和客户机上完成上述工作之后, 就可以肯定您的网络上它们都在使用同样的口令格式了。 如果在 NIS 客户机上做身份验证时发生问题, 这也是第一个应该检查的地方。 注意: 如果您希望在混合的网络上部署 NIS 服务器, 可能就需要在所有系统上都使用 DES, 因为这是所有系统都能够支持的最低限度的公共标准。

[[network-dhcp]]
== 网络自动配置 (DHCP)

=== 什么是 DHCP?

DHCP, 即动态主机配置协议, 是一种让系统得以连接到网络上, 并获取所需配置参数的手段。 FreeBSD 使用来自 OpenBSD 3.7 的 OpenBSD `dhclient`。 这里提供的所有关于 `dhclient` 的信息, 都是以 ISC 或 OpenBSD DHCP 客户端程序为准的。 DHCP 服务器是 ISC 软件包的一部分。

=== 这一节都介绍哪些内容

这一节描述了 ISC DHCP 系统的客户端和服务器端组件。 客户端程序 `dhclient` 是作为 FreeBSD 的一部分提供的; 而服务器部分, 则可以通过 package:net/isc-dhcp31-server[] port 得到。 man:dhclient[8]、 man:dhcp-options[5]、 以及 man:dhclient.conf[5] 联机手册, 加上下面所介绍的参考文献, 都是非常有用的资源。

=== 它如何工作

当 DHCP 客户程序 `dhclient` 在客户机上运行时, 它会开始广播请求配置信息的消息。 默认情况下, 这些请求使用 UDP 端口 68。 服务器通过 UDP 端口 67 给出响应, 向客户机提供一个 IP 地址, 以及其他有关的配置参数, 例如子网掩码、 路由器, 以及 DNS 服务器。 所有这些信息都会以 DHCP "lease" (租约) 的形式给出, 并且只在一段特定的时间内有效 (这是由 DHCP 服务器的维护者配置的)。 这样, 那些已经断开网络的客户机使用的陈旧 IP 地址就能被自动地回收了。

DHCP 客户程序可以从服务器端获取大量的信息。 关于能获得的信息的详细列表, 请参考 man:dhcp-options[5]。

=== FreeBSD 集成

FreeBSD 完全地集成了 OpenBSD 的 DHCP 客户端 `dhclient`。 DHCP 客户端支持在安装程序和基本系统中均有提供, 这使得您不再需要去了解那些已经运行了 DHCP 服务器的网络的具体配置参数。

sysinstall 能够支持 DHCP。 在 sysinstall 中配置网络接口时, 它询问的第二个问题便是: "Do you want to try DHCP configuration of the interface?
(您是否希望在此接口上尝试 DHCP 配置?)"。 如果做肯定的回答, 则将运行 `dhclient`, 一旦成功, 则将自动地填写网络配置信息。 要在系统启动时使用 DHCP, 您必须做两件事: * 您的内核中, 必须包含 [.filename]#bpf# 设备。 如果需要这样做, 需要将 `device bpf` 添加到内核的编译配置文件中, 并重新编译内核。 要了解关于编译内核的进一步信息, 请参见 crossref:kernelconfig[kernelconfig,配置FreeBSD的内核]。 + [.filename]#bpf# 设备已经是 FreeBSD 发行版中默认的 [.filename]#GENERIC# 内核的一部分了, 因此如果您没有对内核进行定制, 则不用创建一份新的内核配置文件, DHCP 就能工作了。 + [NOTE] ==== 对于那些安全意识很强的人来说, 您应该知道 [.filename]#bpf# 也是包侦听工具能够正确工作的条件之一 (当然, 它们还需要以 `root` 身份运行才行)。 [.filename]#bpf#_是_ 使用 DHCP 所必须的, 但如果您对安全非常敏感, 则很可能会有理由不把 [.filename]#bpf# 加入到您的内核配置中, 直到您真的需要使用 DHCP 为止。 ==== * 编辑您的 [.filename]#/etc/rc.conf# 并加入下面的设置: + [.programlisting] .... ifconfig_fxp0="DHCP" .... + [NOTE] ==== 务必将 `fxp0` 替换为您希望自动配置的网络接口的名字, 您可以在 crossref:config[config-network-setup,设置网卡] 找到更进一步的介绍。 ==== + 如果您希望使用另一位置的 `dhclient`, 或者需要给 `dhclient` 传递其他参数, 还可以添加下面的配置 (根据需要进行修改): + [.programlisting] .... dhclient_program="/sbin/dhclient" dhclient_flags="" .... DHCP 服务器, dhcpd, 是作为 package:net/isc-dhcp31-server[] port 的一部分提供的。 这个 port 包括了 ISC DHCP 服务器及其文档。 === 文件 * [.filename]#/etc/dhclient.conf# + `dhclient` 需要一个配置文件, [.filename]#/etc/dhclient.conf#。 一般说来, 这个文件中只包括注释, 而默认值基本上都是合理的。 这个配置文件在 man:dhclient.conf[5] 联机手册中进行了进一步的阐述。 * [.filename]#/sbin/dhclient# + `dhclient` 是一个静态连编的, 它被安装到 [.filename]#/sbin# 中。 man:dhclient[8] 联机手册给出了关于 `dhclient` 的进一步细节。 * [.filename]#/sbin/dhclient-script# + `dhclient-script` 是一个 FreeBSD 专用的 DHCP 客户端配置脚本。 在 man:dhclient-script[8] 中对它进行了描述, 但一般来说, 用户不需要对其进行任何修改, 就能够让一切正常运转了。 * [.filename]#/var/db/dhclient.leases# + DHCP 客户程序会维护一个数据库来保存有效的 lease, 它们被以日志的形式保存到这个文件中。 man:dhclient.leases[5] 给出了更为细致的介绍。 === 进阶读物 DHCP 协议的完整描述是 http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]。 关于它的其他信息资源的站点 http://www.dhcp.org/[http://www.dhcp.org/] 也提供了详尽的资料。 [[network-dhcp-server]] === 安装和配置 DHCP 服务器 ==== 这一章包含哪些内容 这一章提供了关于如何在 FreeBSD 系统上使用 ISC (Internet 系统协会) 的 DHCP 实现套件来架设 DHCP 服务器的信息。 DHCP 套件中的服务器部分并没有作为 FreeBSD 的一部分来提供, 因此您需要安装 package:net/isc-dhcp31-server[] port 才能提供这个服务。 请参见 
crossref:ports[ports,安装应用程序. Packages 和 Ports] 以了解关于如何使用 Ports Collection 的进一步详情。 ==== 安装 DHCP 服务器 为了在您的 FreeBSD 系统上进行配置以便作为 DHCP 服务器来使用, 需要把 man:bpf[4] 设备编译进内核。 要完成这项工作, 需要将 `device bpf` 加入到您的内核配置文件中, 并重新联编内核。 要得到关于如何联编内核的进一步信息, 请参见 crossref:kernelconfig[kernelconfig,配置FreeBSD的内核]。 [.filename]#bpf# 设备是 FreeBSD 所附带的 [.filename]#GENERIC# 内核中已经联入的组件, 因此您并不需要为了让 DHCP 正常工作而特别地定制内核。 [NOTE] ==== 如果您有较强的安全意识, 应该注意 [.filename]#bpf# 同时也是让听包程序能够正确工作的设备 (尽管这类程序仍然需要以特权用户身份运行)。 [.filename]#bpf#_是_ 使用 DHCP 所必需的, 但如果您对安全非常敏感, 您可能会不希望将 [.filename]#bpf# 放进内核, 直到您真的认为 DHCP 是必需的为止。 ==== 接下来要做的是编辑示范的 [.filename]#dhcpd.conf#, 它由 package:net/isc-dhcp31-server[] port 安装。 默认情况下, 它的名字应该是 [.filename]#/usr/local/etc/dhcpd.conf.sample#, 在开始修改之前, 您需要把它复制为 [.filename]#/usr/local/etc/dhcpd.conf#。 ==== 配置 DHCP 服务器 [.filename]#dhcpd.conf# 包含了一系列关于子网和主机的定义, 下面的例子可以帮助您理解它: [.programlisting] .... option domain-name "example.com";<.> option domain-name-servers 192.168.4.100;<.> option subnet-mask 255.255.255.0;<.> default-lease-time 3600;<.> max-lease-time 86400;<.> ddns-update-style none;<.> subnet 192.168.4.0 netmask 255.255.255.0 { range 192.168.4.129 192.168.4.254;<.> option routers 192.168.4.1;<.> } host mailhost { hardware ethernet 02:03:04:05:06:07;<.> fixed-address mailhost.example.com;<.> } .... <.> 这个选项指定了提供给客户机作为默认搜索域的域名。 请参考 man:resolv.conf[5] 以了解关于这一概念的详情。 <.> 这个选项用于指定一组客户机使用的 DNS 服务器, 它们之间以逗号分隔。 <.> 提供给客户机的子网掩码。 <.> 客户机可以请求租约的有效期, 而如果没有, 则服务器将指定一个租约有效期, 也就是这个值 (单位是秒)。 <.> 这是服务器允许租出地址的最大时长。 如果客户机请求了更长的租期, 则它将得到一个地址, 但其租期仅限于 `max-lease-time` 秒。 <.> 这个选项用于指定 DHCP 服务器在一个地址被接受或释放时是否应对应尝试更新 DNS。 在 ISC 实现中, 这一选项是 _必须指定的_。 <.> 指定地址池中可以用来分配给客户机的 IP 地址范围。 在这个范围之间, 以及其边界的 IP 地址将分配给客户机。 <.> 定义客户机的默认网关。 <.> 主机的硬件 MAC 地址 (这样 DHCP 服务器就能够在接到请求时知道请求的主机身份)。 <.> 指定总是得到同一 IP 地址的主机。 请注意在此处使用主机名是对的, 因为 DHCP 服务器会在返回租借地址信息之前自行解析主机名。 在配制好 [.filename]#dhcpd.conf# 之后, 应在 [.filename]#/etc/rc.conf# 中启用 DHCP 服务器, 也就是增加: [.programlisting] .... dhcpd_enable="YES" dhcpd_ifaces="dc0" .... 
此处的 `dc0` 接口名应改为 DHCP 服务器需要监听 DHCP 客户端请求的接口 (如果有多个, 则用空格分开)。

接下来, 可以用下面的命令来启动服务:

[source,shell]
....
# /usr/local/etc/rc.d/isc-dhcpd start
....

如果未来您需要修改服务器的配置, 请务必牢记: 发送 `SIGHUP` 信号给 dhcpd 并 _不会_ 导致配置文件的重新加载, 而这在其他服务程序中则是比较普遍的约定。 您需要发送 `SIGTERM` 信号来停止进程, 然后使用上面的命令来重新启动它。

==== 文件

* [.filename]#/usr/local/sbin/dhcpd#
+
dhcpd 是静态连接的, 并安装到 [.filename]#/usr/local/sbin# 中。 随 port 安装的 man:dhcpd[8] 联机手册提供了关于 dhcpd 更为详尽的信息。
* [.filename]#/usr/local/etc/dhcpd.conf#
+
dhcpd 需要配置文件, 即 [.filename]#/usr/local/etc/dhcpd.conf#, 才能够向客户机提供服务。 这个文件需要包括应提供给客户机的所有信息, 以及关于服务器运行的其他信息。 此配置文件的详细描述可以在随 port 安装的 man:dhcpd.conf[5] 联机手册上找到。
* [.filename]#/var/db/dhcpd.leases#
+
DHCP 服务器会维护一个它签发的租用地址数据库, 并以日志的形式保存在这个文件中。 随 port 安装的 man:dhcpd.leases[5] 联机手册提供了更详细的描述。
* [.filename]#/usr/local/sbin/dhcrelay#
+
在更为复杂的环境中, dhcrelay 可以用来把 DHCP 请求转发给另一个独立网络上的 DHCP 服务器。 如果您需要这个功能, 需要安装 package:net/isc-dhcp31-relay[] port。 man:dhcrelay[8] 联机手册提供了更为详尽的介绍。

[[network-dns]]
== 域名系统 (DNS)

=== 纵览

FreeBSD 在默认情况下使用一个版本的 BIND (Berkeley Internet Name Domain), 这是目前最为流行的 DNS 协议实现。 DNS 是一种协议, 可以通过它将域名同 IP 地址相互对应。 例如, 查询 `www.FreeBSD.org` 将得到 FreeBSD Project 的 web 服务器的 IP 地址, 而查询 `ftp.FreeBSD.org` 则将得到相应的 FTP 机器的 IP 地址。 类似地, 也可以做相反的事情。 查询 IP 地址可以得到其主机名。 当然, 完成 DNS 查询并不需要在系统中运行域名服务器。

目前, 默认情况下 FreeBSD 使用的是 BIND9 DNS 服务软件。 内建于系统中的版本提供了增强的安全特性、 新的文件目录结构, 以及自动的 man:chroot[8] 配置。

在 Internet 上的 DNS 是由一套较为复杂的权威根域名系统, 顶级域名 (TLD), 以及一系列小规模的, 提供少量域名解析服务并对域名信息进行缓存的域名服务器组成的。

目前, BIND 由 Internet Systems Consortium https://www.isc.org/[https://www.isc.org/] 维护。

=== 术语

要理解这份文档, 需要首先了解一些相关的 DNS 术语。

[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| 术语
| 定义

|正向 DNS
|将域名映射到 IP 地址

|原点 (Origin)
|表示特定域文件所在的域

|named, BIND
|在 FreeBSD 中 BIND 域名服务器软件包的常见叫法。

|解析器 (Resolver)
|计算机用以向域名服务器查询域名信息的一个系统进程

|反向 DNS
|将 IP 地址映射为主机名

|根域
|Internet 域层次的起点。 所有的域都在根域之下, 类似文件系统中, 文件都在根目录之下那样。

|域 (Zone)
|独立的域, 子域, 或者由同一机构管理的 DNS 的一部分。
|===

域的例子:

* `.` 在本文档中通常指代根域。
* `org.` 是根域之下的一个顶级域名 (TLD)。
* `example.org.` 是在 `org.` TLD
之下的一个域。 * `1.168.192.in-addr.arpa` 是一个表示所有 `192.168.1.*` IP 地址空间中 IP 地址的域。 如您所见, 域名中越细节的部分会越靠左出现。 例如, `example.org.` 就比 `org.` 范围更小, 类似地 `org.` 又比根域更小。 域名各个部分的格局与文件系统十分类似: [.filename]#/dev# 目录在根目录之下, 等等。 === 运行域名服务器的理由 域名服务器通常会有两种形式: 权威域名服务器, 以及缓存域名服务器。 下列情况需要有权威域名服务器: * 想要向全世界提供 DNS 信息, 并对请求给出权威应答。 * 注册了类似 `example.org` 的域, 而需要将 IP 指定到其下的主机名上。 * 某个 IP 地址块需要反向 DNS 项 (IP 到主机名)。 * 备份服务器, 或常说的从 (slave) 服务器, 会在主服务器出现问题或无法访问时来应答查询请求。 下列情况需要有缓存域名服务器: * 本地的 DNS 服务器能够缓存, 并比直接向外界的域名服务器请求更快地得到应答。 当有人查询 `www.FreeBSD.org` 时,解析器通常会向上级 ISP 的域名服务器发出请求, 并获得回应。 如果有本地的缓存 DNS 服务器, 查询只有在第一次被缓存 DNS 服务器发到外部世界。 其他的查询不会发向局域网外, 因为它们已经有在本地的缓存了。 === DNS 如何运作 在 FreeBSD 中, BIND 服务程序被称为 named。 [.informaltable] [cols="1,1", frame="none", options="header"] |=== | 文件 | 描述 |man:named[8] |BIND 服务程序 |man:rndc[8] |域名服务控制程序 |[.filename]#/etc/namedb# |BIND 存放域名信息的位置。 |[.filename]#/etc/namedb/named.conf# |域名服务配置文件 |=== 随在服务器上配置的域的性质不同, 域的定义文件一般会存放到 [.filename]#/etc/namedb# 目录中的 [.filename]#master#、 [.filename]#slave#, 或 [.filename]#dynamic# 子目录中。 这些文件中提供了域名服务器在响应查询时所需要的 DNS 信息。 === 启动 BIND 由于 BIND 是默认安装的, 因此配置它相对而言很简单。 默认的 named 配置, 是在 man:chroot[8] 环境中提供基本的域名解析服务, 并且只限于监听本地 IPv4 回环地址 (127.0.0.1)。 如果希望启动这一配置, 可以使用下面的命令: [source,shell] .... # /etc/rc.d/named onestart .... 如果希望 named 服务在每次启动的时候都能够启动, 需要在 [.filename]#/etc/rc.conf# 中加入: [.programlisting] .... named_enable="YES" .... 当然, 除了这份文档所介绍的配置选项之外, 在 [.filename]#/etc/namedb/named.conf# 中还有很多其它的选项。 不过, 如果您需要了解 FreeBSD 中用于启动 named 的那些选项的话, 则可以查看 [.filename]#/etc/defaults/rc.conf# 中的 `named_*` 参数, 并参考 man:rc.conf[5] 联机手册。 除此之外, crossref:config[configtuning-rcd,在 FreeBSD 中使用 rc] 也是一个不错的起点。 === 配置文件 目前, named 的配置文件存放于 [.filename]#/etc/namedb# 目录, 在使用前应根据需要进行修改, 除非您只打算让它完成简单的域名解析服务。 这个目录同时也是您进行绝大多数配置的地方。 ==== [.filename]#/etc/namedb/named.conf# [.programlisting] .... // $FreeBSD$ // // Refer to the named.conf(5) and named(8) man pages, and the documentation // in /usr/shared/doc/bind9 for more details. 
// // If you are going to set up an authoritative server, make sure you // understand the hairy details of how DNS works. Even with // simple mistakes, you can break connectivity for affected parties, // or cause huge amounts of useless Internet traffic. options { // Relative to the chroot directory, if any directory "/etc/namedb"; pid-file "/var/run/named/pid"; dump-file "/var/dump/named_dump.db"; statistics-file "/var/stats/named.stats"; // If named is being used only as a local resolver, this is a safe default. // For named to be accessible to the network, comment this option, specify // the proper IP address, or delete this option. listen-on { 127.0.0.1; }; // If you have IPv6 enabled on this system, uncomment this option for // use as a local resolver. To give access to the network, specify // an IPv6 address, or the keyword "any". // listen-on-v6 { ::1; }; // These zones are already covered by the empty zones listed below. // If you remove the related empty zones below, comment these lines out. disable-empty-zone "255.255.255.255.IN-ADDR.ARPA"; disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; // If you've got a DNS server around at your upstream provider, enter // its IP address here, and enable the line below. This will make you // benefit from its cache, thus reduce overall DNS traffic in the Internet. /* forwarders { 127.0.0.1; }; */ // If the 'forwarders' clause is not empty the default is to 'forward first' // which will fall back to sending a query from your local server if the name // servers in 'forwarders' do not have the answer. 
Alternatively you can // force your name server to never initiate queries of its own by enabling the // following line: // forward only; // If you wish to have forwarding configured automatically based on // the entries in /etc/resolv.conf, uncomment the following line and // set named_auto_forward=yes in /etc/rc.conf. You can also enable // named_auto_forward_only (the effect of which is described above). // include "/etc/namedb/auto_forward.conf"; .... 正如注释所言, 如果希望从上级缓存中受益, 可以在此处启用 `forwarders`。 正常情况下, 域名服务器会逐级地查询 Internet 来找到特定的域名服务器, 直到得到答案为止。 这个选项将让它首先查询上级域名服务器 (或另外提供的域名服务器), 从而从它们的缓存中得到结果。 如果上级域名服务器是一个繁忙的高速域名服务器, 则启用它将有助于改善服务品质。 [WARNING] ==== ``127.0.0.1``__不会__ 正常工作。 一定要把地址改为您上级服务器的 IP 地址。 ==== [.programlisting] .... /* Modern versions of BIND use a random UDP port for each outgoing query by default in order to dramatically reduce the possibility of cache poisoning. All users are strongly encouraged to utilize this feature, and to configure their firewalls to accommodate it. AS A LAST RESORT in order to get around a restrictive firewall policy you can try enabling the option below. Use of this option will significantly reduce your ability to withstand cache poisoning attacks, and should be avoided if at all possible. Replace NNNNN in the example with a number between 49160 and 65530. */ // query-source address * port NNNNN; }; // If you enable a local name server, don't forget to enter 127.0.0.1 // first in your /etc/resolv.conf so this server will be queried. // Also, make sure to enable it in /etc/rc.conf. // The traditional root hints mechanism. Use this, OR the slave zones below. zone "." { type hint; file "named.root"; }; /* Slaving the following zones from the root name servers has some significant advantages: 1. Faster local resolution for your users 2. No spurious traffic will be sent from your network to the roots 3. 
Greater resilience to any potential root server failure/DDoS On the other hand, this method requires more monitoring than the hints file to be sure that an unexpected failure mode has not incapacitated your server. Name servers that are serving a lot of clients will benefit more from this approach than individual hosts. Use with caution. To use this mechanism, uncomment the entries below, and comment the hint zone above. */ /* zone "." { type slave; file "slave/root.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. }; notify no; }; zone "arpa" { type slave; file "slave/arpa.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. }; notify no; }; zone "in-addr.arpa" { type slave; file "slave/in-addr.arpa.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. }; notify no; }; */ /* Serving the following zones locally will prevent any queries for these zones leaving your network and going to the root name servers. This has two significant advantages: 1. Faster local resolution for your users 2. 
No spurious traffic will be sent from your network to the roots */ // RFC 1912 zone "localhost" { type master; file "master/localhost-forward.db"; }; zone "127.in-addr.arpa" { type master; file "master/localhost-reverse.db"; }; zone "255.in-addr.arpa" { type master; file "master/empty.db"; }; // RFC 1912-style zone for IPv6 localhost address zone "0.ip6.arpa" { type master; file "master/localhost-reverse.db"; }; // "This" Network (RFCs 1912 and 3330) zone "0.in-addr.arpa" { type master; file "master/empty.db"; }; // Private Use Networks (RFC 1918) zone "10.in-addr.arpa" { type master; file "master/empty.db"; }; zone "16.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "17.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "18.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "19.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "20.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "21.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "22.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "23.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "24.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "25.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "26.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "27.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "28.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "29.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "30.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "31.172.in-addr.arpa" { type master; file "master/empty.db"; }; zone "168.192.in-addr.arpa" { type master; file "master/empty.db"; }; // Link-local/APIPA (RFCs 3330 and 3927) zone "254.169.in-addr.arpa" { type master; file "master/empty.db"; }; // TEST-NET for Documentation (RFC 3330) zone "2.0.192.in-addr.arpa" { type master; file 
"master/empty.db"; }; // Router Benchmark Testing (RFC 3330) zone "18.198.in-addr.arpa" { type master; file "master/empty.db"; }; zone "19.198.in-addr.arpa" { type master; file "master/empty.db"; }; // IANA Reserved - Old Class E Space zone "240.in-addr.arpa" { type master; file "master/empty.db"; }; zone "241.in-addr.arpa" { type master; file "master/empty.db"; }; zone "242.in-addr.arpa" { type master; file "master/empty.db"; }; zone "243.in-addr.arpa" { type master; file "master/empty.db"; }; zone "244.in-addr.arpa" { type master; file "master/empty.db"; }; zone "245.in-addr.arpa" { type master; file "master/empty.db"; }; zone "246.in-addr.arpa" { type master; file "master/empty.db"; }; zone "247.in-addr.arpa" { type master; file "master/empty.db"; }; zone "248.in-addr.arpa" { type master; file "master/empty.db"; }; zone "249.in-addr.arpa" { type master; file "master/empty.db"; }; zone "250.in-addr.arpa" { type master; file "master/empty.db"; }; zone "251.in-addr.arpa" { type master; file "master/empty.db"; }; zone "252.in-addr.arpa" { type master; file "master/empty.db"; }; zone "253.in-addr.arpa" { type master; file "master/empty.db"; }; zone "254.in-addr.arpa" { type master; file "master/empty.db"; }; // IPv6 Unassigned Addresses (RFC 4291) zone "1.ip6.arpa" { type master; file "master/empty.db"; }; zone "3.ip6.arpa" { type master; file "master/empty.db"; }; zone "4.ip6.arpa" { type master; file "master/empty.db"; }; zone "5.ip6.arpa" { type master; file "master/empty.db"; }; zone "6.ip6.arpa" { type master; file "master/empty.db"; }; zone "7.ip6.arpa" { type master; file "master/empty.db"; }; zone "8.ip6.arpa" { type master; file "master/empty.db"; }; zone "9.ip6.arpa" { type master; file "master/empty.db"; }; zone "a.ip6.arpa" { type master; file "master/empty.db"; }; zone "b.ip6.arpa" { type master; file "master/empty.db"; }; zone "c.ip6.arpa" { type master; file "master/empty.db"; }; zone "d.ip6.arpa" { type master; file "master/empty.db"; }; zone 
"e.ip6.arpa" { type master; file "master/empty.db"; }; zone "0.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "1.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "2.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "3.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "4.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "5.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "6.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "7.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "8.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "9.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "a.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "b.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "0.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "1.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "2.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "3.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "4.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "5.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "6.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "7.e.f.ip6.arpa" { type master; file "master/empty.db"; }; // IPv6 ULA (RFC 4193) zone "c.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "d.f.ip6.arpa" { type master; file "master/empty.db"; }; // IPv6 Link Local (RFC 4291) zone "8.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "9.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "a.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "b.e.f.ip6.arpa" { type master; file "master/empty.db"; }; // IPv6 Deprecated Site-Local Addresses (RFC 3879) zone "c.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "d.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone "e.e.f.ip6.arpa" { type master; file "master/empty.db"; }; zone 
"f.e.f.ip6.arpa" { type master; file "master/empty.db"; }; // IP6.INT is Deprecated (RFC 4159) zone "ip6.int" { type master; file "master/empty.db"; }; // NB: Do not use the IP addresses below, they are faked, and only // serve demonstration/documentation purposes! // // Example slave zone config entries. It can be convenient to become // a slave at least for the zone your own domain is in. Ask // your network administrator for the IP address of the responsible // master name server. // // Do not forget to include the reverse lookup zone! // This is named after the first bytes of the IP address, in reverse // order, with ".IN-ADDR.ARPA" appended, or ".IP6.ARPA" for IPv6. // // Before starting to set up a master zone, make sure you fully // understand how DNS and BIND work. There are sometimes // non-obvious pitfalls. Setting up a slave zone is usually simpler. // // NB: Don't blindly enable the examples below. :-) Use actual names // and addresses instead. /* An example dynamic zone key "exampleorgkey" { algorithm hmac-md5; secret "sf87HJqjkqh8ac87a02lla=="; }; zone "example.org" { type master; allow-update { key "exampleorgkey"; }; file "dynamic/example.org"; }; */ /* Example of a slave reverse zone zone "1.168.192.in-addr.arpa" { type slave; file "slave/1.168.192.in-addr.arpa"; masters { 192.168.1.1; }; }; */ .... 在 [.filename]#named.conf# 中, 还给出了从域、转发域和反解析域的例子。 如果新增了域, 就必需在 [.filename]#named.conf# 中加入对应的项目。 例如, 用于 `example.org` 的域文件的描述类似下面这样: [.programlisting] .... zone "example.org" { type master; file "master/example.org"; }; .... 如 `type` 语句所标示的那样, 这是一个主域, 其信息保存在 [.filename]#/etc/namedb/master/example.org# 中, 如 `file` 语句所示。 [.programlisting] .... zone "example.org" { type slave; file "slave/example.org"; }; .... 在从域的情形中, 所指定的域的信息会从主域名服务器传递过来, 并保存到对应的文件中。 当主域服务器发生问题或不可达时, 从域名服务器就有一份可用的域名信息, 从而能够对外提供服务。 ==== 域文件 下面的例子展示了用于 `example.org` 的主域文件 (存放于 [.filename]#/etc/namedb/master/example.org#): [.programlisting] .... $TTL 3600 ; 1 hour default TTL example.org. 
IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 300 ; Negative Response TTL ) ; DNS Servers IN NS ns1.example.org. IN NS ns2.example.org. ; MX Records IN MX 10 mx.example.org. IN MX 20 mail.example.org. IN A 192.168.1.1 ; Machine Names localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 ; Aliases www IN CNAME example.org. .... 请注意以 "." 结尾的主机名是全称主机名, 而结尾没有 "." 的则是相对于原点的主机名。 例如, `ns1` 将被转换为 `ns1.example.org.`。 域信息文件的格式如下: [.programlisting] .... 记录名 IN 记录类型 值 .... 最常用的 DNS 记录: SOA:: 域权威开始 NS:: 权威域名服务器 A:: 主机地址 CNAME:: 别名对应的正规名称 MX:: 邮件传递服务器 PTR:: 域名指针 (用于反向 DNS) [.programlisting] .... example.org. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh after 3 hours 3600 ; Retry after 1 hour 604800 ; Expire after 1 week 300 ) ; Negative Response TTL .... `example.org.`:: 域名, 同时也是这个域信息文件的原点。 `ns1.example.org.`:: 该域的主/权威域名服务器。 `admin.example.org.`:: 此域的负责人的电子邮件地址, 其中 "@" 需要换掉 (mailto:admin@example.org[admin@example.org] 对应 `admin.example.org`) `2006051501`:: 文件的序号。 每次修改域文件时都必须增加这个数字。 现今, 许多管理员会考虑使用 `yyyymmddrr` 这样的格式来表示序号。 `2006051501` 通常表示上次修改于 05/15/2006, 而后面的 `01` 则表示在那天的第一次修改。 序号非常重要, 它用于通知从域服务器更新数据。 [.programlisting] .... IN NS ns1.example.org. .... 这是一个 NS 项。 每个准备提供权威应答的服务器都必须有一个对应项。 [.programlisting] .... localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 .... A 记录指明了机器名。 正如在前面所看到的, `ns1.example.org` 将解析为 `192.168.1.2`。 [.programlisting] .... IN A 192.168.1.1 .... 这一行把当前原点 `example.org` 指定为使用 IP 地址 `192.168.1.1`。 [.programlisting] .... www IN CNAME @ .... 正规名 (CNAME) 记录通常用于为某台机器指定别名。 在这个例子中, 将 `www` 指定成了 "主" 机器的一个别名, 后者的名字与域名 `example.org` (`192.168.1.1`) 相同。 CNAME 不能同与之有相同名字的任何其它记录并存。 [.programlisting] .... IN MX 10 mail.example.org. ....
MX 记录表示哪个邮件服务器负责接收发到这个域的邮件。 `mail.example.org` 是邮件服务器的主机名, 而 10 则是它的优先级。 可以有多台邮件服务器, 其优先级分别是 10、 20 等等。 尝试向 `example.org` 投递邮件的服务器, 会首先尝试优先级最高的 MX (优先级数值最小的记录)、 接着尝试次高的, 并重复这一过程直到邮件递达为止。 in-addr.arpa 域名信息文件 (反向 DNS), 采用的格式是同样的, 只是 PTR 项代替了 A 或 CNAME 的位置。 [.programlisting] .... $TTL 3600 1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 300 ) ; Negative Response TTL IN NS ns1.example.org. IN NS ns2.example.org. 1 IN PTR example.org. 2 IN PTR ns1.example.org. 3 IN PTR ns2.example.org. 4 IN PTR mx.example.org. 5 IN PTR mail.example.org. .... 这个文件给出了上述假想域中 IP 地址到域名的映射关系。 需要说明的是, 在 PTR 记录右侧的名字必须是全称域名 (也就是必须以 "." 结束)。 === 缓存域名服务器 缓存域名服务器是一种主要承担解析递归查询角色的域名服务器。 它简单地自行进行查询, 并将查询结果记住以备后续使用。 === 安全 尽管 BIND 是最为常用的 DNS 实现, 但它总是有一些安全问题。 时常会有人发现一些可能的甚至可以利用的安全漏洞。 尽管 FreeBSD 会自动将 named 放到 man:chroot[8] 环境中运行, 但仍有一些其它可用的安全机制来帮助您规避潜在的针对 DNS 服务的攻击。 阅读 http://www.cert.org/[CERT] 的安全公告, 并订阅 {freebsd-security-notifications} 是一个帮助您了解最新 Internet 及 FreeBSD 安全问题的好习惯。 [TIP] ==== 如果发现了问题, 确保源代码是最新的, 并重新联编一份 named 有可能会有所帮助。 ==== === 进一步阅读 BIND/named 联机手册: man:rndc[8] man:named[8] man:named.conf[8] * https://www.isc.org/software/bind[官方的 ISC BIND 页面] * https://www.isc.org/software/guild[官方的 ISC BIND 论坛] * http://www.oreilly.com/catalog/dns5/[O'Reilly DNS 和 BIND 第 5 版] * http://www.rfc-editor.org/rfc/rfc1034.txt[RFC1034 - 域名 - 概念和工具] * http://www.rfc-editor.org/rfc/rfc1035.txt[RFC1035 - 域名 - 实现及其标准] [[network-apache]] == Apache HTTP 服务器 === 纵览 FreeBSD 被用于运行许多全球最为繁忙的 web 站点。 大多数 Internet 上的 web 服务器, 都使用 Apache HTTP 服务器。 Apache 软件包可以在您的 FreeBSD 安装盘上找到。 如果没有在首次安装时附带安装 Apache, 则可以通过 package:www/apache13[] 或 package:www/apache22[] port 来安装。 一旦成功地安装了 Apache, 就必须对其进行配置。 [NOTE] ==== 这一节介绍了 1.3.X 版本的 Apache HTTP 服务器 的配置, 因为它是随 FreeBSD 一同使用的最多的版本。 Apache 2.X 引入了很多新技术, 但在此并不讨论。 要了解关于 Apache 2.X 的更多资料, 请参见 http://httpd.apache.org/[http://httpd.apache.org/]。 ==== === 配置 主要的 Apache HTTP Server 配置文件, 在 FreeBSD
上会安装为 [.filename]#/usr/local/etc/apache/httpd.conf#。 这是一个典型的 UNIX(R) 文本配置文件, 它使用 `#` 作为注释符。 关于全部配置选项的详尽介绍超出了本书的范围, 这里将只介绍最常被修改的那些。 `ServerRoot "/usr/local"`:: 这指定了 Apache 安装的顶级目录。 执行文件被放到服务器根目录 (server root) 的 [.filename]#bin# 和 [.filename]#sbin# 子目录中, 而配置文件则位于 [.filename]#etc/apache#。 `ServerAdmin you@your.address`:: 这个地址是在服务器发生问题时应发送电子邮件的地址, 它会出现在服务器生成的页面上, 例如错误页面。 `ServerName www.example.com`:: 如果您的服务器被用户以别的名字访问 (例如, 使用 `www` 而不是主机本身的真实名字), `ServerName` 允许您配置发送回客户端的主机名。 `DocumentRoot "/usr/local/www/data"`:: 这个目录是您的文档所在的目录。 默认情况下, 所有的请求都会从这个位置去获取, 但也可以通过符号连接和别名指定其它的位置。 在修改配置之前备份 Apache 的配置文件永远是一个好习惯。 一旦对初始配置满意了, 就可以开始运行 Apache 了。 === 运行 Apache 与许多其它网络服务不同, Apache 并不依赖 inetd 超级服务器来运行。 一般情况下会把它配置为一个独立的服务器, 以期在客户的 web 浏览器连入 HTTP 请求时, 能够获得更好的性能。 它提供了一个 shell 脚本来使启动、 停止和重新启动服务器变得尽可能地简单。 首次启动 Apache, 只需执行: [source,shell] .... # /usr/local/sbin/apachectl start .... 可以在任何时候使用下面的命令来停止服务: [source,shell] .... # /usr/local/sbin/apachectl stop .... 当由于某种原因修改了配置文件之后, 需要重启服务器: [source,shell] .... # /usr/local/sbin/apachectl restart .... 要在重启 Apache 服务器时不中断当前的连接, 则应运行: [source,shell] .... # /usr/local/sbin/apachectl graceful .... 更多的信息, 可以在 man:apachectl[8] 联机手册中找到。 要在系统启动时启动 Apache, 则应在 [.filename]#/etc/rc.conf# 中加入: [.programlisting] .... apache_enable="YES" .... 或者对于 Apache 2.2: [.programlisting] .... apache22_enable="YES" .... 如果您希望在系统引导时启动 Apache `httpd` 程序并指定其它一些选项, 则可以把下面的行加到 [.filename]#rc.conf#: [.programlisting] .... apache_flags="" .... 现在 web 服务器就开始运行了, 您可以使用 web 浏览器打开 `http://localhost/`。 默认显示的 web 页面是 [.filename]#/usr/local/www/data/index.html#。 === 虚拟主机 Apache 支持两种不同类型的虚拟主机。 第一种方法是基于名字的虚拟主机。 基于名字的虚拟主机使用客户机发来的 HTTP/1.1 头来辨别主机名。 这使得不同的域得以共享同一个 IP 地址。 要配置 Apache 来使用基于名字的虚拟主机, 需要把类似下面的项加到您的 [.filename]#httpd.conf# 中: [.programlisting] .... NameVirtualHost * .... 如果您的 web 服务器的名字是 `www.domain.tld`, 而您希望建立一个 `www.someotherdomain.tld` 的虚拟域, 则应在 [.filename]#httpd.conf# 中加入: [source,shell] ....
ServerName www.domain.tld DocumentRoot /www/domain.tld ServerName www.someotherdomain.tld DocumentRoot /www/someotherdomain.tld .... 您需要把上面的地址和文档路径改为所使用的那些。 要了解关于虚拟主机的更多信息, 请参考官方的 Apache 文档, 这些文档可以在 http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/] 找到。 === Apache 模块 有许多不同的 Apache 模块, 它们可以在基本的服务器基础上提供许多附加的功能。 FreeBSD 的 Ports Collection 为安装 Apache 和常用的附加模块提供了非常方便的方法。 ==== mod_ssl mod_ssl 这个模块使用 OpenSSL 库, 来提供通过 安全套接字层 (SSL v2/v3) 和 传输层安全 (TLS v1) 协议的强加密能力。 这个模块提供了从某一受信的证书签署机构申请签名证书所需的所有工具, 您可以藉此在 FreeBSD 上运行安全的 web 服务器。 如果您未曾安装 Apache, 也可以直接安装一份包含了 mod_ssl 的版本的 Apache 1.3.X, 其方法是通过 package:www/apache13-modssl[] port 来进行。 SSL 支持已经作为 Apache 2.X 的一部分提供, 您可以通过 package:www/apache22[] port 来安装后者。 ==== 语言绑定 Apache 对于一些主要的脚本语言都有相应的模块。 这些模块使得完全使用某种脚本语言来写 Apache 模块成为可能。 它们通常也被嵌入到服务器中, 作为一个常驻内存的解释器, 以避免为下一节将描述的动态网站启动外部解释器所需的时间和资源开销。 === 动态网站 在过去的十年里, 越来越多的企业为了增加收益和曝光率而转向了互联网。 这也同时增进了对于互动网页内容的需求。 有些公司, 比如 Microsoft(R), 推出了基于他们专有产品的解决方案, 开源社区也做出了积极的回应。 比较流行的选择包括 Django、Ruby on Rails、mod_perl 和 mod_php。 ==== Django Django 是一个以 BSD 许可证发布的 framework, 能让开发者快速写出高性能高品质的 web 应用程序。 它提供了一个对象关系映射组件, 数据类型可以被当作 Python 中的对象来处理, 还提供了一组丰富的动态数据库访问 API, 使开发者避免了手写 SQL 语句。 它同时还提供了可扩展的模板系统, 让应用程序的逻辑部分与 HTML 的表现层分离。 Django 依赖于 mod_python、Apache, 以及一个可选的 SQL 数据库引擎。 在设置了一些恰当的标志后, FreeBSD 的 Ports 系统将会帮助你安装这些必需的依赖库。 [[network-www-django-install]] .安装 Django、Apache2、mod_python3 和 PostgreSQL [example] ==== [source,shell] .... # cd /usr/ports/www/py-django; make all install clean -DWITH_MOD_PYTHON3 -DWITH_POSTGRESQL .... ==== 在安装了 Django 和那些依赖的软件之后, 你需要创建一个 Django 项目目录, 然后配置 Apache, 使其在收到指向你网站上应用程序特定 URL 的请求时, 调用内嵌的 Python 解释器。 [[network-www-django-apache-config]] .Django/mod_python 有关 Apache 部分的配置 [example] ==== 你需要在 Apache 的配置文件 [.filename]#httpd.conf# 中加入以下这几行, 把对某些 URL 的请求传给你的 web 应用程序: [source,shell] ....
SetHandler python-program PythonPath "['/dir/to/your/django/packages/'] + sys.path" PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings PythonAutoReload On PythonDebug On .... ==== ==== Ruby on Rails Ruby on Rails 是另外一个开源的 web framework, 提供了一个全面的开发框架, 能帮助 web 开发者更有成效地工作, 快速写出强大的应用。 它可以非常容易地从 ports 系统安装。 [source,shell] .... # cd /usr/ports/www/rubygem-rails; make all install clean .... ==== mod_perl Apache/Perl 集成计划, 将 Perl 程序设计语言的强大功能, 与 Apache HTTP 服务器 紧密地结合到了一起。 通过 mod_perl 模块, 可以完全使用 Perl 来撰写 Apache 模块。 此外, 服务器中嵌入的持久性解释器, 消除了为运行 Perl 脚本而启动外部解释器所造成的性能损失。 mod_perl 通过多种方式提供。 要使用 mod_perl, 应该注意 mod_perl 1.0 只能配合 Apache 1.3, 而 mod_perl 2.0 只能配合 Apache 2.X 使用。 mod_perl 1.0 可以通过 package:www/mod_perl[] 安装, 而以静态方式联编的版本, 则可以通过 package:www/apache13-modperl[] 来安装。 mod_perl 2.0 则可以通过 package:www/mod_perl2[] 安装。 ==== mod_php PHP, 也称为 "PHP: Hypertext Preprocessor", 是一种特别适合于 Web 开发的通用脚本语言。 它能够很容易地嵌入到 HTML 之中, 其语法接近于 C、 Java(TM), 以及 Perl, 以期让 web 开发人员得以迅速撰写动态生成的页面。 要获得用于 Apache web 服务器的 PHP5 支持, 可以从安装 package:lang/php5[] port 开始。 在首次安装 package:lang/php5[] port 的时候, 系统会自动显示可用的一系列 `OPTIONS` (配置选项)。 如果您没有看到菜单, 例如由于过去曾经安装过 package:lang/php5[] port 等等, 可以用下面的命令再次显示配置菜单, 在 port 的目录中执行: [source,shell] .... # make config .... 在配置选项对话框中, 选中 `APACHE` 这一项, 就可以联编出用于与 Apache web 服务器配合使用的可动态加载的 mod_php5 模块了。 [NOTE] ==== 由于各式各样的原因 (例如, 出于已经部署的 web 应用的兼容性考虑), 许多网站仍在使用 PHP4。 如果您需要 mod_php4 而不是 mod_php5, 请使用 package:lang/php4[] port。 package:lang/php4[] port 也支持许多 package:lang/php5[] port 提供的配置和编译时选项。 ==== 前面我们已经成功地安装并配置了用于支持动态 PHP 应用所需的模块。 请检查并确认您已将下述配置加入到了 [.filename]#/usr/local/etc/apache/httpd.conf# 中: [.programlisting] .... LoadModule php5_module libexec/apache/libphp5.so .... [.programlisting] .... AddModule mod_php5.c DirectoryIndex index.php index.html AddType application/x-httpd-php .php AddType application/x-httpd-php-source .phps .... 这些工作完成之后, 还需要使用 `apachectl` 命令来完成一次 graceful restart 以便加载 PHP 模块: [source,shell] .... # apachectl graceful ....
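在完成 graceful restart 之后, 可以放置一个简单的测试页来确认 mod_php5 已被正确加载 (此处假设 `DocumentRoot` 为默认的 [.filename]#/usr/local/www/data#, 文件名 [.filename]#info.php# 仅为示例):

[source,shell]
....
# echo '<?php phpinfo(); ?>' > /usr/local/www/data/info.php
....

随后用浏览器访问 `http://localhost/info.php`; 如果能看到 PHP 的配置信息页面, 则说明模块已正常工作。 确认之后请删除这个文件, 以免向外泄露服务器的配置细节。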
在未来您升级 PHP 时, `make config` 这步操作就不再是必需的了; 您所选择的 `OPTIONS` 会由 FreeBSD 的 Ports 框架自动保存。 在 FreeBSD 中的 PHP 支持是高度模块化的, 因此基本安装的功能十分有限。 增加其他功能的支持非常简单, 只需通过 package:lang/php5-extensions[] port 即可完成。 这个 port 提供了一个菜单驱动的界面来帮助完成 PHP 扩展的安装。 另外, 也可以通过对应的 port 来单独安装扩展。 例如, 要将对于 MySQL 数据库服务器的支持加入 PHP5, 只需简单地安装 [.filename]#databases/php5-mysql#。 安装完扩展之后, 必须重新启动 Apache 服务器, 来令其适应新的配置变更: [source,shell] .... # apachectl graceful .... [[network-ftp]] == 文件传输协议 (FTP) === 纵览 文件传输协议 (FTP) 为用户提供了一个简单的, 与 FTP 服务器交换文件的方法。 FreeBSD 系统中包含了 FTP 服务软件, ftpd。 这使得在 FreeBSD 上建立和管理 FTP 服务器变得非常简单。 === 配置 最重要的配置步骤是决定允许哪些帐号访问 FTP 服务器。 一般的 FreeBSD 系统包含了一系列系统帐号分别用于执行不同的服务程序, 但未知的用户不应被允许登录并使用这些帐号。 [.filename]#/etc/ftpusers# 文件中, 列出了不允许通过 FTP 访问的用户。 默认情况下, 这包含了前述的系统帐号, 但也可以在这里加入其它不应通过 FTP 访问的用户。 您可能会希望限制通过 FTP 登录的某些用户, 而不是完全阻止他们使用 FTP。 这可以通过 [.filename]#/etc/ftpchroot# 文件来完成。 这一文件列出了希望对 FTP 访问进行限制的用户和组的表。 而在 man:ftpchroot[5] 联机手册中, 已经对此进行了详尽的介绍, 故而不再赘述。 如果您想要在服务器上启用匿名的 FTP 访问, 则必须建立一个名为 `ftp` 的 FreeBSD 用户。 这样, 用户就可以使用 `ftp` 或 `anonymous` 和任意的口令 (习惯上, 应该是以那个用户的邮件地址作为口令) 来登录和访问您的 FTP 服务器。 FTP 服务器将在匿名用户登录时调用 man:chroot[2], 以便将其访问限制在 `ftp` 用户的主目录中。 有两个文本文件可以用来指定显示在 FTP 客户程序中的欢迎文字。 [.filename]#/etc/ftpwelcome# 文件中的内容将在用户连接上之后, 在登录提示之前显示。 在成功的登录之后, 将显示 [.filename]#/etc/ftpmotd# 文件中的内容。 请注意后者是相对于登录环境的, 因此对于匿名用户而言, 将显示 [.filename]#~ftp/etc/ftpmotd#。 一旦正确地配置了 FTP 服务器, 就必须在 [.filename]#/etc/inetd.conf# 中启用它。 这里需要做的全部工作就是将注释符 "#" 从已有的 ftpd 行之前去掉: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... 如本书关于 inetd 的小节所介绍的那样, 修改这个文件之后, 必须让 inetd 重新加载它, 才能使新的设置生效。 请参阅该小节以获取更多有关如何在你系统上启用 inetd 的详细信息。 ftpd 也可以作为一个独立的服务启动。 这样的话就需要在 [.filename]#/etc/rc.conf# 中设置如下的变量: [.programlisting] .... ftpd_enable="YES" .... 在设置了上述变量之后, 独立的服务将在下次系统重启的时候启动, 或者通过以 `root` 身份手动执行如下的命令启动: [source,shell] .... # /etc/rc.d/ftpd start .... 现在可以通过输入下面的命令来登录您的 FTP 服务器了: [source,shell] .... % ftp localhost ....
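作为示意, 下面的小脚本演示了 [.filename]#/etc/ftpusers# 这类 "每行一个用户名" 文件的匹配逻辑 (为避免改动系统配置, 这里用临时文件代替真实的 [.filename]#/etc/ftpusers#):

[source,shell]
....
#!/bin/sh
# 用临时文件模拟 /etc/ftpusers 的内容: 每行一个被禁止 FTP 登录的帐号
ftpusers=$(mktemp)
printf '%s\n' root toor daemon > "$ftpusers"

user="toor"
# ftpd 按整行精确匹配用户名, 这里用 grep -x 模拟这一行为
if grep -qx "$user" "$ftpusers"; then
    result="denied"
else
    result="allowed"
fi
echo "$user: FTP login $result"
rm -f "$ftpusers"
....

运行后会输出 `toor: FTP login denied`; 把 `user` 换成未列出的帐号, 则会输出 allowed。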
=== 维护 ftpd 服务程序使用 man:syslog[3] 来记录消息。 默认情况下, 系统日志将把和 FTP 相关的消息记录到 [.filename]#/var/log/xferlog# 文件中。 FTP 日志的位置, 可以通过修改 [.filename]#/etc/syslog.conf# 中如下所示的行来修改: [.programlisting] .... ftp.info /var/log/xferlog .... 一定要小心对待在匿名 FTP 服务器中可能遇到的潜在问题。 一般而言, 允许匿名用户上传文件应三思。 您可能发现自己的 FTP 站点成为了交易未经授权的商业软件的论坛, 或发生更糟糕的情况。 如果不需要匿名的 FTP 上传, 可以在文件上配置权限, 使得您能够在其它匿名用户能够下载这些文件之前复查它们。 [[network-samba]] == 为 Microsoft(R) Windows(R) 客户机提供文件和打印服务 (Samba) === 纵览 Samba 是一个流行的开源软件包, 它提供了针对 Microsoft(R) Windows(R) 客户机的文件和打印服务。 这类客户机可以连接并使用 FreeBSD 系统上的文件空间, 就如同使用本地的磁盘一样, 或者像使用本地打印机一样使用 FreeBSD 上的打印机。 Samba 软件包可以在您的 FreeBSD 安装盘上找到。 如果您没有在初次安装 FreeBSD 时安装 Samba, 则可以通过 package:net/samba34[] port 或 package 来安装。 === 配置 默认的 Samba 配置文件会以 [.filename]#/usr/local/shared/examples/samba34/smb.conf.default# 的名字安装。这个文件必须复制为 [.filename]#/usr/local/etc/smb.conf# 并进行定制, 才能开始使用 Samba。 [.filename]#smb.conf# 文件中包含了 Samba 的运行时配置信息, 例如对于打印机的定义, 以及希望共享给 Windows(R) 客户机的 "共享文件系统"。 Samba 软件包包含了一个称为 swat 的 web 管理工具, 后者提供了配置 [.filename]#smb.conf# 文件的简单方法。 ==== 使用 Samba Web 管理工具 (SWAT) Samba Web 管理工具 (SWAT) 是一个通过 inetd 运行的服务程序。 因此, 需要把 [.filename]#/etc/inetd.conf# 中下面几行的注释去掉, 才能够使用 swat 来配置 Samba: [.programlisting] .... swat stream tcp nowait/400 root /usr/local/sbin/swat swat .... 
如本书关于 inetd 的小节中所介绍的那样, 在修改了这个配置文件之后, 必须让 inetd 重新加载配置, 才能使其生效。 一旦在 [.filename]#inetd.conf# 中启用了 swat, 就可以用浏览器访问 http://localhost:901[http://localhost:901] 了。 您将首先使用系统的 `root` 帐号登录。 只要成功地登录进了 Samba 配置页面, 就可以浏览系统的文档, 或从 menu:Globals[](全局) 选项卡开始配置了。 menu:Globals[] 小节对应于 `[global]` 小节中的变量, 前者位于 [.filename]#/usr/local/etc/smb.conf# 中。 ==== 全局配置 无论是使用 swat, 还是直接编辑 [.filename]#/usr/local/etc/smb.conf#, 通常首先要配置的 Samba 选项都是: `workgroup`:: NT 域名或工作组名, 其他计算机将通过这些名字来找到服务器。 `netbios name`:: 这个选项用于设置 Samba 服务器的 NetBIOS 名字。 默认情况下, 这是所在主机的 DNS 名字的第一部分。 `server string`:: 这个选项用于设置通过 `net view` 命令, 以及某些其他网络工具可以查看到的关于服务器的说明性文字。 ==== 安全配置 在 [.filename]#/usr/local/etc/smb.conf# 中的两个最重要的配置, 是选定的安全模型, 以及客户机上用户的口令存放后端。 下面的语句控制这些选项: `security`:: 最常见的选项形式是 `security = share` 和 `security = user`。 如果您的客户机使用用户名, 并且这些用户名与您的 FreeBSD 机器一致, 一般应选择用户级 (user) 安全。 这是默认的安全策略, 它要求客户机首先登录, 然后才能访问共享的资源。 + 如果采用共享级 (share) 安全, 则客户机不需要用有效的用户名和口令登录服务器, 就能够连接共享的资源。 这是较早版本的 Samba 中的默认值。 `passdb backend`:: Samba 提供了若干种不同的验证后端模型。 您可以通过 LDAP、 NIS+、 SQL 数据库, 或经过修改的口令文件, 来完成客户端的身份验证。 默认的验证模式是 `smbpasswd`, 这也是本章将介绍的全部内容。 假设您使用的是默认的 `smbpasswd` 后端, 则必须首先创建一个 [.filename]#/usr/local/etc/samba/smbpasswd# 文件, 来允许 Samba 对客户进行身份验证。 如果您打算让 UNIX(R) 用户帐号能够从 Windows(R) 客户机上登录, 可以使用下面的命令: [source,shell] .... # smbpasswd -a username .... [NOTE] ==== 目前推荐使用的后端是 `tdbsam`, 您应使用下面的命令来添加用户帐号: [source,shell] .... # pdbedit -a -u username .... ==== 请参考 http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/[官方的 Samba HOWTO] 以了解关于配置选项的进一步信息。 按照前面给出的基本描述, 您应该已经可以启动 Samba 了。 === 启动 Samba package:net/samba34[] port 会增加一个新的用于控制 Samba 的启动脚本。 要启用这个脚本, 以便用它来完成启动、 停止或重启 Samba 的任务, 需要在 [.filename]#/etc/rc.conf# 文件中加入: [.programlisting] .... samba_enable="YES" .... 此外, 也可以进行更细粒度的控制: [.programlisting] .... nmbd_enable="YES" .... [.programlisting] .... smbd_enable="YES" .... [NOTE] ==== 这也同时配置了在系统引导时启动 Samba。 ==== 配置好之后, 就可以在任何时候通过下面的命令来启动 Samba 了: [source,shell] ....
# /usr/local/etc/rc.d/samba start Starting SAMBA: removing stale tdbs : Starting nmbd. Starting smbd. .... 请参见 crossref:config[configtuning-rcd,在 FreeBSD 中使用 rc] 以了解关于使用 rc 脚本的进一步信息。 Samba 事实上包含了三个相互独立的服务程序。 您应该能够看到 nmbd 和 smbd 两个服务程序都是通过 [.filename]#samba# 脚本启动的。 如果在 [.filename]#smb.conf# 中启用了 winbind 名字解析服务, 则应该可以看到 winbindd 服务被启动起来。 可以在任何时候通过下面的命令来停止运行 Samba: [source,shell] .... # /usr/local/etc/rc.d/samba stop .... Samba 是一个复杂的软件包, 它提供了用于与 Microsoft(R) Windows(R) 网络进行集成的各式各样的功能。 要了解关于这里所介绍的基本安装以外的其它功能, 请访问 http://www.samba.org[http://www.samba.org]。 [[network-ntp]] == 通过 NTP 进行时钟同步 === 纵览 随着时间的推移, 计算机的时钟会倾向于漂移。 网络时间协议 (NTP) 是一种确保您的时钟保持准确的方法。 许多 Internet 服务依赖、 或极大地受益于本地计算机时钟的准确性。 例如, web 服务器可能会接收到一个请求, 要求如果文件在某一时刻之后修改过才发送它。 在局域网环境中, 共享文件的计算机之间的时钟是否同步至关重要, 因为这样才能使时间戳保持一致。 类似 man:cron[8] 这样的程序, 也依赖于正确的系统时钟, 才能够准确地执行操作。 FreeBSD 附带了 man:ntpd[8] NTP 服务器, 它可以用于查询其它的 NTP 服务器, 并配置本地计算机的时钟, 或者为其它机器提供服务。 === 选择合适的 NTP 服务器 为了同步您的系统时钟, 需要首先找到至少一个 NTP 服务器以供使用。 网络管理员, 或 ISP 都可能会提供用于这样目的的 NTP 服务器-请查看他们的文档以了解是否是这样。 另外, 也有一个在线的 http://ntp.isc.org/bin/view/Servers/WebHome[公开的 NTP 服务器列表], 您可以从中选一个较近的 NTP 服务器。 请确认您选择的服务器的访问策略, 如果需要的话, 申请一下所需的许可。 选择多个相互不连接的 NTP 服务器是一个好主意, 这样在某个服务器不可达, 或者时钟不可靠时就可以有别的选择。 这是因为, man:ntpd[8] 会智能地选择它收到的响应-它会更倾向于使用可靠的服务器。 === 配置您的机器 ==== 基本配置 如果只想在系统启动时同步时钟, 则可以使用 man:ntpdate[8]。 对于经常重新启动, 并且不需要经常同步的桌面系统来说这比较适合, 但绝大多数机器都应该运行 man:ntpd[8]。 在引导时使用 man:ntpdate[8] 来配合运行 man:ntpd[8] 也是一个好主意。 man:ntpd[8] 渐进地修正时钟, 而 man:ntpdate[8] 则直接设置时钟, 无论机器的当前时间和正确时间有多大的偏差。 要启用引导时的 man:ntpdate[8], 需要把 `ntpdate_enable="YES"` 加到 [.filename]#/etc/rc.conf# 中。 此外, 还需要通过 `ntpdate_flags` 来设置同步的服务器和选项, 它们将传递给 man:ntpdate[8]。 ==== 一般配置 NTP 是通过 [.filename]#/etc/ntp.conf# 文件来进行配置的, 其格式在 man:ntp.conf[5] 中进行了描述。 下面是一个例子: [.programlisting] .... server ntplocal.example.com prefer server timeserver.example.org server ntp2a.example.net driftfile /var/db/ntp.drift .... 
这里, `server` 选项指定了使用哪一个服务器, 每一个服务器都独立一行。 如果某一台服务器上指定了 `prefer` (偏好) 参数, 如上面的 `ntplocal.example.com`, 则会优先选择这个服务器。 如果偏好的服务器和其他服务器的响应存在显著的差别, 则丢弃它的响应, 否则将使用来自它的响应, 而不理会其他服务器。 一般来说, `prefer` 参数应该标注在非常精确的 NTP 时间源, 例如那些包含特殊的时间监控硬件的服务器上。 而 `driftfile` 选项, 则指定了用来保存系统时钟频率偏差的文件。 man:ntpd[8] 程序使用它来自动地补偿时钟的自然漂移, 从而使时钟即使在切断了外来时间源的情况下, 仍能保持相当的准确度。 另外, `driftfile` 选项也保存上一次响应所使用的 NTP 服务器的信息。 这个文件包含了 NTP 的内部信息, 它不应被任何其他进程修改。 ==== 控制您的服务器的访问 默认情况下, NTP 服务器可以被整个 Internet 上的主机访问。 如果在 [.filename]#/etc/ntp.conf# 中指定 `restrict` 参数, 则可以控制允许哪些机器访问您的服务器。 如果希望拒绝所有的机器访问您的 NTP 服务器, 只需在 [.filename]#/etc/ntp.conf# 中加入: [.programlisting] .... restrict default ignore .... [NOTE] ==== 这样做会禁止您的服务器访问在本地配置中列出的服务器。 如果您需要令 NTP 服务器与外界的 NTP 服务器同步时间, 则应允许指定服务器。 请参见联机手册 man:ntp.conf[5] 以了解进一步的细节。 ==== 如果只希望子网内的机器通过您的服务器同步时钟, 而不允许它们配置为服务器, 或作为同步时钟的节点来使用, 则加入 [.programlisting] .... restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap .... 这里, 需要把 `192.168.1.0` 改为您网络上的 IP 地址, 并把 `255.255.255.0` 改为您的子网掩码。 [.filename]#/etc/ntp.conf# 可能包含多个 `restrict` 选项。 要了解进一步的细节, 请参见 man:ntp.conf[5] 的 `Access Control Support`(访问控制支持) 小节。 === 运行 NTP 服务器 要让 NTP 服务器在系统启动时随之开启, 需要把 `ntpd_enable="YES"` 加入到 [.filename]#/etc/rc.conf# 中。 如果希望向 man:ntpd[8] 传递更多参数, 需要编辑 [.filename]#/etc/rc.conf# 中的 `ntpd_flags`。 要在不重新启动机器的前提下启动服务器, 需要手工运行 `ntpd`, 并带上 [.filename]#/etc/rc.conf# 中的 `ntpd_flags` 所指定的参数。 例如: [source,shell] .... # ntpd -p /var/run/ntpd.pid .... === 在临时性的 Internet 连接上使用 ntpd man:ntpd[8] 程序的正常工作并不需要永久性的 Internet 连接。 然而, 如果您的临时性连接是配置为按需拨号的, 那么防止 NTP 通讯频繁触发拨号, 或保持连接就有必要了。 如果您使用用户级 PPP, 可以使用 `filter` 语句, 在 [.filename]#/etc/ppp/ppp.conf# 中进行必要的设置。 例如: [.programlisting] .... set filter dial 0 deny udp src eq 123 # Prevent NTP traffic from initiating dial out set filter dial 1 permit 0 0 set filter alive 0 deny udp src eq 123 # Prevent incoming NTP traffic from keeping the connection open set filter alive 1 deny udp dst eq 123 # Prevent outgoing NTP traffic from keeping the connection open set filter alive 2 permit 0/0 0/0 ....
要了解进一步的信息, 请参考 man:ppp[8] 的 `PACKET FILTERING`(包过滤) 小节, 以及 [.filename]#/usr/shared/examples/ppp/# 中的例子。 [NOTE] ==== 某些 Internet 访问提供商会阻止低编号的端口, 这会导致 NTP 无法正常工作, 因为响应无法到达您的机器。 ==== === 进一步的信息 关于 NTP 服务器的文档, 可以在 [.filename]#/usr/shared/doc/ntp/# 找到 HTML 格式的版本。 [[network-syslogd]] == 使用 `syslogd` 记录远程主机的日志 处理系统日志对于系统安全和管理是一个重要方面。 当有多台分布在中型或大型网络的机器,再或者是处于各种不同类型的网络中, 监视他们上面的日志文件则显得非常难以操作, 在这种情况下, 配置远程日志记录能使整个处理过程变得更加轻松。 集中记录日志到一台指定的机器能够减轻一些日志文件管理的负担。 日志文件的收集, 合并与循环可以在一处配置, 使用 FreeBSD 原生的工具, 比如 man:syslogd[8] 和 man:newsyslog[8]。 在以下的配置示例中, 主机 `A`, 命名为 `logserv.example.com`, 将用来收集本地网络的日志信息。 主机 `B`, 命名为 `logclient.example.com` 将把日志信息传送给服务器。 在现实中, 这两个主机都需要配置正确的正向和反向的 DNS 或者在 [.filename]#/etc/hosts# 中记录。 否则, 数据将被服务器拒收。 === 日志服务器的配置 日志服务器是配置成用来接收远程主机日志信息的机器。 在大多数的情况下这是为了方便配置, 或者是为了更好的管理。 不论是何原因, 在继续深入之前需要提一些必需条件。 一个正确配置的日志服务器必须符合以下几个最基本的条件: * 服务器和客户端的防火墙规则允许 514 端口上的 UDP 报文通过。 * syslogd 被配置成接受从远程客户发来的消息。 * syslogd 服务器和所有的客户端都必须有配有正确的正向和反向 DNS, 或者在 [.filename]#/etc/hosts# 中有相应配置。 配置日志服务器, 客户端必须在 [.filename]#/etc/syslog.conf# 中列出, 并指定日志的 facility: [.programlisting] .... +logclient.example.com *.* /var/log/logclient.log .... [NOTE] ==== 更多关于各种被支持并可用的 _facility_ 能在 man:syslog.conf[5] 手册页中找到。 ==== 一旦加入以后, 所有此类 `facility` 消息都会被记录到先前指定的文件 [.filename]#/var/log/logclient.log#。 提供服务的机器还需要在其 [.filename]#/etc/rc.conf# 中配置: [.programlisting] .... syslogd_enable="YES" syslogd_flags="-a logclient.example.com -v -v" .... 第一个选项表示在系统启动时启用 `syslogd` 服务, 第二个选项表示允许服务器接收来自指定日志源客户端的数据。 第二行配置中最后的部分, 使用 `-v -v`, 表示增加日志消息的详细程度。 在调整 facility 配置的时候, 这个配置非常有用, 因为管理员能够看到哪些消息将作为哪个 facility 的内容来记录。 可以同时指定多个 `-a` 选项来允许多个客户机。 此外, 还可以指定 IP 地址或网段, 请参阅 man:syslog[3] 联机手册以了解可用配置的完整列表。 最后, 日志文件应该被创建。 不论你用何种方法创建, 比如 man:touch[1] 能很好的完成此类任务: [source,shell] .... # touch /var/log/logclient.log .... 此时, 应该重启并确认一下 `syslogd` 守护进程: [source,shell] .... # /etc/rc.d/syslogd restart # pgrep syslog .... 
如果返回了一个 PID 的话, 服务端应该被成功重启了, 可以继续开始配置客户端。 如果服务端没有重启的话, 请在 [.filename]#/var/log/messages# 日志中查阅相关输出。 === 日志客户端配置 日志客户端是一台发送日志信息到日志服务器的机器, 并在本地保存拷贝。 与日志服务器类似, 客户端也需要满足一些最基本的条件: * man:syslogd[8] 必须被配置成发送指定类型的消息到能接收他们的日志服务器。 * 防火墙必须允许 514 端口上的 UDP 包通过; * 必须配置正向与反向 DNS, 或者在 [.filename]#/etc/hosts# 中有正确的记录。 相比服务器来说配置客户端更轻松一些。 客户端的机器在 [.filename]#/etc/rc.conf# 中做如下的设置: [.programlisting] .... syslogd_enable="YES" syslogd_flags="-s -v -v" .... 和前面类似, 这些选项会在系统启动过程中启用 `syslogd` 服务, 并增加日志消息的详细程度。 而 `-s` 选项则表示禁止服务接收来自其他主机的日志。 Facility 是描述某个消息由系统的哪部分生成的。 举例来说, ftp 和 ipfw 都是 facility。 当这两项服务生成日志消息时, 它们通常在日志消息中包含了这两种工具。 Facility 通常带有一个优先级或等级, 就是用来标记一个日志消息的重要程度。 最普通的为 `warning` 和 `info`。 请参阅 man:syslog[3] 手册页以获得一个完整可用的 facility 与优先级列表。 日志服务器必须在客户端的 [.filename]#/etc/syslog.conf# 中指明。 在此例中, `@` 符号被用来表示发送日志数据到远程的服务器, 看上去差不多如下这样: [.programlisting] .... *.* @logserv.example.com .... 添加后, 必须重启 `syslogd` 使得上述修改生效: [source,shell] .... # /etc/rc.d/syslogd restart .... 要测试日志消息是否能通过网络发送, 可以在准备发出消息的客户机上用 man:logger[1] 来向 `syslogd` 发出信息: [source,shell] .... # logger "Test message from logclient" .... 这段消息现在应该同时出现在客户机的 [.filename]#/var/log/messages# 以及日志服务器的 [.filename]#/var/log/logclient.log# 中。 === 调试日志服务器 在某些情况下, 如果日志服务器没有收到消息的话就需要调试一番了。 有几个可能的原因, 最常见的两个是网络连接的问题和 DNS 的问题。 为了测试这些问题, 请确认两边的机器都能使用 [.filename]#/etc/rc.conf# 中所设定的主机名访问到对方。 如果这个能正常工作的话, 那么就需要对 [.filename]#/etc/rc.conf# 中的 `syslogd_flags` 选项做些修改了。 在以下的示例中, [.filename]#/var/log/logclient.log# 是空的, [.filename]#/var/log/messages# 中也没有表明任何失败的原因。 为了增加调试的输出, 修改 `syslogd_flags` 选项至类似于如下的示例, 并重启服务: [.programlisting] .... syslogd_flags="-d -a logclien.example.com -v -v" .... [source,shell] .... # /etc/rc.d/syslogd restart .... 在重启服务之后, 屏幕上将立刻闪现类似这样的调试数据: [source,shell] ....
logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel Logging to FILE /var/log/messages syslogd: kernel boot file is /boot/kernel/kernel cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com; rejected in rule 0 due to name mismatch. .... 很明显, 消息是由于主机名不匹配而被拒收的。 在一点一点地检查了配置文件之后, 发现了 [.filename]#/etc/rc.conf# 中如下这行有输入错误: [.programlisting] .... syslogd_flags="-d -a logclien.example.com -v -v" .... 这行应该包含有 `logclient`, 而不是 `logclien`。 在做了正确的修改并重启之后便能见到预期的效果了: [source,shell] .... # /etc/rc.d/syslogd restart logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel syslogd: kernel boot file is /boot/kernel/kernel logmsg: pri 166, flags 17, from logserv.example.com, msg Dec 10 20:55:02 logserv.example.com syslogd: exiting on signal 2 cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com; accepted in rule 0. logmsg: pri 15, flags 0, from logclient.example.com, msg Dec 11 02:01:28 trhodes: Test message 2 Logging to FILE /var/log/logclient.log Logging to FILE /var/log/messages ....
此刻, 消息能够被正确接收并保存入文件了。 === 安全性方面的思考 就像其他的网络服务一样, 在实现配置之前需要考虑安全性。 有时日志文件也包含了敏感信息, 比如本地主机上所启用的服务, 用户帐号和配置数据。 从客户端发出的数据经过网络到达服务器, 这期间既没有加密也没有密码保护。 如果有加密需要的话, 可以使用 package:security/stunnel[], 它将在一个加密的隧道中传输数据。 本地安全也同样是个问题。 日志文件在使用中或循环转后都没有被加密。 本地用户可能读取这些文件以获得对系统更深入的了解。 对于这类情况, 给这些文件设置正确的权限是非常有必要的。 man:newsyslog[8] 工具支持给新创建和循环的日志设置权限。 把日志文件的权限设置为 `600` 能阻止本地用户不必要的窥探。 diff --git a/documentation/content/zh-tw/books/handbook/mac/_index.adoc b/documentation/content/zh-tw/books/handbook/mac/_index.adoc index 46a49beced..f7225d9f84 100644 --- a/documentation/content/zh-tw/books/handbook/mac/_index.adoc +++ b/documentation/content/zh-tw/books/handbook/mac/_index.adoc @@ -1,810 +1,808 @@ --- title: 章 15. 強制存取控制 (MAC) part: 部 III. 系統管理 prev: books/handbook/jails next: books/handbook/audit showBookMenu: true weight: 19 params: path: "/books/handbook/mac/" --- [[mac]] = 強制存取控制 (MAC) :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 15 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/mac/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[mac-synopsis]] == 概述 FreeBSD supports security extensions based on the POSIX(TM).1e draft. These security mechanisms include file system Access Control Lists (crossref:security[fs-acl,存取控制清單]) and Mandatory Access Control (MAC). MAC allows access control modules to be loaded in order to implement security policies. 
Some modules provide protections for a narrow subset of the system, hardening a particular service. Others provide comprehensive labeled security across all subjects and objects. The mandatory part of the definition indicates that enforcement of controls is performed by administrators and the operating system. This is in contrast to the default security mechanism of Discretionary Access Control (DAC) where enforcement is left to the discretion of users. This chapter focuses on the MAC framework and the set of pluggable security policy modules FreeBSD provides for enabling various security mechanisms. 讀完這章,您將了解: * The terminology associated with the MAC framework. * The capabilities of MAC security policy modules as well as the difference between a labeled and non-labeled policy. * The considerations to take into account before configuring a system to use the MAC framework. * Which MAC security policy modules are included in FreeBSD and how to configure them. * How to implement a more secure environment using the MAC framework. * How to test the MAC configuration to ensure the framework has been properly implemented. 在開始閱讀這章之前,您需要: * 了解 UNIX(TM) 及 FreeBSD 基礎 (crossref:basics[basics,FreeBSD 基礎])。 * Have some familiarity with security and how it pertains to FreeBSD (crossref:security[security,安全性]). [WARNING] ==== Improper MAC configuration may cause loss of system access, aggravation of users, or inability to access the features provided by Xorg. More importantly, MAC should not be relied upon to completely secure a system. The MAC framework only augments an existing security policy. Without sound security practices and regular security checks, the system will never be completely secure. The examples contained within this chapter are for demonstration purposes and the example settings should _not_ be implemented on a production system. Implementing any security policy takes a good deal of understanding, proper design, and thorough testing. 
==== While this chapter covers a broad range of security issues relating to the MAC framework, the development of new MAC security policy modules will not be covered. A number of security policy modules included with the MAC framework have specific characteristics which are provided for both testing and new module development. Refer to man:mac_test[4], man:mac_stub[4] and man:mac_none[4] for more information on these security policy modules and the various mechanisms they provide. [[mac-inline-glossary]] == 關鍵詞 The following key terms are used when referring to the MAC framework: * _compartment_: a set of programs and data to be partitioned or separated, where users are given explicit access to specific component of a system. A compartment represents a grouping, such as a work group, department, project, or topic. Compartments make it possible to implement a need-to-know-basis security policy. * _integrity_: the level of trust which can be placed on data. As the integrity of the data is elevated, so does the ability to trust that data. * _level_: the increased or decreased setting of a security attribute. As the level increases, its security is considered to elevate as well. * _label_: a security attribute which can be applied to files, directories, or other items in the system. It could be considered a confidentiality stamp. When a label is placed on a file, it describes the security properties of that file and will only permit access by files, users, and resources with a similar security setting. The meaning and interpretation of label values depends on the policy configuration. Some policies treat a label as representing the integrity or secrecy of an object while other policies might use labels to hold rules for access. * _multilabel_: this property is a file system option which can be set in single-user mode using man:tunefs[8], during boot using man:fstab[5], or during the creation of a new file system. 
This option permits an administrator to apply different MAC labels on different objects.
This option only applies to security policy modules which support labeling.
* _single label_: a policy where the entire file system uses one label to enforce access control over the flow of data.
Whenever `multilabel` is not set, all files will conform to the same label setting.
* _object_: an entity through which information flows under the direction of a _subject_.
This includes directories, files, fields, screens, keyboards, memory, magnetic storage, printers or any other data storage or moving device.
An object is a data container or a system resource.
Access to an object effectively means access to its data.
* _subject_: any active entity that causes information to flow between _objects_ such as a user, user process, or system process.
On FreeBSD, this is almost always a thread acting in a process on behalf of a user.
* _policy_: a collection of rules which defines how objectives are to be achieved.
A policy usually documents how certain items are to be handled.
This chapter considers a policy to be a collection of rules which controls the flow of data and information and defines who has access to that data and information.
* _high-watermark_: this type of policy permits the raising of security levels for the purpose of accessing higher level information.
In most cases, the original level is restored after the process is complete.
Currently, the FreeBSD MAC framework does not include this type of policy.
* _low-watermark_: this type of policy permits lowering security levels for the purpose of accessing information which is less secure.
In most cases, the original security level of the user is restored after the process is complete.
The only security policy module in FreeBSD to use this is man:mac_lomac[4].
* _sensitivity_: usually used when discussing Multilevel Security (MLS).
A sensitivity level describes how important or secret the data should be.
As the sensitivity level increases, so does the importance of the secrecy, or confidentiality, of the data.

[[mac-understandlabel]]
== Understanding MAC Labels

A MAC label is a security attribute which may be applied to subjects and objects throughout the system.

When setting a label, the administrator must understand its implications in order to prevent unexpected or undesired behavior of the system.
The attributes available on an object depend on the loaded policy module, as policy modules interpret their attributes in different ways.

The security label on an object is used as a part of a security access control decision by a policy.
With some policies, the label contains all of the information necessary to make a decision.
In other policies, the labels may be processed as part of a larger rule set.

There are two types of label policies: single label and multi label.
By default, the system will use single label.
The administrator should be aware of the pros and cons of each in order to implement policies which meet the requirements of the system's security model.

A single label security policy only permits one label to be used for every subject or object.
Since a single label policy enforces one set of access permissions across the entire system, it provides lower administration overhead, but decreases the flexibility of policies which support labeling.
However, in many environments, a single label policy may be all that is required.

A single label policy is somewhat similar to DAC as `root` configures the policies so that users are placed in the appropriate categories and access levels.
A notable difference is that many policy modules can also restrict `root`.
Basic control over objects will then be released to the group, but `root` may revoke or modify the settings at any time.

When appropriate, a multi label policy can be set on a UFS file system by passing `multilabel` to man:tunefs[8].
A multi label policy permits each subject or object to have its own independent MAC label.
The decision to use a multi label or single label policy is only required for policies which implement the labeling feature, such as `biba`, `lomac`, and `mls`.
Some policies, such as `seeotheruids`, `portacl` and `partition`, do not use labels at all.

Using a multi label policy on a partition and establishing a multi label security model can increase administrative overhead as everything in that file system has a label.
This includes directories, files, and even device nodes.

The following command will set `multilabel` on the specified UFS file system.
This may only be done in single-user mode and is not a requirement for the swap file system:

[source,shell]
....
# tunefs -l enable /
....

[NOTE]
====
Some users have experienced problems with setting the `multilabel` flag on the root partition.
If this is the case, please review <>.
====

Since the multi label policy is set on a per-file system basis, a multi label policy may not be needed if the file system layout is well designed.
Consider an example security MAC model for a FreeBSD web server.
This machine uses the single label, `biba/high`, for everything in the default file systems.
If the web server needs to run at `biba/low` to prevent write up capabilities, it could be installed to a separate UFS [.filename]#/usr/local# file system set at `biba/low`.

=== Label Configuration

Virtually all aspects of label policy module configuration will be performed using the base system utilities.
These commands provide a simple interface for object or subject configuration or the manipulation and verification of the configuration.

All configuration may be done using `setfmac`, which is used to set MAC labels on system objects, and `setpmac`, which is used to set the labels on system subjects.
For example, to set the `biba` MAC label to `high` on [.filename]#test#:

[source,shell]
....
# setfmac biba/high test
....

If the configuration is successful, the prompt will be returned without error.
A common error is `Permission denied` which usually occurs when the label is being set or modified on a restricted object.
Other conditions may produce different failures.
For instance, the file may not be owned by the user attempting to relabel the object, the object may not exist, or the object may be read-only.
A mandatory policy will not allow the process to relabel the file, maybe because of a property of the file, a property of the process, or a property of the proposed new label value.
For example, if a user running at low integrity tries to change the label of a high integrity file, or a user running at low integrity tries to change the label of a low integrity file to a high integrity label, these operations will fail.

The system administrator may use `setpmac` to override the policy module's settings by assigning a different label to the invoked process:

[source,shell]
....
# setfmac biba/high test
Permission denied
# setpmac biba/low setfmac biba/high test
# getfmac test
test: biba/high
....

For currently running processes, such as sendmail, `getpmac` is usually used instead.
This command takes a process ID (PID) in place of a command name.
If users attempt to manipulate a file not in their access, subject to the rules of the loaded policy modules, the `Operation not permitted` error will be displayed.

=== Predefined Labels

A few FreeBSD policy modules which support the labeling feature offer three predefined labels: `low`, `equal`, and `high`, where:

* `low` is considered the lowest label setting an object or subject may have.
Setting this on objects or subjects blocks their access to objects or subjects marked high.
* `equal` sets the subject or object to be disabled or unaffected and should only be placed on objects considered to be exempt from the policy.
* `high` grants an object or subject the highest setting available in the Biba and MLS policy modules.

Such policy modules include man:mac_biba[4], man:mac_mls[4] and man:mac_lomac[4].
Each of the predefined labels establishes a different information flow directive.
Refer to the manual page of the module to determine the traits of the generic label configurations.

=== Numeric Labels

The Biba and MLS policy modules support a numeric label which may be set to indicate the precise level of hierarchical control.
This numeric level is used to partition or sort information into different groups of classification, only permitting access to that group or a higher group level.
For example:

[.programlisting]
....
biba/10:2+3+6(5:2+3-20:2+3+4+5+6)
....

may be interpreted as "Biba Policy Label/Grade 10: Compartments 2, 3 and 6: (grade 5 ...)".

In this example, the first grade would be considered the effective grade with effective compartments, the second grade is the low grade, and the last one is the high grade.
In most configurations, such fine-grained settings are not needed as they are considered to be advanced configurations.

System objects only have a current grade and compartment.
System subjects reflect the range of available rights in the system, and network interfaces, where they are used for access control.

The grade and compartments in a subject and object pair are used to construct a relationship known as _dominance_, in which a subject dominates an object, the object dominates the subject, neither dominates the other, or both dominate each other.
The "both dominate" case occurs when the two labels are equal.
Due to the information flow nature of Biba, a user has rights to a set of compartments that might correspond to projects, but objects also have a set of compartments.
Users may have to subset their rights using `su` or `setpmac` in order to access objects in a compartment from which they are not restricted.

=== User Labels

Users are required to have labels so that their files and processes properly interact with the security policy defined on the system.
This is configured in [.filename]#/etc/login.conf# using login classes.
Every policy module that uses labels will implement the user class setting.

To set the user class default label which will be enforced by MAC, add a `label` entry.
An example `label` entry containing every policy module is displayed below.
Note that in a real configuration, the administrator would never enable every policy module.
It is recommended that the rest of this chapter be reviewed before any configuration is implemented.

[.programlisting]
....
default:\
        :copyright=/etc/COPYRIGHT:\
        :welcome=/etc/motd:\
        :setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\
        :path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\
        :manpath=/usr/shared/man /usr/local/man:\
        :nologin=/usr/sbin/nologin:\
        :cputime=1h30m:\
        :datasize=8M:\
        :vmemoryuse=100M:\
        :stacksize=2M:\
        :memorylocked=4M:\
        :memoryuse=8M:\
        :filesize=8M:\
        :coredumpsize=8M:\
        :openfiles=24:\
        :maxproc=32:\
        :priority=0:\
        :requirehome:\
        :passwordtime=91d:\
        :umask=022:\
        :ignoretime@:\
        :label=partition/13,mls/5,biba/10(5-15),lomac/10[2]:
....

While users can not modify the default value, they may change their label after they login, subject to the constraints of the policy.
The example above tells the Biba policy that a process's minimum integrity is `5`, its maximum is `15`, and the default effective label is `10`.
The process will run at `10` until it chooses to change label, perhaps due to the user using `setpmac`, which will be constrained by Biba to the configured range.

After any change to [.filename]#login.conf#, the login class capability database must be rebuilt using `cap_mkdb`.

Many sites have a large number of users requiring several different user classes.
In depth planning is required as this can become difficult to manage.

=== Network Interface Labels

Labels may be set on network interfaces to help control the flow of data across the network.
Policies using network interface labels function in the same way that policies function with respect to objects.
Users at high settings in Biba, for example, will not be permitted to access network interfaces with a label of `low`.

When setting the MAC label on network interfaces, `maclabel` may be passed to `ifconfig`:

[source,shell]
....
# ifconfig bge0 maclabel biba/equal
....

This example will set the MAC label of `biba/equal` on the `bge0` interface.
When using a setting similar to `biba/high(low-high)`, the entire label should be quoted to prevent an error from being returned.

Each policy module which supports labeling has a tunable which may be used to disable the MAC label on network interfaces.
Setting the label to `equal` will have a similar effect.
Review the output of `sysctl`, the policy manual pages, and the information in the rest of this chapter for more information on those tunables.

[[mac-planning]]
== Planning the Security Configuration

Before implementing any MAC policies, a planning phase is recommended.
During the planning stages, an administrator should consider the implementation requirements and goals, such as:

* How to classify information and resources available on the target systems.
* Which information or resources to restrict access to along with the type of restrictions that should be applied.
* Which MAC modules will be required to achieve this goal.

A trial run of the trusted system and its configuration should occur _before_ a MAC implementation is used on production systems.
Since different environments have different needs and requirements, establishing a complete security profile will decrease the need of changes once the system goes live.

Consider how the MAC framework augments the security of the system as a whole.
The various security policy modules provided by the MAC framework could be used to protect the network and file systems or to block users from accessing certain ports and sockets.

Perhaps the best use of the policy modules is to load several security policy modules at a time in order to provide a MLS environment.
This approach differs from a hardening policy, which typically hardens elements of a system which are used only for specific purposes.
The downside to MLS is increased administrative overhead.

The overhead is minimal when compared to the lasting effect of a framework which provides the ability to pick and choose which policies are required for a specific configuration and which keeps performance overhead down.
The reduction of support for unneeded policies can increase the overall performance of the system as well as offer flexibility of choice.
A good implementation would consider the overall security requirements and effectively implement the various security policy modules offered by the framework.

A system utilizing MAC guarantees that a user will not be permitted to change security attributes at will.
All user utilities, programs, and scripts must work within the constraints of the access rules provided by the selected security policy modules and control of the MAC access rules is in the hands of the system administrator.

It is the duty of the system administrator to carefully select the correct security policy modules.
For an environment that needs to limit access control over the network, the man:mac_portacl[4], man:mac_ifoff[4], and man:mac_biba[4] policy modules make good starting points.
For an environment where strict confidentiality of file system objects is required, consider the man:mac_bsdextended[4] and man:mac_mls[4] policy modules.

Policy decisions could be made based on network configuration.
If only certain users should be permitted access to man:ssh[1], the man:mac_portacl[4] policy module is a good choice.
In the case of file systems, access to objects might be considered confidential to some users, but not to others.
As an example, a large development team might be broken off into smaller projects where developers in project A might not be permitted to access objects written by developers in project B.
Yet both projects might need to access objects created by developers in project C.
Using the different security policy modules provided by the MAC framework, users could be divided into these groups and then given access to the appropriate objects.

Each security policy module has a unique way of dealing with the overall security of a system.
Module selection should be based on a well thought out security policy which may require revision and reimplementation.
Understanding the different security policy modules offered by the MAC framework will help administrators choose the best policies for their situations.

The rest of this chapter covers the available modules, describes their use and configuration, and in some cases, provides insight on applicable situations.

[CAUTION]
====
Implementing MAC is much like implementing a firewall since care must be taken to prevent being completely locked out of the system.
The ability to revert back to a previous configuration should be considered and the implementation of MAC over a remote connection should be done with extreme caution.
====

[[mac-policies]]
== Available MAC Policies

The default FreeBSD kernel includes `options MAC`.
This means that every module included with the MAC framework can be loaded with `kldload` as a run-time kernel module.
After testing the module, add the module name to [.filename]#/boot/loader.conf# so that it will load during boot.
Each module also provides a kernel option for those administrators who choose to compile their own custom kernel.

FreeBSD includes a group of policies that will cover most security requirements.
Each policy is summarized below.
The last three policies support integer settings in place of the three default labels.

[[mac-seeotheruids]]
=== The MAC See Other UIDs Policy

Module name: [.filename]#mac_seeotheruids.ko#

Kernel configuration line: `options MAC_SEEOTHERUIDS`

Boot option: `mac_seeotheruids_load="YES"`

The man:mac_seeotheruids[4] module extends the `security.bsd.see_other_uids` and `security.bsd.see_other_gids` `sysctl` tunables.
This option does not require any labels to be set before configuration and can operate transparently with other modules.

After loading the module, the following `sysctl` tunables may be used to control its features:

* `security.mac.seeotheruids.enabled` enables the module and implements the default settings which deny users the ability to view processes and sockets owned by other users.
* `security.mac.seeotheruids.specificgid_enabled` allows specified groups to be exempt from this policy.
To exempt specific groups, use the `security.mac.seeotheruids.specificgid=_XXX_` `sysctl` tunable, replacing _XXX_ with the numeric group ID to be exempted.
* `security.mac.seeotheruids.primarygroup_enabled` is used to exempt specific primary groups from this policy.
When using this tunable, `security.mac.seeotheruids.specificgid_enabled` may not be set.

[[mac-bsdextended]]
=== The MAC BSD Extended Policy

Module name: [.filename]#mac_bsdextended.ko#

Kernel configuration line: `options MAC_BSDEXTENDED`

Boot option: `mac_bsdextended_load="YES"`

The man:mac_bsdextended[4] module enforces a file system firewall.
It provides an extension to the standard file system permissions model, permitting an administrator to create a firewall-like ruleset to protect files, utilities, and directories in the file system hierarchy.
When access to a file system object is attempted, the list of rules is iterated until either a matching rule is located or the end is reached.
This behavior may be changed using `security.mac.bsdextended.firstmatch_enabled`.
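
The first-match evaluation just described can be sketched in a few lines of Python. This is an illustrative model only, not the kernel implementation; the rule tuples and the `evaluate` helper are invented for this example:

```python
# Toy model of first-match rule evaluation as described for
# mac_bsdextended(4): rules are checked in order, and the first rule
# whose subject/object predicates match decides the outcome.

def evaluate(rules, subject_uid, object_uid, requested):
    """Each rule is (subject_uid, object_uid, allowed_modes); None acts
    as a wildcard. Returns True if the requested mode is granted."""
    for rule_subj, rule_obj, allowed in rules:
        if rule_subj is not None and rule_subj != subject_uid:
            continue
        if rule_obj is not None and rule_obj != object_uid:
            continue
        return requested in allowed  # first matching rule decides
    return True  # no rule matched: fall through to ordinary permissions

rules = [
    (1001, 1002, ""),     # user1 (uid 1001): no access to user2's objects
    (None, None, "rwx"),  # everyone else: unrestricted by this policy
]
print(evaluate(rules, 1001, 1002, "r"))  # False
print(evaluate(rules, 1003, 1002, "r"))  # True
```

With `security.mac.bsdextended.firstmatch_enabled` disabled, the kernel instead lets the last matching rule decide; modeling that would mean continuing the loop and remembering the most recent match.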

Similar to other firewall modules in FreeBSD, a file containing the access control rules can be created and read by the system at boot time using an man:rc.conf[5] variable.

The rule list may be entered using man:ugidfw[8] which has a syntax similar to man:ipfw[8].
More tools can be written by using the functions in the man:libugidfw[3] library.

After the man:mac_bsdextended[4] module has been loaded, the following command may be used to list the current rule configuration:

[source,shell]
....
# ugidfw list
0 slots, 0 rules
....

By default, no rules are defined and everything is completely accessible.
To create a rule which blocks all access by users but leaves `root` unaffected:

[source,shell]
....
# ugidfw add subject not uid root new object not uid root mode n
....

While this rule is simple to implement, it is a very bad idea as it blocks all users from issuing any commands.
A more realistic example blocks `user1` from all access, including directory listings, to ``_user2_``'s home directory:

[source,shell]
....
# ugidfw set 2 subject uid user1 object uid user2 mode n
# ugidfw set 3 subject uid user1 object gid user2 mode n
....

Instead of `user1`, `not uid _user2_` could be used in order to enforce the same access restrictions for all users.
However, the `root` user is unaffected by these rules.

[NOTE]
====
Extreme caution should be taken when working with this module as incorrect use could block access to certain parts of the file system.
====

[[mac-ifoff]]
=== The MAC Interface Silencing Policy

Module name: [.filename]#mac_ifoff.ko#

Kernel configuration line: `options MAC_IFOFF`

Boot option: `mac_ifoff_load="YES"`

The man:mac_ifoff[4] module is used to disable network interfaces on the fly and to keep network interfaces from being brought up during system boot.
It does not use labels and does not depend on any other MAC modules.

Most of this module's control is performed through these `sysctl` tunables:

* `security.mac.ifoff.lo_enabled` enables or disables all traffic on the loopback, man:lo[4], interface.
* `security.mac.ifoff.bpfrecv_enabled` enables or disables all traffic on the Berkeley Packet Filter interface, man:bpf[4].
* `security.mac.ifoff.other_enabled` enables or disables traffic on all other interfaces.

One of the most common uses of man:mac_ifoff[4] is network monitoring in an environment where network traffic should not be permitted during the boot sequence.
Another use would be to write a script which uses an application such as package:security/aide[] to automatically block network traffic if it finds new or altered files in protected directories.

[[mac-portacl]]
=== The MAC Port Access Control Policy

Module name: [.filename]#mac_portacl.ko#

Kernel configuration line: `options MAC_PORTACL`

Boot option: `mac_portacl_load="YES"`

The man:mac_portacl[4] module is used to limit binding to local TCP and UDP ports, making it possible to allow non-`root` users to bind to specified privileged ports below 1024.

Once loaded, this module enables the MAC policy on all sockets.
The following tunables are available:

* `security.mac.portacl.enabled` enables or disables the policy completely.
* `security.mac.portacl.port_high` sets the highest port number that man:mac_portacl[4] protects.
* `security.mac.portacl.suser_exempt`, when set to a non-zero value, exempts the `root` user from this policy.
* `security.mac.portacl.rules` specifies the policy as a text string of the form `rule[,rule,...]`, with as many rules as needed, and where each rule is of the form `idtype:id:protocol:port`.
The [parameter]#idtype# is either `uid` or `gid`.
The [parameter]#protocol# parameter can be `tcp` or `udp`.
The [parameter]#port# parameter is the port number to allow the specified user or group to bind to.
Only numeric values can be used for the user ID, group ID, and port parameters.
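
The grammar of the rules string can be illustrated with a small Python sketch. The `parse_portacl` helper is hypothetical, written here only to show how the `idtype:id:protocol:port` form decomposes; the real parsing happens inside the kernel module:

```python
# Hypothetical validator for a mac_portacl(4)-style rules string of the
# form "idtype:id:protocol:port[,rule,...]", e.g. "uid:80:tcp:80".

def parse_portacl(rules):
    parsed = []
    for rule in rules.split(","):
        idtype, ident, proto, port = rule.split(":")
        if idtype not in ("uid", "gid"):
            raise ValueError("idtype must be uid or gid: " + rule)
        if proto not in ("tcp", "udp"):
            raise ValueError("protocol must be tcp or udp: " + rule)
        # ids and ports must be numeric, per the documentation above
        parsed.append((idtype, int(ident), proto, int(port)))
    return parsed

print(parse_portacl("uid:1001:tcp:110,uid:1001:tcp:995"))
# [('uid', 1001, 'tcp', 110), ('uid', 1001, 'tcp', 995)]
```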

By default, ports below 1024 can only be used by privileged processes which run as `root`.
For man:mac_portacl[4] to allow non-privileged processes to bind to ports below 1024, set the following tunables as follows:

[source,shell]
....
# sysctl security.mac.portacl.port_high=1023
# sysctl net.inet.ip.portrange.reservedlow=0
# sysctl net.inet.ip.portrange.reservedhigh=0
....

To prevent the `root` user from being affected by this policy, set `security.mac.portacl.suser_exempt` to a non-zero value.

[source,shell]
....
# sysctl security.mac.portacl.suser_exempt=1
....

To allow the `www` user with UID 80 to bind to port 80 without ever needing `root` privilege:

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:80:tcp:80
....

This next example permits the user with the UID of 1001 to bind to TCP ports 110 (POP3) and 995 (POP3s):

[source,shell]
....
# sysctl security.mac.portacl.rules=uid:1001:tcp:110,uid:1001:tcp:995
....

[[mac-partition]]
=== The MAC Partition Policy

Module name: [.filename]#mac_partition.ko#

Kernel configuration line: `options MAC_PARTITION`

Boot option: `mac_partition_load="YES"`

The man:mac_partition[4] policy drops processes into specific "partitions" based on their MAC label.
Most configuration for this policy is done using man:setpmac[8].
One `sysctl` tunable is available for this policy:

* `security.mac.partition.enabled` enables the enforcement of MAC process partitions.

When this policy is enabled, users will only be permitted to see their processes, and any others within their partition, but will not be permitted to work with utilities outside the scope of this partition.
For instance, a user in the `insecure` class will not be permitted to access `top` as well as many other commands that must spawn a process.

This example adds `top` to the label set on users in the `insecure` class.
All processes spawned by users in the `insecure` class will stay in the `partition/13` label.

[source,shell]
....
# setpmac partition/13 top
....

This command displays the partition label and the process list:

[source,shell]
....
# ps Zax
....

This command displays another user's process partition label and that user's currently running processes:

[source,shell]
....
# ps -ZU trhodes
....

[NOTE]
====
Users can see processes in ``root``'s label unless the man:mac_seeotheruids[4] policy is loaded.
====

[[mac-mls]]
=== The MAC Multi-Level Security Module

Module name: [.filename]#mac_mls.ko#

Kernel configuration line: `options MAC_MLS`

Boot option: `mac_mls_load="YES"`

The man:mac_mls[4] policy controls access between subjects and objects in the system by enforcing a strict information flow policy.

In MLS environments, a "clearance" level is set in the label of each subject or object, along with compartments.
Since these clearance levels can reach numbers greater than several thousand, it would be a daunting task to thoroughly configure every subject or object.
To ease this administrative overhead, three labels are included in this policy: `mls/low`, `mls/equal`, and `mls/high`, where:

* Anything labeled with `mls/low` will have a low clearance level and not be permitted to access information of a higher level.
This label also prevents objects of a higher clearance level from writing or passing information to a lower level.
* `mls/equal` should be placed on objects which should be exempt from the policy.
* `mls/high` is the highest level of clearance possible.
Objects assigned this label will hold dominance over all other objects in the system; however, they will not permit the leaking of information to objects of a lower class.

MLS provides:

* A hierarchical security level with a set of non-hierarchical categories.
* Fixed rules of `no read up, no write down`.
This means that a subject can have read access to objects on its own level or below, but not above.
Similarly, a subject can have write access to objects on its own level or above, but not beneath.
* Secrecy, or the prevention of inappropriate disclosure of data.
* A basis for the design of systems that concurrently handle data at multiple sensitivity levels without leaking information between secret and confidential.

The following `sysctl` tunables are available:

* `security.mac.mls.enabled` is used to enable or disable the MLS policy.
* `security.mac.mls.ptys_equal` labels all man:pty[4] devices as `mls/equal` during creation.
* `security.mac.mls.revocation_enabled` revokes access to objects after their label changes to a label of a lower grade.
* `security.mac.mls.max_compartments` sets the maximum number of compartment levels allowed on a system.

To manipulate MLS labels, use man:setfmac[8].
To assign a label to an object:

[source,shell]
....
# setfmac mls/5 test
....

To get the MLS label for the file [.filename]#test#:

[source,shell]
....
# getfmac test
....

Another approach is to create a master policy file in [.filename]#/etc/# which specifies the MLS policy information and to feed that file to `setfmac`.

When using the MLS policy module, an administrator plans to control the flow of sensitive information.
The default `block read up block write down` sets everything to a low state.
Everything is accessible and an administrator slowly augments the confidentiality of the information.

Beyond the three basic label options, an administrator may group users and groups as required to block the information flow between them.
It might be easier to look at the information in clearance levels using descriptive words, such as classifications of `Confidential`, `Secret`, and `Top Secret`.
Some administrators instead create different groups based on project levels.
Regardless of the classification method, a well thought out plan must exist before implementing a restrictive policy.

Some example situations for the MLS policy module include an e-commerce web server, a file server holding critical company information, and financial institution environments.
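
The `no read up, no write down` rules can be modeled in a few lines of Python. This is a toy illustration of the dominance relation, not FreeBSD code; labels are simplified here to a grade plus a set of compartments, and all helper names are invented for the example:

```python
# Toy model of MLS dominance: a label is a (grade, compartments) pair.
# A subject may read an object only if the subject's label dominates the
# object's ("no read up"), and write only if the object's label dominates
# the subject's ("no write down").

def dominates(a, b):
    """True if label a dominates label b."""
    grade_a, comps_a = a
    grade_b, comps_b = b
    return grade_a >= grade_b and comps_b <= comps_a  # subset of compartments

def mls_can_read(subject, obj):
    return dominates(subject, obj)   # no read up

def mls_can_write(subject, obj):
    return dominates(obj, subject)   # no write down

secret = (10, {2, 3})
public = (5, {2})
print(mls_can_read(secret, public))   # True: reading down is allowed
print(mls_can_write(secret, public))  # False: writing down is blocked
```

When the two labels are equal, each dominates the other, so both read and write are permitted; when neither dominates, all access is denied.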

[[mac-biba]]
=== The MAC Biba Module

Module name: [.filename]#mac_biba.ko#

Kernel configuration line: `options MAC_BIBA`

Boot option: `mac_biba_load="YES"`

The man:mac_biba[4] module loads the MAC Biba policy.
This policy is similar to the MLS policy with the exception that the rules for information flow are slightly reversed.
This is to prevent the downward flow of sensitive information whereas the MLS policy prevents the upward flow of sensitive information.

In Biba environments, an "integrity" label is set on each subject or object.
These labels are made up of hierarchical grades and non-hierarchical components.
As a grade ascends, so does its integrity.

Supported labels are `biba/low`, `biba/equal`, and `biba/high`, where:

* `biba/low` is considered the lowest integrity an object or subject may have.
Setting this on objects or subjects blocks their write access to objects or subjects marked as `biba/high`, but will not prevent read access.
* `biba/equal` should only be placed on objects considered to be exempt from the policy.
* `biba/high` permits writing to objects set at a lower label, but does not permit reading that object.
It is recommended that this label be placed on objects that affect the integrity of the entire system.

Biba provides:

* Hierarchical integrity levels with a set of non-hierarchical integrity categories.
* Fixed rules are `no write up, no read down`, the opposite of MLS.
A subject can have write access to objects on its own level or below, but not above.
Similarly, a subject can have read access to objects on its own level or above, but not below.
* Integrity by preventing inappropriate modification of data.
* Integrity levels instead of MLS sensitivity levels.

The following tunables can be used to manipulate the Biba policy:

* `security.mac.biba.enabled` is used to enable or disable enforcement of the Biba policy on the target machine.
* `security.mac.biba.ptys_equal` is used to disable the Biba policy on man:pty[4] devices.
* `security.mac.biba.revocation_enabled` forces the revocation of access to objects if the label is changed to dominate the subject.

To access the Biba policy setting on system objects, use `setfmac` and `getfmac`:

[source,shell]
....
# setfmac biba/low test
# getfmac test
test: biba/low
....

Integrity, which is different from sensitivity, is used to guarantee that information is not manipulated by untrusted parties.
This includes information passed between subjects and objects.
It ensures that users will only be able to modify or access information they have been given explicit access to.
The man:mac_biba[4] security policy module permits an administrator to configure which files and programs a user may see and invoke while assuring that the programs and files are trusted by the system for that user.

During the initial planning phase, an administrator must be prepared to partition users into grades, levels, and areas.
The system will default to a high label once this policy module is enabled, and it is up to the administrator to configure the different grades and levels for users.
Instead of using clearance levels, a good planning method could include topics.
For instance, only allow developers modification access to the source code repository, source code compiler, and other development utilities.
Other users would be grouped into other categories such as testers, designers, or end users and would only be permitted read access.

A lower integrity subject is unable to write to a higher integrity subject and a higher integrity subject cannot list or read a lower integrity object.
Setting a label at the lowest possible grade could make it inaccessible to subjects.
Some prospective environments for this security policy module would include a constrained web server, a development and test machine, and a source code repository.
A less useful implementation would be a personal workstation, a machine used as a router, or a network firewall.
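
The reversed information flow can be illustrated with a short Python sketch. This is a toy model with hypothetical helper names, demonstrating only the `no write up, no read down` rules with bare integrity grades (real Biba labels also carry compartments):

```python
# Toy model of Biba integrity flow, the dual of MLS: integrity flows
# down, never up. A subject may write only at or below its own grade
# ("no write up") and read only at or above it ("no read down").

def biba_can_write(subject_grade, object_grade):
    return object_grade <= subject_grade  # no write up

def biba_can_read(subject_grade, object_grade):
    return object_grade >= subject_grade  # no read down

print(biba_can_write(5, 10))  # False: low integrity cannot write high
print(biba_can_read(5, 10))   # True: reading up is permitted
```

Compared with the MLS rules, the comparisons are simply inverted, which is why the chapter describes Biba as MLS with the information flow reversed.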
[[mac-lomac]] === The MAC Low-watermark Module Module name: [.filename]#mac_lomac.ko# Kernel configuration line: `options MAC_LOMAC` Boot option: `mac_lomac_load="YES"` Unlike the MAC Biba policy, the man:mac_lomac[4] policy permits access to lower integrity objects only after decreasing the integrity level to not disrupt any integrity rules. The Low-watermark integrity policy works almost identically to Biba, with the exception of using floating labels to support subject demotion via an auxiliary grade compartment. This secondary compartment takes the form `[auxgrade]`. When assigning a policy with an auxiliary grade, use the syntax `lomac/10[2]`, where `2` is the auxiliary grade. This policy relies on the ubiquitous labeling of all system objects with integrity labels, permitting subjects to read from low integrity objects and then downgrading the label on the subject to prevent future writes to high integrity objects using `[auxgrade]`. The policy may provide greater compatibility and require less initial configuration than Biba. Like the Biba and MLS policies, `setfmac` and `setpmac` are used to place labels on system objects: [source,shell] .... # setfmac /usr/home/trhodes lomac/high[low] # getfmac /usr/home/trhodes lomac/high[low] .... The auxiliary grade `low` is a feature provided only by the MAC LOMAC policy. [[mac-userlocked]] == User Lock Down This example considers a relatively small storage system with fewer than fifty users. Users will have login capabilities and are permitted to store data and access resources. For this scenario, the man:mac_bsdextended[4] and man:mac_seeotheruids[4] policy modules could co-exist and block access to system objects while hiding user processes. Begin by adding the following line to [.filename]#/boot/loader.conf#: [.programlisting] .... mac_seeotheruids_load="YES" .... The man:mac_bsdextended[4] security policy module may be activated by adding this line to [.filename]#/etc/rc.conf#: [.programlisting] ....
ugidfw_enable="YES" .... Default rules stored in [.filename]#/etc/rc.bsdextended# will be loaded at system initialization. However, the default entries may need modification. Since this machine is expected only to service users, everything may be left commented out except the last two lines in order to force the loading of user-owned system objects by default. Add the required users to this machine and reboot. For testing purposes, try logging in as a different user across two consoles. Run `ps aux` to see if processes of other users are visible. Verify that running man:ls[1] on another user's home directory fails. Do not try to test with the `root` user unless the specific ``sysctl``s have been modified to block super user access. [NOTE] ==== When a new user is added, their man:mac_bsdextended[4] rule will not be in the ruleset list. To update the ruleset quickly, unload the security policy module and reload it again using man:kldunload[8] and man:kldload[8]. ==== [[mac-implementing]] == Nagios in a MAC Jail This section demonstrates the steps that are needed to implement the Nagios network monitoring system in a MAC environment. This is meant as an example which still requires the administrator to test that the implemented policy meets the security requirements of the network before using it in a production environment. This example requires `multilabel` to be set on each file system. It also assumes that package:net-mgmt/nagios-plugins[], package:net-mgmt/nagios[], and package:www/apache22[] are all installed, configured, and working correctly before attempting the integration into the MAC framework. === Create an Insecure User Class Begin the procedure by adding the following user class to [.filename]#/etc/login.conf#: [.programlisting] ....
insecure:\ :copyright=/etc/COPYRIGHT:\ :welcome=/etc/motd:\ :setenv=MAIL=/var/mail/$,BLOCKSIZE=K:\ :path=~/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:\ :manpath=/usr/share/man /usr/local/man:\ :nologin=/usr/sbin/nologin:\ :cputime=1h30m:\ :datasize=8M:\ :vmemoryuse=100M:\ :stacksize=2M:\ :memorylocked=4M:\ :memoryuse=8M:\ :filesize=8M:\ :coredumpsize=8M:\ :openfiles=24:\ :maxproc=32:\ :priority=0:\ :requirehome:\ :passwordtime=91d:\ :umask=022:\ :ignoretime@:\ :label=biba/10(10-10): .... Then, add the following line to the default user class section: [.programlisting] .... :label=biba/high: .... Save the edits and issue the following command to rebuild the database: [source,shell] .... # cap_mkdb /etc/login.conf .... === Configure Users Set the `root` user to the default class using: [source,shell] .... # pw usermod root -L default .... All user accounts that are not `root` will now require a login class. The login class is required, otherwise users will be refused access to common commands. The following `sh` script should do the trick: [source,shell] .... # for x in `awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }' \ /etc/passwd`; do pw usermod $x -L default; done; .... Next, drop the `nagios` and `www` accounts into the insecure class: [source,shell] .... # pw usermod nagios -L insecure # pw usermod www -L insecure .... === Create the Contexts File A contexts file should now be created as [.filename]#/etc/policy.contexts#: [.programlisting] .... # This is the default BIBA policy for this system. # System: /var/run(/.*)? biba/equal /dev(/.*)? biba/equal /var biba/equal /var/spool(/.*)? biba/equal /var/log(/.*)? biba/equal /tmp(/.*)? biba/equal /var/tmp(/.*)? biba/equal /var/spool/mqueue biba/equal /var/spool/clientmqueue biba/equal # For Nagios: /usr/local/etc/nagios(/.*)? biba/10 /var/spool/nagios(/.*)? biba/10 # For apache /usr/local/etc/apache(/.*)? biba/10 .... This policy enforces security by setting restrictions on the flow of information.
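The man:awk[1] filter used in the user-configuration loop above can be sanity-checked on any system by feeding it sample [.filename]#passwd# lines; the account names below are made up for illustration:

[source,shell]
....
# Only accounts with UID >= 1001 should be printed,
# skipping UID 65534 (the nobody account).
printf '%s\n' \
    'root:*:0:0::/root:/bin/sh' \
    'alice:*:1001:1001::/home/alice:/bin/sh' \
    'nobody:*:65534:65534::/nonexistent:/usr/sbin/nologin' |
awk -F: '($3 >= 1001) && ($3 != 65534) { print $1 }'
# prints: alice
....

Only `alice` is selected, which is exactly the set of accounts the loop hands to `pw usermod`.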
In this specific configuration, users, including `root`, should never be allowed to access Nagios. Configuration files and processes that are a part of Nagios will be completely self-contained or jailed. This file will be read after running `setfsmac` on every file system. This example sets the policy on the root file system: [source,shell] .... # setfsmac -ef /etc/policy.contexts / .... Next, add these edits to the main section of [.filename]#/etc/mac.conf#: [.programlisting] .... default_labels file ?biba default_labels ifnet ?biba default_labels process ?biba default_labels socket ?biba .... === Loader Configuration To finish the configuration, add the following lines to [.filename]#/boot/loader.conf#: [.programlisting] .... mac_biba_load="YES" mac_seeotheruids_load="YES" security.mac.biba.trust_all_interfaces=1 .... And the following line to the network card configuration stored in [.filename]#/etc/rc.conf#. If the primary network configuration is done via DHCP, this may need to be configured manually after every system boot: [.programlisting] .... maclabel biba/equal .... === Testing the Configuration First, ensure that the web server and Nagios will not be started on system initialization and reboot. Ensure that `root` cannot access any of the files in the Nagios configuration directory. If `root` can list the contents of [.filename]#/var/spool/nagios#, something is wrong. Instead, a "permission denied" error should be returned. If all seems well, Nagios, Apache, and Sendmail can now be started: [source,shell] .... # cd /etc/mail && make stop && \ setpmac biba/equal make start && setpmac biba/10\(10-10\) apachectl start && \ setpmac biba/10\(10-10\) /usr/local/etc/rc.d/nagios.sh forcestart .... Double-check to ensure that everything is working properly. If not, check the log files for error messages. If needed, use man:sysctl[8] to disable the man:mac_biba[4] security policy module and try starting everything again as usual.
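That last step can be done, for example, by turning off Biba enforcement with the tunable described earlier in this chapter; this assumes [.filename]#mac_biba.ko# is loaded, and the setting should be reverted once testing is complete:

[source,shell]
....
# sysctl security.mac.biba.enabled=0
....

Setting the tunable back to `1` re-enables enforcement.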
[NOTE] ==== The `root` user can still change the security enforcement and edit its configuration files. The following command will permit the degradation of the security policy to a lower grade for a newly spawned shell: [source,shell] .... # setpmac biba/10 csh .... To block this from happening, force the user into a range using man:login.conf[5]. If man:setpmac[8] attempts to run a command outside of the compartment's range, an error will be returned and the command will not be executed. In this case, set root to `biba/high(high-high)`. ==== [[mac-troubleshoot]] == Troubleshooting the MAC Framework This section discusses common configuration errors and how to resolve them. The `multilabel` flag does not stay enabled on the root ([.filename]#/#) partition::: The following steps may resolve this transient error: [.procedure] ==== .. Edit [.filename]#/etc/fstab# and set the root partition to `ro` for read-only. .. Reboot into single-user mode. .. Run `tunefs -l enable` on [.filename]#/#. .. Reboot the system. .. Run `mount -urw` [.filename]#/# and change the `ro` back to `rw` in [.filename]#/etc/fstab# and reboot the system again. .. Double-check the output from `mount` to ensure that `multilabel` has been properly set on the root file system. ==== After establishing a secure environment with MAC, Xorg no longer starts::: This could be caused by the MAC `partition` policy or by a mislabeling in one of the MAC labeling policies. To debug, try the following: [.procedure] ==== .. Check the error message. If the user is in the `insecure` class, the `partition` policy may be the culprit. Try setting the user's class back to the `default` class and rebuild the database with `cap_mkdb`. If this does not alleviate the problem, go to step two. .. Double-check that the label policies are set correctly for the user, Xorg, and the [.filename]#/dev# entries. ..
If neither of these steps resolves the problem, send the error message and a description of the environment to the http://lists.FreeBSD.org/mailman/listinfo/freebsd-questions[FreeBSD general questions mailing list]. ==== The `_secure_path: unable to stat .login_conf` error appears::: This error can appear when a user attempts to switch from the `root` user to another user in the system. This message usually occurs when the user has a higher label setting than that of the user they are attempting to become. For instance, if `joe` has a default label of `biba/low` and `root` has a label of `biba/high`, `root` cannot view ``joe``'s home directory. This will happen whether or not `root` has used `su` to become `joe` as the Biba integrity model will not permit `root` to view objects set at a lower integrity level. The system no longer recognizes `root`::: When this occurs, `whoami` returns `0` and `su` returns `who are you?`. + This can happen if a labeling policy has been disabled by man:sysctl[8] or the policy module was unloaded. If the policy is disabled, the login capabilities database needs to be reconfigured. Double-check [.filename]#/etc/login.conf# to ensure that all `label` options have been removed and rebuild the database with `cap_mkdb`. + This may also happen if a policy restricts access to [.filename]#master.passwd#. This is usually caused by an administrator altering the file under a label which conflicts with the general policy being used by the system. In these cases, the user information would be read by the system and access would be blocked as the file has inherited the new label. Disable the policy using man:sysctl[8] and everything should return to normal.
diff --git a/documentation/content/zh-tw/books/handbook/network-servers/_index.adoc b/documentation/content/zh-tw/books/handbook/network-servers/_index.adoc index 70ead253bc..de276c150f 100644 --- a/documentation/content/zh-tw/books/handbook/network-servers/_index.adoc +++ b/documentation/content/zh-tw/books/handbook/network-servers/_index.adoc @@ -1,2467 +1,2466 @@ --- title: Chapter 29. Network Servers part: Part IV. Network Communication prev: books/handbook/mail next: books/handbook/firewalls showBookMenu: true weight: 34 params: path: "/books/handbook/network-servers/" --- [[network-servers]] = Network Servers :doctype: book :toc: macro :toclevels: 1 :icons: font :sectnums: :sectnumlevels: 6 :sectnumoffset: 29 :partnums: :source-highlighter: rouge :experimental: :images-path: books/handbook/network-servers/ ifdef::env-beastie[] ifdef::backend-html5[] :imagesdir: ../../../../images/{images-path} endif::[] ifndef::book[] include::shared/authors.adoc[] include::shared/mirrors.adoc[] include::shared/releases.adoc[] include::shared/attributes/attributes-{{% lang %}}.adoc[] include::shared/{{% lang %}}/teams.adoc[] include::shared/{{% lang %}}/mailing-lists.adoc[] include::shared/{{% lang %}}/urls.adoc[] toc::[] endif::[] ifdef::backend-pdf,backend-epub3[] include::../../../../../shared/asciidoctor.adoc[] endif::[] endif::[] ifndef::env-beastie[] toc::[] include::../../../../../shared/asciidoctor.adoc[] endif::[] [[network-servers-synopsis]] == Synopsis This chapter covers some of the more frequently used network services on UNIX(TM) systems, including how to install, configure, test, and maintain many different types of network services. Sample configuration files are included throughout this chapter for reference. After reading this chapter, you will know: * How to manage the inetd daemon. * How to set up the Network File System (NFS). * How to set up the Network Information Server (NIS) to centralize and share user accounts. * How to set up FreeBSD as an LDAP server or client. * How to set up automatic network settings using DHCP. * How to set up a Domain Name Server (DNS). * How to set up the Apache HTTP Server. * How to set up a File Transfer Protocol (FTP) server. * How to set up a file and print server for Windows(TM) clients using Samba. * How to synchronize the time and date, and set up a time server using the Network Time Protocol (NTP). * How to set up iSCSI. This chapter assumes a basic knowledge of: * [.filename]#/etc/rc# scripts. * Network terminology. * Installation of additional third-party software
(crossref:ports[ports,Installing Applications: Packages and Ports]). [[network-inetd]] == The inetd Super-Server The man:inetd[8] daemon is sometimes referred to as a Super-Server because it manages connections for many services. Instead of starting multiple applications, only the inetd service needs to be started. When a connection is received for a service that is managed by inetd, it determines which program the connection is destined for, spawns a process for that program, and delegates the program a socket. Using inetd for services that are not heavily used can reduce system load, when compared to running each daemon individually in stand-alone mode. Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled internally, such as chargen, auth, time, echo, discard, and daytime. This section covers the basics of configuring inetd. [[network-inetd-conf]] === Configuration File Configuration of inetd is done by editing [.filename]#/etc/inetd.conf#. Each line of this configuration file represents an application which can be started by inetd. By default, every line starts with a comment (`#`), meaning that inetd is not listening for any applications. To configure inetd to listen for an application's connections, remove the `#` at the beginning of the line for that application. After saving your edits, configure inetd to start at system boot by editing [.filename]#/etc/rc.conf#: [.programlisting] .... inetd_enable="YES" .... To start inetd now, so that it listens for the service you configured, type: [source,shell] .... # service inetd start .... Once inetd is started, it needs to be notified whenever a modification is made to [.filename]#/etc/inetd.conf#: [[network-inetd-reread]] .Reload the inetd Configuration File [example] ==== [source,shell] .... # service inetd reload .... ==== Typically, the default entry for an application does not need to be edited beyond removing the `#`. In some situations, it may be appropriate to edit the default entry.
As an example, this is the default entry for man:ftpd[8] over IPv4: [.programlisting] .... ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l .... The seven columns in an entry are as follows: [.programlisting] .... service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments .... where: service-name:: The service name of the daemon to start. It must correspond to a service listed in [.filename]#/etc/services#. This determines which port inetd listens on for incoming connections to that service. When using a custom service, it must first be added to [.filename]#/etc/services#. socket-type:: Either `stream`, `dgram`, `raw`, or `seqpacket`. Use `stream` for TCP connections and `dgram` for UDP services. protocol:: Use one of the following protocol names: + [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Protocol Name | Explanation |tcp or tcp4 |TCP IPv4 |udp or udp4 |UDP IPv4 |tcp6 |TCP IPv6 |udp6 |UDP IPv6 |tcp46 |Both TCP IPv4 and IPv6 |udp46 |Both UDP IPv4 and IPv6 |=== {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]:: In this field, `wait` or `nowait` must be specified. `max-child`, `max-connections-per-ip-per-minute` and `max-child-per-ip` are optional. + `wait|nowait` indicates whether or not the service is able to handle its own socket. `dgram` socket types must use `wait` while `stream` daemons, which are usually multi-threaded, should use `nowait`. `wait` usually hands off multiple sockets to a single daemon, while `nowait` spawns a child daemon for each new socket. + The maximum number of child daemons inetd may spawn is set by `max-child`. For example, to limit ten instances of the daemon, place a `/10` after `nowait`. Specifying `/0` allows an unlimited number of children. 
+ `max-connections-per-ip-per-minute` limits the number of connections from any particular IP address per minute. Once the limit is reached, further connections from this IP address will be dropped until the end of the minute. For example, a value of `/10` would limit any particular IP address to ten connection attempts per minute. `max-child-per-ip` limits the number of child processes that can be started on behalf of any single IP address at any moment. These options can limit excessive resource consumption and help to prevent Denial of Service attacks. + An example can be seen in the default settings for man:fingerd[8]: + [.programlisting] .... finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s .... user:: The username the daemon will run as. Daemons typically run as `root`, `daemon`, or `nobody`. server-program:: The full path to the daemon. If the daemon is a service provided by inetd internally, use `internal`. server-program-arguments:: Used to specify any command arguments to be passed to the daemon on invocation. If the daemon is an internal service, use `internal`. [[network-inetd-cmdline]] === Command-Line Options Like most server daemons, inetd has a number of options that can be used to modify its behavior. By default, inetd is started with `-wW -C 60`. These options enable TCP wrappers for all services, including internal services, and prevent any IP address from requesting any service more than 60 times per minute. To change the default options which are passed to inetd, add an entry for `inetd_flags` in [.filename]#/etc/rc.conf#. If inetd is already running, restart it with `service inetd restart`. The available rate-limiting options are: -c maximum:: Specify the default maximum number of simultaneous invocations of each service, where the default is unlimited. May be overridden on a per-service basis by using `max-child` in [.filename]#/etc/inetd.conf#.
-C rate:: Specify the default maximum number of times a service can be invoked from a single IP address per minute. May be overridden on a per-service basis by using `max-connections-per-ip-per-minute` in [.filename]#/etc/inetd.conf#. -R rate:: Specify the maximum number of times a service can be invoked in one minute, where the default is `256`. A rate of `0` allows an unlimited number. -s maximum:: Specify the maximum number of times a service can be invoked from a single IP address at any one time, where the default is unlimited. May be overridden on a per-service basis by using `max-child-per-ip` in [.filename]#/etc/inetd.conf#. Additional options are available. Refer to man:inetd[8] for the full list of options. [[network-inetd-security]] === Security Considerations Many of the daemons which can be managed by inetd are not security-conscious. Some daemons, such as fingerd, can provide information that may be useful to an attacker. Only enable the services which are needed and monitor the system for excessive connection attempts. `max-connections-per-ip-per-minute`, `max-child` and `max-child-per-ip` can be used to limit such attacks. By default, TCP wrappers is enabled. Consult man:hosts_access[5] for more information on placing TCP restrictions on various inetd-invoked daemons. [[network-nfs]] == Network File System (NFS) FreeBSD supports the Network File System (NFS), which allows a server to share directories and files with clients over a network. With NFS, users and programs can access files on remote systems as if they were stored locally. NFS has many practical uses. Some of the more common uses include: * Data that would otherwise be duplicated on each client can be kept in a single location and accessed by clients on the network. * Several clients may need access to the [.filename]#/usr/ports/distfiles# directory. Sharing that directory allows for quick access to the source files without having to download them to each client.
* On large networks, it is often more convenient to configure a central NFS server on which all user home directories are stored. Users can log into a client anywhere on the network and have access to their home directories. * Administration of NFS exports is simplified. For example, there is only one file system where security or backup policies must be set. * Removable media storage devices can be used by other machines on the network. This reduces the number of devices throughout the network and provides a centralized location to manage their security. It is often more convenient to install software on multiple machines from centralized installation media. NFS consists of a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running. These daemons must be running on the server: [.informaltable] [cols="1,1", frame="none", options="header"] |=== | Daemon | Description |nfsd |The NFS daemon which services requests from NFS clients. |mountd |The NFS mount daemon which carries out requests received from nfsd. |rpcbind | This daemon allows NFS clients to discover which port the NFS server is using. |=== Running man:nfsiod[8] on the client can improve performance, but is not required. [[network-configuring-nfs]] === Configuring the Server The file systems which the NFS server will share are specified in [.filename]#/etc/exports#. Each line in this file specifies a file system to be exported, which clients have access to that file system, and any access options. When adding entries to this file, each exported file system, its properties, and allowed hosts must occur on a single line. If no clients are listed in the entry, then any client on the network can mount that file system. The following [.filename]#/etc/exports# entries demonstrate how to export file systems.
The examples can be modified to match the file systems and client names on the reader's network. There are many options that can be used in this file, but only a few will be mentioned here. See man:exports[5] for the full list of options. This example shows how to export [.filename]#/cdrom# to three hosts named _alpha_, _bravo_, and _charlie_: [.programlisting] .... /cdrom -ro alpha bravo charlie .... The `-ro` flag makes the file system read-only, preventing clients from making any changes to the exported file system. This example assumes that the host names are either in DNS or in [.filename]#/etc/hosts#. Refer to man:hosts[5] if the network does not have a DNS server. The next example exports [.filename]#/usr/home# to three clients by IP address. This can be useful for networks without DNS or [.filename]#/etc/hosts# entries. The `-alldirs` flag allows subdirectories to be mount points. In other words, it will not automatically mount the subdirectories, but will permit the client to mount the directories that are required as needed. [.programlisting] .... /usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 .... This next example exports [.filename]#/a# so that two clients from different domains may access that file system. The `-maproot=root` allows `root` on the remote system to write data on the exported file system as `root`. If `-maproot=root` is not specified, the client's `root` user will be mapped to the server's `nobody` account and will be subject to the access limitations defined for `nobody`. [.programlisting] .... /a -maproot=root host.example.com box.example.org .... A client can only be specified once per file system. For example, if [.filename]#/usr# is a single file system, these entries would be invalid as both entries specify the same host: [.programlisting] .... # Invalid when /usr is one file system /usr/src client /usr/ports client .... The correct format for this situation is to use one entry: [.programlisting] .... /usr/src /usr/ports client ....
The following is an example of a valid export list, where [.filename]#/usr# and [.filename]#/exports# are local file systems: [.programlisting] .... # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro .... To enable the processes required by the NFS server at boot time, add these options to [.filename]#/etc/rc.conf#: [.programlisting] .... rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" .... The server can be started now by running this command: [source,shell] .... # service nfsd start .... Whenever the NFS server is started, mountd also starts automatically. However, mountd only reads [.filename]#/etc/exports# when it is started. To make subsequent [.filename]#/etc/exports# edits take effect immediately, force mountd to reread it: [source,shell] .... # service mountd reload .... === Configuring the Client To enable NFS clients, set this option in each client's [.filename]#/etc/rc.conf#: [.programlisting] .... nfs_client_enable="YES" .... Then, run this command on each NFS client: [source,shell] .... # service nfsclient start .... The client now has everything it needs to mount a remote file system. In these examples, the server's name is `server` and the client's name is `client`. To mount [.filename]#/home# on `server` to the [.filename]#/mnt# mount point on `client`: [source,shell] .... # mount server:/home /mnt .... The files and directories in [.filename]#/home# will now be available on `client`, in the [.filename]#/mnt# directory. To mount a remote file system each time the client boots, add it to [.filename]#/etc/fstab#: [.programlisting] .... server:/home /mnt nfs rw 0 0 .... Refer to man:fstab[5] for a description of all available options.
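Other man:mount_nfs[8] options can be appended to such an entry. As an illustrative sketch, `bg` retries a failed mount in the background and `noauto` skips the mount at boot so it can be issued manually later; whether these fit depends on the environment:

[.programlisting]
....
server:/home	/mnt	nfs	rw,bg,noauto	0	0
....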
=== Locking Some applications require file locking to operate correctly. To enable locking, add these lines to [.filename]#/etc/rc.conf# on both the client and server: [.programlisting] .... rpc_lockd_enable="YES" rpc_statd_enable="YES" .... Then start the applications: [source,shell] .... # service lockd start # service statd start .... If locking is not required on the server, the NFS client can be configured to lock locally by including `-L` when running mount. Refer to man:mount_nfs[8] for further details. [[network-amd]] === Automating Mounts with man:amd[8] The automatic mounter daemon, amd, automatically mounts a remote file system whenever a file or directory within that file system is accessed. File systems that are inactive for a period of time will be automatically unmounted by amd. This daemon provides an alternative to modifying [.filename]#/etc/fstab# to list every client. It operates by attaching itself as an NFS server to the [.filename]#/host# and [.filename]#/net# directories. When a file is accessed within one of these directories, amd looks up the corresponding remote mount and automatically mounts it. [.filename]#/net# is used to mount an exported file system from an IP address while [.filename]#/host# is used to mount an export from a remote hostname. For instance, an attempt to access a file within [.filename]#/host/foobar/usr# would tell amd to mount the [.filename]#/usr# export on the host `foobar`. .Mounting an Export with amd [example] ==== In this example, `showmount -e` shows the exported file systems that can be mounted from the NFS server, `foobar`: [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /host/foobar/usr .... ==== The output from `showmount` shows [.filename]#/usr# as an export. When changing directories to [.filename]#/host/foobar/usr#, amd intercepts the request and attempts to resolve the hostname `foobar`. If successful, amd automatically mounts the desired export.
To enable amd at boot time, add this line to [.filename]#/etc/rc.conf#: [.programlisting] .... amd_enable="YES" .... To start amd now: [source,shell] .... # service amd start .... Custom flags can be passed to amd from the `amd_flags` environment variable. By default, `amd_flags` is set to: [.programlisting] .... amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" .... The default options with which exports are mounted are defined in [.filename]#/etc/amd.map#. Some of the more advanced features of amd are defined in [.filename]#/etc/amd.conf#. Consult man:amd[8] and man:amd.conf[5] for more information. [[network-autofs]] === Automating Mounts with man:autofs[5] [NOTE] ==== The man:autofs[5] automount facility is supported starting with FreeBSD 10.1-RELEASE. To use the automounter functionality in older versions of FreeBSD, use man:amd[8] instead. This chapter only describes the man:autofs[5] automounter. ==== The man:autofs[5] facility is a common name for several components that, together, allow for automatic mounting of remote and local filesystems whenever a file or directory within that file system is accessed. It consists of the kernel component, man:autofs[5], and several userspace applications: man:automount[8], man:automountd[8] and man:autounmountd[8]. It serves as an alternative to man:amd[8] from previous FreeBSD releases. Amd is still provided for backward compatibility purposes, as the two use different map formats; the one used by autofs is the same as with other SVR4 automounters, such as the ones in Solaris, Mac OS X, and Linux. The man:autofs[5] virtual filesystem is mounted on specified mountpoints by man:automount[8], usually invoked during boot. Whenever a process attempts to access a file within the man:autofs[5] mountpoint, the kernel will notify the man:automountd[8] daemon and pause the triggering process.
The man:automountd[8] daemon will handle kernel requests by finding the proper map and mounting the filesystem according to it, and then signaling the kernel to release the blocked process. The man:autounmountd[8] daemon automatically unmounts automounted filesystems after some time, unless they are still being used. The primary autofs configuration file is [.filename]#/etc/auto_master#. It assigns individual maps to top-level mounts. For an explanation of [.filename]#auto_master# and the map syntax, refer to man:auto_master[5]. There is a special automounter map mounted on [.filename]#/net#. When a file is accessed within this directory, man:autofs[5] looks up the corresponding remote mount and automatically mounts it. For instance, an attempt to access a file within [.filename]#/net/foobar/usr# would tell man:automountd[8] to mount the [.filename]#/usr# export from the host `foobar`. .Mounting an Export with man:autofs[5] [example] ==== In this example, `showmount -e` shows the exported file systems that can be mounted from the NFS server, `foobar`: [source,shell] .... % showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 % cd /net/foobar/usr .... ==== The output from `showmount` shows [.filename]#/usr# as an export. When changing directories to [.filename]#/net/foobar/usr#, man:automountd[8] intercepts the request and attempts to resolve the hostname `foobar`. If successful, man:automountd[8] automatically mounts the desired export. To enable man:autofs[5] at boot time, add this line to [.filename]#/etc/rc.conf#: [.programlisting] .... autofs_enable="YES" .... Then man:autofs[5] can be started by running: [source,shell] .... # service automount start # service automountd start # service autounmountd start .... The man:autofs[5] map format is the same as in other operating systems. Information about this format from other sources can be useful, like the http://web.archive.org/web/20160813071113/http://images.apple.com/business/docs/Autofs.pdf[Mac OS X document].
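As a sketch of that map format, a custom indirect map could be added alongside the special [.filename]#/net# map. The mountpoint [.filename]#/nfs#, the map file name, and the server `foobar` are hypothetical:

[.programlisting]
....
# /etc/auto_master: mount the indirect map below on /nfs
/nfs	/etc/auto.nfs

# /etc/auto.nfs: accessing /nfs/usr mounts the /usr export from foobar
usr	foobar:/usr
....

After editing the maps, running man:automount[8] again updates the mounted man:autofs[5] filesystems.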
Consult the man:automount[8], man:automountd[8], man:autounmountd[8], and man:auto_master[5] manual pages for more information.

[[network-nis]]
== Network Information System (NIS)

Network Information System (NIS) is designed to centralize administration of UNIX(TM)-like systems such as Solaris(TM), HP-UX, AIX(TM), Linux, NetBSD, OpenBSD, and FreeBSD. NIS was originally known as Yellow Pages but the name was changed due to trademark issues. This is the reason why NIS commands begin with `yp`.

NIS is a Remote Procedure Call (RPC)-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location.

FreeBSD uses version 2 of the NIS protocol.

=== NIS Terms and Processes

Table 28.1 summarizes the terms and important processes used by NIS:

.NIS Terminology
[cols="1,1", frame="none", options="header"]
|===
| Term
| Description

|NIS domain name
|NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS.

|man:rpcbind[8]
|This service enables RPC and must be running in order to run an NIS server or act as an NIS client.

|man:ypbind[8]
|This service binds an NIS client to its NIS server. It takes the NIS domain name and uses RPC to connect to the server. It is the core of client/server communication in an NIS environment. If this service is not running on a client machine, it will not be able to access the NIS server.

|man:ypserv[8]
|This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so, hopefully, there is a slave server to take over. Some non-FreeBSD clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients.

|man:rpc.yppasswdd[8]
|This process only runs on NIS master servers.
This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the NIS master server and change their passwords there.
|===

=== Machine Types

There are three types of hosts in an NIS environment:

* NIS master server
+
This server acts as a central repository for host configuration information and maintains the authoritative copy of the files used by all of the NIS clients. The [.filename]#passwd#, [.filename]#group#, and various other files used by NIS clients are stored on the master server.
+
While it is possible for one machine to be an NIS master server for more than one NIS domain, this type of configuration will not be covered in this chapter as it assumes a relatively small-scale NIS environment.
* NIS slave servers
+
NIS slave servers maintain copies of the NIS master's data files in order to provide redundancy. Slave servers also help to balance the load of the master server as NIS clients always attach to the NIS server which responds first.
* NIS clients
+
NIS clients authenticate against the NIS server during log on.

Information in many files can be shared using NIS. The [.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead.

=== Planning Considerations

This section describes a sample NIS environment which consists of 15 FreeBSD machines with no centralized point of administration. Each machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These files are kept in sync with each other only through manual intervention. Currently, when a user is added to the lab, the process must be repeated on all 15 machines.
The configuration of the lab will be as follows:

[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Machine name
| IP address
| Machine role

|`ellington`
|`10.0.0.2`
|NIS master

|`coltrane`
|`10.0.0.3`
|NIS slave

|`basie`
|`10.0.0.4`
|Faculty workstation

|`bird`
|`10.0.0.5`
|Client machine

|`cli[1-11]`
|`10.0.0.[6-17]`
|Other client machines
|===

If this is the first time an NIS scheme is being developed, it should be thoroughly planned ahead of time. Regardless of network size, several decisions need to be made as part of the planning process.

==== Choosing a NIS Domain Name

When a client broadcasts its requests for information, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts.

Some organizations choose to use their Internet domain name for their NIS domain name. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within the network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the "acme-art" NIS domain. This example will use the domain name `test-domain`.

However, some non-FreeBSD operating systems require the NIS domain name to be the same as the Internet domain name. If one or more machines on the network have this restriction, the Internet domain name _must_ be used as the NIS domain name.

==== Physical Server Requirements

There are several things to keep in mind when choosing a machine to use as an NIS server. Since NIS clients depend upon the availability of the server, choose a machine that is not rebooted frequently. The NIS server should ideally be a standalone machine whose sole purpose is to be an NIS server. If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services.
However, if the NIS server becomes unavailable, it will adversely affect all NIS clients.

=== Configuring the NIS Master Server

The canonical copies of all NIS files are stored on the master server. The databases used to store the information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/[domainname]# where [.filename]#[domainname]# is the name of the NIS domain. Since multiple domains are supported, it is possible to have several directories, one for each domain. Each domain will have its own independent set of maps.

NIS master and slave servers handle all NIS requests through man:ypserv[8]. This daemon is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client.

Setting up a master NIS server can be relatively straightforward, depending on environmental needs. Since FreeBSD provides built-in NIS support, it only needs to be enabled by adding the following lines to [.filename]#/etc/rc.conf#:

[.programlisting]
....
nisdomainname="test-domain" <.>
nis_server_enable="YES" <.>
nis_yppasswdd_enable="YES" <.>
....

<.> This line sets the NIS domain name to `test-domain`.
<.> This automates the start up of the NIS server processes when the system boots.
<.> This enables the man:rpc.yppasswdd[8] daemon so that users can change their NIS password from a client machine.

Care must be taken in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it.
Eventually, all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again.

A server that is also a client can be forced to bind to a particular server by adding these additional lines to [.filename]#/etc/rc.conf#:

[.programlisting]
....
nis_client_enable="YES" # run client stuff as well
nis_client_flags="-S NIS domain,server"
....

After saving the edits, type `/etc/netstart` to restart the network and apply the values defined in [.filename]#/etc/rc.conf#. Before initializing the NIS maps, start man:ypserv[8]:

[source,shell]
....
# service ypserv start
....

==== Initializing the NIS Maps

NIS maps are generated from the configuration files in [.filename]#/etc# on the NIS master, with one exception: [.filename]#/etc/master.passwd#. This is to prevent the propagation of passwords to all the servers in the NIS domain. Therefore, before the NIS maps are initialized, configure the primary password files:

[source,shell]
....
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
....

It is advisable to remove all entries for system accounts as well as any user accounts that do not need to be propagated to the NIS clients, such as the `root` and any other administrative accounts.

[NOTE]
====
Ensure that [.filename]#/var/yp/master.passwd# is neither group nor world readable by setting its permissions to `600`.
====

After completing this task, initialize the NIS maps. FreeBSD includes the man:ypinit[8] script to do this. When generating maps for the master server, include `-m` and specify the NIS domain name:

[source,shell]
....
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few
questions.  Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors?
[y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server   :  ellington
next host to add:  coltrane
next host to add:  ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct?  [y/n: y] y

[..output from map generation..]

NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....

This will create [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. By default, this file assumes that the environment has a single NIS server with only FreeBSD clients. Since `test-domain` has a slave server, edit this line in [.filename]#/var/yp/Makefile# so that it begins with a comment (`#`):

[.programlisting]
....
NOPUSH = "True"
....

==== Adding New Users

Every time a new user is created, the user account must be added to the master NIS server and the NIS maps rebuilt. Until this occurs, the new user will not be able to log in anywhere except on the NIS master. For example, to add the new user `jsmith` to the `test-domain` domain, run these commands on the master server:

[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....

The user could also be added using `adduser jsmith` instead of `pw useradd jsmith`.

=== Configuring a NIS Slave Server

To set up an NIS slave server, log on to the slave server and edit [.filename]#/etc/rc.conf# as for the master server. Do not generate any NIS maps, as these already exist on the master server. When running `ypinit` on the slave server, use `-s` (for slave) instead of `-m` (for master). This option requires the name of the NIS master in addition to the domain name, as seen in this example:

[source,shell]
....
coltrane# ypinit -s ellington test-domain

Server Type: SLAVE Domain: test-domain Master: ellington

Creating an YP server will require that you answer a few
questions.  Questions will all be asked at the beginning of the procedure.

Do you want this procedure to quit on non-fatal errors? [y/n: n]  n

Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred
coltrane has been setup as an YP slave server without any errors.
Remember to update map ypservers on ellington.
....

This will generate a directory on the slave server called [.filename]#/var/yp/test-domain# which contains copies of the NIS master server's maps. Adding these [.filename]#/etc/crontab# entries on each slave server will force the slaves to sync their maps with the maps on the master server:

[.programlisting]
....
20      *       *       *       *       root   /usr/libexec/ypxfr passwd.byname
21      *       *       *       *       root   /usr/libexec/ypxfr passwd.byuid
....

These entries are not mandatory because the master server automatically attempts to push any map changes to its slaves. However, since clients may depend upon the slave server to provide correct password information, it is recommended to force frequent password map updates. This is especially important on busy networks where map updates might not always complete.

To finish the configuration, run `/etc/netstart` on the slave server in order to start the NIS services.

=== Configuring NIS Clients

An NIS client binds to an NIS server using man:ypbind[8]. This daemon broadcasts RPC requests on the local network. These requests specify the domain name configured on the client. If an NIS server in the same domain receives one of the broadcasts, it will respond to ypbind, which will record the server's address. If there are several servers available, the client will use the address of the first server to respond and will direct all of its NIS requests to that server. The client will automatically ping the server on a regular basis to make sure it is still available. If it fails to receive a reply within a reasonable amount of time, ypbind will mark the domain as unbound and begin broadcasting again in the hopes of locating another server.

To configure a FreeBSD machine to be an NIS client:

[.procedure]
====
. Edit [.filename]#/etc/rc.conf# and add the following lines in order to set the NIS domain name and start man:ypbind[8] during network startup:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....

. To import all possible password entries from the NIS server, use `vipw` to remove all user accounts except one from [.filename]#/etc/master.passwd#. When removing the accounts, keep in mind that at least one local account should remain and this account should be a member of `wheel`. If there is a problem with NIS, this local account can be used to log in remotely, become the superuser, and fix the problem. Before saving the edits, add the following line to the end of the file:
+
[.programlisting]
....
+:::::::::
....
+
This line configures the client to provide anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure the NIS client by modifying this line. One method is described in <>. For more detailed reading, refer to the book `Managing NFS and NIS`, published by O'Reilly Media.

. To import all possible group entries from the NIS server, add this line to [.filename]#/etc/group#:
+
[.programlisting]
....
+:*::
....
====

To start the NIS client immediately, execute the following commands as the superuser:

[source,shell]
....
# /etc/netstart
# service ypbind start
....

After completing these steps, running `ypcat passwd` on the client should show the server's [.filename]#passwd# map.

=== NIS Security

Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. By default, this information is stored in [.filename]#/var/yp/securenets#, unless man:ypserv[8] is started with `-p` and an alternate path.
This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with `#` are considered to be comments. A sample [.filename]#securenets# might look like this:

[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1     255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0      255.255.240.0
....

If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If [.filename]#securenets# does not exist, `ypserv` will allow connections from any host.

crossref:security[tcpwrappers,TCP Wrapper] is an alternate mechanism for providing access control instead of [.filename]#securenets#. While either access control mechanism adds some security, they are both vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at the firewall.

Servers using [.filename]#securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of [.filename]#securenets#.

The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves.
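As a sketch of that last recommendation, a latency-sensitive client converted into a slave could carry [.filename]#/etc/rc.conf# lines like these, reusing the `test-domain` name from this chapter. The `-S` flag takes the domain name and the server to bind to, here the local host itself:

[.programlisting]
....
nisdomainname="test-domain"
nis_server_enable="YES"                      # serve the maps locally as a slave
nis_client_enable="YES"
nis_client_flags="-S test-domain,localhost"  # bind only to this host
....

The exact server list accepted by `-S` is described in man:ypbind[8]; verify it there before deploying.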
==== Barring Some Users

In this example, the `basie` system is a faculty workstation within the NIS domain. The [.filename]#passwd# map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins.

To prevent specified users from logging on to a system, even if they are present in the NIS database, use `vipw` to add `-_username_` with the correct number of colons towards the end of [.filename]#/etc/master.passwd# on the client, where _username_ is the username of a user to bar from logging in. The line with the blocked user must be before the `+` line that allows NIS users. In this example, `bill` is barred from logging on to `basie`:

[source,shell]
....
basie# cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/shared/man:/usr/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin
-bill:::::::::
+:::::::::

basie#
....

[[network-netgroups]]
=== Using Netgroups

Barring specified users from logging on to individual systems becomes unscalable on larger networks and quickly loses the main benefit of NIS: _centralized_ administration.
Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to UNIX(TM) groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups.

To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in Tables 28.2 and 28.3:

.Additional Users
[cols="1,1", frame="none", options="header"]
|===
| User Name(s)
| Description

|`alpha`, `beta`
|IT department employees

|`charlie`, `delta`
|IT department apprentices

|`echo`, `foxtrott`, `golf`, ...
|employees

|`able`, `baker`, ...
|interns
|===

.Additional Systems
[cols="1,1", frame="none", options="header"]
|===
| Machine Name(s)
| Description

|`war`, `death`, `famine`, `pollution`
|Only IT employees are allowed to log on to these servers.

|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|All members of the IT department are allowed to log on to these servers.

|`one`, `two`, `three`, `four`, ...
|Ordinary workstations used by employees.

|`trashcan`
|A very old machine without any critical data. Even interns are allowed to use this system.
|===

When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines.

The first step is the initialization of the NIS `netgroup` map. In FreeBSD, this map is not created by default. On the NIS master server, use an editor to create a map named [.filename]#/var/yp/netgroup#. This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns:

[.programlisting]
....
IT_EMP  (,alpha,test-domain)    (,beta,test-domain)
IT_APP  (,charlie,test-domain)  (,delta,test-domain)
USERS   (,echo,test-domain)     (,foxtrott,test-domain) \
        (,golf,test-domain)
INTERNS (,able,test-domain)     (,baker,test-domain)
....

Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of parentheses represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each group represent:

. The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup.

If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See man:netgroup[5] for details.

Netgroup names longer than 8 characters should not be used. The names are case sensitive and using capital letters for netgroup names is an easy way to distinguish between user, machine, and netgroup names.

Some non-FreeBSD NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example:

[.programlisting]
....
BIGGRP1  (,joe1,domain)   (,joe2,domain)   (,joe3,domain)  [...]
BIGGRP2  (,joe16,domain)  (,joe17,domain)  [...]
BIGGRP3  (,joe31,domain)  (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3
....

Repeat this process if more than 225 (15 times 15) users exist within a single netgroup.

To activate and distribute the new NIS map:

[source,shell]
....
ellington# cd /var/yp
ellington# make
....

This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost#, and [.filename]#netgroup.byuser#.
Use the map key option of man:ypcat[1] to check if the new NIS maps are available:

[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....

The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command only produces output if host-specific netgroups were created. The third command is used to get the list of netgroups for a user.

To configure a client, use man:vipw[8] to specify the name of the netgroup. For example, on the server named `war`, replace this line:

[.programlisting]
....
+:::::::::
....

with

[.programlisting]
....
+@IT_EMP:::::::::
....

This specifies that only the users defined in the netgroup `IT_EMP` will be imported into this system's password database and only those users are allowed to log in to this system.

This configuration also applies to the `~` function of the shell and all routines which convert between user names and numerical user IDs. In other words, `cd ~_user_` will not work, `ls -l` will show the numerical ID instead of the username, and `find . -user joe -print` will fail with the message `No such user`. To fix this, import all user entries without allowing them to log in to the servers. This can be achieved by adding an extra line:

[.programlisting]
....
+:::::::::/usr/sbin/nologin
....

This line configures the client to import all entries but to replace the shell in those entries with [.filename]#/usr/sbin/nologin#.

Make sure that the extra line is placed _after_ `+@IT_EMP:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/usr/sbin/nologin# as their login shell and no one will be able to log in to the system.

To configure the less important servers, replace the old `+:::::::::` on the servers with these lines:

[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/usr/sbin/nologin
....

The corresponding lines for the workstations would be:

[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/usr/sbin/nologin
....

NIS supports the creation of netgroups from other netgroups which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups. For example, one might create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers, and a third netgroup called `USERBOX` for the workstations. Each of these netgroups contains the netgroups that are allowed to log on to these machines. The new entries for the NIS `netgroup` map would look like this:

[.programlisting]
....
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP  ITINTERN
USERBOX   IT_EMP  ITINTERN USERS
....

This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required.

Machine-specific netgroup definitions are another possibility to deal with the policy changes. In this scenario, the [.filename]#/etc/master.passwd# of each system contains two lines starting with "+". The first line adds a netgroup with the accounts allowed to log in to this machine and the second line adds all other accounts with [.filename]#/usr/sbin/nologin# as shell. It is recommended to use the "ALL-CAPS" version of the hostname as the name of the netgroup:

[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/usr/sbin/nologin
....

Once this task is completed on all the machines, there is no longer a need to modify the local versions of [.filename]#/etc/master.passwd# ever again. All further changes can be handled by modifying the NIS map. Here is an example of a possible `netgroup` map for this scenario:

[.programlisting]
....
# Define groups of users first
IT_EMP    (,alpha,test-domain)    (,beta,test-domain)
IT_APP    (,charlie,test-domain)  (,delta,test-domain)
DEPT1     (,echo,test-domain)     (,foxtrott,test-domain)
DEPT2     (,golf,test-domain)     (,hotel,test-domain)
DEPT3     (,india,test-domain)    (,juliet,test-domain)
ITINTERN  (,kilo,test-domain)     (,lima,test-domain)
D_INTERNS (,able,test-domain)     (,baker,test-domain)
#
# Now, define some groups based on roles
USERS     DEPT1   DEPT2     DEPT3
BIGSRV    IT_EMP  IT_APP
SMALLSRV  IT_EMP  IT_APP    ITINTERN
USERBOX   IT_EMP  ITINTERN  USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus machine
SECURITY  IT_EMP  (,echo,test-domain)  (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR       BIGSRV
FAMINE    BIGSRV
# User india needs access to this server
POLLUTION BIGSRV  (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH     IT_EMP
#
# The anti-virus machine mentioned above
ONE       SECURITY
#
# Restrict a machine to a single user
TWO       (,hotel,test-domain)
# [...more groups to follow]
....

It may not always be advisable to use machine-based netgroups. When deploying a couple of dozen or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.

=== Password Formats

NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard.

To check which format a server or client is using, look at this section of [.filename]#/etc/login.conf#:

[.programlisting]
....
default:\
	:passwd_format=des:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

In this example, the system is using the DES format. Other possible values are `blf` for Blowfish and `md5` for MD5 encrypted passwords.
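As a sketch, switching a host to the Blowfish format would mean changing only the `passwd_format` capability in that `default` entry, leaving the surrounding entries untouched:

[.programlisting]
....
default:\
	:passwd_format=blf:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....

The accepted values for `passwd_format` are documented in man:login.conf[5].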
If the format on a host needs to be edited to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change:

[source,shell]
....
# cap_mkdb /etc/login.conf
....

[NOTE]
====
The format of passwords for existing user accounts will not be updated until each user changes their password _after_ the login capability database is rebuilt.
====

[[network-ldap]]
== Lightweight Directory Access Protocol (LDAP)

The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users to access several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base.

This section provides a quick start guide for configuring an LDAP server on a FreeBSD system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access.

=== LDAP Terminology and Structure

LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of _attributes_. Each of these attribute sets contains a unique identifier known as a _Distinguished Name_ (DN) which is normally built from several other attributes such as the common or _Relative Distinguished Name_ (RDN). Similar to how directories have absolute and relative paths, consider a DN as an absolute path and the RDN as the relative path.

An example LDAP entry looks like the following. This example searches for the entry for the specified user account (`uid`), organizational unit (`ou`), and organization (`o`):

[source,shell]
....
% ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base <uid=trhodes,ou=users,o=example.com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
....

This example entry shows the values for the `dn`, `mail`, `cn`, `uid`, and `telephoneNumber` attributes. The `cn` attribute is the RDN.

More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/intro.html].

[[ldap-config]]
=== Configuring an LDAP Server

FreeBSD does not provide a built-in LDAP server. Begin the configuration by installing the package:net/openldap-server[] package or port:

[source,shell]
....
# pkg install openldap-server
....
在extref:{linux-users}[套件, software]中已開啟了許多的預設選項，可以透過執行 `pkg info openldap-server` 來查看已開啟的選項，若有不足的地方 (例如需要開啟 SQL 的支援)，請考慮使用適當的crossref:ports[ports-using,方式]重新編譯該 Port。

安裝程序會建立目錄 [.filename]#/var/db/openldap-data# 來儲存資料，同時需要建立儲存憑證的目錄:

[source,shell]
....
# mkdir /usr/local/etc/openldap/private
....

接下來是設定憑証機構 (Certificate authority)。以下指令必須在 [.filename]#/usr/local/etc/openldap/private# 下執行，這很重要是由於檔案權限須要被限制且其他使用者不應有這些檔案的存取權限，更多有關憑証的詳細資訊以及相關的參數可在 crossref:security[openssl,OpenSSL] 中找到。要建立憑証授權，需先輸入這個指令並依提示操作:

[source,shell]
....
# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt
....

提示輸入的項目__除了__通用名稱 (`Common Name`) 外其他是可以一樣的，這個項目必須使用跟系統主機名稱 _不同_ 的名稱。若這是一個自行簽署的憑証 (Self signed certificate)，則在憑証機構 `CA` 的前面加上主機名稱。

接下來的工作是建立一個伺服器的憑証簽署請求與一個私鑰。請輸入以下指令然後依提示操作:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout server.key -out server.csr
....

在憑証產生程序的過程中請確認 `Common Name` 屬性設定正確。憑証簽署請求 (Certificate Signing Request) 必須經過憑証機構簽署後才會成為有效的憑証:

[source,shell]
....
# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial
....

在憑証產生程序的最後一步是產生並簽署客戶端憑証:

[source,shell]
....
# openssl req -days 365 -nodes -new -keyout client.key -out client.csr
# openssl x509 -req -days 365 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key
....

記得當提示時要使用同樣的 `Common Name` 屬性。完成之後，請確認執行的指令產生了 8 個新檔案。

OpenLDAP 伺服器所執行的 Daemon 為 [.filename]#slapd#，OpenLDAP 是透過 [.filename]#slapd.ldif# 來做設定，OpenLDAP 官方已停止採用舊的 [.filename]#slapd.conf# 格式。

這裡有些 [.filename]#slapd.ldif# 的 http://www.openldap.org/doc/admin24/slapdconf2.html[設定檔範例] 可以使用，同時您也可以在 [.filename]#/usr/local/etc/openldap/slapd.ldif.sample# 找到範例資訊。相關可用的選項在 slapd-config(5) 文件會有說明。[.filename]#slapd.ldif# 的每個段落，如同其他 LDAP 屬性設定一樣會透過獨一無二 DN 來辨識，並請確保 `dn:` 描述與其相關屬性之間沒有空行。以下的範例中會實作一個使用 TLS 的安全通道，首先是全域的設定:

[.programlisting]
....
#
# See slapd-config(5) for details on configuration options.
# This file should NOT be world readable.
# dn: cn=config objectClass: olcGlobal cn: config # # # Define global ACLs to disable default read access. # olcArgsFile: /var/run/openldap/slapd.args olcPidFile: /var/run/openldap/slapd.pid olcTLSCertificateFile: /usr/local/etc/openldap/server.crt olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt #olcTLSCipherSuite: HIGH olcTLSProtocolMin: 3.1 olcTLSVerifyClient: never .... 這個檔案中必須指定憑証機構 (Certificate Authority)、伺服器憑証 (Server Certificate) 與伺服器私鑰 (Server Private Key),建議可讓客戶端決定使用的安全密碼 (Security Cipher),略過 `olcTLSCipherSuite` 選項 (此選項不相容 [.filename]#openssl# 以外的 TLS 客戶端)。選項 `olcTLSProtocolMin` 讓伺服器可要求一個安全等級的最低限度,建議使用。伺服器有進行驗証的必要,但客戶端並不需要,因此可設定 `olcTLSVerifyClient: never`。 第二個部份是設定後端要採用的模組有那些,可使用以下方式設定: [.programlisting] .... # # Load dynamic backend modules: # dn: cn=module,cn=config objectClass: olcModuleList cn: module olcModulepath: /usr/local/libexec/openldap olcModuleload: back_mdb.la #olcModuleload: back_bdb.la #olcModuleload: back_hdb.la #olcModuleload: back_ldap.la #olcModuleload: back_passwd.la #olcModuleload: back_shell.la .... 第三個部份要載入資料庫所需的 `ldif` 綱要 (Schema),這個動作是必要的。 [.programlisting] .... dn: cn=schema,cn=config objectClass: olcSchemaConfig cn: schema include: file:///usr/local/etc/openldap/schema/core.ldif include: file:///usr/local/etc/openldap/schema/cosine.ldif include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif include: file:///usr/local/etc/openldap/schema/nis.ldif .... 接下來是前端設定的部份: [.programlisting] .... 
# Frontend settings # dn: olcDatabase={-1}frontend,cn=config objectClass: olcDatabaseConfig objectClass: olcFrontendConfig olcDatabase: {-1}frontend olcAccess: to * by * read # # Sample global access control policy: # Root DSE: allow anyone to read it # Subschema (sub)entry DSE: allow anyone to read it # Other DSEs: # Allow self write access # Allow authenticated users read access # Allow anonymous users to authenticate # #olcAccess: to dn.base="" by * read #olcAccess: to dn.base="cn=Subschema" by * read #olcAccess: to * # by self write # by users read # by anonymous auth # # if no access controls are present, the default policy # allows anyone and everyone to read anything but restricts # updates to rootdn. (e.g., "access to * by * read") # # rootdn can always read and write EVERYTHING! # olcPasswordHash: {SSHA} # {SSHA} is already the default for olcPasswordHash .... 再來是__設定後端__的部份,之後唯一能夠存取 OpenLDAP 伺服器設定的方式是使用全域超級使用者。 [.programlisting] .... dn: olcDatabase={0}config,cn=config objectClass: olcDatabaseConfig olcDatabase: {0}config olcAccess: to * by * none olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U .... 預設的管理者使用者名稱是 `cn=config`,可在 Shell 中輸入 [.filename]#slappasswd#,決定要使用的密碼並將其產生的編碼放到 `olcRootPW` 欄位中。若這個選項在這時沒有設定好,在匯入 [.filename]#slapd.ldif# 之後將沒有任何人有辦法修改__全域的設定__。 最後一個部份是有關資料庫後端的設定: [.programlisting] .... ####################################################################### # LMDB database definitions ####################################################################### # dn: olcDatabase=mdb,cn=config objectClass: olcDatabaseConfig objectClass: olcMdbConfig olcDatabase: mdb olcDbMaxSize: 1073741824 olcSuffix: dc=domain,dc=example olcRootDN: cn=mdbadmin,dc=domain,dc=example # Cleartext passwords, especially for the rootdn, should # be avoided. See slappasswd(8) and slapd-config(5) for details. # Use of strong authentication encouraged. 
olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+ # The database directory MUST exist prior to running slapd AND # should only be accessible by the slapd and slap tools. # Mode 700 recommended. olcDbDirectory: /var/db/openldap-data # Indices to maintain olcDbIndex: objectClass eq .... 這裡指定的資料庫即__實際用來保存__LDAP 目錄的資料,也可以使用 `mdb` 以外的項目,資料庫的超級使用者可在這裡設定 (與全域的超級使用者是不同的東西):`olcRootDN` 需填寫使用者名稱 (可自訂),`olcRootPW` 需填寫該使用者編碼後的密碼,將密碼編碼可使用 [.filename]#slappasswd# 如同前面所述。 這裡有個link:http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;hb=HEAD[檔案庫]內有四個 [.filename]#slapd.ldif# 的範例,要將現有的 [.filename]#slapd.conf# 轉換成 [.filename]#slapd.ldif# 格式,可參考link:http://www.openldap.org/doc/admin24/slapdconf2.html[此頁] (注意,這裡面的說明也會介紹一些不常用的選項)。 當設定完成之後,需將 [.filename]#slapd.ldif# 放在一個空的目錄當中,建議如以下方式建立: [source,shell] .... # mkdir /usr/local/etc/openldap/slapd.d/ .... 匯入設定資料庫: [source,shell] .... # /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif .... 啟動 [.filename]#slapd# Daemon: [source,shell] .... # /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/ .... 選項 `-d` 可以用來除錯使用,如同 slapd(8) 中所說明的,若要檢驗伺服器是否正常執行與運作可以: [source,shell] .... # ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts # extended LDIF # # LDAPv3 # base <> with scope baseObject # filter: (objectclass=*) # requesting: namingContexts # # dn: namingContexts: dc=domain,dc=example # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 .... 伺服器端仍必須受到信任,若在此之前未做過這個動作,請依照以下指示操作。安裝 OpenSSL 套件或 Port: [source,shell] .... # pkg install openssl .... 進入 [.filename]#ca.crt# 所在的目錄 (以這邊使用的例子來說則是 [.filename]#/usr/local/etc/openldap#),執行: [source,shell] .... # c_rehash . .... 現在 CA 與伺服器憑証可以依其用途被辨識,可進入 [.filename]#server.crt# 所在的目錄執行以下指令來檢查: [source,shell] .... # openssl verify -verbose -CApath . server.crt .... 
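完成上述憑証信任設定後，可以從客戶端實際測試 TLS 連線是否正常。以下是一個假設性的檢查範例 (其中 `TLS_CACERT` 的路徑是依照前面範例使用的目錄所做的假設)：先讓 OpenLDAP 客戶端信任該 CA 憑証，再以 `-ZZ` 參數強制要求 StartTLS 交握成功：

```shell
# 將 CA 憑証設定給 OpenLDAP 客戶端 (此路徑為依前例所做的假設)
echo 'TLS_CACERT /usr/local/etc/openldap/ca.crt' >> /usr/local/etc/openldap/ldap.conf

# -x 使用簡單認証，-ZZ 要求 StartTLS 必須成功，否則指令會以錯誤結束
ldapwhoami -x -ZZ -H ldap://localhost
```

若 TLS 交握或憑証驗證失敗，`ldapwhoami` 會回報錯誤訊息，可據此檢查憑証與 [.filename]#ldap.conf# 的設定。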
若 [.filename]#slapd# 已正在執行，就重新啟動它。如同 [.filename]#/usr/local/etc/rc.d/slapd# 所述，要讓 [.filename]#slapd# 開機時可正常執行，須要加入以下行到 [.filename]#/etc/rc.conf#:

[.programlisting]
....
slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"
slapd_cn_config="YES"
....

開機啟動 [.filename]#slapd# 並不會提供除錯的功能，您可以檢查 [.filename]#/var/log/debug.log#, [.filename]#dmesg -a# 及 [.filename]#/var/log/messages# 來確認是否有正常運作。

以下範例會新增群組 `team` 及使用者 `john` 到 `domain.example` LDAP 資料庫，而該資料庫目前是空的。首先要先建立 [.filename]#domain.ldif# 檔:

[source,shell]
....
# cat domain.ldif
dn: dc=domain,dc=example
objectClass: dcObject
objectClass: organization
o: domain.example
dc: domain

dn: ou=groups,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: groups

dn: ou=users,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: users

dn: cn=team,ou=groups,dc=domain,dc=example
objectClass: top
objectClass: posixGroup
cn: team
gidNumber: 10001

dn: uid=john,ou=users,dc=domain,dc=example
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: John McUser
uid: john
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/john/
loginShell: /usr/bin/bash
userPassword: secret
....

請查看 OpenLDAP 說明文件取得更詳細的資訊，使用 [.filename]#slappasswd# 來將純文字的密碼 `secret` 更改為已編碼的型式來填寫 `userPassword` 欄位。在 `loginShell` 所指定的路徑，必須在所有可讓 `john` 登入的系統中存在。最後是使用 `mdb` 管理者修改資料庫:

[source,shell]
....
# ldapadd -W -D "cn=mdbadmin,dc=domain,dc=example" -f domain.ldif
....

要修改__全域設定__只能使用全域的超級使用者。例如，假設一開始採用了 `olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` 選項，但最後想要把它移除，可以建立一個有以下內容的檔案:

[source,shell]
....
# cat global_mod
dn: cn=config
changetype: modify
delete: olcTLSCipherSuite
....

然後套用修改內容:

[source,shell]
....
# ldapmodify -f global_mod -x -D "cn=config" -W
....
當提示輸入密碼時,提供當時在__設定後端__一節所設定的密碼,在這裡無須填寫使用者名稱,`cn=config` 代表要修改資料庫資料的位置。也可以使用 `ldapmodify` 刪除其中一行屬性,或是 `ldapdelete` 刪除整筆資料。 若有問題無法正常執行,或是全域的超級使用者無法存取後端的設定,可以刪除並重建整個後端設定: [source,shell] .... # rm -rf /usr/local/etc/openldap/slapd.d/ .... 可以修改 [.filename]#slapd.ldif# 後再重新匯入一次。請注意,這個步驟只在沒有其他方式可用時才使用。 本章節的設定說明只針對伺服器端的部份,在同一台主機中也可以同時有安裝 LDAP 客戶端但需要額外做設定。 [[network-dhcp]] == 動態主機設置協定 (DHCP) 動態主機設置協定 (Dynamic Host Configuration Protocol, DHCP) 可分配必要的位置資訊給一個連線到網路的系統以在該網路通訊。FreeBSD 內含 OpenBSD 版本的 `dhclient`,可用來做為客戶端來取得位置資訊。FreeBSD 預設並不會安裝 DHCP 伺服器,但在 FreeBSD Port 套件集中有許多可用的伺服器。有關 DHCP 通訊協定的完整說明位於 http://www.freesoft.org/CIE/RFC/2131/[RFC 2131],相關資源也可至 http://www.isc.org/downloads/dhcp/[isc.org/downloads/dhcp/] 取得。 本節將介紹如何使用內建的 DHCP 客戶端,接著會介紹如何安裝並設定一個 DHCP 伺服器。 [NOTE] ==== 在 FreeBSD 中,man:bpf[4] 裝置同時會被 DHCP 伺服器與 DHCP 客戶端所使用。這個裝置會在 [.filename]#GENERIC# 核心中被引用並隨著 FreeBSD 安裝。想要建立自訂核心的使用者若要使用 DHCP 則須保留這個裝置。 另外要注意 [.filename]#bpf# 也會讓有權限的使用者在該系統上可執行網路封包監聽程式。 ==== === 設定 DHCP 客戶端 DHCP 客戶端內含在 FreeBSD 安裝程式當中,這讓在新安裝的系統上設定自動從 DHCP 伺服器接收網路位置資訊變的更簡單。請參考 crossref:bsdinstall[bsdinstall-post,安裝後注意事項] 取得網路設置的範例。 當 `dhclient` 在客戶端機器上執行時,它便會開始廣播請求取得設置資訊。預設這些請求會使用 UDP 埠號 68。而伺服器則會在 UDP 埠號 67 來回覆,將 IP 位址與其他相關的網路資訊,如:子網路遮罩、預設閘道及 DNS 伺服器位址告訴客戶端,詳細的清單可在 man:dhcp-options[5] 找到。 預設當 FreeBSD 系統開機時,其 DHCP 客戶端會在背景執行或稱非同步 (_Asynchronously_) 執行,在完成 DHCP 程序的同時其他啟動 Script 會繼續執行,來加速系統啟動。 背景 DHCP 在 DHCP 伺服器可以快速的回應客戶端請求時可運作的很好。然而 DHCP 在某些系統可能需要較長的時間才能完成,若網路服務嘗試在 DHCP 尚未分配網路位置資訊前執行則會失敗。使用同步 (_Synchronous_) 模式執行 DHCP 可避免這個問題,因為同步模式會暫停啟動直到 DHCP 已設置完成。 在 [.filename]#/etc/rc.conf# 中的這行用來設定採用背景 (非同步模式): [.programlisting] .... ifconfig_fxp0="DHCP" .... 若系統已經在安裝時設定使用 DHCP,這行可能會已存在。替換在例子中的 _fxp0_ 為實際要動態設置的網路介面名稱,如 crossref:config[config-network-setup,設定網路介面卡] 中的說明。 要改設定系統採用同步模式,在啟動時暫停等候 DHCP 完成,使用 "`SYNCDHCP`": [.programlisting] .... ifconfig_fxp0="SYNCDHCP" .... 
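修改 [.filename]#/etc/rc.conf# 中的 DHCP 設定後，不需重新開機即可套用變更。以下是一個假設性的例子 (介面名稱 `fxp0` 沿用上面的範例)，透過 rc Script 重新啟動該介面上的 DHCP 客戶端：

```shell
# 重新啟動 fxp0 介面上的 dhclient，會重新向 DHCP 伺服器取得租約
service dhclient restart fxp0
```
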
尚有其他可用的客戶端選項,請在 man:rc.conf[5] 搜尋 `dhclient` 來取得詳細資訊。 DHCP 客戶端會使用到以下檔案: * [.filename]#/etc/dhclient.conf# + `dhclient` 用到的設定檔。通常這個檔案只會有註解,因為預設便適用大多數客戶端。這個設定檔在 man:dhclient.conf[5] 中有說明。 * [.filename]#/sbin/dhclient# + 有關指令本身的更多資訊可於 man:dhclient[8] 找到。 * [.filename]#/sbin/dhclient-script# + FreeBSD 特定的 DHCP 客戶端設定 Script。在 man:dhclient-script[8] 中有說明,但應不須做任何修改便可正常運作。 * [.filename]#/var/db/dhclient.leases.interface# + DHCP 客戶端會在這個檔案中儲存有效租約的資料,寫入的格式類似日誌,在 man:dhclient.leases[5] 有說明。 [[network-dhcp-server]] === 安裝並設定 DHCP 伺服器 本節將示範如何設定 FreeBSD 系統成為 DHCP 伺服器,使用 Internet Systems Consortium (ISC) 所實作的 DHCP 伺服器,這個伺服器及其文件可使用 package:net/isc-dhcp44-server[] 套件或 Port 安裝。 package:net/isc-dhcp44-server[] 的安裝程式會安裝一份範例設定檔,複製 [.filename]#/usr/local/etc/dhcpd.conf.example# 到 [.filename]#/usr/local/etc/dhcpd.conf# 並在這個新檔案做編輯。 這個設定檔內容包括了子網路及主機的宣告,用來定義要提供給 DHCP 客戶端的資訊。如以下行設定: [.programlisting] .... option domain-name "example.org";<.> option domain-name-servers ns1.example.org;<.> option subnet-mask 255.255.255.0;<.> default-lease-time 600;<.> max-lease-time 72400;<.> ddns-update-style none;<.> subnet 10.254.239.0 netmask 255.255.255.224 { range 10.254.239.10 10.254.239.20;<.> option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.> } host fantasia { hardware ethernet 08:00:07:26:c0:a5;<.> fixed-address fantasia.fugue.com;<.> } .... 
<.> 這個選項指定了要提供給客戶端的預設搜尋網域。請參考 man:resolv.conf[5] 取得更多資訊。 <.> 這個選項指定了客戶端應使用的 DNS 伺服器清單 (以逗號分隔)。如範例中所示,可使用伺服器的完整網域名稱 (Fully Qualified Domain Names, FQDN) 或伺服器的 IP 位址。 <.> 要提供給客戶端的子網路遮罩。 <.> 預設租約到期時間 (秒)。客戶端可以自行設定覆蓋這個數值。 <.> 一個租約最多允許的時間長度 (秒)。若客戶端請求更長的租約,仍會發出租約,但最多只會在 `max-lease-time` 內有效。 <.> 預設的 `none` 會關閉動態 DNS 更新。更改此值為 `interim` 可讓 DHCP 伺服器每當發出一個租約便通知 DNS 伺服器更新,如此一來 DNS 伺服器便知道網路中該電腦的 IP 位址。不要更改此預設值,除非 DNS 伺服器已設定為支援動態 DNS。 <.> 此行會建立一個可用 IP 位址的儲存池來保留這些要分配給 DHCP 客戶端的位址。位址範圍必須在前一行所指定的網路或子網路中有效。 <.> 宣告在開始的 `{` 括號之前所指定的網路或子網路中有效的預設通訊閘。 <.> 指定客戶端的硬體 MAC 位址,好讓 DHCP 伺服器在客戶端發出請求時可以辨識客戶端。 <.> 指定這個主機應分配相同的 IP 位址。在此處用主機名稱是正確的,由於 DHCP 伺服器會在回傳租約資訊前先解析主機名稱。 此設定檔還支援其他選項,請參考隨伺服器一併安裝的 dhcpd.conf(5) 來取得詳細資訊與範例。 完成 [.filename]#dhcpd.conf# 的設定之後,在 [.filename]#/etc/rc.conf# 啟動 DHCP 伺服器: [.programlisting] .... dhcpd_enable="YES" dhcpd_ifaces="dc0" .... 替換 `dc0` 為 DHCP 伺服器要傾聽 DHCP 客戶端請求的網路介面 (多個介面可以空白分隔)。 執行以下指令來啟動伺服器: [source,shell] .... # service isc-dhcpd start .... 往後任何對伺服器設定的變更會需要使用 man:service[8] 中止 dhcpd 服務然後啟動。 DHCP 伺服器會使用到以下檔案。注意,操作手冊會與伺服器軟體一同安裝。 * [.filename]#/usr/local/sbin/dhcpd# + 更多有關 dhcpd 伺服器的資訊可在 dhcpd(8) 找到。 * [.filename]#/usr/local/etc/dhcpd.conf# + 伺服器設定檔需要含有所有要提供給客戶端的資訊以及有關伺服器運作的資訊。在 dhcpd.conf(5) 有此設定檔的說明。 * [.filename]#/var/db/dhcpd.leases# + DHCP 伺服器會儲存一份已發出租約的資料於這個檔案,寫入的格式類似日誌。參考 dhcpd.leases(5) 會有更完整的說明。 * [.filename]#/usr/local/sbin/dhcrelay# + 這個 Daemon 會用在更進階的環境中,在一個 DHCP 伺服器要轉發來自客戶端的請求到另一個網路的另一個 DHCP 伺服器的環境。若需要使用此功能,請安裝 package:net/isc-dhcp44-relay[] 套件或 Port,安裝會包含 dhcrelay(8),裡面有提供更詳細的資訊。 [[network-dns]] == 網域名稱系統 (DNS) 網域名稱系統 (Domain Name System, DNS) 是一種協定用來轉換網域名稱為 IP 位址,反之亦然。DNS 會協調網際網路上有權的根節點 (Authoritative root)、最上層網域 (Top Level Domain, TLD) 及其他小規模名稱伺服器來取得結果,而這些伺服器可管理與快取個自的網域資訊。要在系統上做 DNS 查詢並不需要架設一個名稱伺服器。 以下表格會說明一些與 DNS 有關的術語: .DNS 術語 [cols="1,1", frame="none", options="header"] |=== | 術語 | 定義 |正向 DNS (Forward DNS) |將主機名稱對應 IP 位址的動作。 |源頭 (Origin) |代表某個轄區檔案中所涵蓋的網域。 |解析器 (Resolver) |主機向名稱伺服器查詢轄區資訊的系統程序。 |反向 DNS (Reverse DNS) |將 IP 對應主機名稱的動作。 |根轄區 (Root 
zone)
|網際網路轄區階層的最開始，所有的轄區會在根轄區之下，類似在檔案系統中所有的檔案會在根目錄底下。

|轄區 (Zone)
|獨立的網域、子網域或由相同授權 (Authority) 管理的部分 DNS。
|===

轄區範例:

* `.` 是一般在文件中表達根轄區的方式。
* `org.` 是一個在根轄區底下的最上層網域 (Top Level Domain, TLD)。
* `example.org.` 是一個在 `org.` TLD 底下的轄區。
* `1.168.192.in-addr.arpa` 是一個轄區用來代表所有在 `192.168.1.*` IP 位址空間底下的 IP 位址。

如您所見，更詳細的主機名稱會加在左方，例如 `example.org.` 比 `org.` 更具體，如同 `org.` 比根轄區更具體，主機名稱每一部份的架構很像檔案系統:[.filename]#/dev# 目錄在根目錄底下，以此類推。

=== 要架設名稱伺服器的原因

名稱伺服器通常有兩種形式:有權的 (Authoritative) 名稱伺服器與快取 (或稱解析) 名稱伺服器。

以下情況會需要一台有權的名稱伺服器:

* 想要提供 DNS 資訊給全世界，做為官方回覆查詢。
* 已經註冊了一個網域，例如 `example.org`，且要將 IP 位址分配到主機名稱下。
* 一段 IP 位址範圍需要反向 DNS 項目 (IP 轉主機名稱)。
* 要有一台備援或次要名稱伺服器用來回覆查詢。

以下情況會需要一台快取名稱伺服器:

* 比起查詢外部的名稱伺服器，本地 DNS 伺服器可以快取並更快的回應。

當查詢 `www.FreeBSD.org` 時，解析程式通常會查詢上游 ISP 的名稱伺服器然後接收其回覆，使用本地、快取 DNS 伺服器，只需要由快取 DNS 伺服器對外部做一次查詢，其他的查詢則不需要再向區域網路之外查詢，因為這些資訊已經在本地被快取了。

=== DNS 伺服器設定

Unbound 由 FreeBSD 基礎系統提供，預設只會提供本機的 DNS 解析，雖然基礎系統的套件可被設定提供本機以外的解析服務，但要解決這樣的需求仍建議安裝 FreeBSD Port 套件集中的 Unbound。

要開啟 Unbound 可加入下行到 [.filename]#/etc/rc.conf#:

[.programlisting]
....
local_unbound_enable="YES"
....

任何已存在於 [.filename]#/etc/resolv.conf# 中的名稱伺服器會在新的 Unbound 設定中被設為轉發者 (Forwarder)。

[NOTE]
====
若任一個列在清單中的名稱伺服器不支援 DNSSEC，則本地的 DNS 解析便會失敗，請確認有測試每一台名稱伺服器並移除所有測試失敗的項目。以下指令會顯示出信任樹，或顯示在 `192.168.1.1` 上執行的名稱伺服器驗證失敗:
====

[source,shell]
....
% drill -S FreeBSD.org @192.168.1.1
....

確認完每一台名稱伺服器都支援 DNSSEC 後啟動 Unbound:

[source,shell]
....
# service local_unbound onestart
....

這將會更新 [.filename]#/etc/resolv.conf# 來讓查詢已用 DNSSEC 確保安全的網域現在可以運作，例如，執行以下指令來檢驗 FreeBSD.org DNSSEC 信任樹:

[source,shell]
....
% drill -S FreeBSD.org
;; Number of trusted keys: 1
;; Chasing: freebsd.org. A

DNSSEC Trust tree:
freebsd.org. (A)
|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)
    |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)
    |---freebsd.org. (DS keytag: 32659 digest type: 2)
        |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)
            |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)
            |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)
            |---org.
(DS keytag: 21366 digest type: 1)
            |   |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
            |   |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
            |---org. (DS keytag: 21366 digest type: 2)
                |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
                |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
;; Chase successful
....

[[network-apache]]
== Apache HTTP 伺服器

開放源碼的 Apache HTTP Server 是目前最廣泛被使用的網頁伺服器，FreeBSD 預設並不會安裝這個網頁伺服器，但可從 package:www/apache24[] 套件或 Port 安裝。

本節將會摘要如何設定並啟動在 FreeBSD 上 2._x_ 版的 Apache HTTP Server，要取得有關 Apache 更詳細的資訊及其設定項目請參考 http://httpd.apache.org/[httpd.apache.org]。

=== 設定並啟動 Apache

在 FreeBSD 中，主 Apache HTTP Server 設定檔會安裝於 [.filename]#/usr/local/etc/apache2x/httpd.conf#，其中 _x_ 代表版號，這份 ASCII 文字檔中以 `#` 做為行首的是註解，而最常需修改的項目有:

`ServerRoot "/usr/local"`::
指定該 Apache 的預設安裝路徑，Binary 檔會儲存在伺服器根目錄 (Server root) 下的 [.filename]#bin# 與 [.filename]#sbin# 子目錄，而設定檔會儲存在 [.filename]#etc/apache2x# 子目錄。

`ServerAdmin you@example.com`::
更改此項目為您要接收問題回報的電子郵件位址，這個位址也會顯示在一些伺服器產生的頁面上，如:錯誤頁面。

`ServerName www.example.com:80`::
讓管理者可以設定伺服器要回傳給客戶端的主機名稱 (Hostname)，例如，`www` 可以更改為實際的主機名稱，若系統並未有註冊的 DNS 名稱，則可改輸入其 IP 位址，若伺服器需要傾聽其他埠號，可更改 `80` 為其他埠號。

`DocumentRoot "/usr/local/www/apache2__x__/data"`::
提供文件的目錄，預設所有的請求均會到此目錄，但可以使用符號連結與別名來指向其他地方。

在對 Apache 設定檔做變更之前，建議先做備份，在 Apache 設定完成之後，儲存該檔案並使用 `apachectl` 檢驗設定，執行 `apachectl configtest` 的結果應回傳 `Syntax OK`。

要在系統啟動時執行 Apache，可加入下行到 [.filename]#/etc/rc.conf#:

[.programlisting]
....
apache24_enable="YES"
....

若 Apache 要使用非預設的選項啟動，可加入下行到 [.filename]#/etc/rc.conf# 來指定所需的旗標參數:

[.programlisting]
....
apache24_flags=""
....

若 apachectl 未回報設定錯誤，則可啟動 `httpd`:

[source,shell]
....
# service apache24 start
....

`httpd` 服務可以透過在網頁瀏覽器中輸入 `http://_localhost_` 來測試，將 _localhost_ 更改為執行 `httpd` 那台主機的完整網域名稱 (Fully-qualified domain name)。預設會顯示的網頁為 [.filename]#/usr/local/www/apache24/data/index.html#。

後續若有在 `httpd` 執行中時修改 Apache 設定檔可使用以下指令來測試是否有誤:

[source,shell]
....
# service apache24 configtest
....
[NOTE]
====
注意，`configtest` 並非採用 man:rc[8] 標準，不應預期其可在所有的啟動 Script 中正常運作。
====

=== 虛擬主機

虛擬主機允許在一個 Apache 伺服器執行多個網站，虛擬主機可以是以 IP 為主 (_IP-based_) 或以名稱為主 (_name-based_)。以 IP 為主的虛擬主機中的每一個網站要使用不同的 IP 位址。以名稱為主的虛擬主機會使用客戶端的 HTTP/1.1 標頭來判斷主機名稱，這可讓不同的網站共用相同的 IP 位址。

要設定 Apache 使用以名稱為主的虛擬主機可在每一個網站加入 `VirtualHost` 區塊，例如，有一個名稱為 `www.domain.tld` 的主機擁有一個 `www.someotherdomain.tld` 的虛擬網域，可加入以下項目到 [.filename]#httpd.conf#:

[.programlisting]
....
<VirtualHost *>
    ServerName www.domain.tld
    DocumentRoot /www/domain.tld
</VirtualHost>

<VirtualHost *>
    ServerName www.someotherdomain.tld
    DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....

每一個虛擬主機均需更改其 `ServerName` 與 `DocumentRoot` 的值為實際要使用的值。

更多有關設定虛擬主機的資訊，可參考 Apache 官方說明文件於:link:http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/]。

=== Apache 模組

Apache 使用模組 (Module) 來擴充伺服器所提供的功能。請參考 http://httpd.apache.org/docs/current/mod/[http://httpd.apache.org/docs/current/mod/] 來取得可用模組的完整清單與設定詳細資訊。

在 FreeBSD 中有些模組可以隨著 package:www/apache24[] Port 編譯，只要在 [.filename]#/usr/ports/www/apache24# 輸入 `make config` 便可查看有那一些模組是預設開啟的，若模組未與 Port 一併編譯，FreeBSD Port 套件集也提供了一個簡單的方式可安裝各種模組，本節將介紹最常使用的三個模組。

==== [.filename]#mod_ssl#

[.filename]#mod_ssl# 模組利用了 OpenSSL 透過 Secure Sockets Layer (SSLv3) 與 Transport Layer Security (TLSv1) 通訊協定來提供強大的加密，這個模組提供了向受信任的憑証簽署機構申請簽章憑証所需的任何東西，讓 FreeBSD 上能夠執行安全的網頁伺服器。

在 FreeBSD 中 [.filename]#mod_ssl# 模組預設在套件與 Port 均是開啟的，可用的設定項目在 http://httpd.apache.org/docs/current/mod/mod_ssl.html[http://httpd.apache.org/docs/current/mod/mod_ssl.html] 會說明。

==== [.filename]#mod_perl#

[.filename]#mod_perl# 模組讓您可以使用 Perl 撰寫 Apache 模組，除此之外，嵌入到伺服器的直譯器可避免啟動外部直譯器的額外開銷與 Perl 耗費的啟動時間。

[.filename]#mod_perl# 可以使用 package:www/mod_perl2[] 套件或 Port 安裝，有關使用此模組的說明文件可在 http://perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/index.html] 中找到。

==== [.filename]#mod_php#

_PHP: Hypertext Preprocessor_ (PHP) 是一般用途的腳本 (Script) 語言，特別適用於網站開發，能夠嵌入在 HTML 當中，它的語法參考自 C, Java(TM) 及 Perl，目的在讓網頁開發人員能快速的寫出動態網頁。

要在 Apache 網頁伺服器上加入對 PHP5 的支援，可安裝 package:www/mod_php56[] 套件或 Port，這會安裝並設定支援動態 PHP
應用程式所需的模組。安裝過程會自動加入下行到 [.filename]#/usr/local/etc/apache24/httpd.conf#:

[.programlisting]
....
LoadModule php5_module        libexec/apache24/libphp5.so
....

接著，執行 graceful 重新啟動來載入 PHP 模組:

[source,shell]
....
# apachectl graceful
....

由 package:www/mod_php56[] 所提供的 PHP 支援是有限的，若需要額外的支援可以使用 package:lang/php56-extensions[] Port 來安裝，該 Port 提供了選單介面來選擇可用的 PHP 擴充套件。

或者，可以找到適當的 Port 來安裝各別的擴充套件，例如，要增加 PHP 對 MySQL 資料庫伺服器的支援可安裝 package:databases/php56-mysql[]。

在安裝完擴充套件之後，必須重新載入 Apache 伺服器來使用新的設定值:

[source,shell]
....
# apachectl graceful
....

=== 動態網站

除了 mod_perl 與 mod_php 外，也有其他語言可用來建立動態網頁內容，這包含了 Django 與 Ruby on Rails。

==== Django

Django 是以 BSD 授權的框架 (Framework)，旨在讓開發人員能快速的寫出高效、優雅的網頁應用程式。它提供了物件關聯對應器 (Object-relational mapper)，所以各種資料型態可當做 Python 的物件來開發，且提供了豐富的動態資料庫存取 API 給這些物件，讓開發人員不再需要寫 SQL。它也同時提供了可擴充的樣板系統，來讓應用程式的邏輯與 HTML 呈現能夠被拆開。

Django 需要 [.filename]#mod_python#，以及一個 SQL 資料庫引擎才能運作。在 FreeBSD 中的 package:www/py-django[] Port 會自動安裝 [.filename]#mod_python# 以及對 PostgreSQL, MySQL 或 SQLite 資料庫的支援，預設為 SQLite，要更改資料庫引擎可在 [.filename]#/usr/ports/www/py-django# 輸入 `make config` 然後再安裝該 Port。

Django 安裝完成之後，應用程式會需要一個專案目錄並搭配 Apache 設定才能使用內嵌的 Python 直譯器，此直譯器會用來呼叫網站上指定 URL 的應用程式。

要設定 Apache 傳遞某個 URL 請求到網站應用程式，可加入下行到 [.filename]#httpd.conf# 來指定專案目錄的完整路徑:

[.programlisting]
....
<Location "/">
    SetHandler python-program
    PythonPath "['/dir/to/the/django/packages/'] + sys.path"
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonAutoReload On
    PythonDebug On
</Location>
....
請參考 https://docs.djangoproject.com[https://docs.djangoproject.com] 來取得如何使用 Django 的更多資訊。

==== Ruby on Rails

Ruby on Rails 是另外一套開放源碼的網站框架 (Framework)，提供了完整的開發堆疊，這使得網頁開發人員可以更有生產力且能夠快速的寫出強大的應用程式，在 FreeBSD 它可以使用 package:www/rubygem-rails[] 套件或 Port 安裝。

請參考 http://guides.rubyonrails.org[http://guides.rubyonrails.org] 來取得更多有關如何使用 Ruby on Rails 的資訊。

[[network-ftp]]
== 檔案傳輸協定 (FTP)

檔案傳輸協定 (File Transfer Protocol, FTP) 提供了使用一個簡單的方式能夠將檔案傳輸到與接收自 FTP 伺服器，FreeBSD 內建了 FTP 伺服器軟體 ftpd 在基礎系統 (Base system) 中。

FreeBSD 提供了多個設定檔來控制對 FTP 伺服器的存取，本節將摘要這些檔案的設定方式，請參考 man:ftpd[8] 來取得更多有關內建 FTP 伺服器的詳細資訊。

=== 設定

最重要的一個設定步驟便是決定那些帳號能夠存取 FTP 伺服器，FreeBSD 系統有數個系統帳號，這些帳號不應該能夠擁有 FTP 存取權，不允許存取 FTP 的使用者清單可在 [.filename]#/etc/ftpusers# 找到，預設該檔案內會有所有的系統帳號，其他不應允許存取 FTP 的使用者也可在此加入。

在某些情況可能會希望限制某些使用者的存取，而不是完全避免這些使用者使用 FTP，這可以透過建立 [.filename]#/etc/ftpchroot# 來完成，詳如 man:ftpchroot[5] 所述，這個檔案會列出受到 FTP 存取限制的使用者與群組。

要在伺服器上開啟匿名 FTP 存取權，可在 FreeBSD 系統上建立一個名稱為 `ftp` 的使用者，使用者將能夠使用 `ftp` 或 `anonymous` 使用者名稱來登入 FTP 伺服器，當提示輸入密碼時，輸入任何值都會被接受，但是慣例上應使用電子郵件位址來當做密碼。當匿名使用者登入時 FTP 伺服器會呼叫 man:chroot[2] 來限制使用者只能存取 `ftp` 使用者的家目錄。

要設定顯示給 FTP 客戶端的歡迎訊息有兩個文字檔可以建立，[.filename]#/etc/ftpwelcome# 的內容會在收到登入提示前顯示給使用者看，登入成功後，則會顯示 [.filename]#/etc/ftpmotd# 的內容。注意，這個檔案的路徑是相對於登入環境的，所以 [.filename]#~ftp/etc/ftpmotd# 的內容只會對匿名使用者顯示。

設定完 FTP 伺服器之後，在 [.filename]#/etc/rc.conf# 設定適當的變數來在開機時啟動該服務:

[.programlisting]
....
ftpd_enable="YES"
....

要立即啟動服務可:

[source,shell]
....
# service ftpd start
....

要測試到 FTP 伺服器的連線可輸入:

[source,shell]
....
% ftp localhost
....

ftpd daemon 會使用 man:syslog[3] 來記錄訊息，預設，系統記錄 Daemon 會寫入有關 FTP 的訊息到 [.filename]#/var/log/xferlog#，FTP 記錄的位置可以透過更改 [.filename]#/etc/syslog.conf# 中下行來做修改:

[.programlisting]
....
ftp.info      /var/log/xferlog
....
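修改 [.filename]#/etc/syslog.conf# 之後，需要讓 syslogd 重新讀取設定檔，記錄位置的變更才會生效，例如：

```shell
# 讓 syslogd 重新讀取 /etc/syslog.conf (等同於送出 SIGHUP)
service syslogd reload
```
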
[NOTE]
====
要注意啟動匿名 FTP 伺服器可能的潛藏問題，尤其是要讓匿名使用者上傳檔案時要再次確認，因為這可能讓該 FTP 站變成用來交換未授權商業軟體的交流平台或者更糟的狀況。若真的需要匿名 FTP 上傳，那麼請檢查權限設定，讓這些檔案在尚未被管理者審查前不能夠被其他匿名使用者讀取。
====

[[network-samba]]
== Microsoft(TM) Windows(TM) 用戶端檔案與列印服務 (Samba)

Samba 是熱門的開放源碼軟體套件，使用 SMB/CIFS 通訊協定提供檔案與列印服務，此通訊協定內建於 Microsoft(TM) Windows(TM) 系統，在非 Microsoft(TM) Windows(TM) 的系統可透過安裝 Samba 客戶端程式庫來支援此協定。此通訊協定讓客戶端可以存取共享的資料與印表機，這些共享的資源可掛載到一個本機的磁碟機，而共享的印表機則可以當做本機的印表機使用。

在 FreeBSD 上，可以使用 package:net/samba48[] Port 或套件來安裝 Samba 客戶端程式庫，這個客戶端提供了讓 FreeBSD 系統能存取 SMB/CIFS 在 Microsoft(TM) Windows(TM) 網路中共享的資源。

FreeBSD 系統也可以透過安裝 package:net/samba48[] Port 或套件來設定成 Samba 伺服器，這讓管理者可以在 FreeBSD 系統上建立 SMB/CIFS 的共享資源，讓執行 Microsoft(TM) Windows(TM) 或 Samba 客戶端程式庫的客戶端能夠存取。

=== 伺服器設定

Samba 的設定位於 [.filename]#/usr/local/etc/smb4.conf#，必須先設定這個檔案才可使用 Samba。

要共享目錄與印表機給在工作群組中的 Windows(TM) 客戶端的簡易 [.filename]#smb4.conf# 範例如下。對於涉及 LDAP 或 Active Directory 的複雜安裝，可使用 man:samba-tool[8] 來建立初始的 [.filename]#smb4.conf#。

[.programlisting]
....
[global]
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = ExampleMachine
wins support = Yes
security = user
passdb backend = tdbsam

# Example: share /usr/src accessible only to 'developer' user
[src]
path = /usr/src
valid users = developer
writable = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755
....
==== 全域設定

在 [.filename]#/usr/local/etc/smb4.conf# 中加入用來描述網路環境的設定有:

`workgroup`::
要提供的工作群組名稱。

`netbios name`::
Samba 伺服器已知的 NetBIOS 名稱，預設為主機的 DNS 名稱第一節。

`server string`::
會顯示於 `net view` 輸出結果以及其他會尋找伺服器描述文字並顯示的網路工具的文字。

`wins support`::
不論 Samba 是否要作為 WINS 伺服器，請不要在網路上開啟超過一台伺服器的 WINS 功能。

==== 安全性設定

在 [.filename]#/usr/local/etc/smb4.conf# 中最重要的設定便是安全性模式以及後端密碼格式，以下項目管控的選項有:

`security`::
最常見的設定為 `security = share` 以及 `security = user`，若客戶端使用的使用者名稱與在 FreeBSD 主機上使用的使用者名稱相同，則應該使用使用者 (user) 層級的安全性，這是預設的安全性原則且它會要求客戶端在存取共享資源前先登入。
+
安全性為共享 (share) 層級時，客戶端存取共享資源不需要先使用有效的使用者名稱與密碼登入伺服器，這是舊版 Samba 所採用的預設安全性模式。

`passdb backend`::
Samba 支援數種不同的後端認証模式，客戶端可以使用 LDAP, NIS+, SQL 資料庫或修改過的密碼檔來認証，建議的認証方式是 `tdbsam`，適用於簡易的網路環境且在此處說明，對於較大或更複雜的網路則較建議使用 `ldapsam`，而 `smbpasswd` 是舊版的預設值，現在已廢棄不使用。

==== Samba 使用者

FreeBSD 使用者帳號必須對應 `SambaSAMAccount` 資料庫，才能讓 Windows(TM) 客戶端存取共享資源，要對應既有的 FreeBSD 使用者帳號可使用 man:pdbedit[8]:

[source,shell]
....
# pdbedit -a username
....

本節只會提到一些最常用的設定，請參考 http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/[官方 Samba HOWTO] 來取得有關可用設定選項的額外資訊。

=== 啟動 Samba

要在開機時啟動 Samba，可加入下行到 [.filename]#/etc/rc.conf#:

[.programlisting]
....
samba_server_enable="YES"
....

要立即啟動 Samba:

[source,shell]
....
# service samba_server start
Performing sanity check on Samba configuration: OK
Starting nmbd.
Starting smbd.
....

Samba 由三個獨立的 Daemon 所組成，nmbd 與 smbd daemon 可透過 `samba_server_enable` 來啟動，若同時也需要 winbind 名稱解析服務則需額外設定:

[.programlisting]
....
winbindd_enable="YES"
....

Samba 可以隨時停止，要停止可輸入:

[source,shell]
....
# service samba_server stop
....
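Samba 啟動之後，可以從客戶端驗證共享資源是否有正確匯出。以下是一個假設性的檢查，使用同一個 Samba 套件所提供的 `smbclient` 工具列出本機伺服器匯出的共享資源 (請將 `username` 替換為已用 `pdbedit` 對應過的帳號)：

```shell
# 列出本機 Samba 伺服器提供的共享資源，會提示輸入該帳號的密碼
smbclient -L localhost -U username
```
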
Samba 是一套擁有能整合 Microsoft(TM) Windows(TM) 網路功能的複雜軟體套件,除了在此處說明的基礎設定,要取得更多的功能資訊,請參考 http://www.samba.org[http://www.samba.org]。 [[network-ntp]] == NTP 時間校對 隨著使用時間,電腦的時鐘會逐漸偏移,這對需要網路上電腦有相同準確度時間的許多網路服務來說是一個大問題。準確的時間同樣能確保檔案時間戳記的一致性。網路時間協定 (Network Time Protocol, NTP) 是一種在網路上可以確保時間準確的方式。 FreeBSD 內含 man:ntpd[8] 可設定來查詢其他 NTP 伺服器來同步電腦的時間或提供時間服務給其他在網路上的電腦。 本節將會介紹如何設定 FreeBSD 上的 ntpd,更進一步的說明文件可於 [.filename]#/usr/shared/doc/ntp/# 找到 HTML 格式的版本。 === NTP 設定 在 FreeBSD,內建的 ntpd 可用來同步系統的時間,Ntpd 要使用 man:rc.conf[5] 中的變數以及下一節會詳細說明的 [.filename]#/etc/ntp.conf# 來設定。 Ntpd 與網路中各節點的通訊採用 UDP 封包,在伺服器與 NTP 各節點間的防火牆必須設定成可允許進/出埠 123 的 UDP 封包。 ==== [.filename]#/etc/ntp.conf# 檔 Ntpd 會讀取 [.filename]#/etc/ntp.conf# 來得知要從那些 NTP 伺服器查詢時間,建議可設定多個 NTP 伺服器,來避免萬一其中一個伺服器無法連線或是時間不可靠的問題,當 ntpd 收到回應,它會偏好先採用較可信賴的伺服器。查詢的伺服器可以是來自本地網路的 ISP 所提供,也可從link:http://support.ntp.org/bin/view/Servers/WebHome[線上可公開存取的NTP 伺服器清單]中挑選,您可以選擇一個離您地理位置較近的伺服器並閱讀它的使用規則。也有 http://support.ntp.org/bin/view/Servers/NTPPoolServers[可公開存取的 NTP 池線上清單]可用,由一個地理區域所組織,除此之外 FreeBSD 提供了計劃贊助的伺服器池,`0.freebsd.pool.ntp.org`。 .[.filename]#/etc/ntp.conf# 範例 [example] ==== 這份簡單的 [.filename]#ntp.conf# 範例檔可以放心的使用,其中包含了建議的 `restrict` 選項可避免伺服器被公開存取。 [.programlisting] .... # Disallow ntpq control/query access. Allow peers to be added only # based on pool and server statements in this file. restrict default limited kod nomodify notrap noquery nopeer restrict source limited kod nomodify notrap noquery # Allow unrestricted access from localhost for queries and control. restrict 127.0.0.1 restrict ::1 # Add a specific server. server ntplocal.example.com iburst # Add FreeBSD pool servers until 3-6 good servers are available. tos minclock 3 maxclock 6 pool 0.freebsd.pool.ntp.org iburst # Use a local leap-seconds file. leapfile "/var/db/ntpd.leap-seconds.list" .... 
====

這個檔案的格式在 man:ntp.conf[5] 有詳細說明，以下的說明僅快速的帶過以上範例檔有用到的一些關鍵字。

預設 NTP 伺服器是可以被任何網路主機所存取，`restrict` 關鍵字可以控制有那些系統可以存取伺服器。`restrict` 支援設定多項，每一項可再更進一步調整前面所做的設定。範例中的設定授權本地系統有完整的查詢及控制權限，而遠端系統只有查詢時間的權限。要了解更詳細的資訊請參考 man:ntp.conf[5] 中的 `Access Control Support` 一節。

`server` 關鍵字可指定要查詢的伺服器，設定檔中可以使用多個 server 關鍵字，一個伺服器列一行。`pool` 關鍵字可指定伺服器池，Ntpd 會加入該伺服器池中的一或多台伺服器，直到數量滿足 `tos minclock` 的設定。`iburst` 關鍵字會指示 ntpd 在建立連線時執行 8 連發快速封包交換，可以更快的同步系統時間。

`leapfile` 關鍵字用來指定含有閏秒 (Leap second) 資訊的檔案位置，該檔案是由 man:periodic[8] 自動更新。這個關鍵字指定的檔案位置必須與 [.filename]#/etc/rc.conf# 中設定的 `ntp_db_leapfile` 相同。

==== 在 [.filename]#/etc/rc.conf# 中的 NTP 設定項目

設定 `ntpd_enable="YES"` 可讓開機時會啟動 ntpd。將 `ntpd_enable="YES"` 加到 [.filename]#/etc/rc.conf# 之後，可輸入以下指令讓 ntpd 不需重新開機立即啟動:

[source,shell]
....
# service ntpd start
....

要使用 ntpd 必須設定 `ntpd_enable`，以下所列的 [.filename]#rc.conf# 變數可視所需情況設定。

設定 `ntpd_sync_on_start=YES` 可讓 ntpd 可以在系統啟動時一次同步任何差距的時間，正常情況若時鐘的差距超過 1000 秒便會記錄錯誤並且中止。這個設定項目在沒有電池備援的時鐘上特別有用。

設定 `ntpd_oomprotect=YES` 可保護 ntpd daemon 被系統中止並嘗試從記憶體不足 (Out Of Memory, OOM) 的情況恢復運作。

設定 `ntpd_config=` 可更改 [.filename]#ntp.conf# 檔案的位置。

設定 `ntpd_flags=` 可設定使用任何其他所需 ntpd 參數，但要避免使用由 [.filename]#/etc/rc.d/ntpd# 內部控管的參數如下:

* `-p` (pid 檔案位置)
* `-c` (改用 `ntpd_config=` 設定)

==== 使用無特權的 `ntpd` 使用者執行 Ntpd

在 FreeBSD 上的 Ntpd 現在可以使用無特權的使用者啟動並執行，要達到這個功能需要 man:mac_ntpd[4] 規則模組。[.filename]#/etc/rc.d/ntpd# 啟動 Script 會先檢查 NTP 的設定，若可以的話它會載入 `mac_ntpd` 模組，然後以無特權的使用者 `ntpd` (user id 123) 來啟動 ntpd。為了避免檔案與目錄存取權限的問題，當設定中有任何檔案相關的選項時，啟動 Script 不會自動以 `ntpd` 身份啟動 ntpd。

在 `ntpd_flags` 若出現以下任何參數則需要以最下面的方式手動設定才能以 `ntpd` 使用者的身份執行:

* -f 或 --driftfile
* -i 或 --jaildir
* -k 或 --keyfile
* -l 或 --logfile
* -s 或 --statsdir

在 [.filename]#ntp.conf# 若出現以下任何關鍵字則需要以最下面的方式手動設定才能以 `ntpd` 使用者的身份執行:

* crypto
* driftfile
* key
* logdir
* statsdir

要手動設定以使用者 `ntpd` 身份執行 ntpd 你必須:

* 確保 `ntpd` 使用者有權限存取所有在設定檔中指定的檔案與目錄。
* 讓 `mac_ntpd` 模組載入或編譯至核心，請參考 man:mac_ntpd[4] 取得詳細資訊。
* 在 [.filename]#/etc/rc.conf# 中設定 `ntpd_user="ntpd"`

=== 在 PPP 連線使用 NTP

ntpd 並不需要永久的網際網路連線才能正常運作，若有一個 PPP
連線是設定成需要時撥號，那麼便需要避免 NTP 的流量觸發撥號或是保持連線不中斷，這可在 [.filename]#/etc/ppp/ppp.conf# 使用 `filter` 項目設定，例如:

[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....

要取得更詳細的資訊，請參考於 man:ppp[8] 的 `PACKET FILTERING` 小節以及在 [.filename]#/usr/shared/examples/ppp/# 中的範例。

[NOTE]
====
部份網際網路存取提供商會封鎖較小編號的埠，這會讓 NTP 無法運作，因為回應永遠無法傳送到該主機。
====

[[network-iscsi]]
== iSCSI Initiator 與 Target 設定

iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level.

In iSCSI terminology, the system that shares the storage is known as the _target_. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disk(s) are formatted with ZFS, a zvol can be created to use as the iSCSI storage.

The clients which access the iSCSI storage are called _initiators_. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in [.filename]#/dev/# and the device must be separately formatted and mounted.

FreeBSD provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a FreeBSD system as a target or an initiator.

[[network-iscsi-target]]
=== 設定 iSCSI Target

To configure an iSCSI target, create the [.filename]#/etc/ctl.conf# configuration file, add a line to [.filename]#/etc/rc.conf# to make sure the man:ctld[8] daemon is automatically started at boot, and then start the daemon.

The following is an example of a simple [.filename]#/etc/ctl.conf# configuration file.
Refer to man:ctl.conf[5] for a more complete description of this file's available options. [.programlisting] .... portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group no-authentication portal-group pg0 lun 0 { path /data/target0-0 size 4G } } .... The first entry defines the `pg0` portal group. Portal groups define which network addresses the man:ctld[8] daemon will listen on. The `discovery-auth-group no-authentication` entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure man:ctld[8] to listen on all IPv4 (`listen 0.0.0.0`) and IPv6 (`listen [::]`) addresses on the default port of 3260. It is not necessary to define a portal group as there is a built-in portal group called `default`. In this case, the difference between `default` and `pg0` is that with `default`, target discovery is always denied, while with `pg0`, it is always allowed. The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI or a named group of LUNs. This example uses the latter meaning, where `iqn.2012-06.com.example:target0` is the target name. This target name is suitable for testing purposes. For actual use, change `com.example` to the real domain name, reversed. The `2012-06` represents the year and month of acquiring control of that domain name, and `target0` can be any value. Any number of targets can be defined in this configuration file. The `auth-group no-authentication` line allows all initiators to connect to the specified target and `portal-group pg0` makes the target reachable through the `pg0` portal group. The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. 
The `path /data/target0-0` line defines the full path to a file or zvol backing the LUN. That path must exist before starting man:ctld[8]. The second line is optional and specifies the size of the LUN.

Next, to make sure the man:ctld[8] daemon is started at boot, add this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
ctld_enable="YES"
....

To start man:ctld[8] now, run this command:

[source,shell]
....
# service ctld start
....

As the man:ctld[8] daemon is started, it reads [.filename]#/etc/ctl.conf#. If this file is edited after the daemon starts, use this command so that the changes take effect immediately:

[source,shell]
....
# service ctld reload
....

==== Authentication

The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets. To require a username and password to access targets, modify the configuration as follows:

[.programlisting]
....
auth-group ag0 {
	chap username1 secretsecret
	chap username2 anothersecret
}

portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
	listen [::]
}

target iqn.2012-06.com.example:target0 {
	auth-group ag0
	portal-group pg0

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....

The `auth-group` section defines username and password pairs. An initiator trying to connect to `iqn.2012-06.com.example:target0` must first specify a defined username and secret. However, target discovery is still permitted without authentication. To require target discovery authentication, set `discovery-auth-group` to a defined `auth-group` name instead of `no-authentication`.

It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry:

[.programlisting]
....
target iqn.2012-06.com.example:target0 {
	portal-group pg0
	chap username1 secretsecret

	lun 0 {
		path /data/target0-0
		size 4G
	}
}
....
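Since the backing path must exist before man:ctld[8] is started, it can be prepared in advance. A minimal sketch, assuming a file-backed LUN at the path used in the examples; the ZFS pool name `tank` is an assumption for illustration:

[source,shell]
....
# truncate -s 4G /data/target0-0
....

For a ZFS-backed LUN, a zvol could be created instead, after which it would appear as [.filename]#/dev/zvol/tank/target0# and that path would be used in the `lun` entry:

[source,shell]
....
# zfs create -V 4G tank/target0
....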
[[network-iscsi-initiator]]
=== Configuring an iSCSI Initiator

[NOTE]
====
The iSCSI initiator described in this section is supported starting with FreeBSD 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to man:iscontrol[8].
====

The iSCSI initiator requires that the man:iscsid[8] daemon is running. This daemon does not use a configuration file. To start it automatically at boot, add this line to [.filename]#/etc/rc.conf#:

[.programlisting]
....
iscsid_enable="YES"
....

To start man:iscsid[8] now, run this command:

[source,shell]
....
# service iscsid start
....

Connecting to a target can be done with or without an [.filename]#/etc/iscsi.conf# configuration file. This section demonstrates both types of connections.

==== Connecting to a Target Without a Configuration File

To connect an initiator to a single target, specify the IP address of the portal and the name of the target:

[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0
....

To verify if the connection succeeded, run `iscsictl` without any arguments. The output should look similar to this:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Connected: da0
....

In this example, the iSCSI session was successfully established, with [.filename]#/dev/da0# representing the attached LUN. If the `iqn.2012-06.com.example:target0` target exports more than one LUN, multiple device nodes will be shown in that section of the output:

[.programlisting]
....
Connected: da0 da1 da2
....

Any errors will be reported in the output, as well as the system logs. For example, this message usually means that the man:iscsid[8] daemon is not running:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Waiting for iscsid(8)
....

The following message suggests a networking problem, such as a wrong IP address or port:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.11     Connection refused
....

This message means that the specified target name is wrong:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Not found
....

This message means that the target requires authentication:

[.programlisting]
....
Target name                                     Target portal   State
iqn.2012-06.com.example:target0                 10.10.10.10     Authentication failed
....

To specify a CHAP username and secret, use this syntax:

[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret
....

==== Connecting to a Target with a Configuration File

To connect using a configuration file, create [.filename]#/etc/iscsi.conf# with contents like this:

[.programlisting]
....
t0 {
	TargetAddress   = 10.10.10.10
	TargetName      = iqn.2012-06.com.example:target0
	AuthMethod      = CHAP
	chapIName       = user
	chapSecret      = secretsecret
}
....

The `t0` specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The `TargetAddress` and `TargetName` are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown.

To connect to the defined target, specify the nickname:

[source,shell]
....
# iscsictl -An t0
....

Alternately, to connect to all targets defined in the configuration file, use:

[source,shell]
....
# iscsictl -Aa
....

To make the initiator automatically connect to all targets in [.filename]#/etc/iscsi.conf#, add the following to [.filename]#/etc/rc.conf#:

[.programlisting]
....
iscsictl_enable="YES"
iscsictl_flags="-Aa"
....
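When several targets are defined in [.filename]#/etc/iscsi.conf#, `iscsictl -Aa` brings them all up at once. A hedged sketch with a second nickname: the `t1` section, its address `10.10.10.11`, and the `target1` name are purely illustrative placeholders, not part of the examples above.

[.programlisting]
....
t0 {
	TargetAddress   = 10.10.10.10
	TargetName      = iqn.2012-06.com.example:target0
	AuthMethod      = CHAP
	chapIName       = user
	chapSecret      = secretsecret
}

t1 {
	TargetAddress   = 10.10.10.11
	TargetName      = iqn.2012-06.com.example:target1
}
....

Established sessions can later be torn down with the `-R` flag of man:iscsictl[8]; for example, `iscsictl -Ra` removes all sessions.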