Since ancient times, simple manual devices like the abacus have aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers have increased continuously and dramatically since then.
Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.
Etymology
According to the Oxford English Dictionary, the first known use of the word “computer” was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: “I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number.” This usage of the term referred to a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.[1]
The Online Etymology Dictionary gives the first attested use of “computer” in the 1640s, meaning “one who calculates”; this is an “agent noun from compute (v.)”. The same dictionary states that the use of the term to mean “calculating machine” (of any type) is from 1897, and that the “modern use” of the term, to mean “programmable digital electronic computer”, dates from “1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine”.
History
Pre-20th century
-The earliest counting device was probably a form of tally stick.
-Later record-keeping aids included calculi (clay spheres, cones, etc.), which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers.[3][4] The use of counting rods is another example.
-The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC.
-In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
-The Antikythera mechanism is believed to be the earliest mechanical analog “computer”, according to Derek J. de Solla Price.[5] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.
-The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.
-A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy.
-A later refinement was an astrolabe incorporating a mechanical calendar computer.
-The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.
-The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division (the logarithm trick it relies on is sketched just after this list). As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, and circular and hyperbolic trigonometry, among other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. To save space and for ease of reading, these are typically circular devices rather than the classic linear slide rule shape. A popular example is the E6B.
-In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically “programmed” to read instructions. Along with two other complex machines, the doll is at the Musée d’Art et d’Histoire of Neuchâtel, Switzerland, and still operates.
-The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
-The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 Lord Kelvin had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[14] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
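The slide rule’s core trick, turning multiplication into the addition of logarithms, can be sketched in a few lines of Python; the numbers below are purely illustrative:

```python
import math

def slide_rule_multiply(a, b):
    """Multiply the way a slide rule does: add the logarithms of the two
    factors, then read the product back off the antilog scale."""
    return 10 ** (math.log10(a) + math.log10(b))

# Lining up 2.5 on the sliding scale against 4.0 on the fixed scale
# puts the cursor at (roughly) 10.0.
print(slide_rule_multiply(2.5, 4.0))  # ~10.0
```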
First computing device
-Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the “father of the computer”,[15] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
-The machine was about a century ahead of its time. All the parts for his machine had to be made by hand — this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage’s failure to complete the analytical engine can be chiefly attributed not only to difficulties of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine’s computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
Analog computers
-During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[18] The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872.
-The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.
-The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927.
-This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (control systems) and aircraft (slide rule).
Digital computers
-Electromechanical
-By 1938 the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.
-Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.
-Vacuum tubes and digital electronic circuits
-The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.
-In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[26] the first “automatic electronic digital computer”.[27] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.
-During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[28] He spent eleven months from early February 1943 designing and building the first Colossus.[29] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[30] and attacked its first message on 5 February.
-Colossus was the world’s first electronic digital programmable computer.[18] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.
-The U.S.-built ENIAC[33] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus, it was much faster and more flexible. Like the Colossus, a “program” on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5,000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and take square roots. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC’s development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and containing over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.
Modern computers
-Concept of modern computer
The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper,[35] On Computable Numbers. Turing proposed a simple device that he called the “Universal Computing machine” and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing’s design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[36] Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
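To make the idea of instructions on a tape driving a machine more concrete, here is a minimal Turing-machine simulator in Python; the states and rule table are invented purely for illustration (this particular machine adds one to a binary number written on the tape):

```python
def run_turing_machine(tape, state, head, rules, blank="_"):
    """Repeatedly look up (state, symbol) in the rule table and apply
    (symbol to write, head move, next state) until the machine halts."""
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Toy rule table: add 1 to a binary number, starting on its rightmost digit.
rules = {
    ("inc", "1"): ("0", "L", "inc"),   # 1 + carry = 0, carry moves left
    ("inc", "0"): ("1", "N", "halt"),  # 0 + carry = 1, done
    ("inc", "_"): ("1", "N", "halt"),  # carry ran past the leftmost digit
}
print(run_turing_machine("1011", "inc", head=3, rules=rules))  # -> 1100
```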
-Stored programs
Early computing machines had fixed programs. Changing the function of such a machine required re-wiring and re-structuring it.[28] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report “Proposed Electronic Calculator” was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.
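The stored-program idea can be pictured with a hypothetical toy fetch-and-execute loop in Python: the instructions sit in the same memory as the data they operate on, so changing the computation means changing memory contents rather than re-wiring anything. The three-instruction machine below is invented purely for illustration:

```python
def run(memory):
    """Toy stored-program machine: instructions live in the same list as the
    data, and a program counter walks over them one at a time."""
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]          # fetch the next instruction
        pc += 1
        if op == "LOAD":              # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":             # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "HALT":
            return acc

# The program occupies cells 0-2; the data occupies cells 3-4.
memory = [("LOAD", 3), ("ADD", 4), ("HALT", 0), 20, 22]
print(run(memory))  # -> 42
```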
-Transistors
The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the “second generation” of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.
-Integrated circuits
The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[46]
The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[47] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[48] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated”.[49][50] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[51] His chip solved many practical problems that Kilby’s had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby’s chip was made of germanium.
This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term “microprocessor”, it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[52] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[53]
-Mobile computers become dominant
With the continued miniaturization of computing resources, and advancements in portable battery life, portable computers grew in popularity in the 2000s.[54] The same developments that spurred the growth of laptop computers and other portable computers allowed manufacturers to integrate computing resources into cellular phones. These so-called smartphones and tablets run on a variety of operating systems and have become the dominant computing device on the market, with manufacturers reporting having shipped an estimated 237 million devices in 2Q 2013.
OpenSSH (also known as OpenBSD Secure Shell) is a suite of security-related network-level utilities based on the Secure Shell (SSH) protocol, which help to secure network communications by encrypting network traffic, supporting multiple authentication methods, and providing secure tunneling capabilities.
OpenSSH is not a single computer program, but rather a suite of programs that serve as alternatives to unencrypted network communication protocols like FTP and rlogin. Active development primarily takes place within the OpenBSD source tree. OpenSSH is integrated into the base system of several other BSD projects, while the portable version is available as a package in other Unix-like systems.
OpenSSH was created by the OpenBSD team as an alternative to the original SSH software by Tatu Ylönen, which is now proprietary software. Although source code is available for the original SSH, various restrictions are imposed on its use and distribution. OpenSSH was created as a fork of Björn Grönvall’s OSSH, which itself was a fork of Tatu Ylönen’s original free SSH 1.2.12 release, the last one with a license suitable for forking. The OpenSSH developers claim that their application is more secure than the original, due to their policy of producing clean and audited code and because it is released under the BSD license, the open source license to which the word open in the name refers.
OpenSSH first appeared in OpenBSD 2.6. The first portable release was made in October 1999. Developments since then have included the addition of new ciphers (e.g., chacha20-poly1305 in version 6.5 of January 2014), removal of the dependency on OpenSSL (6.7, October 2014), and an extension to facilitate public-key discovery and rotation for trusted hosts (for the transition from DSA to Ed25519 public host keys, version 6.8 of March 2015).
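As a small illustration of scripted use, the sketch below drives the OpenSSH command-line tools from Python via subprocess; it assumes ssh and ssh-keygen are installed, and the key file name, “user@host” and the remote command are placeholders, not anything prescribed by OpenSSH itself:

```python
import subprocess

# Generate an Ed25519 key pair with an empty passphrase (ssh-keygen is part of OpenSSH).
subprocess.run(
    ["ssh-keygen", "-t", "ed25519", "-f", "id_ed25519_demo", "-N", ""],
    check=True,
)

# Run a command on a remote machine over the encrypted channel.
# "user@host" is a placeholder; replace it with a real account and host.
result = subprocess.run(
    ["ssh", "-i", "id_ed25519_demo", "user@host", "uptime"],
    capture_output=True, text=True,
)
print(result.stdout)
```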
What is a kernel? If you spend any time reading Android forums, blogs, how-to posts or online discussion, you’ll soon hear people talking about the kernel. A kernel isn’t something unique to Android – iOS and MacOS have one, Windows has one, BlackBerry’s QNX has one; in fact, all high-level operating systems have one. The one we’re interested in is Linux, as it’s the one Android uses. Let’s try to break down what it is and what it does.
Android devices use the Linux kernel, but every phone uses its own version of it. Linux kernel maintainers keep everything tidy and available, contributors (like Google) add or alter things to better meet their needs, and the people making the hardware contribute as well, because they need to develop hardware drivers for the parts they’re using, for the kernel version they’re using. This is why it takes a while for independent Android developers and hackers to port new versions to older devices and get everything working. Drivers written to work with one version of the kernel for a phone might not work with a different version of software on the same phone. And that’s important, because one of the kernel’s main functions is to control the hardware. It’s a whole lot of source code, with more options while building it than you can imagine, but in the end it’s just the intermediary between the hardware and the software.
When software needs the hardware to do anything, it sends a request to the kernel. And when we say anything, we mean anything. From the brightness of the screen, to the volume level, to initiating a call through the radio, even what’s drawn on the display is ultimately controlled by the kernel. For example – when you tap the search button on your phone, you tell the software to open the search application. What happens is that you touched a certain point on the digitizer, which tells the software that you’ve touched the screen at those coordinates. The software knows that when that particular spot is touched, the search dialog is supposed to open. The kernel is what tells the digitizer to look (or listen, events are “listened” for) for touches, helps figure out where you touched, and tells the system you touched it. In turn, when the system receives a touch event at a specific point from the kernel (through the driver) it knows what to draw on your screen. Both the hardware and the software communicate both ways with the kernel, and that’s how your phone knows when to do something. Input from one side is sent as output to the other, whether it’s you playing Angry Birds, or connecting to your car’s Bluetooth.
It sounds complicated, and it is. But it’s also pretty standard computer logic — there’s an action of some sort generated for every event, and depending on that action, things happen to the running software. Without the kernel to accept and send information, developers would have to write code for every single event for every single piece of hardware in your device. With the kernel, all they have to do is communicate with it through the Android system APIs, and hardware developers only have to make the device hardware communicate with the kernel. The good thing is that you don’t need to know exactly how or why the kernel does what it does; just understanding that it’s the go-between from software to hardware gives you a pretty good grasp of what’s happening under the glass.
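If it helps to see the go-between idea in code, here is a deliberately crude toy model in Python; the classes and the touch coordinates are invented and merely stand in for a real driver, kernel and app:

```python
class TouchscreenDriver:
    """Stands in for the hardware-specific driver code."""
    def read_touch(self):
        # Pretend the digitizer hardware reported a touch at these coordinates.
        return (120, 480)

class ToyKernel:
    """Toy go-between: the app never talks to the driver directly."""
    def __init__(self, driver):
        self.driver = driver
    def poll_input(self):
        return self.driver.read_touch()

class SearchApp:
    def on_touch(self, x, y):
        # The app only knows that this spot is where the search button is drawn.
        if (x, y) == (120, 480):
            print("opening the search dialog")

kernel = ToyKernel(TouchscreenDriver())
app = SearchApp()
app.on_touch(*kernel.poll_input())   # hardware -> driver -> kernel -> app
```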
This is a term for the computing elite, so proceed at your own risk. To understand what a kernel is, you first need to know that today’s operating systems are built in “layers.” Each layer has different functions such as serial port access, disk access, memory management, and the user interface itself. The base layer, or the foundation of the operating system, is called the kernel. The kernel provides the most basic “low-level” services, such as the hardware-software interaction and memory management. The more efficient the kernel is, the more efficiently the operating system will run.
When referring to an operating system, the kernel is the first section of the operating system to load into memory. As the center of the operating system, the kernel needs to be small, efficient and loaded into a protected area in memory, so as not to be overwritten. It can be responsible for such things as disk drive management, interrupt handling, file management, memory management, process management, etc.
The kernel is a program that constitutes the central core of a computer operating system. It has complete control over everything that occurs in the system.
A kernel can be contrasted with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost part of an operating system and a program that interacts with user commands. The kernel itself does not interact directly with the user, but rather interacts with the shell and other programs as well as with the hardware devices on the system, including the processor (also called the central processing unit or CPU), memory and disk drives.
The kernel is the first part of the operating system to load into memory during booting (i.e., system startup), and it remains there for the entire duration of the computer session because its services are required continuously. Thus it is important for it to be as small as possible while still providing all the essential services needed by the other parts of the operating system and by the various application programs.
When a computer crashes, it actually means the kernel has crashed. If only a single program has crashed but the rest of the system remains in operation, then the kernel itself has not crashed. A crash is the situation in which a program, either a user application or a part of the operating system, stops performing its expected function(s) and stops responding to other parts of the system. The program might appear to the user to freeze. If such a program is critical to the operation of the kernel, the entire computer could stall or shut down.
The kernel provides basic services for all other parts of the operating system, typically including memory management, process management, file management and I/O (input/output) management (i.e., accessing the peripheral devices). These services are requested by other parts of the operating system or by application programs through a specified set of program interfaces referred to as system calls.
Process management, possibly the most obvious aspect of a kernel to the user, is the part of the kernel that ensures that each process obtains its turn to run on the processor and that the individual processes do not interfere with each other by writing to their areas of memory. A process, also referred to as a task, can be defined as an executing (i.e., running) instance of a program.
The contents of a kernel vary considerably according to the operating system, but they typically include (1) a scheduler, which determines how the various processes share the kernel’s processing time (including in what order), (2) a supervisor, which grants use of the computer to each process when it is scheduled, (3) an interrupt handler, which handles all requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel’s services and (4) a memory manager, which allocates the system’s address spaces (i.e., locations in memory) among all users of the kernel’s services.
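As a loose sketch of just the first of those components, the scheduler, here is a hypothetical round-robin loop in Python; the task names and time quanta are invented, and a real scheduler is of course far more involved:

```python
from collections import deque

def round_robin(tasks, quantum=2):
    """Give each runnable task up to `quantum` units of processor time in
    turn, re-queueing it if it still has work left; a crude model of one
    job a kernel scheduler performs."""
    queue = deque(tasks.items())              # (name, remaining_time) pairs
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        print(f"{name} runs for {ran} unit(s)")
        if remaining - ran > 0:
            queue.append((name, remaining - ran))

round_robin({"editor": 3, "browser": 5, "player": 1})
```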
The kernel should not be confused with the BIOS (Basic Input/Output System). The BIOS is an independent program stored in a chip on the motherboard (the main circuit board of a computer) that is used during the booting process for such tasks as initializing the hardware and loading the kernel into memory. Whereas the BIOS always remains in the computer and is specific to its particular hardware, the kernel can be easily replaced or upgraded by changing or upgrading the operating system or, in the case of Linux, by adding a newer kernel or modifying an existing kernel.
Most kernels have been developed for a specific operating system, and there is usually only one version available for each operating system. For example, the Microsoft Windows 2000 kernel is the only kernel for Microsoft Windows 2000 and the Microsoft Windows 98 kernel is the only kernel for Microsoft Windows 98. Linux is far more flexible in that there are numerous versions of the Linux kernel, and each of these can be modified in innumerable ways by an informed user.
A few kernels have been designed with the goal of being suitable for use with any operating system. The best known of these is the Mach kernel, which was developed at Carnegie-Mellon University and is used in the Macintosh OS X operating system.
The term kernel is frequently used in books and discussions about Linux, whereas it is used less often when discussing some other operating systems, such as the Microsoft Windows systems. The reasons are that the kernel is highly configurable in the case of Linux and users are encouraged to learn about and modify it and to download and install updated versions. With the Microsoft Windows operating systems, in contrast, there is relatively little point in discussing kernels because they cannot be modified or replaced.
Categories of Kernels
Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid kernels and exokernels. Each has its own advocates and detractors.
Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel’s capabilities as required, while helping to minimize the amount of code running in kernel space.
A microkernel usually provides only minimal services, such as defining memory address spaces, interprocess communication (IPC) and process management. All other functions, such as hardware management, are implemented as processes running independently of the kernel. Examples of microkernel operating systems are AIX, BeOS, Hurd, Mach, Mac OS X, MINIX and QNX.
Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would were it in user space. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. Hybrid kernels should not be confused with monolithic kernels that can load modules after booting (such as Linux).
Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000 and XP. DragonFly BSD, a recent fork (i.e., variant) of FreeBSD, is the first non-Mach based BSD operating system to employ a hybrid kernel architecture.
Exokernels are a still-experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, and they provide no hardware abstractions on top of which applications can be constructed. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.
Exokernels in themselves are extremely small. However, they are accompanied by library operating systems, which provide application developers with the conventional functionalities of a complete operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API (application programming interface), such as one for Linux and one for Microsoft Windows, thus making it possible to simultaneously run both Linux and Windows applications.
The Monolithic Versus Micro Controversy
In the early 1990s, many computer scientists considered monolithic kernels to be obsolete, and they predicted that microkernels would revolutionize operating system design. In fact, the development of Linux as a monolithic kernel rather than a microkernel led to a famous flame war (i.e., a war of words on the Internet) between Andrew Tanenbaum, the developer of the MINIX operating system, and Linus Torvalds, who originally developed Linux based largely on MINIX.
Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Although this sounds sensible, it is questionable how important it is in reality, because operating systems with monolithic kernels such as Linux have become extremely stable and can run for years without crashing.
Another disadvantage cited for monolithic kernels is that they are not portable; that is, they must be rewritten for each new architecture (i.e., processor type) that the operating system is to be used on. However, in practice, this has not appeared to be a major disadvantage, and it has not prevented Linux from being ported to numerous processors.
Monolithic kernels also appear to have the disadvantage that their source code can become extremely large. Source code is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human-readable alphanumeric characters) and before it is converted by a compiler into object code that a computer’s processor can directly read and execute.
For example, the source code for the Linux kernel version 2.4.0 is approximately 100MB and contains nearly 3.38 million lines, and that for version 2.6.0 is 212MB and contains 5.93 million lines. This adds to the complexity of maintaining the kernel, and it also makes it difficult for new generations of computer science students to study and comprehend the kernel. However, the advocates of monolithic kernels claim that in spite of their size such kernels are easier to design correctly, and thus they can be improved more quickly than can microkernel-based systems.
Moreover, the size of the compiled kernel is only a tiny fraction of that of the source code, for example roughly 1.1MB in the case of Linux version 2.4 on a typical Red Hat Linux 9 desktop installation. Contributing to the small size of the compiled Linux kernel is its ability to dynamically load modules at runtime, so that the basic kernel contains only those components that are necessary for the system to start itself and to load modules.
The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems (i.e., computer circuitry built into other products).
Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency.
A kernel is the core component of an operating system. Using interprocess communication and system calls, it acts as a bridge between applications and the data processing performed at the hardware level.
When an operating system is loaded into memory, the kernel loads first and remains in memory until the operating system is shut down again. The kernel is responsible for low-level tasks such as disk management, task management and memory management.
A computer kernel interfaces between the three major computer hardware components, providing services between the application/user interface and the CPU, memory and other hardware I/O devices.
The kernel provides and manages computer resources, allowing other programs to run and use these resources. The kernel also sets up memory address space for applications, loads files with application code into memory, sets up the execution stack for programs and branches out to particular locations inside programs for execution.
The kernel is responsible for:
Process management for application execution
Memory management, allocation and I/O
Device management through the use of device drivers
System call control, which is essential for the execution of kernel services
There are five types of kernels:
Monolithic Kernels: All operating system services run along the main kernel thread in a monolithic kernel, which also resides in the same memory area, thereby providing powerful and rich hardware access.
Microkernels: Define a simple abstraction over the hardware and use primitives or system calls to implement minimal OS services such as multitasking, memory management and interprocess communication.
Hybrid Kernels: Run a few services in kernel space to reduce the performance overhead of traditional microkernels, in which such code would run as servers in user space.
Nano Kernels: Simplify the memory requirement by delegating services, including basic ones like interrupt controllers or timers, to device drivers.
Exo Kernels: Allocate physical hardware resources such as processor time and disk block to other programs, which can link to library operating systems that use the kernel to simulate operating system abstractions.
The Operating System is a generic name given to all of the elements (user interface, libraries, resources) which make up the system as a whole.
The kernel is the “brain” of the operating system, which controls everything from access to the hard disk to memory management. Whenever you want to do anything, it goes through the kernel.
A kernel is part of the operating system. It is the first thing that the boot loader loads onto the CPU (for most operating systems), it is the part that interfaces with the hardware, and it also manages what programs can do with the hardware. It is really the central part of the OS, and it is made up of drivers. A driver is a program that interfaces with a particular piece of hardware; for example, if I made a digital camera for computers, I would need to make a driver for it. The drivers are the only programs that can control the input and output of the computer.
The Kernel is the core piece of the operating system. It is not necessarily an operating system in and of itself.
Everything else is built around it.
In computing, the ‘kernel’ is the central component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel’s responsibilities include managing the system’s resources (the communication between hardware and software components).
What happens if we have ONLY the kernel BUT NO shell? You then have a machine with the actual OS but there is NO way you can use it. There is no “interface” for the human to interact with the OS and hence the machine. (Assuming GUIs don’t exist, for simplicity 🙂)
What happens if we have ONLY the shell BUT NO kernel? This is impossible. The shell is a program provided by the OS so that you can interact with it. Without the kernel/OS, nothing can execute (in a sense; not 100% true, but you get the idea).
A shell is just a program that offers some functionality that runs on the OS. The kernel is the “essence/core” of the OS. The words can be confusing so here’s the dictionary definition of kernel:
See how the words kernel/shell relate? That’s the origin of it and its borrowed use in computing. The kernel is the essence/core of the OS. You access the machine via the OS and the OS via a “shell” that seems to “contain” the kernel.
Hope this clarifies your confusion 🙂
The core inner part of the OS is the Kernel (linux kernel or Windows kernel or FreeBSD kernel) but users interact with this by using the outer part or shell (eg bash shell or cmd.exe or korn shell)
Users cannot directly control hardware like printers or monitors. Users cannot directly control virtual memory or process scheduling. While the kernel takes care of such matters, the user uses the UI or shell to communicate with the kernel. The UI can be a CLI (bash shell or DOS shell) or a GUI (KDE for Linux or the Metro UI for Windows).
The kernel is the part of the operating system that runs in privileged mode. It does all sorts of things like interact with hardware, do file I/O, and spawn off processes.
The shell (e.g., bash), by contrast, is a specific program which runs in user-space (i.e., unprivileged mode). Whenever you try to start a process with the shell, the shell has to ask the kernel to make it. In Linux, it would probably do this with the system calls fork and execve. Furthermore, the shell will forward its input (usually, from your own key presses) to the running program’s stdin, and it will forward the program’s output (stdout and stderr) to its own output (usually displayed on your screen). Again, it does all this with the help of the kernel.
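On a Unix-like system, a stripped-down version of that loop can be written in Python using os.fork and os.execvp, which are thin wrappers over the fork and execve facilities mentioned above; this is a toy sketch, not how bash is actually implemented:

```python
import os
import shlex

def mini_shell():
    """Toy command loop: read a line, fork, exec the program in the child,
    and wait for it in the parent; the same sequence a real shell asks the
    kernel to perform on its behalf."""
    while True:
        try:
            line = input("mini$ ")
        except EOFError:
            break
        argv = shlex.split(line)
        if not argv:
            continue
        if argv[0] == "exit":
            break
        pid = os.fork()                    # ask the kernel for a child process
        if pid == 0:                       # child: replace this process image
            try:
                os.execvp(argv[0], argv)   # uses the kernel's execve underneath
            except FileNotFoundError:
                print(f"{argv[0]}: command not found")
                os._exit(127)
        else:                              # parent (the "shell"): wait for the child
            os.waitpid(pid, 0)

if __name__ == "__main__":
    mini_shell()
```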
Basically the kernel is the center of the operating system that manages everything. The shell is just a particular program, a friendly interface that translates your commands into some low-level calls to the kernel.
By analogy, the kernel is like a chef who prepares the food, and the shell is like a waiter who takes your order and delivers the result back to you.
Technically, the shell is a software program that understands what the user wants and conveys it to the kernel. The kernel performs the work according to those instructions and returns the result to the user via the shell.
If you are really enthusiastic about how the shell and kernel work together, feel free to browse the NEKTech-Linux code at Jitendra-khasdev/NEKTech-Linux-Shell.
A shell is a command interpreter, i.e. the program that either processes the commands you enter in your terminal emulator (interactive mode) or processes shell scripts, which are text files containing commands (batch mode). In early Unix times, it was the only way for users to interact with their machines. Nowadays, graphical environments are replacing the shell for most casual users.
A kernel is a low level program interfacing with the hardware (CPU, RAM, disks, network, …) on top of which applications are running. It is the lowest level program running on computers although with virtualization you can have multiple kernels running on top of virtual machines which themselves run on top of another operating system.
An API is a generic term defining the interface developers have to use when writing code using libraries and a programming language. Kernels have no APIs as they are not libraries. They do have an ABI, which, beyond other things, defines how applications interact with them through system calls. Unix application developers use the standard C library (e.g., libc, glibc) to build ABI-compliant binaries. printf(3) and fopen(3) are not wrappers to system calls but (g)libc standard facilities. The low-level system calls they eventually use are write(2) and open(2), and possibly others like brk and mmap. The number in parentheses is a convention telling in which volume of the manual the command is to be found.
The first volume of the Unix manual pages contains the shell commands.
The second one contains the system call wrappers like write and open. They form the interface to the kernel.
The third one contains the standard library (including the Unix standard API) functions (excluding system calls) like fopen and printf. These are not wrappers to specific system calls but just code using system calls when required.
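The same split can be seen from Python, where os.open and os.write are thin wrappers over the open(2) and write(2) system calls, while the built-in open() is a buffered, higher-level facility roughly analogous to fopen(3); the file names below are placeholders:

```python
import os

# Low-level: these map almost directly onto the open(2)/write(2) system calls.
fd = os.open("raw.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"written via the system-call wrappers\n")
os.close(fd)

# High-level: a buffered file object, closer in spirit to fopen(3)/fprintf(3).
with open("buffered.txt", "w") as f:
    f.write("written via the standard library\n")
```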
A KERNEL is the part of the Operating System that communicates between the hardware and software of a computer and manages how hardware resources are used to meet software requirements.
A SHELL is the user interface that allows users to request specific tasks from the computer. Two types of user interfaces are the text-based command-line interface (CLI) and the icon-based graphical user interface (GUI).
Note that the CLI uses fewer resources and is a more stable interface.
However, those of us using home routers will use a “GUI”. Note that the Operating System, (OS), of a home router is actually called “Firmware”.
Both the shell and the kernel are parts of the operating system, and both are involved in performing any operation on the system. When a user gives a command to perform an operation, the request goes first to the shell. The shell is also called the interpreter, because it translates the user’s command into something the machine can act on, and the request is then transferred to the kernel. So the shell is essentially the interpreter of commands, converting the user’s request into instructions for the kernel.
The kernel is also called the heart of the operating system, and every operation is performed by using the kernel. When the kernel receives the request from the shell, it processes the request and displays the results on the screen.
As we have learned, there are many functions performed by the kernel, but the functions performed by the kernel are never shown to the user; they are transparent to the user.
Simply put, the shell is a program that takes your commands from the keyboard and gives them to the operating system to perform. In the old days, it was the only user interface available on a Unix computer. Nowadays, we have graphical user interfaces (GUIs) in addition to command line interfaces (CLIs) such as the shell.
On most Linux systems a program called bash (which stands for Bourne Again SHell, an enhanced version of the original Bourne shell program, sh, written by Steve Bourne) acts as the shell program. There are several additional shell programs available on a typical Linux system. These include ksh, tcsh and zsh.
In computing, the superuser is a special user account used for system administration. Depending on the operating system (OS), the actual name of this account might be root, administrator, admin or supervisor. In some cases, the actual name of the account is not the determining factor; on Unix-like systems, for example, the user with a user identifier (UID) of zero is the superuser, regardless of the name of that account;[1] and in systems which implement a role-based security model, any user with the role of superuser (or its synonyms) can carry out all actions of the superuser account.
The principle of least privilege recommends that most users and applications run under an ordinary account to perform their work, as a superuser account is capable of making unrestricted, potentially adverse, system-wide changes.
In computing, a shell is an operating system’s user interface for access to that operating system. It is named a shell because it is a layer around the operating system kernel.
Most operating system shells are not direct interfaces to the underlying kernel, even if a shell communicates with the user via peripheral devices attached to the computer directly. Shells are actually special applications that use the kernel API in just the same way as it is used by other application programs. A shell manages the user–system interaction by prompting users for input, interpreting their input, and then handling an output from the underlying operating system (much like a read–eval–print loop, REPL).[1] Since the operating system shell is actually an application, it may easily be replaced with another similar application, for most operating systems.
Most operating system shells fall into one of two categories – command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). Other possibilities, although not so common, include voice user interfaces and various implementations of a text-based user interface (TUI) that are not CLI. The relative merits of CLI- and GUI-based shells are often debated.
Text (CLI) shells
A command-line interface (CLI) is an operating system shell that uses alphanumeric characters typed on a keyboard to provide instructions and data to the operating system, interactively. For example, a teletypewriter can send codes representing keystrokes to a command interpreter program running on the computer; the command interpreter parses the sequence of keystrokes and responds with an error message if it cannot recognize the sequence of characters, or it may carry out some other program action such as loading an application program, listing files, logging in a user and many others.
A feature of many command-line shells is the ability to save sequences of commands for re-use. Such batch files (script files) can be used repeatedly to automate routine operations such as initializing a set of programs when a system is restarted. Batch mode use of shells usually involves structures, conditionals, variables, and other elements of programming languages; some have the bare essentials needed for such a purpose, others are very sophisticated programming languages in and of themselves. Conversely, some programming languages can be used interactively from an operating system shell or in a purpose-built program.
The command-line shell may offer features such as command-line completion, where the interpreter expands commands based on a few characters input by the user. A command-line interpreter may offer a history function, so that the user can recall earlier commands issued to the system and repeat them, possibly with some editing. Since all commands to the operating system had to be typed by the user, short command names and compact systems for representing program options were common. Short names were sometimes hard for a user to recall, and early systems lacked the storage resources to provide a detailed on-line user instruction guide.
Graphical shells
Graphical shells provide means for manipulating programs based on graphical user interface (GUI), by allowing for operations such as opening, closing, moving and resizing windows, as well as switching focus between windows. Graphical shells may be included with desktop environments or come separately, even as a set of loosely coupled utilities.
curl is used in command lines or scripts to transfer data. It is also used in cars, television sets, routers, printers, audio equipment, mobile phones, tablets, set-top boxes and media players, and is the internet transfer backbone for thousands of software applications affecting billions of humans daily.
Supports…
DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, HTTP/2, cookies, user+password authentication (Basic, Plain, Digest, CRAM-MD5, NTLM, Negotiate and Kerberos), file transfer resume, proxy tunneling and more.
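As a small illustration of scripted use, the sketch below invokes the curl command line from Python; the URL and credentials are placeholders, and only long-standing options are used (-u for user+password authentication, -o to save the response to a file):

```python
import subprocess

# Fetch a page over HTTPS with Basic authentication and save it to a file.
# https://example.com/ and alice:secret are placeholders, not real credentials.
subprocess.run(
    ["curl", "-u", "alice:secret", "-o", "page.html", "https://example.com/"],
    check=True,
)
```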
The Internet Protocol address, usually abbreviated as IP address, is a 32-bit number (IPv4) or a 128-bit number (IPv6) that uniquely identifies a machine on a network.
A database is an organized collection of data.[1] It is the collection of schemas, tables, queries, reports, views, and other objects. The data are typically organized to model aspects of reality in a way that supports processes requiring information, such as modelling the availability of rooms in hotels in a way that supports finding a hotel with vacancies.
A database management system (DBMS) is a computer software application that interacts with the user, other applications, and the database itself to capture and analyze data. A general-purpose DBMS is designed to allow the definition, creation, querying, update, and administration of databases. Well-known DBMSs include MySQL, PostgreSQL, MongoDB, MariaDB, Microsoft SQL Server, Oracle, Sybase, SAP HANA, MemSQL, SQLite and IBM DB2. A database is not generally portable across different DBMSs, but different DBMSs can interoperate by using standards such as SQL and ODBC or JDBC to allow a single application to work with more than one DBMS. Database management systems are often classified according to the database model that they support; the most popular database systems since the 1980s have all supported the relational model as represented by the SQL language. Sometimes a DBMS is loosely referred to as a “database”.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.
Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans.[8]
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
Applications
Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples of database applications include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.
History
Following the technology progress in the areas of processors, computer memory, computer storage, and computer networks, the sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. The development of database technology can be divided into three eras based on data model or structure: navigational,[9] SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model, epitomized by IBM’s IMS system, and the CODASYL model (network model), implemented in a number of products such as IDMS.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2015 they remain dominant: IBM DB2, Oracle, MySQL, and Microsoft SQL Server are the top DBMSs.[10] The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object-relational impedance mismatch, which led to the coining of the term “post-relational” and also the development of hybrid object-relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key-value stores and document-oriented databases. A competing “next generation” known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.
1960s, navigational DBMS:
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites[11] a 1962 report by the System Development Corporation of California as the first to use the term “data-base” in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the “Database Task Group” within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the “CODASYL approach”, and soon a number of commercial products based on this approach entered the market.
The CODASYL approach relied on the “manual” navigation of a linked data set which was formed into a large network. Applications could find records by one of three methods:
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a straightforward query language. In the final tally, however, CODASYL was very complex and required significant training and effort to produce useful applications.
IBM also had their own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL’s network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman’s 1973 Turing Award presentation was The Programmer as Navigator. IMS is classified[by whom?] as a hierarchical database. IDMS and Cincom Systems’ TOTAL database are classified as network databases. IMS remains in use as of 2014.
1970s, relational DBMS:
Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a “search” facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.[13]
In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to use a "table" of fixed-length records, with each table used for a different type of entity. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables (or relations), with optional elements being moved out of the main table to where they would take up room only if needed. Data may be freely inserted, deleted and edited in these tables, with the DBMS doing whatever maintenance is needed to present a table view to the application/user.
The relational model also allowed the content of the database to evolve without constant rewriting of links and pointers. The relational part comes from entities referencing other entities in what is known as one-to-many relationship, like a traditional hierarchical model, and many-to-many relationship, like a navigational (network) model. Thus, a relational model can express both hierarchical and navigational models, as well as its native tabular model, allowing for pure or combined modeling in terms of these three models, as the application requires.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
Linking the information back together is the key to this system. In the relational model, some bit of information was used as a “key”, uniquely defining a particular record. When information was being collected about a user, information stored in the optional tables would be found by searching for this key. For instance, if the login name of a user is unique, addresses and phone numbers for that user would be recorded with the login name as its key. This simple “re-linking” of related data back into a single collection is something that traditional computer languages are not designed for.
Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's suggestion was a set-oriented language, which would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating, etc.) as well as providing a simple system for finding and returning sets of data in a single operation.
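As a loose illustration only (not Codd's original notation), the following Python sketch uses the standard sqlite3 module to hold users, addresses, and phone numbers in separate normalized tables keyed by login name, then re-links them with a single set-oriented SQL query; the table and column names are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database for the sketch
cur = conn.cursor()

# One normalized table per entity type; the login name is the key that re-links them.
cur.execute("CREATE TABLE users (login TEXT PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE addresses (login TEXT, address TEXT)")
cur.execute("CREATE TABLE phones (login TEXT, phone TEXT)")

cur.execute("INSERT INTO users VALUES ('jdoe', 'Jane Doe')")
cur.execute("INSERT INTO addresses VALUES ('jdoe', '1 Main St')")
# No phone row exists for jdoe: optional data simply takes up no room.

# A single declarative, set-oriented query re-links the normalized tables.
cur.execute("""
    SELECT u.name, a.address, p.phone
    FROM users u
    LEFT JOIN addresses a ON a.login = u.login
    LEFT JOIN phones p ON p.login = u.login
""")
print(cur.fetchall())                       # [('Jane Doe', '1 Main St', None)]
conn.close()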
Codd’s paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a “language” for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System[14] based on D.L. Childs’ Set-Theoretic Data model.[15][16][17] MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System.[18] The system remained in production until 1998.
Integrated approach:
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL’s CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).
Late 1970s, SQL DBMS:
IBM started working on a prototype system loosely based on Codd’s concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large “chunk”. Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL[citation needed] – had been added. Codd’s ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).
Larry Ellison’s Oracle started from a different chain, based on IBM’s papers on System R, and beat IBM to market when the first version was released in 1978.[citation needed]
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd’s paper was also read and Mimer SQL was developed from the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high robustness in applications, an idea that was subsequently implemented on most other DBMSs.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
1980s, on the desktop:
The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff the creator of dBASE stated: “dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation.”[19] dBASE was one of the top selling software titles in the 1980s and early 1990s.
1990s, object-oriented:
The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be relations to objects and their attributes and not to individual fields.[20] The term "object-relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object-relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object-relational mappings (ORMs) attempt to solve the same problem.
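The sketch below is not any particular ORM, just a hand-rolled illustration of the mapping such libraries automate: rows from a relational table are rebuilt as language-level objects whose attributes (address, phone, age) belong to the person rather than to separate fields. The table and class names are invented.
import sqlite3
from dataclasses import dataclass

@dataclass
class Person:                      # the application-side object
    name: str
    address: str
    phone: str
    age: int

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, address TEXT, phone TEXT, age INTEGER)")
conn.execute("INSERT INTO people VALUES ('Jane Doe', '1 Main St', '555-0100', 34)")

# The mapping step an ORM automates: each relational row becomes one object.
people = [Person(*row) for row in conn.execute("SELECT name, address, phone, age FROM people")]
print(people[0].address)           # attributes now belong to the person object, not to loose fields
conn.close()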
2000s, NoSQL and NewSQL:
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in enterprise database management, where XML is being used as the machine-to-machine data interoperability standard. XML database management systems include the commercial software MarkLogic and Oracle Berkeley DB XML, and the free-to-use Clusterpoint Distributed XML/JSON Database. All are enterprise database platforms that support industry-standard, ACID-compliant transaction processing with strong database consistency characteristics and a high level of database security.[21][22][23]
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally. The most popular NoSQL systems include MongoDB, Couchbase, Riak, Memcached, Redis, CouchDB, Hazelcast, Apache Cassandra, and HBase,[24] which are all open-source software products.
In recent years, there has been high demand for massively distributed databases with high partition tolerance, but according to the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases use what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
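The following toy sketch (invented names, no real networking or replication) only illustrates two ideas mentioned above: storing a whole record as one denormalized document so that reads need no join, and partitioning keys across nodes by hashing so the store can scale horizontally.
import hashlib, json

# Two in-process dictionaries stand in for two nodes of a horizontally scaled cluster.
nodes = [dict(), dict()]

def node_for(key):
    # Hash-based partitioning: each key is owned by exactly one node.
    return nodes[int(hashlib.sha1(key.encode()).hexdigest(), 16) % len(nodes)]

# The whole user record is kept as one denormalized document,
# so reading it back requires no join.
doc = {"name": "Jane Doe", "addresses": ["1 Main St"], "phones": []}
node_for("user:jdoe")["user:jdoe"] = json.dumps(doc)

print(json.loads(node_for("user:jdoe")["user:jdoe"])["name"])   # Jane Doe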
NewSQL is a class of modern relational databases that aims to provide the same scalable performance as NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system. Such databases include Google F1/Spanner, CockroachDB, TiDB, ScaleBase, MemSQL, NuoDB,[25] and VoltDB.
Examples:
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases.
An in-memory database is a database that primarily resides in main memory, but is typically backed up by non-volatile computer data storage. Main-memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment.[26] The SAP HANA platform is a prominent in-memory database. By May 2012, HANA was able to run on IBM servers with 100 TB of main memory, and the company's co-founder claimed that the system was big enough to run the eight largest SAP customers.
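As a minimal sketch, Python's built-in sqlite3 module can stand in for a dedicated in-memory DBMS: the working data lives entirely in RAM and is separately persisted to non-volatile storage. The table and file names are invented.
import sqlite3

# The working database lives entirely in main memory: fast, but volatile.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
mem.execute("INSERT INTO events (payload) VALUES ('call setup')")
print(mem.execute("SELECT COUNT(*) FROM events").fetchone()[0])   # 1

# Typical pattern: periodically copy the in-memory data to non-volatile storage.
disk = sqlite3.connect("events-snapshot.db")
mem.backup(disk)        # persist the RAM-resident state to a file on disk
disk.close()
mem.close()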
An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers.
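A minimal sketch of trigger-based active behaviour, again using SQLite through Python's sqlite3 module with invented table names: the database itself reacts to an update by writing an audit record, without the application having to do so.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE audit_log (account_id INTEGER, old_balance INTEGER, new_balance INTEGER);

    -- The trigger responds to a condition inside the database (a balance update).
    CREATE TRIGGER log_balance_change AFTER UPDATE OF balance ON accounts
    BEGIN
        INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")
print(conn.execute("SELECT * FROM audit_log").fetchall())   # [(1, 100, 250)]
conn.close()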
A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, “in the cloud”, while its applications are both developed by programmers and later maintained and utilized by (application’s) end-users through a web browser and Open APIs.
Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Some basic and essential components of data warehousing include extracting, analyzing, and mining data, transforming, loading, and managing data so as to make them available for further use.
A deductive database combines logic programming with a relational database, for example by using the Datalog language.
A distributed database is one in which both the data and the DBMS span multiple computers.
A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi-structured, information. Document-oriented databases are one of the main categories of NoSQL databases.
An embedded database system is a DBMS which is tightly integrated with an application software that requires access to stored data in such a way that the DBMS is hidden from the application’s end-users and requires little or no ongoing maintenance.[27]
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases. Some of them are much simpler than full-fledged DBMSs, with more elementary DBMS functionality.
A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view.
Sometimes the term multi-database is used as a synonym to federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases.
A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
An array DBMS is a kind of NoSQL DBMS that allows users to model, store, and retrieve (usually large) multi-dimensional arrays such as satellite images and climate simulation output.
In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext database.
A knowledge base (abbreviated KB, kb or Δ[28][29]) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge; it may also be a collection of data representing problems together with their solutions and related experiences.
A mobile database can be carried on or synchronized from a mobile computing device.
Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business’ customers, personnel databases that hold information such as salary, benefits, skills data about employees, enterprise resource planning systems that record details about product components, parts inventory, and financial databases that keep track of the organization’s money, accounting and financial dealings.
A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries.
The major parallel DBMS architectures, which are induced by the underlying hardware architecture, are:
Shared memory architecture, where multiple processors share the main memory space, as well as other data storage.
Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage.
Shared nothing architecture, where each processing unit has its own main memory and other storage.
Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.
Real-time databases process transactions fast enough for the result to come back and be acted on right away.
A spatial database can store data with multidimensional (spatial) features. Queries on such data include location-based queries, like "Where is the closest hotel in my area?".
A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically the temporal aspects usually include valid-time and transaction-time.
A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.
An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging.
Design and modeling
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity-relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organisation, like “can a customer also be a supplier?”, or “if a product is sold with two different forms of packaging, are those the same product or different products?”, or “if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?”. The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modelling notation used to express that design.)
The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary “fact” is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency.
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like. This is often called physical database design. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: Physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself.
Models:
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:
Navigational databases
Relational model
Entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
Hierarchical database model
Network model
Graph database
Enhanced entity–relationship model
An object-relational database combines the two related structures.
Physical data models include:
Inverted index
Flat file
Other models include:
Associative model
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
External, conceptual, and internal views:
A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level unifies the various external views into a compatible global view.[31] It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views’ performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.
While there is typically only one conceptual (or logical) and physical (or internal) view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company’s expenses, but does not need details about employees that are the interest of the human resources department. Thus different departments need different views of the company’s database.
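A rough sketch of such department-specific external views, using SQL views over an invented employees table (sqlite3 is used here only as a convenient engine): finance sees payroll columns, HR sees medical columns, and both views are derived from the same conceptual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, salary INTEGER, medical_notes TEXT);
    INSERT INTO employees VALUES ('Jane Doe', 52000, 'none');

    -- Two external views over the same conceptual schema:
    CREATE VIEW payroll_view AS SELECT name, salary FROM employees;        -- for finance
    CREATE VIEW hr_view      AS SELECT name, medical_notes FROM employees; -- for human resources
""")
print(conn.execute("SELECT * FROM payroll_view").fetchall())   # [('Jane Doe', 52000)]
print(conn.execute("SELECT * FROM hr_view").fetchall())        # [('Jane Doe', 'none')]
conn.close()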
The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structure types.
Separating the external, conceptual and internal levels was a major feature of the relational database model implementations that dominate 21st century databases.
Languages:
Database languages are special-purpose languages, which do one or more of the following:
Data definition language – defines data types such as creating, altering, or dropping and the relationships among them
Data manipulation language – performs tasks such as inserting, updating, or deleting data occurrences
Query language – allows searching for information and computing derived information
Database languages are specific to a particular data model. Notable examples include:
SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standard has been regularly enhanced since and is supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.[32][33]
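The short sketch below, with invented table names and run through Python's sqlite3 module, simply labels which statements play the data definition, data manipulation, and query roles, and includes a CHECK constraint as one example of constraint enforcement.
import sqlite3

conn = sqlite3.connect(":memory:")

# Data definition language: defines the data types, relationships, and constraints.
conn.execute("""CREATE TABLE cars (
    plate TEXT PRIMARY KEY,
    engine_type TEXT CHECK (engine_type IN ('petrol', 'diesel', 'electric'))
)""")

# Data manipulation language: inserts, updates, or deletes data occurrences.
conn.execute("INSERT INTO cars VALUES ('ABC-123', 'electric')")
conn.execute("UPDATE cars SET engine_type = 'petrol' WHERE plate = 'ABC-123'")

# Query language: searches for information and computes derived information.
print(conn.execute("SELECT engine_type, COUNT(*) FROM cars GROUP BY engine_type").fetchall())
conn.close()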
OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and DB2, and also by in-memory XML processors such as Saxon.
SQL/XML combines XQuery with SQL.[34]
A database language may also incorporate features like:
DBMS-specific configuration and storage engine management
Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing
Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)
Application programming interface version of the query language, for programmer convenience
Performance, security, and availability:
Because of the critical importance of database technology to the smooth running of an enterprise, database systems include complex mechanisms to deliver the required performance, security, and availability, and allow database administrators to control the use of these features.
Storage:
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Putting data into permanent storage is generally the responsibility of the database engine, a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often utilizing the operating system's file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look at the conceptual and external levels, but in ways that attempt to optimize the reconstruction of these levels as well as possible when needed by users and programs, as well as the computation of additional types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
Materialized views:
Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views saves the expensive computing of them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
Replication:
Occasionally a database employs storage redundancy through replication of database objects (with one or more copies) to increase data availability (both to improve performance of simultaneous access by multiple end-users to the same database object, and to provide resiliency in the case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object's copies. In many cases, the entire database is replicated.
Security:
Database security deals with all aspects of protecting the database content, its owners, and its users. It ranges from protection against intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or the use of specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by specially authorized (by the database owner) personnel who use dedicated, protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called “subschemas”. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; e.g., see physical security) and in terms of their interpretation, or of parts of them, as meaningful information (e.g., by looking at the strings of bits they comprise and concluding that they are valid credit-card numbers; e.g., see data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can be set up to attempt to detect security breaches.
Transactions and concurrency:
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in databases and also in other systems. Each transaction has well-defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: Atomicity, Consistency, Isolation, and Durability.
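A minimal sketch of a transaction boundary using Python's sqlite3 module (the account data is invented): both updates either commit together or are rolled back together.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)   # manage transaction boundaries explicitly
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")

try:
    conn.execute("BEGIN")                                                  # transaction starts
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
    conn.execute("COMMIT")                                                 # both updates become durable together
except sqlite3.Error:
    conn.execute("ROLLBACK")                                               # or neither update takes effect
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())  # [(70,), (30,)]
conn.close()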
Migration:
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership, or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should, if possible, leave the related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may also be desirable to maintain some aspects of the internal architectural level. A complex or large database migration may be a complicated and costly (one-time) project in itself, which should be factored into the decision to migrate, even though tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs.
Building, maintaining, and tuning:
After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be utilized for this purpose. A DBMS provides the needed user interfaces to be utilized by database administrators to define the needed application’s data structures within the DBMS’s respective data model. Other user interfaces are used to select needed DBMS parameters (like security related, storage allocation parameters, etc.).
When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before being made operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialised, and populated, it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added to, new related application programs may be written to add to the application's functionality, and so on.
Backup and restore:
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., when the database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, whereby each desired database state (i.e., the values of its data and their embedding in the database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When a database administrator decides to bring the database back to a previous state (e.g., by specifying a desired point in time when the database was in that state), these files are used to restore that state.
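As a small sketch of the backup-and-restore idea, Python's sqlite3 module offers an online backup API; the file and table names here are invented, and a real deployment would schedule such backups and keep multiple backup generations.
import sqlite3

live = sqlite3.connect("app.db")            # the operational database (invented name)
live.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)")
live.execute("INSERT INTO orders (item) VALUES ('widget')")
live.commit()

# Backup: keep the current desired state in a dedicated backup file.
snapshot = sqlite3.connect("app-backup.db")
live.backup(snapshot)
snapshot.close()

# Restore: if the live database is later found corrupted or wrongly updated,
# copy the saved state back over it to return to the earlier state.
snapshot = sqlite3.connect("app-backup.db")
snapshot.backup(live)
snapshot.close()
live.close()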
Static analysis:
Static analysis techniques for software verification can also be applied in the setting of query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques.[35] The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular for security purposes, such as fine-grained access control, watermarking, etc.
Other:
Other DBMS features might include:
Database logs
Graphics component for producing graphs and charts, especially in a data warehouse system
Query optimizer – Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some market such offerings as “DevOps for database”.
HTTP Daemon is a software program that runs in the background of a web server and waits for incoming server requests. The daemon answers requests automatically and serves hypertext and multimedia documents over the internet using HTTP.
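For illustration, Python's standard http.server module can play the role of a minimal HTTP daemon; the port number is an arbitrary choice, and this sketch is not a substitute for a production server such as Apache.
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Wait in the background for incoming requests and serve files from the
# current directory over HTTP; port 8080 is an arbitrary choice.
server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
print("Listening on http://localhost:8080/ ...")
server.serve_forever()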
httpd.conf is a configuration file used by the Apache HTTP Server. It is the file the Apache server reads for its various configuration properties. Properties can be edited directly in the file using superuser permissions.
The httpd.conf file can be located on any Unix-based system that complies with the Filesystem Hierarchy Standard under the following path: /etc/httpd/httpd.conf.
Note that Apache running does not mean that the HTTP connection is working. To test an HTTP connection to Apache, you can use telnet.
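As a telnet-like check, the sketch below opens a raw TCP connection and sends a minimal HTTP request by hand; the host and port are assumptions, and any HTTP response line indicates that the HTTP connection itself, not just the Apache process, is working.
import socket

# Open a raw TCP connection to the web server and speak HTTP by hand,
# much like "telnet localhost 80" would let you do interactively.
with socket.create_connection(("localhost", 80), timeout=5) as s:
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    print(s.recv(1024).decode(errors="replace"))   # e.g. "HTTP/1.1 200 OK ..."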
1. Some directories cannot be entered with just cd Folder; some only work with an absolute path, e.g. cd /Folder. This is true for the /usr directory.
2. To find a file whose location you have forgotten, use: sudo find / -name file_name
3. apache2ctl -V shows information about Apache2 if it is installed, including the location and name of its configuration file.
In addition to redirecting the output from one process and sending it to another process, we can also write that output to a file using the > operator.
2. If the output comes from an online download using curl, use '-O' (or '-o') to send the output to a file.
Write output to a file instead of stdout. If you are using {} or [] to fetch multiple documents, you can use '#' followed by a number in the file specifier; that variable will be replaced with the current string for the URL being fetched. For example: curl http://{one,two}.site.com -o "file_#1.txt", or use several variables: curl http://{site,host}.host[1-5].com -o "#1_#2". You may use this option as many times as the number of URLs you have. See also the --create-dirs option to create the local directories dynamically. Specify '-' to force the output to stdout.
To store the output in a file, you can redirect it as shown below. This will also display some additional download statistics.
$ curl http://www.centos.org > centos-org.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 27329    0 27329    0     0   104k      0 --:--:-- --:--:-- --:--:--  167k
4. Save the cURL Output to a file
We can save the result of the curl command to a file by using -o/-O options.
-o (lowercase o) the result will be saved in the filename provided in the command line
-O (uppercase O) the filename in the URL will be taken and it will be used as the filename to store the result
Running curl with -o mygettext.html against the URL of the gettext.html page saves that page in a file named 'mygettext.html'. Note that when running curl with the -o option, it displays the progress meter for the download as follows.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
66 1215k 66 805k 0 0 33060 0 0:00:37 0:00:24 0:00:13 45900
100 1215k 100 1215k 0 0 39474 0 0:00:31 0:00:31 --:--:-- 68987
When you use curl -O (uppercase O), it will save the content in the file named ‘gettext.html’ itself in the local machine.
Note: When curl has to write the data to the terminal, it disables the progress meter, to avoid confusion in printing. We can use the '>', '-o', or '-O' options to send the result to a file.
Similar to cURL, you can also use wget to download files. Refer to wget examples to understand how to use wget effectively.
Here, we’re feeding the response retrieved by curl into another new command, pbcopy. This is a little bit nicer on the eyes and the brain, since it just puts the curl results straight to your clipboard, which allows you to paste straight into your favorite text editor. No code will be printed in your Terminal, only a confirmation graph of curl’s download.
We can also use redirection with curl to copy it straight to a file, skipping the middleman.
This will append the response to google.txt, located in your home directory. You could also use a single '>' to overwrite what's in that file, leaving only Google's source in the file.
Exchange Server was initially Microsoft’s internal mail server. The first version of Exchange Server to be published outside Microsoft was Exchange Server 4.0. Exchange initially used the X.400 directory service but switched to Active Directory later.
X.400 is a suite of ITU-T Recommendations that define standards for Data Communication Networks for Message Handling Systems (MHS) — more commonly known as email
The ITU Telecommunication Standardization Sector (ITU-T) is one of the three sectors (divisions or units) of the International Telecommunication Union (ITU); it coordinates standards for telecommunications. The standardization efforts of the ITU commenced in 1865 with the formation of the International Telegraph Union. The ITU became a specialized agency of the United Nations in 1947. The International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique) was created in 1956 and was renamed ITU-T in 1993. ITU-T has a permanent secretariat, the Telecommunication Standardization Bureau (TSB), based at the ITU headquarters in Geneva, Switzerland. The current Director of the Bureau is Chaesub Lee, whose four-year term commenced on 1 January 2015; he replaced Malcolm Johnson of the United Kingdom, who was director from 1 January 2007 to 2014.
Primary function of ITU-T:
The ITU-T mission is to ensure the efficient and timely production of standards covering all fields of telecommunications on a worldwide basis, as well as defining tariff and accounting principles for international telecommunication services.
The international standards that are produced by the ITU-T are referred to as “Recommendations” (with the word ordinarily capitalized to distinguish its meaning from the ordinary sense of the word “recommendation”), as they become mandatory only when adopted as part of a national law.
Since the ITU-T is part of the ITU, which is a United Nations specialized agency, its standards carry more formal international weight than those of most other standards development organizations that publish technical specifications of a similar form.
At one time, the designers of X.400 were expecting it to be the predominant form of email, but this role has been taken by SMTP-based Internet e-mail. Despite this, it has been widely used within organizations and was a core part of Microsoft Exchange Server until 2006; variants continue to be important in military and aviation contexts.
Versions 4.0 and 5.0 came bundled with an email client called Microsoft Exchange Client. It was discontinued in favor of Microsoft Outlook.
Exchange Server uses a proprietary protocol called MAPI. Over time, however, it added support for POP3, IMAP, SMTP, and EAS.
Post Office Protocol
POP – In computing, the Post Office Protocol (POP) is an application-layer Internet standard protocol used by local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection.[1] POP has been developed through several versions, with version 3 (POP3) being the last standard in common use before largely being made obsolete by the more advanced IMAP as well as webmail.
Overview of POP
POP supports download-and-delete requirements for access to remote mailboxes (termed maildrop in the POP RFCs). Although most POP clients have an option to leave mail on the server after download, e-mail clients using POP generally connect, retrieve all messages, store them on the user's PC as new messages, delete them from the server, and then disconnect. Other protocols, notably IMAP (Internet Message Access Protocol), provide more complete and complex remote access to typical mailbox operations. In the late 1990s and early 2000s, fewer Internet Service Providers (ISPs) supported IMAP due to the storage space that was required on the ISP's hardware. Contemporary e-mail clients supported POP, and over time popular mail client software added IMAP support.
A POP3 server listens on well-known port 110. Encrypted communication for POP3 is either requested after protocol initiation, using the STLS command, if supported, or by POP3S, which connects to the server using Transport Layer Security (TLS) or Secure Sockets Layer (SSL) on well-known TCP port 995.
Available messages to the client are fixed when a POP session opens the maildrop, and are identified by message-number local to that session or, optionally, by a unique identifier assigned to the message by the POP server. This unique identifier is permanent and unique to the maildrop and allows a client to access the same message in different POP sessions. Mail is retrieved and marked for deletion by message-number. When the client exits the session, the mail marked for deletion is removed from the maildrop.
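A minimal POP3 session sketch using Python's standard poplib module; the server name and credentials are placeholders, and the delete step is commented out to show where a download-and-delete client would perform it.
import poplib

# POP3 over TLS (POP3S) on well-known port 995; placeholder host and credentials.
mailbox = poplib.POP3_SSL("pop.example.com", 995)
mailbox.user("jdoe")
mailbox.pass_("secret")

count, size = mailbox.stat()                 # messages currently in the maildrop
print(count, "messages,", size, "bytes")

if count:
    for line in mailbox.retr(1)[1][:5]:      # retrieve message number 1, print its first lines
        print(line.decode(errors="replace"))
    # mailbox.dele(1)                        # a download-and-delete client would mark it here

mailbox.quit()                               # deletions (if any) are applied when the session ends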
Internet Message Access Protocol
IMAP – In computing, the Internet Message Access Protocol (IMAP) is an Internet standard protocol used by e-mail clients to retrieve e-mail messages from a mail server over a TCP/IP connection. IMAP is defined by RFC 3501. IMAP was designed with the goal of permitting complete management of an email box by multiple email clients, therefore clients generally leave messages on the server until the user explicitly deletes them. An IMAP server typically listens on port number 143. IMAP over SSL (IMAPS) is assigned the port number 993. Virtually all modern e-mail clients and servers support IMAP. IMAP and the earlier POP3 (Post Office Protocol) are the two most prevalent standard protocols for email retrieval,[2] with many webmail service providers such as Gmail, Outlook.com and Yahoo! Mail also providing support for either IMAP or POP3.
E-mail protocols of IMAP
The Internet Message Access Protocol is an Application Layer Internet protocol that allows an e-mail client to access e-mail on a remote mail server. The current version, IMAP version 4 revision 1 (IMAP4rev1), is defined by RFC 3501. An IMAP server typically listens on well-known port 143. IMAP over SSL (IMAPS) is assigned well-known port number 993.
IMAP supports both on-line and off-line modes of operation. E-mail clients using IMAP generally leave messages on the server until the user explicitly deletes them. This and other characteristics of IMAP operation allow multiple clients to manage the same mailbox. Most e-mail clients support IMAP in addition to Post Office Protocol (POP) to retrieve messages; however, fewer e-mail services support IMAP.[3] IMAP offers access to the mail storage. Clients may store local copies of the messages, but these are considered to be a temporary cache.
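A minimal IMAP sketch using Python's standard imaplib module; the server name, credentials, and mailbox name are placeholders. Note that messages are searched and fetched on the server rather than downloaded and deleted, so several clients can manage the same mailbox.
import imaplib

# IMAP over SSL (IMAPS) on well-known port 993; placeholder host and credentials.
with imaplib.IMAP4_SSL("imap.example.com", 993) as mailbox:
    mailbox.login("jdoe", "secret")
    mailbox.select("INBOX")                         # messages stay on the server

    status, data = mailbox.search(None, "UNSEEN")   # ask the server instead of downloading everything
    for num in data[0].split():
        status, parts = mailbox.fetch(num, "(RFC822.HEADER)")
        print(parts[0][1].decode(errors="replace"))  # just the headers of each unseen message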
Incoming e-mail messages are sent to an e-mail server that stores messages in the recipient’s e-mail box. The user retrieves the messages with an e-mail client that uses one of a number of e-mail retrieval protocols. Some clients and servers preferentially use vendor-specific, proprietary protocols, but most support SMTP for sending e-mail and POP and IMAP for retrieving e-mail, allowing interoperability with other servers and clients. For example, Microsoft‘s Outlook client uses MAPI, a Microsoft proprietary protocol, to communicate with a Microsoft Exchange Server. IBM‘s Notes client works in a similar fashion when communicating with a Domino server. All of these products also support POP, IMAP, and outgoing SMTP. Support for the Internet standard protocols[citation needed] allows many e-mail clients such as Pegasus Mail or Mozilla Thunderbird to access these servers, and allows the clients to be used with other servers.
Simple Mail Transfer Protocol
SMTP – Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (email) transmission. First defined by RFC 821 in 1982, it was last updated in 2008 with Extended SMTP additions by RFC 5321, which is the protocol in widespread use today. Although electronic mail servers and other mail transfer agents use SMTP to send and receive mail messages, user-level client mail applications typically use SMTP only for sending messages to a mail server for relaying. For retrieving messages, client applications usually use either IMAP or POP3. SMTP communication between mail servers uses port 25. Mail clients, on the other hand, often submit outgoing emails to a mail server on port 587. Despite being deprecated, mail providers sometimes still permit the use of nonstandard port 465 for this purpose. SMTP connections can be secured by SSL/TLS, known as SMTPS, or a plain connection can be upgraded using STARTTLS.[1] Although proprietary systems (such as Microsoft Exchange and IBM Notes) and webmail systems (such as Outlook.com, Gmail and Yahoo! Mail) use their own non-standard protocols to access mail box accounts on their own mail servers, all use SMTP when sending or receiving email from outside their own systems.
Mail processing model of SMTP
Email is submitted by a mail client (mail user agent, MUA, also known as an email client, e.g. Outlook 2010) to a mail server (mail submission agent, MSA) using SMTP on TCP port 587. A message submission agent, or mail submission agent, is a computer program or software agent that receives electronic mail messages from a mail user agent and cooperates with a mail transfer agent (MTA) for delivery of the mail. It uses ESMTP, a variant of the Simple Mail Transfer Protocol (SMTP), as specified in RFC 6409.[1] Many MTAs perform the function of an MSA as well, but there are also programs that are specially designed as MSAs without full MTA functionality. Historically, in Internet mail, both MTA and MSA functions use port number 25, but the official port for MSAs is 587.[1] The MTA accepts incoming mail, while the MSA accepts outgoing mail. Most mailbox providers still allow submission on traditional port 25.
The MSA delivers the mail to its mail transfer agent (mail transfer agent, MTA). Often, these two agents are instances of the same software launched with different options on the same machine. Local processing can be done either on a single machine, or split among multiple machines; mail agent processes on one machine can share files, but if processing is on multiple machines, they transfer messages between each other using SMTP, where each machine is configured to use the next machine as a smart host. Each process is an MTA (an SMTP server) in its own right.
The boundary MTA uses the Domain name system (DNS) to look up the mail exchanger record (MX record) for the recipient’s domain (the part of the email address on the right of @). The MX record contains the name of the target host. Based on the target host and other factors, the MTA selects an exchange server: see the article MX record. The MTA connects to the exchange server as an SMTP client.
Message transfer can occur in a single connection between two MTAs, or in a series of hops through intermediary systems. A receiving SMTP server may be the ultimate destination, an intermediate “relay” (that is, it stores and forwards the message) or a “gateway” (that is, it may forward the message using some protocol other than SMTP). Each hop is a formal handoff of responsibility for the message, whereby the receiving server must either deliver the message or properly report the failure to do so.[15]
Once the final hop accepts the incoming message, it hands it to a mail delivery agent (MDA) for local delivery. An MDA saves messages in the relevant mailbox format. As with sending, this reception can be done using one or multiple computers. An MDA may deliver messages directly to storage, or forward them over a network using SMTP or another protocol such as Local Mail Transfer Protocol (LMTP), a derivative of SMTP designed for this purpose.
Once delivered to the local mail server, the mail is stored for batch retrieval by authenticated mail clients (MUAs). Mail is retrieved by end-user applications, called email clients, using Internet Message Access Protocol (IMAP), a protocol that both facilitates access to mail and manages stored mail, or the Post Office Protocol (POP) which typically uses the traditional mbox mail file format or a proprietary system such as Microsoft Exchange/Outlook or Lotus Notes/Domino. Webmail clients may use either method, but the retrieval protocol is often not a formal standard.
SMTP defines message transport, not the message content. Thus, it defines the mail envelope and its parameters, such as the envelope sender, but not the header (except trace information) nor the body of the message itself. STD 10 and RFC 5321 define SMTP (the envelope), while STD 11 and RFC 5322 define the message (header and body), formally referred to as the Internet Message Format.
Exchange ActiveSync
EAS – Exchange ActiveSync (commonly known as EAS) is a communications protocol designed for the synchronization of email, contacts, calendar, tasks, and notes from a messaging server to a smartphone or other mobile devices. The protocol also provides mobile device management and policy controls. The protocol is based on XML. The mobile device communicates over HTTP or HTTPS. Originally branded as AirSync and only supporting Microsoft Exchange Servers and Pocket PC devices, the protocol has since become a de facto standard for synchronization between groupware and mobile devices. Microsoft licenses the technology. Support for EAS is now implemented in a number of competing collaboration platforms, including GroupWise with the Novell GroupWise Mobility Services software and Lotus Notes with IBM Notes Traveler. Google previously offered support for the protocol for personal Gmail and free Google Apps accounts, but began removing support from all but paid Google Apps for Work subscriptions in 2013. Beyond on-premises installations of Exchange, the various personal and enterprise hosted services from Microsoft also utilize EAS, including Outlook.com and Office 365. In addition to support on Windows Phone, EAS client support is included on Android, iOS, BlackBerry 10 smartphones and the BlackBerry PlayBook tablet computer. The built-in email application for Windows 8 desktop, Mail app, also supports the protocol.
SysPrep is supported for all installations of SQL Server. SysPrep now supports failover cluster installations. For more information, see Considerations for Installing SQL Server Using SysPrep and Install SQL Server using SysPrep.
Hardware and Software Requirements for Installing SQL Server
The article lists the minimum hardware and software requirements to install and run SQL Server on the Windows operating system.
This article applies to SQL Server 2016 and later.
The following considerations apply to all editions:
We recommend that you run SQL Server on computers with the NTFS or ReFS file systems. Installing SQL Server on a computer with the FAT32 file system is supported but not recommended, as it is less secure than the NTFS or ReFS file systems.
SQL Server Setup will block installations on read-only, mapped, or compressed drives.
Installation fails if you launch setup through Remote Desktop Connection with the media on a local resource in the RDC client. To install remotely, the media must be on a network share or local to the physical or virtual machine. SQL Server installation media may be either on a network share, a mapped drive, a local drive, or presented as an ISO to a virtual machine.
SQL Server Management Studio installation requires installing .NET 4.6.1 as a prerequisite. .NET 4.6.1 will be automatically installed by setup when SQL Server Management Studio is selected.
SQL Server Setup installs the following software components required by the product:
SQL Server Native Client
SQL Server Setup support files
Hardware and Software Requirements
The following requirements apply to all installations:
Component
Requirement
.NET Framework
SQL Server 2016 RC1 and later require .NET Framework 4.6 for the Database Engine, Master Data Services, or Replication. SQL Server 2016 setup automatically installs .NET Framework. You can also manually install .NET Framework from Microsoft .NET Framework 4.6 (Web Installer) for Windows. For more information, recommendations, and guidance about .NET Framework 4.6 see .NET Framework Deployment Guide for Developers. Windows 8.1, and Windows Server 2012 R2 require KB2919355 before installing .NET Framework 4.6.
Hard Disk
SQL Server requires a minimum of 6 GB of available hard-disk space. Disk space requirements will vary with the SQL Server components you install. For more information, see Hard Disk Space Requirements later in this article. For information on supported storage types for data files, see Storage Types for Data Files.
Processor, Memory, and Operating System Requirements
The following memory and processor requirements apply to all editions of SQL Server:
Component
Requirement
Memory *
Minimum: Express editions: 512 MB; all other editions: 1 GB.
Recommended: Express editions: 1 GB; all other editions: at least 4 GB, and should be increased as database size increases to ensure optimal performance.
Hard Disk Space Requirements
During installation of SQL Server, Windows Installer creates temporary files on the system drive. Before you run Setup to install or upgrade SQL Server, verify that you have at least 6.0 GB of available disk space on the system drive for these files. This requirement applies even if you install SQL Server components to a non-default drive.
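As a quick, hypothetical check of free space on the system drive before running Setup (assuming the system drive is C and the Storage module is available):
Get-Volume -DriveLetter C | Select-Object DriveLetter, @{Name='FreeGB'; Expression={[math]::Round($_.SizeRemaining / 1GB, 1)}}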
Security Considerations for a SQL Server Installation
Disable NetBIOS and Server Message Block
Servers in the perimeter network should have all unnecessary protocols disabled, including NetBIOS and server message block (SMB).
NetBIOS uses the following ports:
UDP/137 (NetBIOS name service)
UDP/138 (NetBIOS datagram service)
TCP/139 (NetBIOS session service)
SMB uses the following ports:
TCP/139
TCP/445
Web servers and Domain Name System (DNS) servers do not require NetBIOS or SMB. On these servers, disable both protocols to reduce the threat of user enumeration.
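A rough way to confirm whether the NetBIOS and SMB ports listed above are still in use on a server, assuming the NetTCPIP cmdlets available on recent Windows versions:
Get-NetTCPConnection -State Listen | Where-Object { $_.LocalPort -in 139, 445 }
Get-NetUDPEndpoint | Where-Object { $_.LocalPort -in 137, 138 }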
Install SQL Server on Windows
In this exercise, you will install the SQL Server database engine and client tools to access SQL Server.
The Azure VM should be stopped when you have completed a lab so that your subscription is not charged (for free trial subscriptions, this will ensure you will have sufficient credits left to complete the labs over the duration of the course).
Install the SQL Server Database Engine on Windows
In this task, you will install the SQL Server database engine in a Windows virtual machine.
In the SQL Server Installation Center window, on the Installation page, click New SQL Server stand-alone installation or add features to an existing installation and wait for SQL Server setup to start.
On the Product Key page, in the Specify a free edition box, select Evaluation, and then click Next.
On the License Terms page, note the Microsoft Software License Terms, select I accept the license terms, and then click Next.
On the Microsoft Update page ensure that Use Microsoft Update to check for updates is cleared and click Next. Note that this is to save time in the lab and, in a normal installation, you should select the checkbox.
On the Install Rules page note that there is a warning that you will need to configure your firewall and click Next.
On the Feature Selection page, under Instance Features, select Database Engine Services, and then click Next.
On the Instance Configuration page, ensure that Default instance is selected and click Next.
On the Server Configuration page click Next.
On the Database Engine Configuration page, on the Server Configuration tab, in the Authentication Mode section, select Mixed Mode (SQL Server authentication and Windows authentication). Then enter and confirm the password, Pa$$w0rd.
Click Add Current User; this will add the user that you set up for the virtual machine.
On the FILESTREAM tab, ensure that Enable FILESTREAM for Transact-SQL access is not selected, and then click Next.
On the Ready to Install page, review the summary, then click Install and wait for the installation to complete.
On the Complete page, click Close.
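The same Database Engine installation can also be scripted. A hedged sketch of an unattended command line, using documented setup switches but placeholder account names and the lab password from this exercise, might look like the following when run from the installation media folder:
.\setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER /SECURITYMODE=SQL /SAPWD='Pa$$w0rd' /SQLSYSADMINACCOUNTS='DOMAIN\CurrentUser' /IACCEPTSQLSERVERLICENSETERMS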
Install SQL Server Management Tools
In this task, you will install SQL Server management tools.
In the SQL Server Installation Center window, on the Installation page, click Install SQL Server Management Tools.
Click Download SQL Server Management Studio.
Click Run.
When setup is complete, click Close and close Internet Explorer.
To add a shortcut to the taskbar, at the bottom-left corner, click the Windows icon, and then commence typing SQL Server.
In the Apps section, when the search result appears, right-click Microsoft SQL Server Management Studio, and then select Pin to Taskbar.
In the context of IBM mainframe computers, a data set (IBM preferred) or dataset is a computer file having a record organization. Use of this term began with OS/360 and is still used by its successors, including the current z/OS. Documentation for these systems historically preferred this term rather than file.
A data set is typically stored on a direct access storage device (DASD) or magnetic tape; however, unit record devices, such as punch card readers, card punches, and line printers, can provide input/output (I/O) for a data set (file).[1]
Data sets are not unstructured streams of bytes, but rather are organized in various logical record and block structures determined by the DSORG (data set organization), RECFM (record format), and other parameters. These parameters are specified at the time of the data set allocation (creation), for example with Job Control Language (JCL) DD statements. Inside a job they are stored in the Data Control Block (DCB), which is a data structure used to access data sets, for example using access methods.
Migration lets you move the configuration of an existing server to a new server computer. Migrations are often selected over upgrades because the process is less destructive and more recoverable. Depending on the services and functionality that you need to migrate to a new server instance, there will be different requirements and actions you need to take. Fundamentally though, migration can be broken down into three phases.
Pre-migration
Installing and running migration tools, and identifying any prerequisites (for example, drivers and ports).
Preparing the source server (for example, backing up your data).
Preparing the destination server (for example, ensuring drivers and ports are available).
Migration
Exporting or migrating data from the source server.
Importing or migrating data to the destination server.
Post-migration
Verify the destination server is running successfully.
Decommission source server.
Post-Installation Configuration steps
The post-installation process involves configuring all of the other settings that the server requires before it can be deployed to a production environment.
Server Manager is the primary graphical tool used to manage both local and remote servers. With Server Manager you can create groups of servers. This enables you to perform administrative tasks quickly across multiple servers that perform the same role, or are members of the same group. Additionally, Server Manager provides access to many administrative tools.
Windows Server 2012 features are independent components that often support role services or support the server directly. For example, Windows Server Backup is a feature because it only provides backup support for the local server. It is not a resource that other servers on the network can use.
The Add Roles and Features Wizard and the Remove Roles and Features Wizard in Server Manager modifies the features that are installed on the server.
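Roles and features can also be added from PowerShell with the Server Manager cmdlets; for example (the role name here is just an illustration):
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
Get-WindowsFeature | Where-Object Installed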
Which server role enables you to centrally configure, manage, and provide temporary IP addresses and related information for client computers?
Dynamic Host Configuration Protocol (DHCP) Server. The DHCP server enables you to centrally configure, manage, and provide temporary IP addresses and related information for client computers. IP addresses are used to uniquely identify the client computers on your network.
Which server role provides the services that you can use to create and manage virtual machines and their resources?
Hyper-V Server. The Hyper-V Server provides services to create and manage virtual machines and their resources. Each virtual machine is a virtualized computer system that operates in an isolated execution environment. This allows you to run multiple operating systems simultaneously.
Which server role provides a reliable, manageable, and scalable Web application infrastructure?
Web Server (IIS). The Web Server provides a reliable, manageable, and scalable Web application infrastructure. IIS supports hosting of Web content in production environments.
Which server role stores information about objects on the network and makes this information available to users and network administrators?
Active Directory Domain Services (AD DS) Server. The AD DS server stores information about objects on the network and makes this information available to users and network administrators. Servers that run the AD DS Server role are called Domain Controllers. These servers provide network users access to resources through a single logon process.
Which server role allows network administrators to specify the Microsoft updates that should be installed on different computers?
Windows Server Update Services (WSUS) Server. The WSUS server allows network administrators to specify the Microsoft updates that should be installed on different computers. Keeping your computers updated with the latest updates is an important part of securing the network. With WSUS you can automate this process and create different update schedules for your computers.
Which server feature allows multiple servers to work together to provide high availability of server roles?
Failover Clustering. Failover clustering is often used for File Services, virtual machines, database applications, and mail applications.
Which server feature includes snap-ins and command line tools for remotely managing roles and features?
Remote Server Administration Tools (RSAT). RSAT Tools are divided into Feature Administration Tools and Role Administration Tools. Feature Administration Tools include Failover Clustering Tools, IPAM Client, and Network Load Balancing Tools. Role Administration Tools include Hyper-V Management Tools, DHCP Server Tools, and Remote Access Management Tools.
Which server feature distributes network traffic across several servers, using the TCP/IP protocol?
Network Load Balancing (NLB). NLB is particularly useful for ensuring stateless applications, such as Web Servers running IIS, are scalable by adding additional servers as the load increases.
Which server feature includes Windows PowerShell cmdlets that facilitate migration of server roles, operating system settings, files, and shares from computers that are running earlier versions of Windows Server?
Windows Server Migration Tools. Windows Server Migration Tools can also facilitate migration from one computer that is running Windows Server 2012 to another server that is running Windows Server 2012. For example when you are creating a backup server.
Which server feature provides a central framework for managing your IP address space and DHCP and DNS servers?
IP Address Management Server (IPAM). IPAM supports automated discovery of DHCP and DNS servers in the Active Directory forest. IPAM can also track and monitor IPv4 and IPv6 addresses, as well as providing utilization tools.
Subnets identify the network addresses that map computers to AD DS sites. A subnet is a segment of a TCP/IP network to which a set of logical IP addresses are assigned. A site can consist of one or more subnets.
Keep your subnet information up to date
When you design your AD DS site configuration, it’s critical that you correctly map IP subnets to sites. Similarly, if the underlying network configuration changes, make sure that you update the configuration to reflect the new site mapping. Domain controllers use the AD DS subnet information to map client computers and servers to sites. If this mapping isn’t accurate, operations such as logon traffic and applying GPOs are likely to occur across WAN links, and may be disruptive.
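On Windows Server 2012 and later, a subnet-to-site mapping can be created with the Active Directory module; the subnet and site names below are placeholders:
New-ADReplicationSubnet -Name "10.10.1.0/24" -Site "Default-First-Site-Name"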
When to create more OUs?
Although you can manage a small organization without creating additional OUs, even small organizations typically create an OU hierarchy. An OU hierarchy lets you subdivide the administration of your domain for management purposes. There are basically two reasons to create OUs.
Application of GPOs. To group objects together to make it easier to manage them by applying Group Policy Objects (GPOs) to the whole group. You can link GPOs to the OU, and the settings apply to all objects within the OU. For example, you create an OU for contractors who have different security requirements than full-time employees.
Delegation of control. To delegate administrative control of objects within the OU. You can assign management permissions on an OU, thereby delegating control of that OU to an AD DS user or group. For example, you create an OU to manage a satellite office in a different geographical location. Then, you delegate control of the OU to a group.
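As a small illustration of both reasons, an OU can be created and an existing GPO linked to it from PowerShell (ActiveDirectory and GroupPolicy modules assumed; all names below are hypothetical and the GPO must already exist):
New-ADOrganizationalUnit -Name "Contractors" -Path "DC=contoso,DC=com"
New-GPLink -Name "Contractor Security Settings" -Target "OU=Contractors,DC=contoso,DC=com"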
What is Active Directory Domain Services (AD DS)?
Active Directory Domain Services (AD DS) is a scalable, secure, and manageable infrastructure for user and resource management. AD DS is a Windows Server role that's installed and hosted on a server known as a domain controller. AD DS uses Lightweight Directory Access Protocol (LDAP) to access, search, and change the directory service. LDAP is based on the X.500 standard and TCP/IP.
AD DS provides a centralized system for managing users, computers, and other resources on the network. AD DS features a centralized directory, single sign-on access, integrated security, scalability, and a common management interface.
The Enable-PSRemoting cmdlet configures the computer to receive Windows PowerShell remote commands that are sent by using WS-Management technology. You can use it to enable Windows PowerShell remoting on other supported versions of Windows.
You need to run this command only once on each computer that will receive commands. You do not need to run it on computers that only send commands. Because the configuration activates listeners, it is prudent to run it only where it is needed.
To run this cmdlet, start Windows PowerShell with the “Run as administrator” option.
CAUTION: On systems that have both Windows PowerShell 3.0 and the Windows PowerShell 2.0 engine, do not use Windows PowerShell 2.0 to run the Enable-PSRemoting and Disable-PSRemoting cmdlets. The commands might appear to succeed, but the remoting is not configured correctly. Remote commands, and later attempts to enable and disable remoting, are likely to fail.
In Windows PowerShell 3.0, Enable-PSRemoting creates the following firewall exceptions for WS-Management communications. On server versions of Windows, Enable-PSRemoting creates firewall rules for private and domain networks that allow remote access, and creates a firewall rule for public networks that allows remote access only from computers in the same local subnet. On client versions of Windows, Enable-PSRemoting in Windows PowerShell 3.0 creates firewall rules for private and domain networks that allow unrestricted remote access. To create a firewall rule for public networks that allows remote access from the same local subnet, use the SkipNetworkProfileCheck parameter. On client or server versions of Windows, to create a firewall rule for public networks that removes the local subnet restriction and allows remote access, use the Set-NetFirewallRule cmdlet in the NetSecurity module to run the following command: Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress Any
In Windows PowerShell 2.0, Enable-PSRemoting creates the following firewall exceptions for WS-Management communications. On server versions of Windows, it creates firewall rules for all networks that allow remote access. On client versions of Windows, Enable-PSRemoting in Windows PowerShell 2.0 creates a firewall exception only for domain and private network locations. To minimize security risks, Enable-PSRemoting does not create a firewall rule for public networks on client versions of Windows. When the current network location is public, Enable-PSRemoting returns the following message: "Unable to check the status of the firewall."
Beginning in Windows PowerShell 3.0, Enable-PSRemoting enables all session configurations by setting the value of the Enabled property of all session configurations (WSMan:\<ComputerName>\Plugin\<SessionConfigurationName>\Enabled) to True ($true).
In Windows PowerShell 2.0, Enable-PSRemoting removes the Deny_All setting from the security descriptor of session configurations. In Windows PowerShell 3.0, Enable-PSRemoting removes the Deny_All and Network_Deny_All settings, thereby providing remote access to session configurations that were reserved for local use.
Enable-PSRemoting -Force
This command configures the computer to receive remote commands. It uses the Force parameter to suppress the user prompts.
Enable-PSRemoting -SkipNetworkProfileCheck -Force
Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress Any
This example shows how to allow remote access from public networks on client versions of Windows. Before using these commands, analyze the security setting and verify that the computer network will be safe from harm.
The first command enables remoting in Windows PowerShell. By default, this creates network rules that allow remote access from private and domain networks. The command uses the SkipNetworkProfileCheck parameter to allow remote access from public networks in the same local subnet. The command uses the Force parameter to suppress confirmation messages.
The SkipNetworkProfileCheck parameter has no effect on server versions of Windows, which allow remote access from public networks in the same local subnet by default.
The second command eliminates the subnet restriction. The command uses the Set-NetFirewallRule cmdlet in the NetSecurity module to add a firewall rule that allows remote access from public networks from any remote location, including locations in different subnets.
-SkipNetworkProfileCheck
Enables remoting on client versions of Windows when the computer is on a public network. This parameter enables a firewall rule for public networks that allows remote access only from computers in the same local subnet.
This parameter has no effect on server versions of Windows, which, by default, have a local subnet firewall rule for public networks. If the local subnet firewall rule is disabled on a server version of Windows, Enable-PSRemoting re-enables it, regardless of the value of this parameter.
To remove the local subnet restriction and enable remote access from all locations on public networks, use the Set-NetFirewallRule cmdlet in the NetSecurity module.
How to Run PowerShell Commands on Remote Computers
PowerShell Remoting lets you run PowerShell commands or access full PowerShell sessions on remote Windows systems. It’s similar to SSH for accessing remote terminals on other operating systems.
PowerShell is locked-down by default, so you’ll have to enable PowerShell Remoting before using it. This setup process is a bit more complex if you’re using a workgroup instead of a domain—for example, on a home network—but we’ll walk you through it.
Enable PowerShell Remoting on the PC You Want to Access Remotely
Your first step is to enable PowerShell Remoting on the PC to which you want to make remote connections. On that PC, you’ll need to open PowerShell with administrative privileges.
-In Windows 10, press Windows+X and then choose PowerShell (Admin) from the Power User menu.
-In Windows 7 or 8, hit Start, and then type “powershell.” Right-click the result and choose “Run as administrator.”
-In the PowerShell window, type the following cmdlet (PowerShell’s name for a command), and then hit Enter:
Enable-PSRemoting -Force
This command starts the WinRM service, sets it to start automatically with your system, and creates a firewall rule that allows incoming connections. The -Force part of the cmdlet tells PowerShell to perform these actions without prompting you for each step.
If your PCs are part of a domain, that’s all the setup you have to do. You can skip on ahead to testing your connection. If your computers are part of a workgroup—which they probably are on a home or small business network—you have a bit more setup work to do.
Note: Your success in setting up remoting in a domain environment depends entirely on your network’s setup. Remoting might be disabled—or even enabled—automatically by group policy configured by an admin. You might also not have the permissions you need to run PowerShell as an administrator. As always, check with your admins before you try anything like this. They might have good reasons for not allowing the practice, or they might be willing to set it up for you.
Set Up Your Workgroup
If your computers aren’t on a domain, you need to perform a few more steps to get things set up. You should have already enabled Remoting on the PC to which you want to connect, as we described in the previous section.
Note: For PowerShell Remoting to work in a workgroup environment, you must configure your network as a private, not public, network.
Next, you need to configure the TrustedHosts setting on both the PC to which you want to connect and the PC (or PCs) you want to connect from, so the computers will trust each other. You can do this in one of two ways.
If you’re on a home network where you want to go ahead and trust any PC to connect remotely, you can type the following cmdlet in PowerShell (again, you’ll need to run it as Administrator).
Set-Item wsman:\localhost\client\trustedhosts *
The asterisk is a wildcard symbol for all PCs. If instead you want to restrict computers that can connect, you can replace the asterisk with a comma-separated list of IP addresses or computer names for approved PCs.
After running that command, you’ll need to restart the WinRM service so your new settings take effect. Type the following cmdlet and then hit Enter:
Restart-Service WinRM
And remember, you’ll need to run those two cmdlets on the PC to which you want to connect, as well as on any PCs you want to connect from.
Test the Connection
Now that you’ve got your PCs set up for PowerShell Remoting, it’s time to test the connection. On the PC you want to access the remote system from, type the following cmdlet into PowerShell (replacing “COMPUTER” with the name or IP address of the remote PC), and then hit Enter:
Test-WsMan COMPUTER
This simple command tests whether the WinRM service is running on the remote PC. If it completes successfully, you’ll see information about the remote computer’s WinRM service in the window—signifying that WinRM is enabled and your PC can communicate. If the command fails, you’ll see an error message instead.
Execute a Single Remote Command
To run a command on the remote system, use the Invoke-Command cmdlet using the following syntax:
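Invoke-Command -ComputerName COMPUTER -ScriptBlock { COMMAND } -Credential USERNAME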
“COMPUTER” represents the remote PC’s name or IP address. “COMMAND” is the command you want to run. “USERNAME” is the username you want to run the command as on the remote computer. You’ll be prompted to enter a password for the username.
Here’s an example. I want to view the contents of the C:\ directory on a remote computer with the IP address 10.0.0.22. I want to use the username “wjgle,” so I would use the following command:
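Invoke-Command -ComputerName 10.0.0.22 -ScriptBlock { Get-ChildItem C:\ } -Credential wjgle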
If you have several cmdlets you want to run on the remote PC, instead of repeatedly typing the Invoke-Command cmdlet and the remote IP address, you can start a remote session instead. Just type the following cmdlet and then hit Enter:
Enter-PSSession -ComputerName COMPUTER -Credential USER
Again, replace “COMPUTER” with the name or IP address of the remote PC and replace “USER” with the name of the user account you want to invoke.
Your prompt changes to indicate the remote computer to which you’re connected, and you can execute any number of PowerShell cmdlets directly on the remote system.
Enable-PSRemoting
Enable-PSRemoting configures a computer to receive PowerShell remote commands sent with WS-Management technology.
PS Remoting only needs to be enabled once on each computer that will receive commands.
Computers that only send commands do not need to have PS Remoting enabled; because the configuration activates listeners (and starts the WinRM service), it is prudent to run it only where needed.
When setting the WSMan TrustedHosts value, the comma-separated list can contain IP addresses or computer names, or a * wildcard to match all hosts.
After changing TrustedHosts, restart the WinRM service: Restart-Service WinRM
To view the current trusted hosts: Get-Item WSMan:\localhost\Client\TrustedHosts
Examples
Configure the local computer to receive remote commands:
PS C:\> Enable-PSRemoting
Configure the computer to receive remote commands & suppress user prompts:
PS C:\> Enable-PSRemoting -Force
Configure the remote computer workstation64 to receive remote commands, via psexec. If you are running this from an account which is NOT a domain administrator, then specify the username/password of an account with admin rights to the remote machine:
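One possible form, assuming the Sysinternals PsExec tool is available and using placeholder credentials:
psexec.exe \\workstation64 -u DOMAIN\AdminUser -p P@ssw0rd -h powershell.exe -Command "Enable-PSRemoting -Force"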
Question: PowerShell: Configure WinRM and enable PSRemoting
1 – Enable WinRM
The first thing to do before starting to manage your server remotely is to enable this function on your server. For this, you need to use the Windows Remote Management (WinRM) service. WinRM is the service that allows you to use the WS-Management protocol necessary for PowerShell remoting.
Enabling WinRM is quite simple; you just need to run this command in a PowerShell prompt:
Winrm quickconfig or winrm qc
It will display a message telling you that WinRM is already configured; otherwise, it will ask you to configure it.
2 – Enable PSRemoting
Once you have started your WinRM service, you must configure PowerShell itself to allow the remoting:
Enable-PSRemoting
3 – TrustedHosts file configuration
3.1 – Add server to the TrustedHosts file
The configuration above implies a domain environment. If you are working with servers which are not in your domain or in a trusted domain, you will have to add them in the TrustedHosts list of your local server. To do so, you must run the command below:
winrm s winrm/config/client '@{TrustedHosts="MyServerName"}'
You just need to replace "MyServerName" with the name of your server.
Another way to add a server to this file is by using the Set-Item cmdlet, as below:
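Set-Item WSMan:\localhost\Client\TrustedHosts -Value "Server01,Server02"   # Server01 and Server02 are placeholder names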
In the command above you can see that I added two values between the quotes " ". If you want to add more than one server to this file, you must add them separated by a comma. Be careful, though: if one day you decide to add a new server and you run the same command with only one server name, it will overwrite the existing file. You need to include all the server names that must be in this file.
PowerShell will also prompt you to warn about the risks of adding a computer which is not trustworthy in this file.
And if I do a Get-Item, I should see my two servers:
Get-Item WSMan:\localhost\Client\TrustedHosts |fl
If you want to trust every server that is not in your domain, even though that is far, far... far from being secure, you can use the wildcard, like this:
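Set-Item wsman:\localhost\client\trustedhosts *
You can then check the result with the Get-Item cmdlet: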
Get-Item WSMan:\localhost\Client\TrustedHosts |fl Name, Value
And of course, sometimes it can also be interesting to be able to check this TrustedHosts file to see what is inside. You can also use PowerShell for this by using the Get-Item cmdlet:
Get-Item WSMan:\localhost\Client\TrustedHosts
3.2 – Remove servers from the TrustedHosts file
While you can easily add servers to your TrustedHosts file it can also be interesting to be able to remove a server from it, for security reasons, if you don’t need to use it anymore.
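One way to do this, assuming the hypothetical names Server01 and Server02 are currently in the list and you want to drop Server02, is simply to overwrite TrustedHosts with only the entries you want to keep:
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "Server01"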
And by using this command, we can remove one server while still keeping the other servers in the list.
And there we are! Your PowerShell is now configured to handle the remote management.
Question: PowerShell Remoting Cheatsheet
I have become a big fan of PowerShell Remoting. I find myself using it for both penetration testing and standard management tasks. In this blog I'll share a basic PowerShell Remoting cheatsheet so you can too.
Enabling PowerShell Remoting
Before we get started let’s make sure PowerShell Remoting is all setup on your system.
In a PowerShell console running as administrator, enable PowerShell Remoting.
Enable-PSRemoting -Force
This should be enough, but if you have to troubleshoot you can use the commands below.
Make sure the WinRM service is set up to start automatically.
# Set start mode to automatic
Set-Service WinRM -StartupType Automatic
# Verify start mode and state - it should be running
Get-WmiObject -Class win32_service | Where-Object {$_.name -like "WinRM"}
Set all remote hosts to trusted. Note: You may want to unset this later.
# Trust all hosts
Set-Item WSMan:localhost\client\trustedhosts -value *
# Verify trusted hosts configuration
Get-Item WSMan:\localhost\Client\TrustedHosts
Executing Remote Commands with PowerShell Remoting
Executing a Single Command on a Remote System
The "Invoke-Command" command can be used to run commands on remote systems. It can run as the current user or using alternative credentials from a non-domain system. Examples below.
Invoke-Command -ComputerName MyServer1 -ScriptBlock {Hostname}
Invoke-Command -ComputerName MyServer1 -Credential demo\serveradmin -ScriptBlock {Hostname}
If the ActiveDirectory PowerShell module is installed it's possible to execute commands on many systems very quickly using the pipeline. Below is a basic example.
Get-ADComputer -Filter * -properties name | select @{Name="computername";Expression={$_."name"}} | Invoke-Command -ScriptBlock {hostname}
Sometimes it's nice to run scripts stored locally on your system against remote systems. Below are a few basic examples.
Invoke-Command -ComputerName MyServer1 -FilePath C:\pentest\Invoke-Mimikatz.ps1
Invoke-Command -ComputerName MyServer1 -FilePath C:\pentest\Invoke-Mimikatz.ps1 -Credential demo\serveradmin
Also, if you're dynamically generating commands or functions to be passed to remote systems, you can use Invoke-Expression through Invoke-Command as shown below.
$MyCommand = "hostname"
$MyFunction = "function evil {write-host `"Getting evil...`";iex -command $MyCommand};evil"
invoke-command -ComputerName MyServer1 -Credential demo\serveradmin -ScriptBlock {Invoke-Expression -Command "$args"} -ArgumentList $MyFunction
Establishing an Interactive PowerShell Console on a Remote System
An interactive PowerShell console can be obtained on a remote system using the "Enter-PsSession" command. It feels a little like SSH. Similar to "Invoke-Command", "Enter-PsSession" can be run as the current user or using alternative credentials from a non-domain system. Examples below.
Enter-PsSession -ComputerName server1.domain.com
Enter-PsSession -ComputerName server1.domain.com -Credential domain\serveradmin
If you want out of the PowerShell session, the "Exit-PsSession" command can be used.
Exit-PsSession
Creating Background Sessions
There is another cool feature of PowerShell Remoting that allows users to create background sessions using the "New-PsSession" command. Background sessions can come in handy if you want to execute multiple commands against many systems. Similar to the other commands, the "New-PsSession" command can run as the current user or using alternative credentials from a non-domain system. Examples below.
New-PSSession -ComputerName server1.domain.com
New-PSSession -ComputerName server1.domain.com -Credential domain\serveradmin
If the ActiveDirectory PowerShell module is installed it's possible to create background sessions for many systems at a time (however, this can be done in many ways). Below is a command example showing how to create background sessions for all of the domain systems. The example shows how to do this from a non-domain system using alternative domain credentials.
New-PSDrive -PSProvider ActiveDirectory -Name RemoteADS -Root "" -Server a.b.c.d -credential domain\user
cd RemoteADS:
Get-ADComputer -Filter * -Properties name | select @{Name="ComputerName";Expression={$_."name"}} | New-PSSession
Listing Background Sessions
Once a few sessions have been established, the "Get-PsSession" command can be used to view them.
Get-PSSession
Interacting with Background Sessions
The first time I used this feature I felt like I was working with Metasploit sessions, but these sessions are a little more stable. Below is an example showing how to interact with an active session using the session id.
Enter-PsSession -id 3
To exit the session, use the "Exit-PsSession" command. This will send the session into the background again.
Exit-PsSession
Executing Commands through Background Sessions
If your goal is to execute a command on all active sessions, the "Invoke-Command" and "Get-PsSession" commands can be used together. Below is an example.
Invoke-Command -Session (Get-PSSession) -ScriptBlock {Hostname}
Removing Background Sessions
Finally, to disconnect all of your active sessions, the "Disconnect-PsSession" command can be used as shown below; to remove them entirely, use the "Remove-PsSession" command instead.
Get-PSSession | Disconnect-PSSession
Wrap Up
Naturally PowerShell Remoting offers a lot of options for both administrators and penetration testers. Regardless of your use case I think it boils down to this:
Use “Invoke-Command” if you’re only going to run one command against a system
Use “Enter-PSSession” if you want to interact with a single system
Use PowerShell sessions when you’re going to run multiple commands on multiple systems
Hopefully this cheatsheet will be useful. Have fun and hack responsibly.
So it's been an interesting week for me at work as we brought a new customer online. It's really great to be working with a dynamic team in a rapidly evolving environment. One of the things that's keeping us ahead of the game is relying on PowerShell when performing repetitive tasks. In this week's article I'm going to talk about a set of functions I had to come up with this week to start PSRemoting remotely.
I'd seen a bunch of postings where people used Schtasks.exe and/or PSExec to enable PSRemoting, but I didn't like either of those approaches. I wanted to do it in a more native PowerShell way. I got a lot of help from Thomas Lee's blog, where he talked about writing registry keys remotely using PowerShell.
From there I went on to write a set of functions, 5 total, that will perform all the functions required to enable PSRemoting. In order to accomplish the configuration for the WinRM service and the Windows Firewall remotely, I had the functions write entries in the policy node of the registry.
Set-WinRMListener works by creating 3 registry keys that configure the WinRM service when it restarts.
Restart-WinRM uses Get-WmiObject to stop and start the WinRM service (see the sketch after this list).
Set-WinRMStartup sets the startup type of the WinRM service to automatic.
Set-WinRMFirewallRule creates 2 registry keys to configure the firewall exemptions required by PSRemoting.
Restart-WindowsFirewall restarts the Windows Firewall service to allow the registry configurations to take hold.
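As a rough illustration of the Restart-WinRM approach (not the author's published code), a remote service can be bounced through WMI; the computer name is a placeholder and remote WMI/DCOM access is assumed to be allowed:
$svc = Get-WmiObject -Class Win32_Service -ComputerName "RemoteServer" -Filter "Name='WinRM'"   # "RemoteServer" is a placeholder
$svc.StopService() | Out-Null    # invoke the WMI StopService method
$svc.StartService() | Out-Null   # then start the service again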
Anyway, all the functions are defined in the script on the TechNet gallery. I hope you guys like the functions and get some use out of them. Go ahead and leave a comment or email me if you’re interested in further explanation. Also, feel free to leave comments on the TechNet entry.
Question: Enable-PSRemoting
The Enable-PSRemoting cmdlet configures the computer to receive Windows PowerShell remote commands that are sent by using the WS-Management technology.
By default, on Windows Server 2012, Windows PowerShell remoting is enabled. You can use Enable-PSRemoting to enable Windows PowerShell remoting on other supported versions of Windows and to re-enable remoting on Windows Server 2012 if it becomes disabled.
You have to run this command only one time on each computer that will receive commands. You do not have to run it on computers that only send commands. Because the configuration starts listeners, it is prudent to run it only where it is needed.
Beginning in Windows PowerShell 3.0, the Enable-PSRemoting cmdlet can enable Windows PowerShell remoting on client versions of Windows when the computer is on a public network. For more information, see the description of the SkipNetworkProfileCheck parameter.
The Enable-PSRemoting cmdlet performs the following operations:
— Runs the Set-WSManQuickConfig cmdlet, which performs the following tasks:
—– Starts the WinRM service.
—– Sets the startup type on the WinRM service to Automatic.
—– Creates a listener to accept requests on any IP address, if one does not already exist.
—– Enables a firewall exception for WS-Management communications.
—– Registers the Microsoft.PowerShell and Microsoft.PowerShell.Workflow session configurations, if they are not already registered.
—– Registers the Microsoft.PowerShell32 session configuration on 64-bit computers, if it is not already registered.
—– Enables all session configurations.
—– Changes the security descriptor of all session configurations to allow remote access.
—– Restarts the WinRM service to make the preceding changes effective.
To run this cmdlet, start Windows PowerShell by using the Run as administrator option.
CAUTION: On systems that have both Windows PowerShell 3.0 and Windows PowerShell 2.0, do not use Windows PowerShell 2.0 to run the Enable-PSRemoting and Disable-PSRemoting cmdlets. The commands might appear to succeed, but the remoting is not configured correctly. Remote commands and later attempts to enable and disable remoting, are likely to fail.
Examples
Configure a computer to receive remote commands:
PS C:\> Enable-PSRemoting
This command configures the computer to receive remote commands.
Configure a computer to receive remote commands without a confirmation prompt:
PS C:\> Enable-PSRemoting -Force
This command configures the computer to receive remote commands. It uses the Force parameter to suppress the user prompts.
Allow remote access on clients:
PS C:\> Enable-PSRemoting -SkipNetworkProfileCheck -Force
PS C:\> Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress Any
This example shows how to allow remote access from public networks on client versions of the Windows operating system. Before using these commands, analyze the security setting and verify that the computer network will be safe from harm. The first command enables remoting in Windows PowerShell. By default, this creates network rules that allow remote access from private and domain networks. The command uses the SkipNetworkProfileCheck parameter to allow remote access from public networks in the same local subnet. The command specifies the Force parameter to suppress confirmation messages. The SkipNetworkProfileCheck parameter does not affect server versions of the Windows operating system, which allow remote access from public networks in the same local subnet by default. The second command eliminates the subnet restriction. The command uses the Set-NetFirewallRule cmdlet in the NetSecurity module to add a firewall rule that allows remote access from public networks from any remote location. This includes locations in different subnets.
IT organizations need tools to charge back business units that they support while providing the business units with the right amount of resources to match their needs. For hosting providers, it is equally important to issue chargebacks based on the amount of usage by each customer.
To implement advanced billing strategies that measure both the assigned capacity of a resource and its actual usage, earlier versions of Hyper-V required users to develop their own chargeback solutions that polled and aggregated performance counters. These solutions could be expensive to develop and sometimes led to loss of historical data.
To assist with more accurate, streamlined chargebacks while protecting historical information, Hyper-V in Windows Server 2012 introduces Resource Metering, a feature that allows customers to create cost-effective, usage-based billing solutions. With this feature, service providers can choose the best billing strategy for their business model, and independent software vendors can develop more reliable, end-to-end chargeback solutions on top of Hyper-V.
Key benefits
Hyper-V Resource Metering in Windows Server 2012 allows organizations to avoid the expense and complexity associated with building in-house metering solutions to track usage within specific business units. It enables hosting providers to quickly and cost-efficiently create a more advanced, reliable, usage-based billing solution that adjusts to the provider’s business model and strategy.
Use of network metering port ACLs
Enterprises pay for the Internet traffic in and out of their data centers, but not for the network traffic within their data centers. For this reason, providers generally consider Internet and intranet traffic separately for the purposes of billing. To differentiate between Internet and intranet traffic, providers can measure incoming and outgoing network traffic for any IP address range, by using network metering port ACLs.
Virtual machine metrics
Windows Server 2012 provides two options for administrators to obtain historical data on a client’s use of virtual machine resources: Hyper-V cmdlets in Windows PowerShell and the new APIs in the Virtualization WMI provider. These tools expose the metrics for the following resources used by a virtual machine during a specific period of time:
Average CPU usage, measured in megahertz over a period of time.
Average physical memory usage, measured in megabytes.
Minimum memory usage (lowest amount of physical memory).
Maximum memory usage (highest amount of physical memory).
Maximum amount of disk space allocated to a virtual machine.
Total incoming network traffic, measured in megabytes, for a virtual network adapter.
Total outgoing network traffic, measured in megabytes, for a virtual network adapter.
Movement of virtual machines between Hyper-V hosts—for example, through live, offline, or storage migrations—does not affect the collected data.
Hi, I’m Lalithra Fernando, a program manager on the Hyper-V team, working in various areas including clustering and authorization, as well as with our Hyper-V MVPs. In this post, I’ll be talking about resource metering, a new feature in Hyper-V in Windows Server 2012.
As you’ve probably heard by now, Windows Server 2012 is a great platform for the private cloud. When we began planning this release, we realized that one of the things you need in order to run a cloud is to be able to charge your users for the resources they use.
This is the need resource metering fills. It allows you to measure the resource utilization of your virtual machines. You can use this information as a platform for your own dynamic chargeback solutions, where you can charge customers based on the resources they use instead of a flat upfront cost, or to plan your hosting capacity appropriately.
There are four resources that you can measure: your CPU, memory, network, and storage utilization. We measure these resources over the period of time between when you measure and when you last reset metering.
CPU (MHz): We report the average utilization in megahertz.
Now, you're probably wondering why we don't report this as a percentage. After all, that's what we do in Hyper-V Manager. Well, we know that you like to move your virtual machines. With Windows Server 2012, you can live migrate your virtual machines all over the place. Naturally, the record of how many resources your virtual machine has used moves with it.
We want the virtual machine’s CPU utilization to make sense across multiple machines. If we report a percentage and you move the virtual machine to a host with different processing capabilities, it’s no longer clear what the percentage refers to.
Instead, we report in megahertz. For example, if your virtual machine had an average CPU utilization of 50% over the past billing cycle on a host with a CPU running at 3GHz, we would report 1500MHz.
If your virtual machine spent one hour on a host with a 3GHz CPU and used 50%, and another hour on a host with a 1GHz CPU and used 75%, we would report 1125 MHz. Here I am converting the CPU capacity from GHz to MHz and figuring out how much of that capacity was used over each hour: 3000 MHz × 50% = 1500 MHz-Hr for the first hour, and 1000 MHz × 75% = 750 MHz-Hr for the second hour, for a total of 2250 MHz-Hr.
2250 MHz-Hr / 2 Hours = 1125 MHz.
Then, I simply divide over the two hours to get this value.
One final note: we don’t report minimum and maximum utilization values for CPU. If you think on it a moment, you’ll come to the same realization we did: it is very likely that the minimum will be 0 and the maximum will be the full capacity of the hosts’ CPU at some point over the timespan you’re measuring. Since that’s not very useful, we don’t report it.
Memory (MB): We report the average, maximum, and minimum utilization here, in megabytes.
The minimum memory utilization captures the least memory used over the timespan measured. Since it’s not very useful to know that the minimum memory usage was zero if the virtual machine was ever turned off, we only look at the minimum memory utilization when the virtual machine is running.
We do include the offline time of the virtual machine when calculating the average memory utilization. This provides an accurate view of how much memory the virtual machine was using over that billing cycle, so that you can charge your users accurately.
Network (MB): We report network utilization in megabytes. Of course, we want this metric to be useful, so we considered how you would want to see this information broken down. One way you might want to distinguish between network traffic is to see how much traffic is inbound to the virtual machine, and how much is outbound.
The most important breakdown you will want is how much traffic the virtual machine sends to or receives from the internet, which costs you money, and how much is just communication between virtual machines you host, which costs you nothing since it is just using your intranet. With this breakdown, you can charge your user accurately for their internet usage.
So how do we provide these breakdowns? We use ACLs set on the virtual machine's network adapter. Each ACL has:
-Direction: "Inbound" or "Outbound"
-Remote IP Address: the source or destination of the network packet, depending on direction (for example, 10.0.0.0/8)
-Action: Allow, Deny, or Meter
These ACLs are used for more than just resource metering; note the Allow and Deny actions. For our purposes, you set the action to “Meter”.
Enabling resource metering creates two sets of default metering ACLs, provided none are already configured. The first set of ACLs, one inbound and one outbound, has a remote IP address of *.*; this wildcard notation indicates that it will meter all IPv4 traffic that is not covered by another ACL. The second set has an IP address of *:*, which meters all IPv6 traffic.
With these metering ACLs, you can measure the total network traffic sent and received by the virtual machine, in megabytes. You can configure your own ACLs to count intranet traffic separately from internet traffic, and charge accordingly.
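For example, to meter traffic to and from a private address range separately from everything else, a metering ACL can be added with the Hyper-V cmdlets; the VM name and address range here are placeholders:
Add-VMNetworkAdapterAcl -VMName "VM01" -RemoteIPAddress 10.0.0.0/8 -Direction Both -Action Meter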
Disk (MB): As we spoke with customers, we realized that for chargeback purposes, they were only interested in the total disk allocation for a virtual machine. So, here we report that in megabytes.
The total value is the capacity (not the current size on disk) of the VHDs attached to the virtual machine plus the size of the snapshots. Take the following examples:
Fixed size disk: VM with a single 100GB fixed-size VHD attached. Total Disk Allocation reported: 100GB.
Dynamic disk: VM with a single dynamic VHD attached, current size 30GB, maximum size 100GB. Total Disk Allocation reported: 100GB.
With snapshots: VM with a single dynamic VHD attached, current size 30GB, maximum size 100GB, plus a 20GB snapshot. Total Disk Allocation reported: 120GB.
Pass-through disks, DAS disks, guest iSCSI connections, and virtual Fibre Channel disks are not included in the total disk allocation metric.
Once you enable resource metering, Hyper-V will begin collecting data. You can reset metering at any time. We will then discard the data we have collected up to that point and start fresh. So, you will typically measure the utilization first, and then reset. When you measure, you are measuring the utilization over the timespan since you last reset metering. Metering is designed to collect this data over long periods of time. If you need greater granularity, you can look at performance counters.
Having resource metering enabled and just capturing utilization data per your billing cycle has no noticeable performance impact. There will be some negligible disk and CPU activity as data is written to the configuration file.
You can try this all out for yourself now, with Windows Server 2012. In the next part, we’ll talk about how to actually use resource metering with our PowerShell cmdlets.
We hope this is useful for you. Please let us know how you’re using it! Thanks!
Windows Server 2012 Hyper-V contains a resource metering mechanism that makes it possible to track system resource usage either for a virtual machine or for a collection of virtual machines. Doing so can help you to keep track of the resources consumed by virtual machine collections. This information could be used to facilitate chargebacks (although Hyper-V does not contain a native chargeback mechanism).
Resource metering is not enabled by default. You can enable resource metering through PowerShell by entering the following command:
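A minimal sketch of that command, using a hypothetical VM name:
Enable-VMResourceMetering -VMName "DC01"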
By default, Hyper-V collects resource metering statistics once every hour. You can change the collection frequency, but it is a good idea to avoid collecting metering data too frequently because doing so can impact performance and generate an excessive amount of metering data. If you want to change the collection frequency you can do so by using this command:
Set-VMHost –ComputerName <host server name> -ResourceMeteringSaveInterval <HH:MM:SS>
As you look at the command above, you will notice that the collection frequency is being set at the host server level. You cannot adjust the frequency on a per VM basis. You can see what this command looks like in figure 1.
Figure 1. You can change the resource metering collection frequency.
When you enable resource metering, there are a number of different resource usage statistics that are compiled. These statistics include:
The average CPU usage (measured in MHz)
The average physical memory usage (measured in MB)
The minimum memory usage (measured in MB)
The maximum memory usage (measured in MB)
The maximum amount of disk space allocated to a VM
The total inbound network traffic (measured in MB)
The total outbound network traffic (measured in MB)
The easiest way to view a virtual machine’s resource usage is to enter the following command:
Get-VM <virtual machine name> | Measure-VM
This command will display all of the available metering data for the virtual machine that you have specified.
Similarly, resource metering data can be displayed for all of the virtual machines that are running on the host server. If you want to see monitoring data for all of the virtual machines, you can acquire it by running this command:
Get-VM | Measure-VM
You can see what the output looks like in figure 2.
Figure 2. This is what the resource metering output looks like.
Oftentimes, administrators are interested in specific aspects of resource consumption. For example, if a particular host server had limited network bandwidth available, then an administrator would probably be interested in seeing the amount of network traffic that each virtual machine was sending and receiving. Conversely, if that same server had far more processing power than would ever be needed by the virtual machines running on it, then the administrator probably would not need to monitor the average CPU usage.
Although you cannot turn data collection on or off for individual statistics, you can configure PowerShell to display only the statistics that you are interested in. The key to doing so is to know the object names that PowerShell assigns to each statistic. You can see the object names by entering the following command:
Get-VM | Measure-VM | Select-Object *
The column on the left side of the output lists the names that PowerShell uses for the individual statistics. You can see what this looks like in figure 3.
Figure 3. You can get the object names from the column on the left.
There are a couple of things that you might have noticed in the figure above. First, there are more objects than what are displayed by default. Second, there are more objects than what I listed earlier. The reason for this is that these screen captures came from a server running Windows Server 2012 R2 Preview. Microsoft is extending the Resource Metering feature in Hyper-V 2012 R2 to include additional metering objects. In this article however, I only listed the objects that are available today.
With that in mind, let’s suppose that you only wanted to list the maximum memory consumption for each VM. You could do so by using this command:
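A likely sketch of that command, assuming the maximum-memory statistic is exposed under the name MaxRAM (verify the exact object names from figure 3 on your own system):
Get-VM | Measure-VM | Select-Object VMName, MaxRAM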
You can see the output in figure 4. Keep in mind that you can adapt this command to display any combination of objects that you want.
Figure 4. PowerShell can display specific resource metering data.
As you can see, resource metering is useful for tracking resource consumption. It can also be useful for performing chargebacks, although there is no native Hyper-V chargeback mechanism.
The U.S. National Institute of Standards and Technology (NIST) gives us one of the best definitions of a cloud in Special Publication 800-145, entitled "The NIST Definition of Cloud Computing." In this document they describe a cloud as having five essential characteristics. One of the traits that they describe as being necessary to have a cloud is a measured service:
Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
What does this mean? A cloud enables a tenant to consume just what they need and pay for what they use. The cloud must be able to measure that usage. Using this information, a cloud vendor can charge the tenant for their resource usage.
Resource metering is an important part of any Hyper-V deployment. (Image: Dreamstime)
That's fine for a hosting company. But what about in a private cloud, that is, an infrastructure that runs on premises? Traditionally the IT department is run as a cost center or, as the board of directors unfortunately sees it, as a budgetary black hole. IT can change this incorrect perception in one of two ways:
Cross-charging: Every department that consumes IT services and resources will be given their own IT budget to spend with the IT department. IT will provide those services and resources, and invoice their internal customers on a regular basis in a non-profit manner. This changes IT into a service organization.
Show-back reporting: Many organizations will never consider changing IT into a chargeable service. However, IT can show the business leaders the cost of providing services to each of the business groups by reporting usage and translating that usage into a monetary value. This is a company politics move that can change the perception of IT within the business.
In my opinion, these sorts of actions could misfire and lead to talk of out-sourcing and off-shoring, so be careful!
What Information is Collected?
Resource metering will collect the following data for each enabled virtual machine:
Average CPU usage in MHz
Average physical memory usage
Minimum physical memory usage
Maximum memory usage
Maximum amount of disk space allocated
Total incoming network traffic
Total outgoing network traffic
Note that in the case of dynamic virtual hard disks, the potential (maximum) size, not the actual size, is reported as the maximum disk space allocated. Also note that all data is stored with the virtual machine and moves with it as it migrates between hosts.
Enabling and Using Hyper-V Resource Metering
Resource metering is made available using PowerShell. You can write scripts using PowerShell or you can use other tools to leverage the functionality.
You must first enable metering on a per-virtual machine basis. The following snippet will enable metering on all virtual machines on a host:
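A minimal sketch of that snippet:
Get-VM | Enable-VMResourceMetering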
Tip: Remember to enable metering on any virtual machine created afterwards because the above cmdlet will only affect existing virtual machines.
By default, resource metering will collect metrics every hour. This is based on a per-host setting called ResourceMeteringSaveInterval. You might want to change this setting to match your billing rate in a cloud. If you are just testing resource metering, then you might want a more frequent collection. This example will change the setting to every 10 seconds:
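A sketch of what that example would look like (note that Hyper-V may enforce a minimum save interval, so very short values are only useful for testing):
Set-VMHost -ResourceMeteringSaveInterval 00:00:10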
Reporting on the top resource consumers. (Image: Aidan Finn)
Resource metering is a tool that can show the value of IT to the business or enable a service provider to earn revenue. And with this data, you even have some ability to track usage for diagnostics reasons.
IT professionals need tools to track usage by specific business units. If you search, you can find a lot of monitoring tools that can do this job, but most of them are paid products, and the free open-source ones require advanced knowledge to install, configure, and enable measurement of the metrics you want.
I don't want to say you shouldn't use a monitoring tool in your environment. But that takes time and people to do it. If you are on your own, you need a quick solution until you decide on a monitoring solution for your environment.
So today, in this article, I will show you another feature that was introduced in Windows Server 2012 Hyper-V that isn't immediately obvious and is driven by using Windows PowerShell. I will explain only the basic commands that you can use every day to measure metrics of your VMs. The feature is amazing, and I will surely come back with more advanced Resource Metering commands.
Resource Metering exposes metrics for the following resources used by a virtual machine during a specific period of time:
Average CPU usage, measured in megahertz over a period of time.
Average physical memory usage, measured in megabytes.
Minimum memory usage (lowest amount of physical memory).
Maximum memory usage (highest amount of physical memory).
Maximum amount of disk space allocated to a virtual machine.
Total incoming network traffic, measured in megabytes, for a virtual network adapter.
Total outgoing network traffic, measured in megabytes, for a virtual network adapter.
Let's walk through it in practice.
As usual, open PowerShell as Administrator.
First we must enable Resource Metering on the VM, so type:
Enable-VMResourceMetering -VMName WIN2012X64
If you want to verify that Resource Metering is enabled on the VM, type:
Get-VM -Name WIN2012X64 | Format-Table Name, ResourceMeteringEnabled
Let's see the resource metrics that we get from the VM:
Measure-VM -VMName WIN2012X64
Let's see more details for the VM:
Measure-VM -VMName WIN2012X64 | Format-List
If you want to see the metered network traffic report:
(Measure-VM -VMName WIN2012X64).NetworkMeteredTrafficReport
This is my last article for 2015. I will come back with new articles and tutorials in 2016.
With Windows Server 2012, Microsoft introduced a new feature in Hyper-V called Resource Metering, which allows you to measure the usage of a virtual machine. This lets you track CPU, memory, disk, and network usage. It is a great feature, especially if you need to do chargeback, or maybe even for troubleshooting.
Last week I had the chance to test and implement this feature for a customer.
First you can check the available PowerShell cmdlets for Hyper-V, or just the commands which include VMResourceMetering.
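For example, assuming the Hyper-V PowerShell module is installed, you could list them like this:
Get-Command -Module Hyper-V
Get-Command *VMResourceMetering*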
The resource metering has to be enabled per Virtual Machine. This is great, so even if you move the virtual machine from one Hyper-V host to another you still have the usage data.
To enable the resource metering you can use the following cmdlet. In my case I enable VM Resource Metering for my VM called SQL2012.
Get-VM SQL2012 | Enable-VMResourceMetering
With the cmdlet Measure-VM you can get the statistics for the VM.
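For example, using the same VM as above:
Get-VM SQL2012 | Measure-VM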
Here is another great thing: if you want to measure network traffic from or to a specific network, you can use VM network adapter ACLs to do so. With ACLs you can not only allow or deny network traffic, you can also meter network traffic for a specific subnet or IP address.
Add-VMNetworkAdapterAcl -VMName SQL2012 -Action Meter -RemoteIPAddress 10.10.0.0/16 -Direction Outbound
Of course you can reset the statistics for the VM.
Get-VM SQL2012 | Reset-VMResourceMetering
And to disable resource metering for the VM use:
Get-VM SQL2012 | Disable-VMResourceMetering
I think this is one of the great new features of Windows Server 2012 Hyper-V which doesn't get a lot of attention but is really important.
The links below give a detailed explanation of how the clock of guest operating systems works in Windows Hyper-V and how to solve its associated problems.
There is a lot of confusion about how time synchronization works in Hyper-V – so I wanted to take the time to sit down and write up all the details.
There are actually multiple problems that exist around keeping time inside of virtual machines – and Hyper-V tackles these problems in different ways.
Problem #1 – Running virtual machines lose track of time.
While all computers contain a hardware clock (called the RTC – or real-time clock) most operating systems do not rely on this clock. Instead they read the time from this clock once (when they boot) and then they use their own internal routines to calculate how much time has passed.
The problem is that these internal routines make assumptions about how the underlying hardware behaves (how frequently interrupts are delivered, etc…) and these assumptions do not account for the fact that things are different inside a virtual machine. The fact that multiple virtual machines need to be scheduled to run on the same physical hardware invariably results in minor differences in these underlying systems. The net result of this is that time appears to drift inside of virtual machines.
UPDATE 11/22: One thing that you should be aware of here: the rate at which the time in a virtual machine drifts is affected by the total system load of the Hyper-V server. More virtual machines doing more stuff means time drifts faster.
In order to deal with time drift in a virtual machine – you need to have some process that regularly gets the real time from a trusted source and updates the time in a virtual machine.
Hyper-V provides the time synchronization integration services to do this for you. The way it does this is by getting time readings from the management operating system and sending them over to the guest operating system. Once inside the guest operating system – these time readings are then delivered to the Windows time keeping infrastructure in the form of a Windows time provider (you can read more about this here: http://msdn.microsoft.com/en-us/library/bb608215.aspx). These time samples are correctly adjusted for any time zone difference between the management operating system and the guest operating system.
Problem #2 – Saved virtual machines / snapshots have the wrong time when they are restored.
When we restore a virtual machine from a saved state or from a snapshot we put back together the memory and run state of the guest operating system to exactly match what it was when the saved state / snapshot was taken. This includes the time calculated by the guest operating system. So if the snapshot was taken one month ago – the time and date will report that it is still one month ago.
Interestingly enough, at this point in time we will be reporting the correct (with some caveats) time in the system's RTC. But unfortunately the guest operating system has no idea that anything significant has happened – so it does not know to go and check the RTC and instead continues with its own internally calculated time.
To deal with this the Hyper-V time synchronization integration service detects whenever it has come back from a saved state or snapshot, and corrects the time. It does this by issuing a time change request through the normal user mode interfaces provided by Windows. The effect of this is that it looks just like the user sat down and changed the time manually. This method also correctly adjusts for time zone differences between the management operating system and the guest operating system.
Problem #3 – There is no correct “RTC value” when a virtual machine is started
As I have mentioned – physical computers have an RTC that operating systems look at when they first boot to get the time. This real-time clock is backed by a small battery (you have probably seen the battery yourself if you have ever pulled apart a computer). Unfortunately virtual machines do not have any "batteries". When a virtual machine is turned off there is no component that keeps track of time for it. Instead – whenever you start a virtual machine we take the time from the management operating system and put this into the real-time clock of the virtual machine.
This is done without the use of the Hyper-V time synchronization integration services (it happens long before the integration services have loaded).
The downside of this approach is that this does not take into account any potential time zone differences between the management operating system and the guest operating system. The reason for this is that “time zones” are a construct of the software that runs in a virtual machine – and is not communicated to the virtual hardware in any way. So – in short – when we start a virtual machine there is no way for us to know what time zone the guest operating system believes it is in.
One partial mitigation we have for this issue is that when the Hyper-V time synchronization component loads for the first time – it does an initial user mode set of the time to ensure that the time gets corrected as quickly as possible (using the same technique as discussed in problem #2).
…
So now that you understand how this all works – let’s discuss some common issues and questions around virtual machines and time synchronization.
Question #1 – I have a virtual machine that is configured for a different time zone to the management operating system. Should I disable the time synchronization component of Hyper-V?
No, no, no, no, no, no, no. And I say again – no. As I have mentioned above – all time synchronization that is done by the Hyper-V time synchronization integration service is time zone aware. If you disable the Hyper-V time synchronization integration service you will disable all the time synchronization aspects of Hyper-V that are time zone aware – and only leave the initial RTC synchronization active – which is not time zone aware.
This means that your virtual machines will go from booting in the wrong time zone, and then being corrected as soon as the Hyper-V time synchronization integration service loads to booting in the wrong time zone and staying in the wrong time zone.
Question #2 – Is there any way that I can stop Hyper-V from putting the wrong time in the RTC at boot?
In short; no. We need to put something in there – and that is the best thing that we have to work with.
Question #3 – Can’t you use UTC time in the RTC so that the correct time is established when the virtual machine boots?
UTC (which is the computer techy version of saying GMT) time would solve this problem nicely, except for one issue: Windows does not support UTC time in the BIOS (Linux does). So while this would solve the problem for our Linux-running user base – the fact of the matter is that most of our users run Windows – and this would not work for them.
Question #4 – What about if I am using a different time synchronization source (e.g. domain time or a remote time server)?
Hyper-V time synchronization was designed to “get along well” with other time synchronization sources. You should not need to disable Hyper-V time synchronization in order to use a different time synchronization source – as long as it goes through the Windows time synchronization infrastructure.
In fact – if you are running a Domain Controller inside a virtual machine I would recommend that you leave Hyper-V time synchronization enabled but that you also set up an external time source. You can do this by going to this KB article: http://support.microsoft.com/kb/816042 and following the steps outlined in the "Configuring the Windows Time service to use an external time source" section.
UPDATE 11/22: I should have mentioned: since virtual machines tend to lose time much faster than physical computers, you need to configure any external time source to be checked frequently. Once every 15 minutes is a good place to start.
Question #5 – How can I check what time source is being used by Windows inside of a virtual machine?
This is easy to do. Just open an administrative command prompt and run “w32tm /query /source”. If you are synchronizing with a remote computer – its name should be listed. If you are using the Hyper-V time synchronization integration service you should see the following output:
If you see this output:
It means that there is no time synchronization going on for this virtual machine. This is a very bad thing – as time will drift inside of the virtual machine.
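For reference, the output of that command typically looks something like this (the exact strings vary by Windows version); when the Hyper-V integration service is the active source:
w32tm /query /source
VM IC Time Synchronization Provider
And when no synchronization is going on, the source is reported as the local clock, for example:
w32tm /query /source
Local CMOS Clock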
Question #6 – Wait a minute! My virtual machine should be synchronizing to the domain (or an external server) – but when I run that command it tells me that the Hyper-V time synchronization provider is being used! How do I fix this!
I do not know why this happens – but sometimes it happens. The first thing that you should do is to check that your domain does have a correctly configured authoritative time source. There have been a small number of times when I have seen this problem being caused by the lack of an authoritative time source.
Alternatively – you can “partially disable” Hyper-V time synchronization. The reason why I say “partially disable” is that you do not want to turn off the aspects of Hyper-V time synchronization that fix the time after a virtual machine has booted for the first time, or after the virtual machine comes back from a saved state. No other time synchronization source can address these scenarios elegantly.
Luckily – there is a way to leave this functionality intact but still ensure that the day to day time synchronization is conducted by an external time source. The key trick here is that it is possible to disable the Hyper-V time synchronization provider in the Windows time synchronization infrastructure – while still leaving the service running and enabled under Hyper-V.
To do this you will need to log into the virtual machine, open an administrative command prompt and run the following commands:
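The first of those commands is, in all likelihood, a registry change that disables the Hyper-V provider inside the Windows Time service (VMICTimeProvider is the standard key name; treat this as a sketch):
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t REG_DWORD /d 0 /f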
This command stops W32Time from using the Hyper-V time synchronization integration service for moment-to-moment synchronization. Remember from earlier in this post that we do not go through the Windows time synchronization infrastructure to correct the time in the event of virtual machine boot / restore from saved state or snapshot. So those operations are unaffected.
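The two commands referred to next are most likely the standard Windows Time service restart:
net stop w32time
net start w32time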
These two commands just “kick the Windows time service” to make sure the settings changes take effect immediately.
w32tm /query /source
This final command should confirm that everything is working as expected.
When you run these commands you should see something like this:
Question #7 – I have a virtual machine that has gotten ahead of time, and it never gets corrected back to the correct time. What is going on here?
As a general rule of thumb, when time drifts inside a virtual machine it runs slower than in the real world, and the time falls behind. We will always detect and correct this.
However, in the past, we have had reports of software problems caused when the Hyper-V time synchronization integration service decides to adjust the time back – because it believes the virtual machine is ahead of time. To deal with this (rare) issue – we put logic in our integration service that will not change the time if the virtual machine is more than 5 seconds ahead of the physical computer.
UPDATE 11/22: I was asked how having the virtual machine in a different time zone to the Hyper-V server would affect this. The short answer is that it does not. The 5 second check is done after we have done the necessary time zone translation.
Question #8 – When should I disable the Hyper-V time synchronization service (either in the virtual machine settings, or inside the guest operating system)?
Never.
There are definitely times when you will want to augment the functionality of the Hyper-V time integration services with a remote time source (be it a domain source or an external time server) but the only way to get the best experience around virtual machine boot / restore operations is to leave the Hyper-V time integration services enabled.
Using Hyper-V Server, you may find that the time is drifting a lot from the actual time, especially when Guest Virtual Machines are using CPUs heavily. The host OS is also virtualized, which means that the load of the host is also making the clock drift.
How to prevent the clock from drifting
Disable the Time Synchronization in the Integration Services. (Warning, this setting is defined per snapshot)
Import the following registry file:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\W32Time\Config]
"MaxAllowedPhaseOffset"=dword:00000001
"SpecialPollInterval"=dword:00000005
"SpecialInterval"=dword:00000001

Note: If you are using Notepad, make sure to save the file using a Unicode encoding.
If the guest OS (and the server OS) is not on a domain, type in the following to set the time source:
w32tm /config /manualpeerlist:"time.windows.com,0x01 1.ca.pool.ntp.org,0x01 2.ca.pool.ntp.org,0x01" /syncfromflags:MANUAL /update
Note: Host FQDNs are separated by spaces.
Run the following command to force a time synchronization
w32tm /resync
Check that the clock is not drifting anymore by using this command :
w32tm /monitor /computer:time.windows.com
A bit of background …
The system I’m currently working on is heavily based on time. It relies a lot on timestamps taken from various steps of the process. If for some reason the system clock is unstable, that means the data generated by the system is unreliable. It sometimes generates corrupt data, and this is not good for the business.
I was investigating a sequence of events stored in the database in an order that could not have happened, because the code cannot generate it that way.
After loads of investigation looking for code issues, I stumbled upon something rather odd in my application logs, considering that each line from the same thread should be timestamped later than the previous one:
All the lines above were generated from the same thread, which means that the system time changed radically between the second and the third line. From the application's point of view, the time went backward by about two seconds, and that also means that during those two seconds, data was generated in the future. This is not very good…
The Investigation
Looking at the Log4net source code, I confirmed that the time is grabbed using a System.DateTime.Now call, which rules out any code issues.
Then I looked at the Windows Time Service utility, and by running the following command :
w32tm /stripchart /computer:time.windows.com
I found out that the time difference from the NTP source was quite large, something like 10 seconds. But the most disturbing thing was not the time difference itself, but the evolution of that time difference.
Depending on the load of the virtual machine, the difference would grow very large, up to a second behind in less than a minute. Both the host and the guest machines were exhibiting this behavior. Since Hyper-V Integration Services by default synchronize the clock of every virtual machine with the host OS, that means that the load of a single virtual machine can influence the clock of all other virtual machines. The host machine CPU load can also influence the overall clock rate, because it is also virtualized.
Trying to explain this behavior
To try and make an educated guess, the time source used by Windows seems to be the TSC of the processor (via the RDTSC opcode), which is virtualized. The preemption of the CPU by other virtual machines seems to have a negative effect on the counter used as a reference by Windows.
The more the CPU is preempted, the more the counter drifts.
Correcting the drift
By default, the Time Service has a "phase adjustment" process that slows down or speeds up the system clock rate to match a reliable time source. The TSC counter on the physical CPU is clocked by the system quartz crystal (if it is still done this way). The "normal" drift of that kind of component is generally not very large, and may be related to external factors like the temperature of the room. The time service can deal with that kind of slow drift.
But the default configuration does not seem to be a good fit for a time source that drifts this quickly and is rather unpredictable. We need to shorten the process of phase adjustment.
Fixing this drift is rather simple, the Time Service needs to correct the clock rate more frequently, to cope with the load of the virtual machines that slow down the clock of the host.
Unfortunately, the default parameters on Hyper-V Server R2 are those of the default member of a domain, which are defined here. The default polling period from a reliable time source is way too long, 3600 seconds, considering the drift faced by the host clock.
A few parameters need to be adjusted in the registry for the clock to stay synchronized :
Set the SpecialInterval value to 0x1 to force the use of SpecialPollInterval.
Set SpecialPollInterval to 10, to force the source NTP to be polled every 10 seconds.
Set the MaxAllowedPhaseOffset to 1, to force the maximum drift to 1 second before the clock is set directly, if adjusting the clock rate failed.
Using these parameters will not mean that the clock will stay perfectly stable, but at the very least it will correct itself very quickly.
It seems that there is a hidden boot.ini parameter for Windows 2003, /USEPMTIMER, which forces windows to use the ACPI timer and avoid that kind of drift. I have not been able to confirm this has any effect at all, and I cannot confirm if the OS is actually using the PM Timer or the TSC.
In computing, a file server (or fileserver) is a computer attached to a network that provides a location for shared disk access, i.e. shared storage of computer files (such as documents, sound files, photographs, movies, images, databases, etc.) that can be accessed by the workstations that are able to reach the computer that shares the access through a computer network. The term server highlights the role of the machine in the client–server scheme, where the clients are the workstations using the storage. It is common that a file server does not perform computational tasks, and does not run programs on behalf of its clients. It is designed primarily to enable the storage and retrieval of data while the computation is carried out by the workstations.
File servers are commonly found in schools and offices, where users use a LAN to connect their client computers.
File servers may also be categorized by the method of access: Internet file servers are frequently accessed by File Transfer Protocol (FTP) or by HTTP (but are different from web servers, that often provide dynamic web content in addition to static files). Servers on a LAN are usually accessed by SMB/CIFS protocol (Windows and Unix-like) or NFS protocol (Unix-like systems).
Design of file servers:
In modern businesses the design of file servers is complicated by competing demands for storage space, access speed, recoverability, ease of administration, security, and budget.
The primary piece of hardware equipment for servers over the last couple of decades has proven to be the hard disk drive. Although other forms of storage are viable (such as magnetic tape and solid-state drives) disk drives have continued to offer the best fit for cost, performance, and capacity.
1. Storage:
Since the crucial function of a file server is storage, technology has been developed to operate multiple disk drives together as a team, forming a disk array. A disk array typically has cache (temporary memory storage that is faster than the magnetic disks), as well as advanced functions like RAID and storage virtualization. Typically disk arrays increase the level of availability by using redundant components other than RAID, such as power supplies. Disk arrays may be consolidated or virtualized in a SAN.
2. Network Attached Storage (NAS):
Network-attached storage (NAS) is file-level computer data storage connected to a computer network providing data access to a heterogeneous group of clients. NAS devices are distinguished from file servers generally in that a NAS is a computer appliance – a specialized computer built from the ground up for serving files – rather than a general-purpose computer being used for serving files (possibly with other functions). In discussions of NAS, the term "file server" is generally used as the contrasting term, referring to general-purpose computers only.
As of 2010 NAS devices are gaining popularity, offering a convenient method for sharing files between multiple computers. Potential benefits of network-attached storage, compared to non-dedicated file servers, include faster data access, easier administration, and simple configuration.[3]
NAS systems are networked appliances containing one or more hard drives, often arranged into logical, redundant storage containers or RAID arrays. Network Attached Storage removes the responsibility of file serving from other servers on the network. They typically provide access to files using network file sharing protocols such as NFS, SMB/CIFS (Server Message Block/Common Internet File System), or AFP.
A. RAID (redundant array of independent disks) is a data storage virtualization technology that combines multiple physical disk drive components into a single logical unit for the purposes of data redundancy, performance improvement, or both. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word RAID followed by a number, for example RAID 0 or RAID 1. Each schema, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives.
RAID Standard levels:
· RAID 0 consists of striping, without mirroring or parity. The capacity of a RAID 0 volume is the sum of the capacities of the disks in the set, the same as with a spanned volume. There is no added redundancy for handling disk failures, just as with a spanned volume. Thus, failure of one disk causes the loss of the entire RAID 0 volume, with reduced possibilities of data recovery when compared with a broken spanned volume. Striping distributes the contents of files roughly equally among all disks in the set, which makes concurrent read or write operations on the multiple disks almost inevitable and results in performance improvements. The concurrent operations make the throughput of most read and write operations equal to the throughput of one disk multiplied by the number of disks. Increased throughput is the big benefit of RAID 0 versus spanned volume, at the cost of increased vulnerability to drive failures.
· RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two drives, thereby producing a “mirrored set” of drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning.
· RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive. This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2), as of 2014 it is not used by any commercially available system.
· RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice.
· RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP. The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read/write I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read/write operation does not have to spread across all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers.
· RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks. RAID 5 implementations are susceptible to system failures because of trends regarding array rebuild time and the chance of drive failure during rebuild (see “Increasing rebuild time and failure probability” section, below). Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array. In August 2012, Dell posted an advisory against the use of RAID 5 in any configuration on Dell EqualLogic arrays and RAID 50 with “Class 2 7200 RPM drives of 1 TB and higher capacity” for business-critical data.
· RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. RAID 10 also minimizes these problems.
B. Nested (hybrid) RAID:
In what was originally termed hybrid RAID, many storage controllers allow RAID levels to be nested. The elements of a RAID may be either individual drives or arrays themselves. Arrays are rarely nested more than one level deep.
The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the “+” (yielding RAID 10 and RAID 50, respectively).
· RAID 0+1 creates two stripes and mirrors them. If a single drive fails, then one of the stripes has failed; at this point you are effectively running RAID 0 with no redundancy. Significantly higher risk is introduced during a rebuild than with RAID 1+0, as all the data from all the drives in the remaining stripe has to be read rather than just from one drive, increasing the chance of an unrecoverable read error (URE) and significantly extending the rebuild window.
· RAID 1+0 creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses so long as no mirror loses all its drives.
· JBOD RAID N+N: With JBOD (Just a Bunch Of Disks), it is possible to concatenate disks, but also volumes such as RAID sets. With larger drive capacities, write and rebuild time may increase dramatically (especially, as described above, with RAID 5 and RAID 6). By splitting larger RAID sets into smaller subsets and concatenating them with JBOD, write and rebuild time may be reduced. If a hardware RAID controller is not capable of nesting JBOD with RAID, then JBOD can be achieved with software RAID in combination with RAID set volumes offered by the hardware RAID controller. There is another advantage in the form of disaster recovery: if a small RAID subset fails, the data on the other RAID subsets is not lost, reducing restore time.
What is a Spanned Volume?
When talking of spanned volumes, we are brought to the topic of non-RAID drive architectures.
C. Non-RAID drive architectures:
The most widespread standard for configuring multiple hard disk drives is RAID (Redundant Array of Inexpensive/Independent Disks), which comes in a number of standard configurations and non-standard configurations. Non-RAID drive architectures also exist, and are referred to by acronyms with similarity to RAID.
JBOD (derived from "just a bunch of disks/drives"): an architecture in which multiple hard disk drives are operated as individual, independent drives and exposed as individual devices. Hard drives may be treated independently or may be combined into one or more logical volumes using a volume manager like LVM or mdadm; such volumes are usually called "spanned" or "linear | SPAN | BIG". A spanned volume provides no redundancy, so failure of a single hard drive amounts to failure of the whole logical volume. Redundancy for resilience and/or bandwidth improvement may be provided, in software, at a higher level.
SPAN or BIG: A method of combining the free space on multiple hard disk drives from “JBoD” to create a spanned volume. Such a concatenation is sometimes also called BIG/SPAN. A SPAN or BIG is generally a spanned volume only, as it often contains mismatched types and sizes of hard disk drives. Concatenation or spanning of drives is not one of the numbered RAID levels, but it is a popular method for combining multiple physical disk drives into a single logical disk. It provides no data redundancy. Drives are merely concatenated together, end to beginning, so they appear to be a single large disk. It may be referred to as SPAN or BIG (meaning just the words “span” or “big”, not as acronyms). What makes a SPAN or BIG different from RAID configurations is the possibility for the selection of drives. While RAID usually requires all drives to be of similar capacity[a] and it is preferred that the same or similar drive models are used for performance reasons, a spanned volume does not have such requirements.
MAID (derived from “massive array of idle drives“): an architecture using hundreds to thousands of hard disk drives for providing nearline storage of data, primarily designed for “Write Once, Read Occasionally” (WORO) applications, in which increased storage density and decreased cost are traded for increased latency and decreased redundancy.
Network-attached storage removes the responsibility of file serving from other servers on the network. They typically provide access to files using network file sharing protocols such as NFS, SMB/CIFS, or AFP. From the mid-1990s, NAS devices began gaining popularity as a convenient method of sharing files among multiple computers. Potential benefits of dedicated network-attached storage, compared to general-purpose servers also serving files, include faster data access, easier administration, and simple configuration.
Description:
A NAS unit is a computer connected to a network that provides only file-based data storage services to other devices on the network. Although it may technically be possible to run other software on a NAS unit, it is usually not designed to be a general-purpose server. For example, NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often using a browser.
A full-featured operating system is not needed on a NAS device, so often a stripped-down operating system is used. For example, FreeNAS or NAS4Free, both open source NAS solutions designed for commodity PC hardware, are implemented as a stripped-down version of FreeBSD.
NAS systems contain one or more hard disk drives, often arranged into logical, redundant storage containers or RAID.
The key difference between direct-attached storage (DAS) and NAS is that DAS is simply an extension to an existing server and is not necessarily networked. NAS is designed as an easy and self-contained solution for sharing files over the network.
Both DAS and NAS can potentially increase availability of data by using RAID or clustering.
When both are served over the network, NAS could have better performance than DAS, because the NAS device can be tuned precisely for file serving which is less likely to happen on a server responsible for other processing. Both NAS and DAS can have various amount of cache memory, which greatly affects performance. When comparing use of NAS with use of local (non-networked) DAS, the performance of NAS depends mainly on the speed of and congestion on the network.
NAS is generally not as customizable in terms of hardware (CPU, memory, storage components) or software (extensions, plug-ins, additional protocols) as a general-purpose server supplied with DAS.
One way to loosely conceptualize the difference between a NAS and a SAN is that NAS appears to the client OS (operating system) as a file server (the client can map network drives to shares on that server) whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with client’s local disks), and available to be formatted with a file system and mounted.
Despite their differences, SAN and NAS are not mutually exclusive, and may be combined as a SAN-NAS hybrid, offering both file-level protocols (NAS) and block-level protocols (SAN) from the same system. An example of this is Openfiler, a free software product running on Linux-based systems. A shared disk file system can also be run on top of a SAN to provide filesystem service.
Uses:
NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower cost systems such as load-balancing and fault-tolerant email and web server systems by providing storage services. The potential emerging market for NAS is the consumer market where there is a large amount of multi-media data. Such consumer market appliances are now commonly available. Unlike their rackmounted counterparts, they are generally packaged in smaller form factors. The price of NAS appliances has plummeted in recent years, offering flexible network-based storage to the home consumer market for little more than the cost of a regular USB or FireWire external hard disk. Many of these home consumer devices are built around ARM, PowerPC or MIPS processors running an embedded Linux operating system.
Clustered NAS:
A clustered NAS is a NAS that is using a distributed file system running simultaneously on multiple servers. The key difference between a clustered and a traditional NAS is the ability to distribute (e.g. stripe) data and metadata across the cluster nodes or storage devices. Clustered NAS, like a traditional one, still provides unified access to the files from any of the cluster nodes, regardless of the actual location of the data.
3. Security:
File servers generally offer some form of system security to limit access to files to specific users or groups. In large organizations, this is a task usually delegated to what is known as directory services such as openLDAP, Novell’s eDirectory or Microsoft’s Active Directory.
These servers work within the hierarchical computing environment which treat users, computers, applications and files as distinct but related entities on the network and grant access based on user or group credentials. In many cases, the directory service spans many file servers, potentially hundreds for large organizations. In the past, and in smaller organizations, authentication could take place directly at the server itself.
File and Storage Services:
File and Storage Services includes technologies that help you set up and manage one or more file servers, which are servers that provide central locations on your network where you can store files and share them with users. If your users need access to the same files and applications, or if centralized backup and file management are important to your organization, you should set up one or more servers as a file server by installing the File and Storage Services role and the appropriate role services.
The File and Storage Services role and the Storage Services role service are installed by default, but without any additional role services. This basic functionality enables you to use Server Manager or Windows PowerShell to manage the storage functionality of your servers. However, to set up or manage a file server, you should use the Add Roles and Features Wizard in Server Manager or the Install-WindowsFeature Windows PowerShell cmdlet to install additional File and Storage Services role services, such as the role services discussed in this topic.
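For example, a short sketch of checking for and installing a couple of these role services from PowerShell (the feature names shown are the usual ones on Windows Server 2012 R2; verify them with Get-WindowsFeature on your server):
# List the File and Storage Services role services and their install state
Get-WindowsFeature FS-*, Storage-Services
# Install the File Server and Data Deduplication role services
Install-WindowsFeature FS-FileServer, FS-Data-Deduplication -IncludeManagementTools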
Administrators can use the File and Storage Services role to set up and manage multiple file servers and their storage capabilities by using Server Manager or Windows PowerShell. Some of the specific applications include the following:
Storage Spaces – Use to deploy high availability storage that is resilient and scalable by using cost-effective industry-standard disks.
Folder Redirection, Offline Files, and Roaming User Profiles – Use to redirect the path of local folders (such as the Documents folder) or an entire user profile to a network location, while caching the contents locally for increased speed and availability.
Work Folders – Use to enable users to store and access work files on personal PCs and devices, in addition to corporate PCs. Users gain a convenient location to store work files and access them from anywhere. Organizations maintain control over corporate data by storing the files on centrally managed file servers and optionally specifying user device policies (such as encryption and lock screen passwords). Work Folders is a new role service in Windows Server 2012 R2.
Data Deduplication – Use to reduce the disk space requirements of your files, saving money on storage.
iSCSI Target Server – Use to create centralized, software-based, and hardware-independent iSCSI disk subsystems in storage area networks (SANs).
Data Deduplication – Saves disk space by storing a single copy of identical data on the volume.
Storage Spaces and storage pools – Enables you to virtualize storage by grouping industry-standard disks into storage pools and then creating storage spaces from the available capacity in the storage pools.
So you've expanded the virtual disk (VHD/VHDX) of a virtual machine that has checkpoints (or snapshots, as they used to be called) on it. Did you forget about them? Did you really leave them lingering around for that long? Bad practice and not supported (we don't have production checkpoints yet; that's for Windows Server 2016). Anyway, your virtual machine won't boot. Depending on the importance of that VM you might be chewed out big time or ridiculed. But what if you don't have a restore that works? Suddenly it might have become a resume-generating event.
All is not necessarily lost. There might be hope if you didn't panic and make even more bad decisions. Please, if you're unsure what to do, call an expert, a real one, or at least someone who knows real experts. It also helps if you have spare disk space, the fast sort if possible, and a Hyper-V node where you can work without risk. We'll walk you through the scenarios for both a VHDX and a VHD.
How did you get into this pickle?
If you go to the Edit Virtual Hard Disk Wizard via the VM settings, it won't allow you to expand the disk if the VM has checkpoints, whether the VM is online or not.
VHDs cannot be expanded online. If the VM had checkpoints, it must have been shut down when you expanded the VHD. If you went to the Edit Disk tool in Hyper-V Manager directly to open up the disk, you don't get a warning; it's treated as a virtual disk that's not in use. Same deal if you do it in PowerShell:
Resize-VHD -Path “C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd” -SizeBytes 15GB
That just works.
VHDXs can be expanded online if they're attached to a vSCSI controller. But if the VM has checkpoints it will not allow expanding.
So yes, you deliberately shut it down to be able to do it with the Edit Disk tool in Hyper-V Manager. I know, the warning message was not specific enough, but consider this: the Edit Disk tool, when launched directly, has no idea what the disk you're opening is used for, only whether it's online / locked.
Anyway the result is the same for the VM whether it was a VHD or a VHDX. An error when you start it up.
[Window Title]
Hyper-V Manager
[Main Instruction]
An error occurred while attempting to start the selected virtual machine(s).
[Content]
‘DidierTest06’ failed to start.
Synthetic SCSI Controller (Instance ID 92ABA591-75A7-47B3-A078-050E757B769A): Failed to Power on with Error ‘The chain of virtual hard disks is corrupted. There is a mismatch in the virtual sizes of the parent virtual hard disk and differencing disk.’.
Virtual disk ‘C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd’ failed to open because a problem occurred when attempting to open a virtual disk in the differencing chain, ‘C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd’: ‘The size of the virtual hard disk is not valid.’.
You might want to delete the checkpoint, but the merge will only succeed for the virtual disks that have not been expanded. You actually don't need to do this now; it's better if you don't, as it saves you some stress and extra work. You could remove the expanded virtual disks from the VM. It will boot, but in many cases the missing data on those disks is very bad news. But at least you've proven the root cause of your problems.
If you inspect the AVHD/AVHDX file you'll get an error that states
The differencing virtual disk chain is broken. Please reconnect the child to the correct parent virtual hard disk.
However attempting to do so will fail in this case.
Failed to set new parent for the virtual disk.
The Hyper-V Virtual Machine Management service encountered an unexpected error: The chain of virtual hard disks is corrupted. There is a mismatch in the virtual sizes of the parent virtual hard disk and differencing disk. (0xC03A0017).
Is there a fix?
Let's say you don't have a backup (shame on you). So now what? Make copies of the VHDX/AVHDX or VHD/AVHD files and safeguard those. You can work on copies or on the original files; I'll just use the originals, as this blog post is already way too long. Note that some extra disk space and speed come in very handy now. You might even copy them off to a lab server. That takes more time, but at least you're not working on a production host then.
Working on the original virtual disk files (VHD/AVHD and / or VHDX/AVHDX)
If you know the original size of the VHDX before you expanded it you can shrink it to exactly that. If you don’t there’s PowerShell to the rescue if you want to find out the minimum size.
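A quick sketch of finding that minimum size, using the same virtual disk path as the examples below:
(Get-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd").MinimumSize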
But even better, you can shrink it to its minimum size; it's a parameter!
Resize-VHD -Path “C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd” -ToMinimumSize
You're not home yet, though. If you restart the VM right now it will fail with the following error:
‘DidierTest06’ failed to start. (Virtual machine ID 7A54E4DB-7CCB-42A6-8917-50A05354634F)
‘DidierTest06’ Synthetic SCSI Controller (Instance ID 92ABA591-75A7-47B3-A078-050E757B769A): Failed to Power on with Error ‘The chain of virtual hard disks is corrupted. There is a mismatch in the identifiers of the parent virtual hard disk and differencing disk.’ (0xC03A000E). (Virtual machine ID 7A54E4DB-7CCB-42A6-8917-50A05354634F)
What you need to do is reconnect the AVHDX to its parent and choose to ignore the ID mismatch. You can do this via Edit Disk in Hyper-V Manager or in PowerShell. For more information on manually merging and repairing checkpoints, see my blogs on this subject. In this post I’ll just show the screenshots as a walkthrough.
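For reference, the PowerShell route is a single Set-VHD call along these lines (a sketch with placeholder paths, not the exact command from the screenshots):
# reconnect the differencing disk to its parent and ignore the ID mismatch
Set-VHD -Path 'C:\VMs\DemoVM\DemoDisk_checkpoint.avhdx' -ParentPath 'C:\VMs\DemoVM\DemoDisk.vhdx' -IgnoreIdMismatch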
Once that’s done, your VHDX is good to go.
A VHD can’t be shrunk with the inbox tools. There is, however, a free command line tool that can do it, named VHDTool.exe. The original is hard to find on the web, so here is the installer if you need it. You only need the executable, which is actually portable, so don’t install this on a production server. It has a repair switch to deal with exactly this situation!
Here’s an example of my lab …
D:\SysAdmin>VhdTool.exe /repair “C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd” “C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd”
That’s it for the VHD …
You’re back in business! All that’s left to do is get rid of the checkpoints, so you delete them. If you had wanted to apply them and get rid of the delta, you could have just removed the disks, re-added the VHD/VHDX and been done with it. But in most of these scenarios you want to keep the delta, as you most probably didn’t even realize you still had checkpoints around. Zero data loss.
Conclusion
Save yourself the stress, hassle and possibly the expense of hiring an expert. How? Please do not expand a VHD or VHDX of a virtual machine that has checkpoints. It will cause boot issues with the expanded virtual disk or disks! You will be in a stressful, painful pickle that you might not get out of if you make the wrong decisions and choices!
As a closing note, you must have backups and restores that you have tested. Do not rely on your smarts and creativity, or those of others, let alone luck. Luck runs out. Options run out. Even for the best and luckiest of us. VEEAM has saved my proverbial behind a few times already.
I have a Virtual Machine and it includes OS and other programs of course. ATM I have around 5 Snapshots.
The disk space is running low and I wanted to expand the VHD. I entered the VM’s settings intending to edit and expand the VHD, but all I found was this message.
So are there other ways of expanding a VHD? I simply wondered if there is a way to keep the snapshots and expand the virtual hard disk. I realize that there might be problems if I remove them and then import them again.
Answer:
Remove your snapshots and then expand the disk. You should read up on how snapshots work, because that will explain why expanding the underlying VHD will be bad news for the delta disks.
For those of you who are wondering if it is possible to run another hypervisor in Windows: yes, it is possible if the hypervisor is VirtualBox; I am not certain whether this is possible with VMware.
PowerShell Enables Any Version of Windows to Remotely Manage Any Other Version of Windows (and Hyper-V)
A very common complaint, and sometimes an outright problem, is that Hyper-V Manager can only fully control versions of Hyper-V that run on the same code base. Hyper-V Manager in Windows 7 can’t control anything newer than the version of Hyper-V that shipped with the Windows 7/Windows Server 2008 R2 code base; to manage anything newer, Windows 8 or later was required. Starting in Windows 8/Server 2012, Hyper-V Manager can usually manage down-level hosts, but some people have trouble even with that.
With PowerShell, there’s no problem. PowerShell Remoting was introduced in PowerShell 2.0, and since then, PowerShell Remoting has worked perfectly well both up-level and down-level. The following is a screenshot of a Windows 7 installation with native PowerShell 2.0 remotely controlling a Hyper-V 2012 R2 server with native PowerShell 4.0:
PSRemoting Different Versions
How to Enable PowerShell Remoting for Hyper-V
Both the local and remote systems must be set up properly for PowerShell Remoting to work. The first thing that you must do on both sides is:
Enable-PSRemoting -Force
On non-domain-joined systems, I received an “Access Denied” error unless I used the real Administrator account; just using an account in the local administrators group wasn’t enough. This seems to be at odds with the normal Windows authentication model and was just plain annoying for Windows 7, which disables the Administrator account by default. You can try the -SkipNetworkProfileCheck parameter of Enable-PSRemoting… it might help.
Installing the PowerShell Hyper-V Module
Not surprisingly, the easiest way to install the Hyper-V PowerShell module is with PowerShell. This works on Windows 8 and later, Windows Server 2012 and later, and Hyper-V Server 2012 and later:
Install-WindowsFeature Hyper-V-PowerShell
If you’d like to install Hyper-V Manager along with the PowerShell module:
Install-WindowsFeature RSAT-Hyper-V-Tools
If you’d like to install both of these tools along with Hyper-V:
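The command for that combination isn’t shown here; presumably it is the Hyper-V role together with its management tools, something like:
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools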
If you’d rather take the long way through the GUI for some reason, your approach depends on whether you’re using a desktop or a server operating system.
For a desktop, open Turn Windows features on or off via the Control Panel. Open the Hyper-V tree, then the Hyper-V Management Tools subtree, and check Hyper-V Module for Windows PowerShell (along with anything else that you’d like).
Hyper-V PowerShell Module on Windows
For a Server operating system, start in Server Manager. Click Add roles and features. Click through all of the screens in the wizard until you reach the Features page. Expand Remote Server Administration Tools, then Hyper-V Management Tools, and check Hyper-V Module for Windows PowerShell (along with anything else that you’d like).
Server Hyper-V PowerShell Module
Once the module is installed, you can use it immediately without rebooting. You might need to import it if you’ve already got an open PowerShell session, or you could just start a new session.
Implicit PowerShell Remoting in the Hyper-V Module
The easiest way to start using PowerShell Remoting is with implicit remoting. As a general rule, cmdlets with a ComputerName parameter are making use of implicit remoting. What that means is that all of the typing is done on your local machine, but all of the action occurs on the remote machine. Everything that the Hyper-V module does is done through WMI, so all of the commands that you type are sent to the VMMS service on the target host to perform. If you are carrying this out interactively, the results are then returned to your console in serialized form.
To make use of implicit remoting with the Hyper-V module, you must have it installed on your local computer. There are more limitations on implicit remoting:
You can only (legally) install the Hyper-V PowerShell module that matches your local Windows version
You are limited to the functionality exposed by your local Hyper-V PowerShell module
No matter the local version, it cannot be used to manage 2008 R2 or lower target hosts
Implicit remoting doesn’t always work if the local and remote Hyper-V versions are different
For example, I cannot install the Hyper-V PowerShell module at all on Windows 7. As another example, the Hyper-V PowerShell module in Windows 10 cannot control a 2012 R2 environment. Basically, the Hyper-V PowerShell module on your local system follows similar rules as Hyper-V Manager on your local system. I want to reiterate that these rules only apply to implicit remoting; you can still explicitly operate any cmdlet in any module that exists on the target.
For example, from my Windows 10 desktop, I run Get-VM -ComputerName svhv01 against my Windows Server 2016 TP5 host:
Implicit Remote Get-VM
Behind the scenes, it is directing WMI on SVHV01 to run “SELECT * FROM Msvm_ComputerSystem” against its “root\virtualization\v2” namespace. It is then processing the results through a local view filter.
Implicit Remoting Against Several Hosts
I was saying how PowerShell Remoting can be used against multiple machines at once. Any time that a cmdlet’s ComputerName parameter supports a string array, you can operate it against multiple machines. Run Get-Help against the cmdlet in question and check to see if the ComputerName parameter has square brackets:
ComputerName with Array Support
As you can see, Get-VM has square brackets in its ComputerName parameter (the <String[]> part), so it can accept multiple hosts. Example:
Get-VM -ComputerName svhv01, svhv02
Locking Implicit Remoting to a Specific Host
I have a complex environment with several hosts, so I am content to always specify the -ComputerName parameter as necessary. If you’ve only got a single host, then you might like to avoid typing that ComputerName parameter each time. To do that, open up your PowerShell profile and enter the following:
Get-Command -Module Hyper-V -Verb Get | foreach { $PSDefaultParameterValues.Add("$($_.Name):ComputerName","TARGETHOSTNAME") }
Just replace TARGETHOSTNAME with the name or IP address of your remote host. From that point onward, any time you open any PowerShell prompt using the modified profile, all cmdlets from the Hyper-V module that start with “Get” will be automatically injected with “-ComputerName TARGETHOSTNAME”.
Implicit Remoting and the Pipeline
It can take some time and practice to become accustomed to how the pipeline works with implicit remoting, not least of which because some cmdlets behave differently. As a general rule, the pipeline brings items back to your computer.
Where do you expect to find the file? On the remote host or the local host?
Implicit Remoting with a Pipeline
The file was created on my local system (if you looked closely at my screenshot and it made you curious, the file is zero length because there are no VMs on that system, not because it failed).
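For a sketch of the same idea (the host name and path here are placeholders, not the ones from the screenshot):
# Get-VM executes on the remote host, but Out-File runs locally,
# so the file ends up on the machine you typed the command on
Get-VM -ComputerName svremote01 | Out-File C:\Temp\RemoteVMs.txt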
So, what this behavior means to you is that if you choose to then carry out operations on the objects such as Set cmdlets, you might need to use implicit remoting again after a pipeline… but then again, you might not. Which of the following do you think is correct?
If you said #1, then you’re right! But, do you know why? It’s actually fairly simple to tell. Just look at the object that is crossing the pipeline:
Computer Name on Object
The object contains its computer name and the cmdlets in the Hyper-V module are smart enough to process it as an object that contains a computer name. Specifying the computer name again for the cmdlet to the right of the pipeline just confuses PowerShell and causes it to error. Not all objects have a ComputerName property and not all cmdlets know to look for it. Furthermore, if you do anything to strip away that property and then try to pipe it to another cmdlet, you will need to specify ComputerName again. For example:
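The original example isn’t shown in this copy; a sketch of the idea (the VM name demo01 is a placeholder):
# piping the VM object directly works because the object carries its owning host's name
Get-VM -ComputerName svhv01 -Name demo01 | Set-VM -AutomaticStartAction Start
# Select-Object strips that property away, so ComputerName has to be supplied again
Get-VM -ComputerName svhv01 -Name demo01 |
    Select-Object -ExpandProperty Name |
    ForEach-Object { Set-VM -ComputerName svhv01 -Name $_ -AutomaticStartAction Start }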
If the remote host has the PowerShell module installed, you can establish a session to it and begin work immediately. Natively, this requires the target system to be 2012 or later, as there was no official Hyper-V PowerShell module in prior versions. There is an unofficial release for 2008 R2 (and maybe 2008, I never tried). Explicit remoting is “harder” than implicit remoting (requires more typing) but is much more powerful and there are no fancy rules governing it. If you can connect to the remote machine using PowerShell Remoting, then you can operate any PowerShell cmdlets there (for the PowerShell experts, just imagine that there is an asterisk right here that links to a mention of Constrained Endpoints).
There are two general ways to use explicit remoting. The first is to use Enter-PSSession. That drops you right onto the remote console as a sort of console-within-a-console. From there, you can work interactively. The second method is to encapsulate script in a block and feed it to Invoke-Command. That method is used for scripting and automatically cleans up after itself.
PowerShell Remoting in an Interactive Session
The simplest way to remotely connect to an interactive PowerShell session is:
Enter-PSSession hostname
As shown, the command only works between machines on the same domain and when the current user account has sufficient privileges. I showed this above, but for the sake of completeness, to connect when one of the computers is unjoined or untrusted and/or if the user account is not administrative:
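The command itself isn’t shown here, but it is essentially Enter-PSSession with an explicit credential, along these lines:
Enter-PSSession -ComputerName hostname -Credential (Get-Credential)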
This will securely prompt you for the credentials to use on the target. If the target is domain-joined, make sure you use the format of domain\username. If it’s a standalone system, you can use the username by itself or computername\username.
Once the connection is established, it will change the prompt to reflect that you’re accessing the remote computer. You can see examples in the screenshots above. It will look like this:
[hostname]: PS C:\Users\UserAccount\Documents>
One thing I generally avoid in instructional text is using positional parameters. I especially dislike mixing positional and named parameters. I’ve done both here for the sake of showing you how uncomplicated PowerShell Remoting is to use. For the purposes of delivering a proper education, be aware that the named parameter that you use to specify the host name is -ComputerName. As long as it’s the first parameter submitted to the cmdlet, you don’t have to type it out.
Once you’re connected, it’s mostly like you were sitting at a PowerShell prompt on the remote system. Be aware that any custom PowerShell profile you have on that host isn’t loaded. Also note that whatever you’re doing on that remote system stays on that system. For instance, you can’t put something into a variable and then call that variable from your system after you exit the session.
When you’re done, you can just close the PowerShell window. PowerShell will clean up for you. If you want to go back to working on your computer:
Exit-PSSession
I much prefer the shorter alias:
exit
Using PowerShell Remoting to Address the Remote Device Manager Problem
Now that you know how to connect to a remote PowerShell session, you have the ability to overcome one of the long-standing remote management challenges of both Windows and Hyper-V Server. Prior to the 2012 versions, you could remotely connect to Device Manager, but only in a read-only mode. Starting in 2012, even that is gone.
You can get driver information for a lot of devices using PowerShell. For example, Get-NetAdapter returns several Driver fields. But what about installing or upgrading drivers? That was never possible using Device Manager remotely, or even through other remote tools. Well, with PowerShell Remoting, the problem is solvable.
You’re not restricted to running PowerShell commands inside your remote session. You can run executables, batch files, and other such things. The only things you can’t do are initiate a GUI or start anything that has its own shell. Fortunately, one of the things that can be run is pnputil. This utility can be used to manage drivers on a system. So, with PowerShell Remoting, you can remotely install and upgrade drivers.
My systems use Broadcom NICs as their management adapters. I downloaded the drivers and transferred them into local drives on my hosts. Then, using PowerShell Remoting from my desktop, I connected in and used pnputil to install them. The command to install a driver is:
pnputil -i -a driverfile.inf
You can see the results for yourself:
You can see that, as expected, my network connection was interrupted. What’s not shown is that PowerShell used a progress bar to show its automatic reconnection attempts. Once the driver was installed, the session automatically picked up right where it left off.
For verification, you can use pnputil -e:
Windows replaces the original driver file name with OEM#, as you can see here, but it keeps the manufacturer name and the driver version and date. If you want further verification, you can also run Get-NetAdapter | fl Driver*.
Advanced PowerShell Remoting with Invoke-Command
Here’s where the fun begins. Where the remote session usage shown above is great for addressing immediate needs, the true power of PowerShell Remoting is in connecting to multiple machines. Invoke-Command is the tool of choice:
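The command from the original isn’t reproduced here, but it is an Invoke-Command that runs Get-VM against one or more hosts, roughly:
Invoke-Command -ComputerName svhv1, svhv2 -ScriptBlock { Get-VM }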
If you run the above on systems prior to 2016, the first thing you’ll likely notice is that there’s no prettification of the output. Get-VM usually looks like this:
That’s because there’s a defined custom formatting view being applied. When an object crosses back to a source system across Invoke-Command , no view is applied. What you get is mostly the same thing you’d see if you piped it through Format-List -Property * (mostly seen as | fl * ).
At this point, it might not make any sense why we’re doing this. This is the same output that we got from the implicit remoting earlier, but it required more typing, and more to remember.
If you have any VMs, the above cmdlets produce a wall of text. Let’s slim it down a bit. From PowerShell 2.0 in Windows 7, I ran Invoke-Command -Computer svhv1, svhv2 -Credential (Get-Credential) -UseSSL -ScriptBlock { Get-VM | select Name }. Here’s the output:
Running the same script block locally would have resulted in a table with just the name column. Here, I get three more: PSComputerName, RunspaceId, and PSShowComputerName. In PowerShell 3.0 and later, the PSShowComputerName column isn’t there anymore. The benefit here is that you can use these fields to sort the output by the system that sent it.
You can use -HideComputer with Invoke-Command to suppress the output of all the extra fields if your source system is running PowerShell 3.0 or later. For PowerShell 2.0, RunspaceId is still shown but the others are hidden. They’re still there, so you can query against them. What’s nice about this is, if your system has the related module installed, then any custom formatting views will be applied just as if you were running inside a connected session:
This formatting issue is no longer a concern in Windows 10/Windows Server 2016 (which I suspect is more due to changes in PowerShell 5 than in the Hyper-V module):
PowerShell Remoting Formatting in 2016
Being able to format the output will always be a useful skill even with the basic formatting issues automatically addressed.
It might not make sense why we’re doing things this way. The implicit remoting method that I showed you earlier did just as well, and it required less to type (and memorize). The first reason that you’d use this method is because it doesn’t matter if the local computer and the remote computer are running the same versions of anything. The PowerShell 2.0 examples on Windows 7 that I showed you were running against a Hyper-V Server 2012 R2 environment. Neither the Windows 7 nor my Windows 10 environment can even run implicit remoting against those systems.
Even more importantly, this doesn’t begin to show the true power of PowerShell Remoting. Let’s do some interesting things. For instance, looking at VM output is fun and all, but why stop there? How about:
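The script isn’t shown here; based on the description that follows, it was along these lines (the variable name is my guess):
# run Get-VM on both hosts and keep the combined output in a local variable
$AllRemoteVMs = Invoke-Command -ComputerName svhv1, svhv2 -Credential (Get-Credential) -ScriptBlock { Get-VM }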
What I’ve done here is connect in to both hosts, retrieve their virtual machines, store them in a variable on my computer, and then disconnected the remote session. I can store them, format the output, build reports, etc. What I can’t do is make any changes to them, but that’s OK. I’ve got a couple of answers to that.
First, I can perform the modifications right on the target system by using a more complicated script block. The following example builds a script that retrieves all the VMs that aren’t set to start automatically and sets them so that they do. That entire script is assigned to a variable named “RemoteVMManipulation”. I use that as the -ScriptBlock parameter in an Invoke-Command , which I send to each of the hosts. The result of the script is saved to a variable:
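The original script isn’t included here; a sketch that matches the description and the variable names mentioned below would be:
$RemoteVMManipulation = {
    # find the VMs that are not set to start automatically
    $VMsNotStartingAutomatically = Get-VM | Where-Object { $_.AutomaticStartAction -ne 'Start' }
    # change them so that they do
    $VMsNotStartingAutomatically | Set-VM -AutomaticStartAction Start
    # whatever is in the pipeline at the end becomes the script block's output
    $VMsNotStartingAutomatically
}
$ModifiedVMs = Invoke-Command -ComputerName svhv1, svhv2 -ScriptBlock $RemoteVMManipulation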
This isn’t the most efficient script block, but I wrote it that way for illustration purposes. The variable “VMsNotStartingAutomatically” is created on each of the remote systems, but is destroyed as soon as the script block exits. It is not retrievable or usable on my calling system. However, I’ve placed the combined output into a variable named “ModifiedVMs”. Like a local function call, the output is populated by whatever was in the pipeline at the end of the script block’s execution. In this case, it’s the “VMsNotStartingAutomatically” array. Upon return, this array is transferred to the “ModifiedVMs” variable, which lives only on my system. In subsequent lines of the above script, I can view the VM objects that were changed even though the remote sessions are closed.
The second way to manipulate the objects that were returned is to transmit them back to the remote hosts using the -InputObject parameter and keep track of them with the added “PSComputerName” field:
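Again, the original code isn’t shown; a sketch of the approach, consistent with the description below:
Invoke-Command -ComputerName svhv1, svhv2 -InputObject $ModifiedVMs -ScriptBlock {
    # -InputObject hands the whole array over as a single $input item,
    # so it has to be unpacked twice before the individual entries are usable
    $input | ForEach-Object { $_ } | ForEach-Object {
        # the PSComputerName field keeps each host working on its own VMs
        if ($_.PSComputerName -eq $env:COMPUTERNAME) {
            # these are deserialized copies, not live VM objects, so reference the VM by name
            Get-VM -Name $_.Name
        }
    }
}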
What I’ve done here is send the “ModifiedVMs” variable from my system into each of the target systems using the -InputObject parameter. Once inside the script block, you reference this variable with $input. There are a few things to note here. For one, you’ll notice that I had to unpack the “ModifiedVMs” variable two times. For another, I wasn’t able to reference the input items as VM objects. Instead, I had to point to the names of the VMs. This is because we’re not sending in true VM objects. We’re sending in what we got back. GetType() reveals them as:
Because they’re a different object type, parameters expecting an object of the type “Microsoft.HyperV.PowerShell.VirtualMachine” will not work. Objects returned from Invoke-Command are always deserialized, which is why you have to go through these extra steps to do anything other than look at them. If you’ve got decent programming experience or you just don’t care about these sorts of things, you can skip ahead to the next section.
Serialization and deserialization are the methods that the .Net Framework, which is the underpinning of PowerShell, uses to first package objects for uses other than in-memory operations, and then to unpackage them later. There are lots of definitions out there for the term “object” in computer programming, but they are basically just containers. These containers hold only two things: memory addresses and an index of those memory addresses. The contents at those memory addresses are really just plain old binary 0s and 1s. It’s the indexes that give them meaning. How exactly that’s done from the developer’s view is dependent upon the language. So, in C++, you might find a “variable” defined as “int32”. This means that the memory location referenced by the index is 32 bits in length and the contents of those 32 bits should be considered an integer and that those contents can be modified. Indexes come in two broad types: data and code. In (a perhaps overly simplistic description of) .Net, data indexes are properties and refer to constants and variables. Code indexes can be either methods or events, and refer to functions.
As long as the objects are in memory, all of this works pretty much as you’d expect. If you send a read or write operation to a data index, then the contents of memory that it points to are retrieved or changed, respectively. If you (or, for an event, the system) call on a code index, then the memory contents it refers to are processed as an instruction set.
What happens if you want to save the object, say to disk? Well, you probably don’t care about the memory locations. You just want their contents. As for the functions and events, those have no meaning once the object is stored. So, what has to happen is all the code portions need to be discarded and the names of the indexes need to be paired up with the contents of the memory that they point to. As mentioned earlier, the .Net Framework does this by a process called serialization. Once an object is serialized, it can be written directly to disk. In our case, though, the object is being transmitted back to the system that called Invoke-Command . Once there, it is deserialized so that its new owning system can manipulate it in-memory like any other object. However, because it came from a serialized object, its structure looks different than the original because it isn’t the same object.
You’ll notice that all the events are gone. The only methods are GetType() and ToString(), which are part of this new object, not carried over from the original, and are here because they exist on every PowerShell object. Properties that contained complex objects have also been similarly serialized and deserialized.
Using Saved Credentials and Multiple Sessions
Of course, what puts the power into PowerShell is automation. Automation should mean you can “set it and forget it”. It’s tough to do that if you have to manually enter information into Get-Credential, isn’t it?
There’s also the problem of multiple credential sets. Hopefully, if you’ve got more than one host that sits outside your domain, they’ve each got their own credentials. I know that some people out there put Hyper-V systems in workgroup mode “to protect the domain” but then use a credential set with the same user name and password as their domain credentials. It’s no secret that I see no value in workgroup Hyper-V hosts when a domain is available except for perimeter networks, but if you’re going to do it, at least have the sense to use unique user names and passwords. Sure, it can be inconvenient, but when you actively choose the micro-management hell of workgroup-joined machines, you can’t really be surprised when you find yourself in micro-management hell. Fortunately for you, PowerShell Remoting can take a lot of the sting out of it.
The first step is to gather the necessary credentials for all of your remote machines and save them into disk files on the system where you’ll be running Invoke-Command. For that bit, I’m just going to pass the buck to Lee Holmes. He does a great job explaining both the mechanism and the safety of the process.
Once you have the credentials stored in variables, you next create individual sessions to the various hosts.
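The commands themselves aren’t shown; a sketch, assuming the credentials were read back into $Host1Cred and $Host2Cred:
$Host1Session = New-PSSession -ComputerName svhv1 -Credential $Host1Cred
$Host2Session = New-PSSession -ComputerName svhv2 -Credential $Host2Cred
# the sessions can then be reused for as many Invoke-Command calls as you need
Invoke-Command -Session $Host1Session, $Host2Session -ScriptBlock { Get-VM }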
One thing to remember, though, is that sessions created with New-PSSession persist even if you close your PowerShell prompt. They’ll eventually time out, but until then, they’re like a disconnected RDP session: they just sit there chewing up resources and giving an attacker an open session to attempt to compromise, all for no good reason. If you want, you can reconnect and reuse these sessions. Otherwise, get rid of them:
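For instance (continuing with the hypothetical session variables from the sketch above):
$Host1Session, $Host2Session | Remove-PSSession
# or simply clean out every session created from this console
Get-PSSession | Remove-PSSession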
I’ve really only scratched the surface of PowerShell Remoting here. I had heard about it some time before, but I wasn’t in a hurry to use it because I was “getting by” with Hyper-V Manager and Remote Desktop connections. Ever since I spent a few minutes learning about Remoting, I have come to use it every single day. The ad hoc capabilities of Enter-PSSession allow me to knock things out quickly and the scripting powers of Invoke-Command are completely irreplaceable.
All that, and I haven’t even talked about delegated administration (allowing a junior admin to carry out a set of activities as narrowly defined as you like via a pre-defined PowerShell Remoting session) or implicit remoting or setting up “second hop” powers so you can control other computers from within your remote session. For those things, and more, you’re going to have to do some research on your own. I recommend starting with PowerShell in Depth. Most of the general information, but not all, of what you saw in this article can be found in that book. That chapter does contain all the things I teased about, and more.
Hyper-V Manager, SCVMM and PowerShell can all be used to create a Hyper-V VM, but if you want to configure certain parameters beforehand, you need to use SCVMM.
There are several ways to create VMs on Hyper-V virtualization hosts. The standard approach is to use Hyper-V Manager or System Center Virtual Machine Manager. However, many administrators like to use PowerShell cmdlets to quickly provision Hyper-V VMs. PowerShell is a very useful tool for when you need to deploy Hyper-V VMs in a development environment or when you need to perform VM creation tasks repeatedly.
Create Hyper-V VMs using Hyper-V Manager
Most Hyper-V administrators are familiar with the VM creation process using Hyper-V Manager. All you need to do is open Hyper-V Manager, right-click on a Hyper-V host in the list of available hosts, click on the New action, click on the Virtual Machine action and then follow the steps on the screen to create the VM. You’ll need to specify parameters, like VM name, VM generation and the path to store VM files.
Create Hyper-V VMs using SCVMM
Deploying VMs using System Center Virtual Machine Manager (SCVMM) is fairly simple. You can deploy VMs on a standalone Hyper-V host or in a Hyper-V cluster. You need to go to the VMs and Services workspace, right-click on a SCVMM host group and then click on the Create Virtual Machine action
When you click on the Create Virtual Machine action, SCVMM will open a wizard. All you need to do is follow the steps on the screen. One of the main advantages of SCVMM is that it allows you to configure VM parameters — including Dynamic Memory — before the actual creation process starts. Another benefit of using SCVMM is that you can quickly provision a VM by selecting a SCVMM template that already includes the required VM settings. SCVMM also provides greater flexibility when deploying VMs in a production environment.
When you provision VMs using SCVMM, SCVMM creates a PowerShell script on the fly and then executes it via the SCVMM job window. If you need to use the PowerShell script where SCVMM isn’t installed, you can copy the PowerShell script from the SCVMM job window and modify the Hyper-V host-related parameters.
Create Hyper-V VMs using PowerShell
Hyper-V offers the New-VM PowerShell cmdlet that can be used to create a VM on a local or remote Hyper-V host. It’s important to note that, before creating Hyper-V VMs using PowerShell, you’ll need to make some configuration decisions, as explained below:
Figure out the Hyper-V virtual switch to which the VM will be connected. You can get Hyper-V virtual switch names by executing the Get-VMSwitch * | Format-Table Name PowerShell command. The command will list all the Hyper-V virtual switches on the local Hyper-V host. Copy the Hyper-V virtual switch name to be used in the VM creation command.
Decide the type of memory configuration for the new VM. Are you planning to use static memory or Dynamic Memory? If you plan to use the Dynamic Memory feature, you’ll need to use the Set-VMMemory PowerShell cmdlet after creating the VM.
Identify the VM file path where VM files will be stored. It can be a local path, a path to the Cluster Shared Volumes disk in the Hyper-V cluster or a path to the Scale-Out File Server cluster.
Decide if you’d like the OS in the VM to be installed via a Preboot Execution Environment server running on the network or if you’d like to set up the OS from a DVD. Depending on the OS deployment type, you’d want to change the boot order of the VM.
Are you going to create a new VM on a local or remote Hyper-V host? If you’re going to create a VM on a remote Hyper-V host, get the Hyper-V server’s fully qualified domain name or IP address, and specify that value using the -ComputerName parameter in the New-VM PowerShell cmdlet.
Choose the generation of the VM. Generation 2 VMs provide new features, such as guest clustering, Hyper-V virtual hard disk (VHDX) online resizing, secure boot, fast boot and so on. I recommend you choose Generation 2 unless you have a reason to go for Generation 1 VMs.
Once you have gathered the required parameters for the new VM, use the PowerShell command below to create the VM on the Hyper-V host.
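The command itself isn’t reproduced above; based on the description that follows, it would be something like:
New-VM -Name SQLVM -MemoryStartupBytes 8GB -Path 'C:\ProductionVMs' -Generation 2
# add -SwitchName (and -NewVHDPath/-NewVHDSizeBytes or -VHDPath) as needed for your environment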
This command will create a VM by the name of the SQLVM on the local Hyper-V host. The new VM will be configured to use 8 GB of memory and will be stored in the C:\ProductionVMs folder. Note that -Generation 2 specifies that this VM will be created as a Generation 2 VM. If you want to change the new VM’s memory configuration from static to Dynamic Memory, use the PowerShell command below:
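That command isn’t shown either; a Set-VMMemory call along these lines would do it (the memory values here are just examples):
Set-VMMemory -VMName SQLVM -DynamicMemoryEnabled $true -MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 8GB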
PowerShell is one of Microsoft’s preferred tools for managing Windows Servers. Although it’s easy to think of PowerShell as a local management tool, PowerShell can just as easily be used to manage other servers in your datacenter. This capability is especially helpful if you have a lot of Hyper-V virtual machines and want to be able to perform bulk management operations.
There are a few different ways of running a PowerShell command against a remote server. For the purposes of this article however, I want to show you how to use the Invoke-Command cmdlet. The reason why I want to talk about this particular method is because the Invoke-Command cmdlet is being extended in Windows Server 2016 to provide better support for Hyper-V virtual machines. I will get to that in a few minutes.
The first thing that you will need to do is to configure the remote system to allow for remote management. Microsoft disables remote PowerShell management by default as a way of enhancing security.
To enable remote PowerShell management, logon to the remote server, open PowerShell (as an Administrator) and run the following command:
Enable-PSRemoting -Force
This command does a few different things. First, it starts the WinRM service, which is used for Windows remote management. It also configures the service to start automatically each time that the server is booted and it also adds a firewall rule that allows inbound connections. In case you are wondering, the Force parameter is used for the sake of convenience. Without it, PowerShell will nag you for approval as it performs the various steps. You can see what the command looks like in action in Figure 1.
Figure 1. You must use the Enable-PSRemoting cmdlet to prepare the remote server for management.
There are about a zillion different ways that you can use the Invoke-Command cmdlet. Microsoft provides full documentation for using this cmdlet here. This site covers the full command syntax in ponderous detail. For the purposes of this article however, I want to try to keep things simple and show you an easy method of running a command against a remote system.
The first thing that you need to know is that any time you are going to be running a PowerShell command against a remote system, you will have to enter an authentication credential. Although this step is necessary, it is a bit of a pain to enter a set of credentials every time you run a command. Therefore, my advice is to map your credentials to a variable. To do so, enter the following command:
$Cred = Get-Credential
As you can see in Figure 2, entering this command causes PowerShell to prompt you for a username and password. The credentials that you enter are mapped to the variable $Cred.
Figure 2. Your credentials can be mapped to a variable.
Now that your credentials have been captured, the easiest way to run a command against a remote server is by using the following syntax:
Invoke-Command -ComputerName <server name> -Credential $Cred -ScriptBlock {The command that you want to run}
OK, so let’s take a look at how this command works. Right now I am using PowerShell on a system that is running Windows 8.1. I have a Hyper-V server named Hyper-V-4. Let’s suppose that I want to run the Get-VM cmdlet on that server so that I can find out what virtual machines currently exist on it. To do so, I would use this command:
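Following the syntax above, that command works out to:
Invoke-Command -ComputerName Hyper-V-4 -Credential $Cred -ScriptBlock {Get-VM}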
As you can see, the script block contains the command that needs to be executed on the remote system. It is worth noting that this technique only works if both computers are domain joined and are using Kerberos authentication. Otherwise, you will have to use the HTTPS transport or add the remote computer to a list of trusted hosts. The previously mentioned TechNet article contains instructions for doing so.
At the beginning of this article, I mentioned that Invoke-Command was being extended in Windows Server 2016 to better support Hyper-V virtual machines. Microsoft is adding a parameter named VMName (which is used in place of ComputerName). This extension makes use of a new feature called PowerShell direct, which allows you to run PowerShell commands on a Hyper-V virtual machine even if the virtual machine is not connected to the network. This is accomplished by communicating with the VM through the VMBus.
So as you can see, the Invoke-Command cmdlet makes it easy to manage remote servers through PowerShell. I would encourage you to check out the previously mentioned TechNet article because there is a lot more that you can do with the Invoke-Command cmdlet than what I have covered here.
Working with PowerShell can be very common for daily tasks and Hyper-V Server management. However, as there is more than one server to be managed, sometimes it can be difficult to log on and run the PowerShell scripts (most of the time the same one) on different computers.
One of the benefits that PowerShell offers is the remote option that allows you to connect to multiple servers, enabling a single PowerShell window to administer as many servers as you need.
The PowerShell remote connection uses WinRM over HTTP (TCP port 5985 by default). Although the local firewall exception is created by default when remoting is enabled, make sure that any other firewall in the path allows this communication between your servers.
How to do it
These tasks will show you how to enable the PowerShell Remoting feature to manage your Hyper-V Servers remotely using PowerShell.
1. Open a PowerShell window as an administrator from the server for which you want to enable the PowerShell Remoting.
2. Type the Enable-PSRemoting cmdlet to enable PowerShell Remoting.
3. The system will prompt you to confirm some settings during the setup. Select A for Yes to All to confirm all of them. Run the Enable-PSRemoting command on all the servers that you want to connect to remotely via PowerShell.
4. In order to connect to another computer in which the PowerShell Remoting is already enabled, type Connect-PSSession Hostname, where hostname is the computer name to which you want to connect.
5. To identify all the commands used to manage the PowerShell sessions, you can create a filter with the command Get-Command *PSSession*. A list of all the PSSession commands will appear, showing you all the available remote connection commands.
6. To identify which Hyper-V cmdlets can be used with the remote option ComputerName, use Get-Command with the following parameter:
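The exact parameter isn’t shown here, but filtering on the ComputerName parameter does the job, for example:
Get-Command -Module Hyper-V -ParameterName ComputerName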
7. To use the remote PowerShell connection from PowerShell ISE, click on File and select New Remote PowerShell Tab. A window will prompt you for the computer name to which you want to connect and the username, as shown in the following screenshot. Type the computer name and the username to create the connection and click on Connect. Make sure that the destination computer also has the remoting settings enabled.
8. A new tab with the computer name to which you have connected will appear at the top, identifying all the remote connections that you have through PowerShell ISE. The following screenshot shows an example of a PowerShell ISE window with two tabs. The first one to identify the local connection called PowerShell 1 and the remote computer tab called HVHost.
Summary
The process to enable PowerShell involves the creation of a firewall exception, WinRM service configuration, and the creation of a new listener to accept requests from any IP address. PowerShell configures all these settings through a single and easy command—Enable-PSRemoting. By running this command, you will make sure that your computer has all the components enabled and configured to accept and create new remote connections using PowerShell.
Then, we identified the commands which can be used to manage the remote connections. Basically, all the commands that contain PSSession in them. Some examples are as follows:
· Connect-PSSession to reconnect to a disconnected session
· Enter-PSSession to start an interactive session with a remote computer
· Exit-PSSession to leave the current connection
· Get-PSSession to show all existing connections
· New-PSSession to create a new session
Another interesting option that is very important, is to identify which commands support remote connections. All of them use the ComputerName switch. To show how this switch works, see the following example; a command to create a new VM is being used to create a VM on a remote computer named HVHost.
New-VM -Name VM01 -ComputerName HVHost
To identify which commands support the ComputerName switch, you saw Get-Command being used with a filter to find all the cmdlets. After these steps, your servers will be ready to receive and create remote connections through PowerShell.
Here are 12 steps to remotely manage Hyper-V Server 2012 Core. Have you set up a Microsoft Hyper-V Server 2012 Core edition and now you want to remotely manage it in a workgroup (non-domain) environment?
Hopefully I can help ease your frustration with this article by showing you what worked for me.
If Microsoft did one thing that really tested my patience, it’s trying to remotely manage Hyper-V Server Core in a workgroup environment.
Not long ago, I wrote an article titled Remotely Manage Hyper-V Server 2012 Core, but I admit I lost steam with wanting to work with it after that article/video. I wasn’t very confident in those instructions because every time I tested them there seemed to be different results.
Earlier today I decided to tackle this one again because I have had a lot of questions on this topic. It appears a lot of you out there are having similar issues. I feel very confident this time that I have all the instructions tested and working.
12 Steps to Remotely Manage Hyper-V
Quick run-down
Server: Microsoft Hyper-V Server 2012 Core (Free Edition)
Client: Windows 8 Pro
This next section is what I’m calling the condensed (advanced) version.
Condensed (advanced) Version
Install Hyper-V Server 2012 Core and log in to the console.
Configure date and time (select #9).
Enable Remote Desktop (select #7). Also select the ‘Less Secure’ option.
Configure Remote Management (select #4 then #1).
Add local administrator account (select #3). Username and password need to be exactly the same as the account you are going to use on the client computer to manage this Hyper-V Server.
Configure network settings (select #8). Configure as a static IP. Same subnet as your home network. Don’t forget to configure the DNS IP.
Set the computer name (select #2). Rename the server and reboot.
Remote Desktop to server. On your client machine, remote to the server via the IP address you assigned it. Use the credentials of the local administrator account you created earlier.
Launch PowerShell. In the black cmd window, run the following command: start powershell
Add the server hostname and IP to the hosts file. Right-click hosts and select Properties. On the Security tab, add your username and give your account Modify rights. This is needed because some of the remote management tools we need rely on the hosts file to resolve the name. Without doing this, you are highly likely to encounter errors while trying to create VHDs and such. An error you might see: There was an unexpected error in configuring the hard disk.
There you have it: 12 steps to remotely manage Hyper-V Server 2012 Core.
You should now be able to remotely manage the Hyper-V server from the client machine. This includes managing the Hyper-V server’s disks from within the Disk Management console on the client. You should be able to create VHDs successfully as well from within Hyper-V Manager on the client (assuming you installed the feature).
This was a quick tutorial on how to setup a working Hyper-V Server 2012 Core edition in a non-domain (workgroup) environment and still be able to remotely manage it.
“System.DirectoryServices.Protocols“. Here is the link to the Microsoft website where you can download the modules, save them locally and load them into PowerShell.
Import-Module ActiveDirectory

$userList = Import-Csv '.\List of Users.csv'

foreach ($user in $userList) {
    Get-ADUser -Filter "SamAccountName -eq '$($user.sAMAccountName)'" -SearchBase "DC=subdomain,DC=company,DC=com" -Properties Company |
        ForEach-Object { Set-ADUser $_ -Replace @{Company = 'Deliveron'} }
}

If you then wanted to query AD for those users to make sure they updated correctly, you could use the following query using Get-ADUser:

foreach ($user in $userList) {
    Get-ADUser -Filter "SamAccountName -eq '$($user.sAMAccountName)'" -SearchBase "DC=subdomain,DC=company,DC=com" -Properties Company |
        Select-Object SamAccountName, Name, Company
}
This section lists several common DNS problems and explains how to solve them.
Event ID 7062 appears in the event log.
If you see event ID 7062 in the event log, the DNS server has sent a packet to itself. This is usually caused by a configuration error. Check the following:
Make sure that there is no lame delegation for this server. A lame delegation occurs when one server delegates a zone to a server that is not authoritative for the zone.
Check the forwarders list to make sure that it does not list itself as a forwarder.
If this server includes secondary zones, make sure that it does not list itself as a master server for those zones.
If this server includes primary zones, make sure that it does not list itself in the notify list.
Zone transfers to secondary servers that are running BIND are slow.
By default, the Windows 2000 DNS server always uses a fast method of zone transfer. This method uses compression and includes multiple resource records in each message, substantially increasing the speed of zone transfers. Most DNS servers support fast zone transfer. However, BIND 4.9.4 and earlier does not support fast zone transfer. This is unlikely to be a problem, because when the Windows 2000 DNS Server service is installed, fast zone transfer is disabled by default. However, if you are using BIND 4.9.4 or earlier, and you have enabled fast zone transfer, you need to disable fast zone transfer.
To disable fast zone transfer
In the DNS console, right-click the DNS server, and then click Properties .
Click the Advanced tab.
In the Server options list, select the Bind secondaries check box, and then click OK .
You see the error message “Default servers are not available.”
When you start Nslookup, you might see the following error message:
*** Can’t find server name for address <address> : Non-existent domain
*** Default servers are not available
Default Server: Unknown
Address: 127.0.0.1
If you see this message, your DNS server is still able to answer queries and host Active Directory. The resolver cannot locate the PTR resource record for the name server that it is configured to use. The properties for your network connection must specify the IP address of at least one name server, and when you start Nslookup, the resolver uses that IP address to look up the name of the server. If the resolver cannot find the name of the server, it displays that error message. However, you can still use Nslookup to query the server.
To solve this problem, check the following:
Make sure that a reverse lookup zone that is authoritative for the PTR resource record exists. For more information about adding a reverse lookup zone, see “Adding a Reverse Lookup Zone” earlier in this chapter.
Make sure that the reverse lookup zone includes a PTR resource record for the name server.
Make sure that the name server you are using for your lookup can query the server that contains the PTR resource record and the reverse lookup zone either iteratively or recursively.
User entered incorrect data in zone.
For information about how to add or update records by using the DNS console, see Windows 2000 Server Help. For more information about using resource records in zones, search for the keywords “managing” and “resource records” in Windows 2000 Server Help.
Active Directory-integrated zones contain inconsistent data.
For Active Directory–integrated zones, it is also possible that the affected records for the query have been updated in Active Directory but not replicated to all DNS servers that are loading the zone. By default, all DNS servers that load zones from Active Directory poll Active Directory at a set interval — typically, every 15 minutes — and update the zone for any incremental changes to the zone. In most cases, a DNS update takes no more than 20 minutes to replicate to all DNS servers that are used in an Active Directory domain environment that uses default replication settings and reliable high-speed links.
User cannot resolve name that exists on a correctly configured DNS server.
First, confirm that the name was not entered in error by the user. Confirm the exact set of characters entered by the user when the original DNS query was made. Also, if the name used in the initial query was unqualified and was not the FQDN, try the FQDN instead in the client application and repeat the query. Be sure to include the period at the end of the name to indicate the name entered is an exact FQDN.
If the FQDN query succeeds and returns correct data in the response, the most likely cause of the problem is a misconfigured domain suffix search list that is used in the client resolver settings.
Name resolution to Internet is slow, intermittent, or fails.
If queries destined for the Internet are slow or intermittent, or you cannot resolve names on the Internet, but local Intranet name resolution operates successfully, the cache file on your Windows 2000–based server might be corrupt, missing, or out of date. You can either replace the cache file with an original version of the cache file or manually enter the correct root hints into the cache file from the DNS console. If the DNS server is configured to load data on startup from Active Directory and the registry, you must use the DNS console to enter the root hints.
To enter root hints in the DNS console
In the DNS console, double-click the server to expand it.
Right-click the server, and then click Properties .
Click the Root Hints tab.
Enter your root hints, and then click OK .
To replace your cache file
Stop the DNS service by typing the following at the command prompt: net stop dns
Type the following: cd %Systemroot%\System32\DNS
Rename your cache file by typing the following: ren cache.dns cache.old
Copy the original version of the cache file, which might be found in one of two places, by typing either of the following: copy backup\cache.dns or copy samples\cache.dns
Start the DNS service by typing the following: net start dns
If name resolution to the Internet still fails, repeat the procedure, copying the cache file from your Windows 2000 source media.
To copy the cache file from your Windows 2000 source media
At the command prompt, type the following: expand <drive>:\i386\cache.dn_ %Systemroot%\system32\dns\cache.dns where <drive> is the drive that contains your Windows 2000 source media.
Resolver does not take advantage of round robin feature.
Windows 2000 includes subnet prioritization, a new feature, which reduces network traffic across subnets. However, it prevents the resolver from using the round robin feature as defined in RFC 1794. By using the round robin feature, the server rotates the order of A resource record data returned in a query answer in which multiple resource records of the same type exist for a queried DNS domain name. However, if the resolver is configured for subnet prioritization, the resolver reorders the list to favor IP addresses from networks to which they are directly connected.
If you would prefer to use the round robin feature rather than the subnet prioritization feature, you can do so by changing the value of a registry entry. For more information about configuring the subnet prioritization feature, see “Configuring Subnet Prioritization” earlier in this chapter.
WINS Lookup record causes zone transfer to a third-party DNS server to fail.
If a zone transfer from a Windows 2000 server to a third-party DNS server fails, check whether the zone includes any WINS or WINS-R records. If it does, you can prevent these records from being propagated to a secondary DNS server.
To prevent propagation of WINS lookup records to a secondary DNS server
In the DNS console, double-click your DNS server, right-click the zone name that contains the WINS record, and then click Properties .
In the Properties dialog box for the zone, click the WINS tab and select the check box Do not replicate this record.
To prevent propagation of WINS-R records to a secondary DNS server
In the DNS console, double-click your DNS server, right-click the reverse lookup zone that contains the WINS-R record, and then click Properties .
In the properties page for the zone, click the WINS-R tab and select the check box Do not replicate this record .
WINS lookup record causes a problem with authoritative data.
If you have a problem with incorrect authoritative data in a zone for which WINS lookup integration is enabled, the erroneous data might be caused by WINS returning incorrect data. You can tell whether WINS is the source of the incorrect data by checking the TTL of the data in an Nslookup query. Normally, the DNS service answers with names stored in authoritative zone data by using the set zone or resource record TTL value. It generally answers only with decreased TTLs when providing answers based on non-authoritative, cached data obtained from other DNS servers during recursive lookups.
However, WINS lookups are an exception. The DNS server represents data from a WINS server as authoritative but stores the data in the server cache only, rather than in zones, and decreases the TTL of the data.
To determine whether data comes from a WINS server
At the command prompt, type nslookup -d2, and then at the nslookup prompt type server <server>, where <server> is a server that is authoritative for the name that you want to test. This starts nslookup in user-interactive, debug mode and makes sure that you are querying the correct server. If you query a server that is not authoritative for the name that you test, you are not able to tell whether the data comes from a WINS server.
To test for a WINS forward lookup, type the following: set querytype=a. To test for a WINS reverse lookup, type the following: set querytype=ptr.
Enter the forward or reverse DNS domain name that you want to test.
In the response, note whether the server answered authoritatively or non-authoritatively, and note the TTL value.
If the server does not answer authoritatively, the source of the data is not a WINS server. However, if the server answered authoritatively, repeat a second query for the name.
In the response, note whether the TTL value decreased. If it did, the source of the data is a WINS server.
If you have determined that the data comes from a WINS server, check the WINS server for problems. For more information about checking the WINS server for problems, see “Windows Internet Name Service” in this book.
A zone reappears after you delete it.
In some cases, when you delete a secondary copy of the zone, it might reappear. If you delete a secondary copy of the zone when an Active Directory-integrated copy of the zone exists in Active Directory, and the DNS server from which you delete the secondary copy is configured to load data on startup from Active Directory and the registry, the zone reappears.
If you want to delete a secondary copy of a zone that exists in Active Directory, configure the DNS server to load data on startup from the registry, and then delete the zone from the DNS server that is hosting the secondary copy of the zone. Alternatively, you can completely delete the zone from Active Directory when you are logged into a domain controller that has a copy of the zone.
You see error messages stating that PTR records could not be registered
When the DNS server that is authoritative for the reverse lookup zone cannot or is configured not to perform dynamic updates, the system records errors in the event log stating that PTR records could not be registered. You can eliminate the event log errors by disabling dynamic update registration of PTR records on the DNS client. To disable dynamic update registration, add the DisableReverseAddressRegistrations entry, with a value of 1 and a data type of REG_DWORD, to the following registry subkey:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<name of the interface>
where <name of the interface> is the GUID of a network adapter.
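On a current system you could also add that value with PowerShell instead of Regedit; a rough sketch (the interface GUID is a placeholder you must replace with the one for your adapter):
$ifKey = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{YOUR-ADAPTER-GUID}'
New-ItemProperty -Path $ifKey -Name DisableReverseAddressRegistrations -PropertyType DWord -Value 1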
You can see that I have assigned a DNS server in my IP configuration, but why does nslookup spout
*** Can't find server name for address 172.27.0.12: Non-existent domain
*** Default servers are not available
Default Server: Unknown
What does it mean by saying "not available" and "Unknown"?
The DNS server (172.27.0.12) is working correctly because it answers queries for chj.dev.nls as expected. The DNS server is a Win2003 SP2.
Some detail info:
> set debug
> chj.dev.nls
Server: UnKnown
Address: 172.27.0.12
------------
Got answer:
HEADER:
opcode = QUERY, id = 4, rcode = NOERROR
header flags: response, auth. answer, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 1, additional = 0
QUESTIONS:
chj.dev.nls, type = A, class = IN
AUTHORITY RECORDS:
-> dev.nls
ttl = 3600 (1 hour)
primary name server = nlserver.dev.nls
responsible mail addr = hostmaster.dev.nls
serial = 14716
refresh = 900 (15 mins)
retry = 600 (10 mins)
expire = 86400 (1 day)
default TTL = 3600 (1 hour)
------------
------------
Got answer:
HEADER:
opcode = QUERY, id = 5, rcode = NOERROR
header flags: response, auth. answer, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 1, additional = 0
QUESTIONS:
chj.dev.nls, type = A, class = IN
AUTHORITY RECORDS:
-> dev.nls
ttl = 3600 (1 hour)
primary name server = nlserver.dev.nls
responsible mail addr = hostmaster.dev.nls
serial = 14716
refresh = 900 (15 mins)
retry = 600 (10 mins)
expire = 86400 (1 day)
default TTL = 3600 (1 hour)
------------
Name: chj.dev.nls
>
Any idea? Thank you.
Answer:
Nslookup tries to resolve a name for the IP address of the DNS server configured as the primary DNS server on the client by performing a reverse lookup of that IP address. If you don't have a reverse DNS zone set up for your network/subnet, you'll get the "server unknown" message, because nslookup is unable to resolve a name for the IP address.
It’s not an error condition and won’t cause any problems for normal AD and DNS operations.
Answer:
Your server isn’t returning a reverse lookup for its name. That’s why you’re seeing “Unknown” there. You’ll need to create the appropriate reverse lookup zone to allow your server to reverse-resolve its own IP address back to its name.
Answer:
Well, after adding a reverse lookup zone to my internal DNS server, Default Server now shows the name of my DNS server.
NSLOOKUP is a command line tool which comes with most operating systems and is used for querying DNS servers.
When NSLOOKUP starts, before anything else, it checks the computer’s network configuration to determine the IP address of the DNS server that the computer uses.
Then it does a reverse DNS lookup on that IP address to determine the name of the DNS server.
If reverse DNS for that IP address is not setup correctly, then NSLOOKUP cannot determine the name associated with the IP address.
On Windows Vista/2008, it then says “Default Server: UnKnown”.
On earlier Windows versions, it displays the error message “*** Can’t find server name for address …”.
This does NOT indicate a problem with the actual domain name that you are trying to look up.
It only means that there is no reverse DNS name for the DNS server IP address, which in most cases may not be a problem at all.
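You can reproduce the check that nslookup performs at startup with a few lines of Python. This is only a sketch; 172.27.0.12 is the example server address from the question above.
import socket

try:
    name, aliases, addresses = socket.gethostbyaddr("172.27.0.12")  # reverse (PTR) lookup
    print("PTR record found, server name:", name)
except socket.herror:
    # No PTR record for the address: this is the situation in which
    # nslookup falls back to "Default Server: UnKnown".
    print("no reverse DNS name for this address")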
To fix this you need to properly configure the reverse zone for the IP address of the DNS server, and make sure that the reverse zone is properly delegated to the server by your IP provider. See the reference article below for more details.
To create a reverse zone in Simple DNS Plus, click the “Records” button, select “New” -> “Zone”, select “Reverse Zone…”, and follow the prompts.
Issue : “Default Server: UnKnown” error on NSLOOKUP from Windows Server 2008 DNS Server.
Note: To show the server name, a reverse DNS zone must be configured. If you do not have reverse DNS configured, please look at my post below, which covers reverse DNS configuration.
This issue is not a critical one. Even with this error, your DNS resolution can work smoothly. But it's embarrassing when there are issues like this, right? Yes, I know, me too. 😀
The reason for this is that your DNS server does not possess a PTR record for itself; put simply, it does not know what its own name is. By creating a static PTR entry we can fix this and let the DNS server know its own name.
1. Open the DNS management console in the Server 2008 Start > Administrative Tools > DNS
2. Go to your Reverse Lookup Zone, right-click it, and select "New Pointer (PTR)".
3. In the New PTR window enter the IP address of DNS server and enter(or select) the host name of the server.
4. Now click OK and restart the DNS server service.
When I built the domain controller with the DNS role, I got the "unknown" default server result when using nslookup.
Even though the other machines in the domain use this server as their DNS server, nslookup still showed the same issue.
After a little investigation I found that it is not a critical issue. Even with this error your DNS resolution can work smoothly; it just means that there is no reverse DNS name for the DNS server IP address, which in most cases may not be a problem at all.
I found a suggestion that turning on IPv6 on the NIC will solve this issue, but I don't want to turn on IPv6; I just want to fix it within IPv4.
To fix this you need to properly configure the reverse zone for the IP address of the DNS server, and make sure that the reverse zone is properly delegated to the server by your IP provider.
So the problem is the Reverse Lookup Zone: the DNS server did not create a related reverse lookup zone automatically, so you have to create it manually.
OK, we found the root cause, so let's fix it:
Right click on Reverse Lookup Zone, click on New Zone
Create a Primary Zone
Type in your "Network ID", which is your network subnet
Select the reverse lookup zone type; the zone name is now filled in correctly, then click Next
NSLOOKUP RESPONSE DEFAULT SERVER UNKNOWN, ADDRESS ::1
When I do a nslookup, I get the response listed below:
C:\Windows\system32>nslookup
Default Server: UnKnown
Address: ::1
As far as I can verify, EDNS0 is disabled, PTR records exist for the server in the zone. Also, on the server, if I uncheck the IPv6 protocol in the TCP/IP properties of the NIC, this issue goes away.
RESOLUTION:
Check the IPv6 settings and set the DNS server address to be obtained automatically.
That is, change the preferred DNS server from ::1 to "Obtain DNS server address automatically".
DNS Server : nslookup response “Default Server Unknown
Recently I had a power failure in my data center and faced an Active Directory/DNS crash as well. I had configured Active Directory and DNS to support my users and organization, and AD started replicating, but after a day I noticed that nslookup showed the message "Default Server: Unknown". Oops! I was really worried about what had happened to it. Obviously I started troubleshooting, and the solution was quite unexpected.
Solution 1:
You need to log in to your DNS server and, if you haven't set up your reverse lookup zone, create it. If that is already done, then you need to create a PTR record pointing to the 192.168.10.10 server (in my example). After creating the PTR record or configuring the reverse lookup zone, you will be able to see the server name.
Solution 2:
In some cases your DNS may behave differently: it shows the exact default server name, but when you query any website name (google.com) it shows an error message like "request timed out. timeout was 2 seconds". In this case you have to check the firewall on the DNS server, or any firewall between your computer and the DNS server, and allow traffic to your DNS server.
Solution 3:
You may notice that even disabling the firewall on the DNS server or the local computer, or allowing the DNS server through an intermediate firewall, doesn't help. Then you must check that your DNS server is properly configured with live upstream DNS servers: double-check the live DNS IP addresses in the DNS forwarders. I hope these tips help; please leave a comment to improve the post.
If on a domain controller that is DNS you have the following error after running the nslookup command:
DNS Nslookup request timed out
Timeout was 2 seconds.
Default server: Unknown
Address: :: 1
it simply means that your server is trying to query DNS over IPv6.
The bad solution is to disable IPv6 in the settings of the network card:
If you disable IPv6 on your 2008/2012 server you lose the following features:
– Remote Assistance
– Windows Meeting Space (P2P)
– Homegroup
– DirectAccess
– Client Side Caching (offline files) and BranchCache (Windows Server 2008 R2 and Windows 7)
I am not sure if I have a reason to be uncomfortable but the results below do make me uncomfortable. Note that I do not have any problems accessing my network resources and the internet from any program. However…
Pinging ds-any-fp3-real.wa1.b.yahoo.com [98.139.183.24] with 32 bytes of data:
Reply from 98.139.183.24: bytes=32 time=59ms TTL=47
Reply from 98.139.183.24: bytes=32 time=72ms TTL=49
Reply from 98.139.183.24: bytes=32 time=121ms TTL=47
Reply from 98.139.183.24: bytes=32 time=105ms TTL=49
Ping statistics for 98.139.183.24:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 59ms, Maximum = 121ms, Average = 89ms
More about the environment:
– Very small domain – two AD servers (both Windows Server 2012), two computers (running Windows 8) and three or four devices (printer, phone, WiFi access point, etc.).
– The network (DHCP, DNS, servers and gateway static addresses etc.) is both IPv4 and IPv6.
– There are DNS running on both DC servers.
– The servers have static IPv4 and 6 addresses.
– The servers have both DNS addresses (both IPv4 and 6) in their IP configuration.
– Single forward lookup zone – mydomain.com (and of course _msdcs…).
– Two reverse lookup zones one for IPv4 and one for IPv6.
– DHCP has the two DNS servers in the options.
Call me old-fashioned, but I've been using "nslookup yahoo.com" to diagnose my network problems for years, and now that it doesn't answer unless I specify the DNS server, it makes me nervous. Am I right, and if I am, can you suggest possible problems in my configuration?
Answer:
It’s only showing “unknown” for the IPv6 address.
Go into your IPv6 properties, and set the IP and DNS address settings to be obtained automatically.
Then in the Network Connections window, change the view options to show the menu bar, click Advanced > Advanced Settings, and make sure IPv4 is listed above IPv6 in the binding order.
Answer:
Everything was exactly the way you suggested… Then I played with the order: v6 before v4, just to try and see – it got worse. Reversed back to v4 before v6 – and almost everything looks good, with the exception of Server: UnKnown (like I said in the original post, I do have a reverse lookup zone). But this is something I can live with (unless you or somebody else has another suggestion). I am marking your reply as the solution.
Thank you very much!
Answer:
I’m sure you’ve figured this out by now. Although the recommendation from ACE is correct there is possibly another issue.
1. Open Network and Sharing Center
2. Change adapter settings.
3. Select the connection, right-click, and choose Properties of the network connection.
4. Double-click the IPv6 tcpip settings.
Determine whether your IPv6 DNS server setting is statically configured (for example to ::1). If so, change it to "Obtain DNS server address automatically".
Wondering what a VLAN is? Read up on the information below about VLANs.
Virtual Local Area Network (VLAN):
Definition – What does Virtual Local Area Network (VLAN) mean?
A virtual local area network (VLAN) is a logical group of workstations, servers and network devices that appear to be on the same LAN despite their geographical distribution. A VLAN allows a network of computers and users to communicate in a simulated environment as if they exist in a single LAN and are sharing a single broadcast and multicast domain. VLANs are implemented to achieve scalability, security and ease of network management and can quickly adapt to changes in network requirements and relocation of workstations and server nodes.
Higher-end switches allow the functionality and implementation of VLANs. The purpose of implementing a VLAN is to improve the performance of a network or apply appropriate security features.
Techopedia explains Virtual Local Area Network (VLAN)
Computer networks can be segmented into local area networks (LANs) and wide area networks (WANs). Network devices such as switches, hubs, bridges, workstations and servers connected to each other in the same network at a specific location are generally known as LANs. A LAN is also considered a broadcast domain.
A VLAN allows several networks to work virtually as one LAN. One of the most beneficial elements of a VLAN is that it reduces latency in the network, which saves network resources and increases network efficiency. In addition, VLANs are created to provide segmentation and assist with issues like security, network management and scalability. Traffic patterns can also easily be controlled by using VLANs.
The key benefits of implementing VLANs include:
Allowing network administrators to apply additional security to network communication
Making expansion and relocation of a network or a network device easier
Providing flexibility because administrators are able to configure in a centralized environment while the devices might be located in different geographical locations
Decreasing the latency and traffic load on the network and the network devices, offering increased performance
VLANs also have some disadvantages and limitations as listed below:
High risk of virus issues because one infected system may spread a virus through the whole logical network
Equipment limitations in very large networks because additional routers might be needed to control the workload
More effective at controlling latency than a WAN, but less efficient than a LAN
The questions and answers below relate to VLANs, subnets, subnet masks, switches, routers, gateways and CIDR:
Question: Does a VLAN require a different network?
I often see examples of VLANs being like so:
VLAN 10 – 192.168.10.x
VLAN 20 – 192.168.20.x
Isn’t this redundant? Why would I utilize both VLAN tags and different networks? I would expect to see the following kind of example when discussing VLANs:
VLAN 10 – 192.168.40.x
VLAN 20 – 192.168.40.x
Point being, VLAN tagging is independent of subnetting.
If using VLANs doesn't require different subnets, what is the purpose of assigning different VLAN IDs to different subnets?
Why do I see examples like the first one above? What is the point? Seems like it’s just complicating the topic.
Answer:
Here’s my mental block coming into play… are you saying my second example is technically invalid and therefore nonsensical?
No, your second example is fine. Just don't ever expect 192.168.40.0/24 on VLAN 10 to talk to a different 192.168.40.0/24 on VLAN 20, because a router won't know which one you are talking about; to a layer 3 router they are the same network. But that's exactly what I did to separate voice and data in one instance where they needed to use the same addresses and never needed to talk to each other.
Answer:
From a switch's point of view, if you don't have any VLANs, different subnets on the same switch will work perfectly fine, but traffic will flow only between devices within the same subnet.
If you have one VLAN with different subnets, it will again work perfectly fine within the switch.
In the above scenario, if traffic goes to a router it will be blocked, ignored or forwarded depending on the configuration of the router.
There are many reasons why you wouldn't put many different subnets on one VLAN. One of them is that the whole point of using VLANs is to separate your network.
Answer:
Each VLAN is an IP subnet, so each VLAN should have its own IP address range that does not interfere or overlap with the subnets of other VLANs.
Question: VLAN VS Subnetting
Is separating a network by VLAN the same thing as separating a network by subnetting? I understand that by subnetting, I'm creating different networks that will have different network addresses.
If I separate a network with different VLANs, am I creating separate networks like in subnetting? If I use VLAN to create different broadcast domain, is subnetting necessary?
Answer:
Normally, 1 IP subnet is associated with 1 layer 2 broadcast domain (VLAN). Every useful VLAN (from an IP perspective) will have an IP network associated with it.
Answer:
VLANs are for creating broadcast domains (different networks) at the L2 level. But only PCs on the same VLAN can communicate, unless you have a L3 switch or router, in which case, you will still have to subnet (give the VLANs IP addresses).
Answer:
A switch will not allow you to place 2 vlan interfaces in the same subnet. Remember that VLANs and Subnet are 1 and the same.
If you have a different subnet… then you need a different VLAN as long as you are not crossing a L3 boundary.
Answer:
I believe I may have confused you… I'm sorry, I get crazy sometimes. Let me rephrase that. You can only use the same VLAN ID if you are separated by an L3 boundary (i.e., a router). Each subnet has its own broadcast address; for the subnet 192.168.2.0/24, the broadcast is 192.168.2.255, and broadcast messages will not travel outside the 192.168.2.0/24 network.
Then on the other side of the router, 192.168.200.0/24 will have its own broadcast address, 192.168.200.255. Broadcast messages will not travel outside the 192.168.200.0/24 subnet.
Because these two subnets are separated by an L3 boundary, both of them can use the same VLAN ID.
Answer:
I would like to answer your question in the simplest way possible. What you have asked is:
Is separating a network by VLAN the same thing as separating a network by subnetting? I understand that by subnetting, I'm creating different networks that will have different network addresses.
Yes, separating a network by VLAN is the same sort of concept as what you achieve by subnetting the network, and your understanding of subnetting is correct. The only difference is that a VLAN separates the network at Layer 2, whereas when you talk about subnetting you are talking about Layer 3.
If I separate a network with different VLANs, am I creating separate networks like in subnetting? If I use VLAN to create different broadcast domains, is subnetting necessary?
Yes, if you separate a network with different VLANs you are creating separate networks, as in subnetting. If you use VLANs to create different broadcast domains, subnetting becomes necessary as part of it, because you cannot configure two VLANs with the same IP range.
Think of it like this: Layer 2 is a single broadcast domain. To limit broadcasts and have inter-network traffic handled at Layer 3, the concept of VLANs came in. When there is a broadcast within a VLAN, it stays in that VLAN and does not go out to the other VLANs unless a Layer 3 device is available to make that happen. So your broadcast is limited to a single VLAN. If a Layer 3 device is available and is configured to allow communication between the different VLANs (subnets), only then does the traffic propagate.
So the main idea behind all this is to contain broadcasts within each VLAN at Layer 2, and to use Layer 3 only where communication between VLANs is actually needed, rather than letting every broadcast go everywhere.
Are the PCs able to ping each other? I checked this in Packet Tracer, but it isn't working. If both PCs are in different subnets but in the same VLAN, wouldn't the switch broadcast the ping to the other PC? Shouldn't it work, and why does it not work?
Answer:
1. This seems to be your present situation (the original diagram is not reproduced here):
2. When you try to ping 10.10.10.1 from 192.168.1.1 here is what happens:
On PC1: 192.168.1.1
Command – Ping 10.10.10.1
Logic working inside PC1
My network is 192.168.1.0
I have to ping 10.10.10.1
Do the first three octets match my network?
Is 10.10.10.x equal to 192.168.1.x?
No
Because it is a foreign network, I need to send this frame to my gateway
Do I have a gateway configured ?
No
Drop this packet as I cannot do anything about it
3. So the ping fails.
4. Let’s experiment.
5. Now the same objective, but with the computer's own IP configured as its gateway.
On PC1: 192.168.1.1
Command – Ping 10.10.10.1
Logic working inside PC1
Stage 1
This stage won’t be visible
My network is 192.168.1.0
I have to ping 10.10.10.1
Do the first three octets match my network?
Is 10.10.10.x equal to 192.168.1.x?
No
I need to send this frame to my gateway
Do I have a gateway configured ?
Yes
Can I get the MAC of the gateway ?
Yes. Got it.
Prepare a normal frame {My MAC – Gateway MAC | MY IP – Des. IP}
Send to gateway MAC – (own interface)
Stage 2
Now, information required at the gateway – destination MAC of 10.10.10.1
So, prepare an ARP request frame { Src. MAC – my own (acting as the gateway) | Dst. MAC – ff:ff:ff:ff:ff:ff | Src. IP – 192.168.1.1 | Dst. IP – 10.10.10.1 }
This is sent out of the interface and replied by 10.10.10.1 as it is
1. able to receive this arp request – on the same “switch / vlan” &
2. it has its own IP configured as gateway
Stage 3
Now all the information required for a valid frame is available, so 192.168.1.1 sends out a valid frame to its gateway – i.e. its own network interface
This is sent on the wire, received by 10.10.10.1
Here it is analyzed as all the information matches – L2 and L3
A reply is prepared in the similar way and sent back to 192.168.1.1
Stage 4
Ping is successful now.
6. Apply the same logic with the gateway address configured as the opposite PC – it will work too,
though the pings will then succeed only between those two PCs,
rather than among all the PCs, as in the previous case where every PC was configured in the same fashion.
7. Framing – each stage – that is what you would need to work on.
Answer:
It is possible to aggregate different subnets in the same VLAN. From the point of view of the VLAN, they are in the same broadcast domain, but when a host needs to send a packet to another host, it does not care about VLANs. What a host cares about is whether the destination IP address is in the same subnet or not. If it is, it will send the packet right away to the destination IP address (after learning the destination MAC address to put in the Ethernet frame); if it isn't, then the host will try to send the packet to its default gateway. When you have hosts in different subnets, you need a routing-capable device (router, L3 switch) to route the packet between the subnets.
In your case, the two hosts are in different subnets, that’s why the ping doesn’t work. Try adding default gateway and a router into your topology and the ping might work.
The bottom line is there are two different process you should separate here:
1. VLAN segmentation;
2. Packet forwarding decision ("do I send it to the destination directly or do I send it to my default gateway?").
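As a rough illustration of that forwarding decision (not of anything VLAN-specific), here is a small Python sketch using the standard ipaddress module; the function name next_hop and the addresses are made up for the example.
import ipaddress

def next_hop(own_ip_with_mask, destination_ip, gateway=None):
    # Decide where the host sends the frame: directly to the destination,
    # to the default gateway, or nowhere at all.
    interface = ipaddress.ip_interface(own_ip_with_mask)
    destination = ipaddress.ip_address(destination_ip)
    if destination in interface.network:
        return destination                      # on-link: ARP for the destination itself
    if gateway is None:
        return None                             # off-link and no gateway: the packet is dropped
    return ipaddress.ip_address(gateway)        # off-link: ARP for the gateway

print(next_hop("192.168.1.1/24", "10.10.10.1"))                   # None -> ping fails
print(next_hop("192.168.1.1/24", "10.10.10.1", "192.168.1.254"))  # 192.168.1.254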
Answer:
1. I am afraid that will not be possible on a layer 2 switch without routing capabilities.
2. When the frame reaches the switch SVI, it will need to be routed to a different network.
3. I understand that 2960s have limited L3 routing capabilities. However, L2 switches like 2950s do not.
4. Even on a 2960, the following needs to be configured, in addition to multiple IP addresses for the SVIs:
ip routing
*ip route 192.168.1.0 255.255.255.0 10.10.10.254
*ip route 10.10.10.0 255.255.255.0 192.168.1.254
5. Therefore from a L2 point of view, multiple IP addresses on a SVI will still not allow mismatched network reachability, unless assisted by routing.
Answer:
When two hosts are on the same L3 broadcast domain they can expect to freely pass frames to each other at L2 (no need for a gateway). When two hosts on different L3 domains need to talk to each other they typically need to go through a router. That is why they need ARP. ARP helps a host map L3 addressing to L2 addressing. When they are on different L3 broadcast domains, most hosts will not ARP unless they have a gateway configured because the only MAC address a host will be interested in when trying to communicate with a host on another L3 domain is the MAC address of the gateway on its own L3 domain.
Take a Windows PC as an example. If you configured a NIC with an IP address and subnet mask but no gateway and then tried to ping a device on a different L3 network, you would never see an ARP from the PC. However, if you added a default gateway to the configuration and then tried to ping again, you would see an ARP. Consider the following capture (the screenshot is not reproduced here):
The first ping attempt results in failures and no ARP occurs, because the host has nothing to ARP for (no gateway). Then I added a static route (giving the host a gateway to other networks), and the second ping results in ARP requests from the PC to the gateway address. In this case, no gateway is actually active on 10.10.10.254, but the PC doesn't know this. It just knows that it needs to send an ARP for 10.10.10.254 so that it can send L2 frames to it.
Answer:
OK, so we can say that Packet Tracer is not showing me the default/correct behaviour.
So even if both PCs are in different IP subnets but in the same VLAN, and given that they use their own IP or the IP of the other PC as their gateway, the ARP process can build the frame and the ping will be successful, correct?
Answer:
1. I can assure you, that is correct.
Question: Can Two PC in Different Subnet connected to each other communicate
Answer:
2 different computers on 2 different subnets connected to the same layer 2 switch can ping each other… *IF* they are on the same VLAN.
You don't need a gateway. This is simply dependent on the network topology.
Answer:
Two PCs on different subnets (VLANs) would NOT be able to ping each other unless there is a layer 3 device (i.e. a router). Recall from the ISO OSI reference model that layer 3 devices allow for interconnectivity between networks.
Answer:
2 PC’s in the same VLAN on different Subnets CAN IN FACT ping each other. They are in the same Layer 2 broadcast domain.
Don't assume that just because you have 2 different subnets you also have 2 separate VLANs.
Answer:
Can you go into detail as to why this works?
I also tested this in Packet Tracer and I am unable to get it to work.
I'll agree with simplyccna: they might be in the same VLAN, but that does not always mean they are in the same network. I would think both PCs would have to be in the same subnet and network in order for them to ping.
If you have a PC with a mask of 255.255.255.0 and a 192.168.1.0 network address, and a PC with a mask of 255.255.255.128 and a network address of 192.168.1.128, then maybe the first PC would be able to ping the second PC, but I don't think the second PC would be able to ping the first PC, even if they are in the same VLAN. (I have not tested this, so I don't know; it's just a guess.)
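The guess in the last paragraph can be checked with a quick calculation. The sketch below uses Python's standard ipaddress module with addresses invented for the example; it shows that the on-link decision really is asymmetric with those two masks, although whether a ping actually completes also depends on whether the off-link side has a gateway to reply through.
import ipaddress

pc_a = ipaddress.ip_interface("192.168.1.5/255.255.255.0")      # /24, network 192.168.1.0
pc_b = ipaddress.ip_interface("192.168.1.200/255.255.255.128")  # /25, network 192.168.1.128

print(pc_b.ip in pc_a.network)  # True:  A sees B as on-link and ARPs for it directly
print(pc_a.ip in pc_b.network)  # False: B sees A as off-link and needs a gateway to answer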
Question: 2 PCs connected to same switch but in diff network, why can’t communicate?
I have a small question. Although these types of questions get asked, I have not found an exact answer.
Scenario:
Two machines are connected to single switch Switch-A
IP of machine-1 is 192.168.10.1
& that of machine-2 is 192.168.20.1 (another network)
Now My Question is that…
When I ping from machine 1 to machine 2…
This should be the process that I think…happens….
at machine 1: the ping command creates a packet -> frames -> bits
at the switch: the bits are converted back into frames; the switch checks the destination MAC address and, if it is present in its table, sends the frame on to machine 2, i.e.
bits -> frames -> sent to the other machine -> frames -> bits
at machine 2: bits -> frames -> packet, and in the reverse direction it should send the reply…
But in real scenario… this doesn’t work …
Questions
… why doesn't this happen?
… at what level does this fail: machine 1, the switch, or machine 2? And why?
… the switch considers the MAC address and not the IP, so it should forward the data. Is that right?
Few things…
1. I know that a router is required to communicate between networks, but in the switch case the MAC is considered and not the IP.
2. No VLAN and no router are involved in this example; it is a plain network.
Answer:
"Guys, I want you to explain the concept behind this… it's not that I want to make it work; I agree that it doesn't work… but the question remains the same: why not?"
Ok so we go back to basics.
2 PC’s.
PC1 = 192.168.20.1/24
PC2 = 192.168.30.1/24
PC1 wants to ping PC2
The first thing it does is compare the IP address of PC2 with its own IP/subnet mask. It realises that PC2 is on another network.
PC1 checks its routing table to see if it has a route to PC2's subnet. Most likely it does not, but it should have a default route.
So what PC1 does is ARP for the default gateway's IP and get the MAC of the default gateway. (If it doesn't have a default gateway, it drops the ping.)
PC1 then encapsulates the ping (ICMP) in an Ethernet frame with a destination address equal to the MAC address of the default gateway.
So when the Ethernet frame arrives at the L2 switch, the switch forwards the frame to the default gateway, NOT to PC2.
Answer:
MAC addresses work at the data link layer, i.e. Layer 2, and IP addresses work at the network layer, i.e. Layer 3.
As per your IP addressing, 192.168.10.1 and 192.168.20.1 are on different subnets. Yes, I know both are on the same switch, but the subnets are different, so they won't communicate without an L3 device, i.e. a router or L3 switch.
When two PCs are on the same subnet and one PC tries to ping the other, it sends an ARP request, and after ARP is resolved it sends the ICMP packet (which works at Layer 3); the ping succeeds because both are in the one subnet, 255.255.255.0.
In your case the subnets are different. ARP is not resolved for the destination because the request would be directed at the gateway, but no L3 device is connected, so it won't work.
Answer:
This fails at machine 1. Machine 1 has its own routing table based upon the IP and subnet mask you have assigned it. When you try to ping a PC outside of its network (as calculated from the IP and subnet mask), it will automatically send the traffic to the configured gateway of machine 1.
It doesn't matter that they are connected to the same L2 device; just being connected to the same L2 device does not mean that they will be able to communicate using ARP. Machine 1 will send an ARP request for the gateway (explained above), not for machine 2.
Check out the routing table on my laptop. If I want to communicate with something inside 192.168.0.0/16, it will send an ARP request for that destination, but if it's outside of that, it will send an ARP request for my gateway and forward the data there.
Question: What are the reasons for not putting multiple subnets on the same VLAN?
I would like to know why we do not (and should not, I guess) use two different networks on the same LAN/VLAN. From what I tried and understood:
Hosts in network A (ex: 10.1.1.0/24) can talk to each other.
Hosts in network B (ex: 10.2.2.0/24) can talk to each other.
A host in network A cannot talk to a host in network B, which is normal, since inter-subnet communication needs an L3 device with a routing function.
The idea/principle of a LAN/VLAN is, in the course I've followed, described as a broadcast domain. But I am confused, since I can configure two working networks within the same LAN. I also tried the same configuration but with a second switch and a different VLAN number (SW1 with VLAN 10 and SW2 with VLAN 20). All ports of each switch are in access mode with VLAN 10 and 20 respectively. I had the same result. Note: each side of the topology has a host from network A and from network B.
Now, nobody does that, and I suppose it is for some good reasons, but I did not find what those reasons are, and that is what I am asking you.
Answer: There's really no reason not to put multiple subnets on the same VLAN, but there's also probably no reason to do it.
Pro:
Allows the subnets to talk directly without a router or firewall
Saves VLANs
Con:
Allows the subnets to talk directly without a router or firewall
It's messy from a documentation and troubleshooting perspective
More broadcast traffic
We generally don't do it because of the messiness and lack of security. One VLAN = one subnet is easier to document and easier to troubleshoot, and there's usually not a good reason to complicate things. The only reason I can think of to do it is company mergers or network upgrades, and for both of those I'd prefer it to be temporary.
Edit to clarify: for the hosts on different subnets but the same VLAN to talk directly, you'd need to either make them their own default gateway or add a route to the "other" subnet that points out the interface.
In the gateway case, if the host IP was 10.1.1.2 then the gateway would also be 10.1.1.2. This will cause the host to ARP for everything on or off its subnet. This would allow it to talk to the second subnet on that VLAN, but the only way it'll be able to talk to anything else is if there's a router/firewall running proxy ARP that can help it out.
In the route out the interface case you'd add something like "route add -net 192.56.76.0 netmask 255.255.255.0 eth0" to the device and then 10.1.1.2 will ARP directly on eth0 when it wants to reach 192.56.76.*.
Answer:
Your first "pro" is incorrect. If a node wants to send a packet to another node that is not on its subnet, it will send the packet to its default gateway instead (if the node has a routing table of its own, it will look in that table first). If there is no router available to the node, then it won't be able to send the packet. What you could do is have a router on a stick without the router being VLAN-capable/supporting tagging, or by using multiple physical interfaces.
Answer:
Nope, it's correct; I just didn't mention that you'll need a change on the hosts to either route that subnet out that interface or make the host its own default gateway. In either case the two hosts will talk directly without going through an L3 device.
Answer:
I just wanted to give a real world example of why you might want 2 subnets on one VLAN/LAN:
We have some offices that want non-NAT public addresses and some that want private IP addresses (10.x). By running 2 subnets on 1 VLAN, the users can plug a switch into the office's single ethernet port and have some devices privately IP'd and some publicly IP'd. This saves the admins time and wiring costs of having to run multiple connections to each office or switch links between VLANs anytime there is a change wanted by the end user.
Peter Green gave a good summary of some other pros and cons that I agree with.
Answer:
Now, nobody does that
That statement is not true. Some admins do it, some don't. There are pros and cons to such a setup.
Pros:
You can move stuff around without reconfiguring the switchports.
If you use ICMP redirects you can arrange for the bulk of data traffic to pass directly between hosts without hitting a router.
One machine can have IPs on multiple subnets without requiring multiple NICs in the machine or VLAN support on the end machine (as far as I can tell the latter is no problem on Linux but more of an issue on Windows).
You save VLAN IDs.
Cons.
More broadcast traffic.
If there is a firewall in the L3 routing, then people may think the hosts are isolated when they are not really.
Question: Networking fundamentals: Subnetting
Subnetting is the process of breaking a network into multiple logical sub-networks. An IPv4 address is composed of four octets of eight bits each, or thirty-two bits total. Each octet is converted to decimal and separated by a dot, for example: 11111111.11111111.11111111.00000000 = 255.255.255.0
The Subnet Mask allows the host to compute the range of the network it’s a part of, from network address to broadcast address.
A device with an IP address of 192.168.1.5 and a subnet mask of 255.255.255.128 knows that the network address is 192.168.1.0 and the broadcast address is 192.168.1.127 (see the worked example after the place-value table below).
Each place in the octet string represents a value:
128 64 32 16 8 4 2 1
1 1 1 1 1 1 1 1
When added together (128+64+32+16+8+4+2+1)= 255.
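A quick way to check both the place-value arithmetic and the 192.168.1.5 / 255.255.255.128 example above is Python's standard ipaddress module (a sketch added for illustration, not part of the original text):
import ipaddress

# All eight place values set to 1 add up to 255.
print(sum([128, 64, 32, 16, 8, 4, 2, 1]))  # 255

# The example host from above: 192.168.1.5 with mask 255.255.255.128.
host = ipaddress.ip_interface("192.168.1.5/255.255.255.128")
print(host.network.network_address)    # 192.168.1.0
print(host.network.broadcast_address)  # 192.168.1.127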
Network Class Ranges
Depending on the value of the first octet, an IP address falls into a different class of network: Class A covers 1–126, Class B covers 128–191, and Class C covers 192–223 in the first octet.
Most LAN networks use the private IP address ranges reserved by RFC 1918: 10.0.0.0–10.255.255.255, 172.16.0.0–172.31.255.255, and 192.168.0.0–192.168.255.255.
These addresses cannot be routed on the public Internet, but that is why the edge of the network will typically be using NAT (Network Address Translation) to translate the private IP addresses to public addresses. Using subnetting, one can split these private IP addresses to fit as many hosts as needed depending on the subnet mask that is used. The subnet mask divides the network portion (network bits) of the address from the host portion (host bits).
Typical Private Range Masks
Class A: 255.0.0.0
11111111.00000000.00000000.00000000
[-network-].[—————–host—————]
Class B: 255.255.0.0
11111111.11111111.00000000.00000000
[——-network——–].[———host————]
Class C: 255.255.255.0
11111111.11111111.11111111.00000000
[————–network—————].[—host—]
Cisco Meraki allows users to input subnet masks using CIDR notation, which is a shorter way of writing a subnet mask. If the subnet mask being used in a Class C network is 255.255.255.240, the CIDR notation would be /28, because the network portion of the mask has borrowed four bits from the host portion: 11111111.11111111.11111111.11110000 has 28 network bits, the first four bits of the last octet being the borrowed ones.
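The /28 figure can be recomputed from the dotted mask by counting the 1 bits, for example with this small Python sketch:
import ipaddress

mask = "255.255.255.240"

# Count the 1 bits octet by octet: 8 + 8 + 8 + 4 = 28.
print(sum(bin(int(octet)).count("1") for octet in mask.split(".")))  # 28

# ipaddress reaches the same answer when given the mask directly.
print(ipaddress.ip_network("192.168.1.0/" + mask).prefixlen)         # 28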
1. Can switches read an IP address, or can they understand IP addresses?
They actually can, but not in the context of your question. Since you’re asking about forwarding traffic with a Layer 2 switch, the answer is no – they don’t look into IP addresses.
I connected 3 PCs with different subnet masks to switch ports and they were not able to talk to each other, but when they are in the same network, they can communicate.
That's exactly what subnetting is for!
=> I understand that's what subnetting is for. If those PCs were connected to a router, I'd have no doubt. Let me rephrase the question: does subnetting work if the above 3 PCs are connected to a switch alone?
2. When they are in the same network, the switch doesn't care what the default gateway is; all 3 can talk to each other. Is this normal behaviour?
Yes, this is normal. A default gateway is used only when you want to move traffic out of the subnet.
If switches operate at layer 2, how can they do this?
Layer 2 switches look at MAC addresses. Let’s assume that PC1 has a MAC address of 1111.1111.1111, and that of PC2 is 2222.2222.2222
When PC1 wants to talk to PC2, it first checks if PC2 is within its own (PC1’s) subnet. If it is, PC1 sends a broadcast ARP request asking “who is 10.1.0.10?”. All hosts within the broadcast domain receive this query, process it and discard it – all but PC2 that sees that someone is asking for its IP address. So PC2 sends an ARP reply saying “I’m the host in question, my MAC address is 2222.2222.2222”. Now PC1 can build a frame sending it from MAC address 1111.1111.1111 to 2222.2222.2222. The switch receives the frame, looks up the destination MAC address in its MAC table, and forwards the frame out the appropriate port. This is how the frame reaches PC2. Note that the switch did not look at the IP addresses!
When PC1 wants to talk to PC2, it first checks if PC2 is within its own (PC1’s) subnet.
How does PC1 check whether PC2 is within its own subnet? My doubt basically lies around this.
All hosts within the broadcast domain receive this query
=> I understand switches don't divide broadcast domains (except with VLANs), and no VLANs are configured here. Just 3 PCs are assigned IP addresses and gateways as below and connected to switch ports. There is no configuration done on the switch.
ip addr | subnet mask | default gateway
PC1: 10.1.0.6 255.255.255.252 10.1.0.5
PC2: 10.1.0.10 255.255.255.252 10.1.0.9
PC3: 10.1.0.14 255.255.255.252 10.1.0.13
Say PC1 pings PC2. Do all PCs receive the query, considering this is one broadcast domain?
If the answer is NO, then on what basis does the PC/switch know about its domain?
If the answer is YES, then please look at my previous post's question.
Question and Answer:
Okay, now I got the point of your confusion.
I understand thats what subnetting is for. If those PCs were connected to a router, no doubt for me. Let me rephrase the question. Does subnetting work of above 3 PCs are connected to switch alone?
Yes it does. This is not about switches or routers, it’s really about the question: How does a PC decide to send a frame out of its NIC to whatever is connected?
The answer lies in building the frame. A PC (much like a router) will do a few recursive route lookups which should come to an outgoing interface in the end! If it doesn’t, the frame won’t leave the PC’s NIC.
When PC1 wants to talk to PC2, it first checks if PC2 is within its own (PC1’s) subnet.
How does PC1 check whether PC2 is within its own subnet? My doubt basically lies around this.
This is about binary math. Let's take your example with PC1 (10.1.0.6/30) and PC2 (10.1.0.10/30). So PC1 wants to ping PC2. PC1 needs to decide whether this is local communication or not, i.e. whether PC2 is within PC1's subnet or not. Let's look at the last octet.
6 = 00000110
10 = 00001010
With a /30 mask, the first six bits of this octet belong to the subnet part and the last two bits to the host part. As we can see, the subnet part is different for 10.1.0.6/30 and 10.1.0.10/30, which means that in order to reach PC2, PC1 needs to go to its default gateway, which in your case is 10.1.0.5. Now let's imagine that there is no default gateway configured on PC1. In this case the frame cannot be built, because PC2 is in another subnet. No frame, no communication, i.e. data won't go out of PC1's NIC. Note that it doesn't matter what is connected to PC1 – a router, a switch, or a directly connected PC2 – the frame won't go there, as the packet cannot be encapsulated.
say if PC1 pings PC2, then here,do all PCs receive the query considering as one broadcast domain?
The ARP broadcast asking about 10.1.0.10 won’t go to the switch because PC2 is not within PC1’s subnet, so no one will receive it. In order to send a packet to PC2, PC1 will have to build a frame with DG’s MAC address as the destination. This means that PC1 will send an ARP request broadcast for 10.1.0.5, and not 10.1.0.10. This broadcast will be received by everyone (PC2, PC3), as everyone must react to frames sent to FF:FF:FF:FF:FF:FF (the broadcast MAC address).
if the answer is NO, then based on which the PC/switch knows about their domain.
Based on the subnet mask and binary math, as explained above.
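The same binary comparison can be written out with Python's standard ipaddress module (a sketch using the addresses from this thread):
import ipaddress

pc1 = ipaddress.ip_interface("10.1.0.6/30")   # network 10.1.0.4/30
pc2 = ipaddress.ip_address("10.1.0.10")

print(pc2 in pc1.network)  # False: with /30 masks, PC2 is outside PC1's subnet,
                           # so PC1 must hand the packet to its default gateway.

# With a /24 mask the same two addresses land in the same subnet.
pc1_wide = ipaddress.ip_interface("10.1.0.6/24")
print(pc2 in pc1_wide.network)  # True: now PC1 ARPs for PC2 directly.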
Question and Answer:
When PC1 wants to talk to PC2, it first checks if PC2 is within its own (PC1’s) subnet.
How does PC1 check whether PC2 is within its own subnet? My doubt basically lies around this.
Say both PCs are in the same subnet; here we also use the same IP addresses. If PC1 compares the IP addresses as above, they will still show as different, yet here they are in the same subnet:
PC1 (10.1.0.6/24)
PC2 (10.1.0.10/24)
Do PCs look at the IP address to determine the subnet? This is the first time I have come across this. Please clarify.
Question and Answer:
2 PCs in different subnets connected to a router
PC1 pings PC2:
PC1->default gateway->router->PC2
Default gateway is a router, so if both PCs are connected to the same router at Layer 3 it looks like PC1->router->PC2.
2 PCs in different subnets connected to a switch
PC1 pings PC2:
PC1->default gateway-> what happens next…?
Next the packet is routed according to the router’s (which is default gateway) routing table.
Question: Actual difference between VLAN and subnet
Question and answer:
A subnet is a layer 3 term. Layer 3 is the IP layer where IP addresses live.
A VLAN is a layer 2 term, usually referring to a broadcast domain. Layer 2 is where MAC addresses live.
Consider:
On a cheap normal switch, there is just one single broadcast domain – the LAN – containing all the physical ports.
On a more expensive switch, you can configure each physical port to belong to one or more virtual LANs (VLANs). Each VLAN has its own broadcast space, and only other ports on the switch assigned to the same VLAN as yours get to see your broadcasts.
Most commonly, broadcast traffic is used for ARP, so that hosts can resolve IP addresses to physical hardware (MAC) addresses.
On the cheap normal switch, it's totally possible to have two subnets (say, 10.0.1.0/255.255.255.0 and 10.0.2.0/255.255.255.0) living happily in the same broadcast domain (VLAN), but each will simply ignore the other's layer 2 broadcast traffic, because the other hosts are outside the expected layer 3 subnet. This means that anyone with a network sniffer like Ethereal can sniff broadcast packets and discover the existence of the other subnet within the broadcast domain. If two VLANs were used instead, then nobody with a sniffer could see broadcasts from VLANs that their port isn't a member of.
Answer:
A VLAN is a logical local area network that contains broadcasts within itself; only hosts that belong to that VLAN will see those broadcasts. A subnet is nothing more than a range of IP addresses that helps hosts communicate over layers 2 and 3.
Although you can have more than one subnet or address range per VLAN (a setup sometimes called a superscope), it is recommended that VLANs and subnets be 1 to 1: one subnet per VLAN.
Answer:
Hi Nick, I've been having the same trouble after hearing about VLANs recently. I am a beginner, and I'll try to explain what I've understood.
From a LAN perspective, both VLANs and subnets do the same job, i.e. break a network into smaller networks, thereby increasing the number of broadcast domains. What makes a VLAN and a subnet different is the way in which they do so. With VLANs, the network to which a host belongs is decided by the interface to which it is connected (layer 2). With subnets, it is decided by the IP address assigned to the host (layer 3). It's up to you to decide which you want to use.
Subnetting plays a vital role from a WAN perspective. Say 3 clients need 60 IP addresses each. Before subnetting was introduced, the service provider had to give out 3 Class C addresses, one for each client. With subnetting, the service provider can use 1 Class C address to provide IP addresses for all 3 clients (reducing wastage of IP addresses). VLANs have no role here.
NOTE: The main purpose of subnetting is to reduce the wastage of IP addresses; increasing the number of broadcast domains is an added advantage. The sole purpose of a VLAN, on the other hand, is to increase the number of broadcast domains.
Answer:
The common practice is to assign one subnet per VLAN, so each VLAN will have a unique subnet.
I have 2 different ISPs, viz. Airtel and BSNL, and 4 different VLANs: 1) vlan1: 10.0.0.1 – 255.255.255.0, 2) vlan2: 10.0.2.1 – 255.255.255.0, 3) …, and vlan4: 10.0.4.1 – 255.255.0.0. I want to have an intranet within this network.
Answer:
You need a commercial router, or maybe a consumer one with third-party firmware like dd-wrt. You could also use a layer 3 switch if you really have VLANs. The larger issue is going to be connecting to 2 ISPs and how you plan to share them; that is more of a load-balancer function.
Question: Multiple subnets on one VLAN??
I have a question about a network design I did for a project involving VLANs. The network had to have two LAN segments, one for students and one for administrators. I decided to do this with two VLANs (one for the students and one for the admins).
There were approximately 50 classrooms; each room was to have 24 student computers and 1 admin computer. I decided each classroom would be on its own subnet, with the computers in each room connected to a switch and then linked to the rest of the network through a central layer 3 switch. The links from the classroom switches to the layer 3 switch for students would be on VLAN 1, and the links for the admins would be on VLAN 2.
I thought this was a good design, but when I presented it to my teacher, they said that it is not possible to have multiple subnets on one VLAN.
I thought that VLAN’s were more of a port assignment thing so it wouldn’t matter about the subnet information?
Can someone please help me here. Was I wrong?
Can you only have one subnet per VLAN?
Answer:
You can have multiple subnets on a VLAN, but it's not a great idea, and here's why: routing. At some point each subnet will need to route to each other subnet, and you'll have to have multiple IPs on each interface to do it, one for each subnet on that VLAN.
Imagine adding 50 IPs to each VLAN interface on the router, one for each classroom. That's going to be annoying at the least to administer, and downright dangerous when making changes to the network config.
And at the end of the day, separate VLANs are there for security and for splitting broadcast traffic up. Which of these is a subnet for each room solving? Switching already provides unicast security between computers on a VLAN. Why not have a VLAN for each computer while you're at it? (Just taking it to an absurd extreme.)
Apologies if that isn't clear; I'm in a hurry. Ask more questions if you feel like it.
Answer:
You could consider using a supernet (ie one larger subnet)
Yes, it is possible to have multiple subnets on a VLAN, just as you can have one subnet spread across multiple VLANs, the principle is the same. You are just creating broadcast domains.
Personally I don’t see any issue with what you have presented.
If you have Admin PCs on VLAN1 – 192.168.0.0
and Student PCs on VLAN2 – 192.168.1.0
then you have presented two different network subnets, but on two different VLANs. So where did he get the multiple subnets on the same VLAN from?
Answer:
At King of Nowhere:
Thank you for your speedy reply.
I have been trying to digest what you said, so I didn't reply instantly.
"at some point each subnet will need to route to each other subnet"
So it would be better to just create one large subnet for all the students?
"separate VLANs are there for security and for splitting broadcast traffic up"
I had the two VLANs so that I could create the two network segments as required, using the same network devices (switches etc.).
I thought that if I just had two separate subnets (one for students and one for admin) I would need separate switches for the admin and student computers?
This seems like an expensive way of doing things. Is this the usual practice in setting up such networks, or am I just confused about how the subnets connect?
"which of these is a VLAN for each room solving?"
Did you mean a subnet for each room?
At LoM:
Thanks for the suggestion.
The original subnet I was given to use was /13, so there would be no problem with using more bits for host addresses; I was just trying to create separate subnets for each classroom.
At Pseudo:
Thanks for the assistance.
I had multiple subnets for the student VLAN because every classroom was its own subnet.
Answer:
"So it would be better to just create one large subnet for all the students?"
Yup, as far as I'm concerned. You're looking at 50×24 = 1200 computers though, so some segmentation would be advisable to cut down on broadcast traffic swamping the network. Mind you, 1200 computers are unlikely to all be in one building/floor, so there would be a suitable division.
"I thought that if I just had two separate subnets (one for students and one for admin) I would need separate switches for the admin and student computers?"
Not necessarily. But you still need a router if the two talk to each other at some point. You could have servers with an IP address on each subnet and then keep the subnets entirely separate (no router). Remember, switches are layer 2 and couldn't care less what traffic they carry.
One thing a VLAN per subnet does better than carrying two subnets on the same VLAN (broadcast domain) is DHCP. Because DHCP servers are located using broadcasts, it is trivial to hand out the correct subnet to a workstation based on its VLAN. Carrying multiple subnets, I can't think of any other way but to reserve addresses for each MAC address. And we're trying to reduce administration headaches, after all.
Answer:
I once worked at a large US company and looked after their Aussie network. When I joined, I inherited a network design where the IP addressing scheme/subnet mask was already assigned to each country and I couldn't change it. If I changed the subnet mask I would encroach on another country's IP addresses.
The only way to use the allocated IP addresses was to have multiple VLANs which had multiple subnets attached to them.
Initially, when I started, they had a Cisco 3600 series router which was doing 'router on a stick'. There was all sorts of network congestion and there were complaints from people about network performance.
After a while the Cisco router was relieved of its 'router on a stick' function and replaced by a Cisco 4000 series switch, which is a layer 3 switch. Once this was done, all the network issues disappeared.
So to answer your question:
You can have multiple subnets per VLAN. You can have multiple VLANs per switch. You need a router to route between VLANs: either an external router if you are using a layer 2 switch, or the inbuilt routing if you use a layer 3 switch.
In your design, having the teachers on a separate subnet from the students is a good idea as well.
Your design looks good. I think your teacher is wrong; your teacher may be a bit behind on the technology.
Answer:
I have a nice little access point sitting beside me, a G-3000 H by ZyXEL; I would put one in or near each classroom. This AP will do layer 2 separation, multiple SSIDs and VLANs, and has an inbuilt RADIUS server (or can use external servers), handling around 32 computers/passwords etc. per AP. It may not be what is wanted, but have a look at the user handbook on the ZyXEL site. This would need a lot fewer cables and hubs, separate the students and classrooms from each other and from the administrators, yet let the administrators get to the students if needed.
Answer:
Am I missing something here? Why would you want to have multiple subnets on a single VLAN??
Why would you not create separate vlans for the classrooms if that is what you want?
Answer:
"Mind you, 1200 computers are unlikely to all be in one building/floor, so there would be a suitable division."
A suitable division? As in the switches in the rooms?
"Remember, switches are layer 2 and couldn't care less what traffic they carry."
Aha, brilliant, thanks. I keep forgetting that; I think that's where I keep getting confused, thinking each port needs its own IP address to connect to the different subnets.
"I can't think of any other way but to reserve addresses for each MAC address."
I don't quite understand what you are saying in this paragraph. Were you saying that DHCP wouldn't work because of all the subnetting, or that the VLAN would allow DHCP to work on the multiple subnets?
Thanks for the suggestion; it's good to know what options are out there.
Answer:
You can have multiple subnets on a single VLAN, but you will need to use a lot of secondary addressing to get them to route between each other. It is generally considered bad practice, and not something you would use in the real world.
To get the admin workstations out of the way, I would put them all on a single subnet and VLAN.
The student workstations I would group in a logical order, either by level or by building (one VLAN/subnet per level, or per building if it is single storey).
You have to be careful when using one giant subnet to cover the whole lot. If someone creates a network loop, you will wipe out the entire student network. Using VLANs and different subnets, you reduce this effect to the VLAN the loop has been created in.
Once again, giant subnets are not something you would roll out these days.
"but you will need to use a lot of secondary addressing"
Are you talking about addressing on the routers/layer 3 switches to link the subnets?
If anyone can help me out with the questions in my last reply above that would be appreciated.
Answer:
"At LoM: Thanks for the suggestion. The original subnet I was given to use was /13, so there would be no problem with using more bits for host addresses; I was just trying to create separate subnets for each classroom."
I think it's already been mentioned, but generally you will not put multiple subnets in a single VLAN (although it can be done). With a few exceptions, it's a pretty bad design when you're not limited in a fashion that forces you to do so.
In your case, assuming each classroom has their own subnet, you will bring a “classroom” vlan in as well as the admin vlan.
Example:
All rooms get VLAN1 which is your Admin VLAN.
Classroom2 gets VLAN2 with subnet x.x.x.x
Classroom3 gets VLAN3 with subnet y.y.y.y
Classroom4 gets VLAN4 with subnet z.z.z.z
Now, without getting too complex in having multiple router modules (Layer 3 switches are so fast nowadays that they can route just about as efficiently as they switch), you pull all the VLANs into your one Layer 3 switch, which can (if you want) route between the VLANs or control access however you want.
From a trunking point of view, let’s say Classroom 2 and Classroom 3 are next to each other and share the same switch.
You would trunk VLANs 1, 2 and 3 to this switch. Let’s say Classrooms 4 and 5 were in another building with another switch. You’d trunk VLANs 1, 4 and 5 to that switch. You’d then break out whatever VLANs you need to break out; it might require some additional switches after that.
So your professor is wrong in saying it’s not possible; it most definitely is. But it’s probably not a good idea since you’re not limited. Maybe that’s what he’s trying to convey to you.
Answer:
Seeing as the network structure being asked for is a standard practice across all Victoria state schools (network separation between administration and curriculum segments), chances are that the teacher is working from a case study.
Regardless of whether the multi-subnet solution is possible, it is still overkill.
If a contractor came to me with such a solution, they’d better have some valid reasons for suggesting such a design.
Having investigated a similar multi-subnet solution for a school that I work for, I can honestly say that there are very few problems solved by it and a great deal of additional overhead.
Why do you want to implement such a solution? “Because I thought it would be cool” isn’t a good enough answer 😉
Answer:
I thought this was a good design, but when I presented it to my teacher, they said that it is not possible to have multiple subnets on one VLAN.
…an absolute crock of, well, you know what. VLANs are layer 2 (well, layer 2.5), but if you can run it over ethernet, you can run it over a VLAN. Hell, you can run IP/IPX/ARTNET side by side on the same VLAN if you want – it’s totally protocol independent.
That being said, supernetting all the machines so they’re on the one subnet wouldn’t be a bad idea, so long as you ensure you have broadcast trapping enabled on your switches (most switches will do this just dandily).
If you do decide to put everything on a different subnet (which does have its advantages), you’d be better off using layer 3 Cisco/Foundry equipment (all of which is fantastic equipment), and if you do need to do router on a stick, VLAN trunk to something that’s going to give some decent throughput (someone may crucify me here, but for static routes, it’s probably worth using MikroTik RouterOS – its price/performance ratio is fantastic for doing this kind of thing).
It does raise the question though – in a school scenario, is there any need to route between subnets? Most students will only require access to the internet, file shares and any other network-delivered applications. This would mean they wouldn’t require inter-room routing, and would only need to see whatever subnet the internet gateway/fileservers etc. were sitting on. This also goes for a network that is using only Citrix.
Answer:
Thx for the replies LoM, Polymer714, noonereallycares and Curtis Bayne, my knowledge is slowly expanding :).
From what I gather, you all seem to be saying that you can have multiple subnets on one VLAN but that having each classroom on its own subnet would just make administration much harder without providing too much benefit over a single large subnet for all the students.
I think I just thought that putting all the classrooms on their own subnets would help to minimize traffic on the network. I guess this isn’t needed though?
I’m still a little confused as to what my teacher was thinking when they said having multiple subnets on one VLAN wouldn’t work because of the trunk link? Could someone fill me in as to what they may have been thinking?
I thought the trunk link was just a way of using one connection to transfer data for multiple VLANs?
Answer:
I’m still a little confused as to what my teacher was thinking when they said having multiple subnets on one VLAN wouldn’t work because of the trunk link? Could someone fill me in as to what they may have been thinking?
Your teacher is confused. The statement isn’t true.
Answer:
The benefit of separating rooms by VLAN (especially in a school) would be to stop the spread of a broadcast virus.
When your teacher said it can’t be done he may have been thinking that you can’t route between the subnets, but you can with secondary IP addresses. But, AFAIK, this will break DHCP as the L3 interface won’t know which address to use as the giaddr for each specific client. i.e. when the client broadcasts for DHCP, it will hit the L3 interface but then it doesn’t know which address to use for the relay, the primary or the secondary. I am unsure on this though.
The best option would just be to have a separate VLAN for each room – if you are using a L3 switch. If it’s router-on-a-stick then you may saturate the uplink if there is a lot of inter-room communication. In this case analyse which rooms communicate with each other most often and group them in the same VLAN – but try not to put too many rooms together, to control broadcasts.
Are you limited to 2 VLANs?
Answer:
Your teacher is confused. The statement isn’t true.
Thx again LoM, but could you possibly expand on that?
Thx for the reply. I didn’t have to use VLANs at all, I just chose to, as the design requirement was to have 2 LAN segments (one for admins and one for students).
can’t route between the subnets, but you can with secondary IP addresses.
I haven’t really heard of secondary IP addresses before, but wouldn’t the layer 3 switch be able to handle routing between subnets? Why would you need secondary IP addresses?
Answer:
It could handle the routing, but here’s the problem, and maybe it’s everyone’s misunderstanding.
Sounds like you have two vlans..
Vlan 1 – admin
Vlan 2 – students
Vlan 2 stretches across ALL the classrooms as does VLAN 1.
Vlan 2 has different subnets for each classroom, so let’s make it easy:
192.168.10.0 /24
192.168.20.0 /24
192.168.30.0 /24
For three different classrooms….
Now you have your VLAN on the Layer 3 switch. How does it know about all three subnets? Let’s change it up and add this.
172.17.10.0 /24
and
10.10.10.0 /24
For five classrooms across a single VLAN. So how can your layer 3 switch route them? Well, you can configure secondary addresses on the VLAN interface, which is what has been suggested.
OR
Each subnet has its own VLAN, each is configured on the layer 3 switch, and you can route between them.
Answer:
Well, say you had 192.168.1.0/24 and 192.168.2.0/24 in the same VLAN. Which address will you use for the L3 interface on the switch?
If you use the first one, how is the RP going to know how to route to the second one? So, you configure 2 IPs on the interface, one for each of the subnets.
But don’t get caught up in this. You shouldn’t design like this. It’s usually just something that’s done if you run out of addresses for a particular subnet.
you configure 2 IPs on the interface, one for each of the subnets.
I think this is where I am getting confused. I’m thinking that the VLAN is a port assignment, so if I had each class connected (with two connections; one for the admin and one for the students) to a different port on the layer 3 switch, then the layer 3 switch could route between the subnets?
I guess I’m getting confused between layer 2 and layer 3?
Answer:
I think this is where I am getting confused, I’m thinking that the VLAN is a port assignment
Usually, VLANs are port assignments. If each classroom has a switch (an “edge” device), then you’d assign the ports on that switch to the appropriate VLAN. For example, if the student VLAN is “10” and student PCs are plugged into ports 1-20 on the switch, you’d assign those ports to VLAN 10. All these devices are now on the same broadcast domain. You can assign them whatever IP addresses you like, and so long as they are in the same subnet, the PCs will talk to each other – this is layer 2 functionality because the PCs are really just using ARP (Address Resolution Protocol) to match MAC addresses to IP addresses without the switch doing much more than forwarding “Who has X IP?” broadcasts.
If you have a teacher VLAN, “20”, in the same room you assign the ports on the switch to VLAN 20 where teacher PCs are plugged in. PCs in VLAN 20 can’t talk to PCs in VLAN 10 without routing, as the “Who has X IP?” broadcasts are not passed across the VLAN boundary. To do this you need to route.
Now, at the core of your network, you’d be doing the routing. You get to the core from an edge device using a trunk port. A trunk port carries multiple VLANs across a single ethernet connection. To route, you also need to assign the VLAN interfaces an IP address (or addresses, you can happily have multiple addresses on one VLAN). This address becomes the default gateway for the PCs on that VLAN, and is learned by the core router’s routing table.
The routing table holds all the information the core knows about routes to various IP addresses. It knows how to get from VLAN 10 to 20, because both VLANs are directly attached via the trunk port. The core also maintains a MAC address lookup table, so it is able to say “MAC address Y exists on VLAN X, which is directly attached to Ethernet port Z, therefore I can send traffic to it”.
Yes, this does mean that to get from a teacher PC to a student PC in the same lab, you have to send the traffic up and down the same physical interface.
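To put the port-to-VLAN idea into something concrete, here is a minimal Python sketch (the port numbers, VLAN IDs and addresses are invented for illustration, not taken from the design above) of the rule that two hosts can talk directly only when they share a VLAN (broadcast domain) and an IP subnet; anything else has to be routed via the VLAN interface:

import ipaddress

# Hypothetical edge switch: access port -> VLAN assignment
port_vlan = {1: 10, 2: 10, 3: 20, 4: 20}   # ports 1-2 student VLAN 10, ports 3-4 teacher VLAN 20

# Hypothetical hosts: name -> (switch port, interface address)
hosts = {
    "student-pc-a": (1, ipaddress.ip_interface("192.168.10.11/24")),
    "student-pc-b": (2, ipaddress.ip_interface("192.168.10.12/24")),
    "teacher-pc":   (3, ipaddress.ip_interface("192.168.20.11/24")),
}

def can_talk_directly(a, b):
    """True if the two hosts share a broadcast domain (VLAN) and an IP subnet,
    i.e. ARP can resolve the peer and no routing is needed."""
    port_a, if_a = hosts[a]
    port_b, if_b = hosts[b]
    same_vlan = port_vlan[port_a] == port_vlan[port_b]
    same_subnet = if_a.network == if_b.network
    return same_vlan and same_subnet

print(can_talk_directly("student-pc-a", "student-pc-b"))  # True: same VLAN, same subnet
print(can_talk_directly("student-pc-a", "teacher-pc"))    # False: has to be routed between VLANs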
I hope that’s moderately clear. 🙂
Answer:
I hope that’s moderately clear. 🙂
Yes thanks Curtis that helped to clear some things up for me
I think I finally understand the problem:
Each class subnet would have a different network ID, but each VLAN is only assigned one network ID for routing (without secondary addresses). So when it comes to routing, the layer 3 device would only know the one IP address of the VLAN and so would not be able to route to the different subnets unless the VLAN has multiple secondary IP addresses?
Am I on the right track here?
Answer:
Nearly. 🙂
The layer 3 switch will be able to route between all VLANs, so long as the VLANs have IP addresses defined. This does include any secondary IP addresses. For example, if VLAN 10 is defined, in the layer 3 switch, as such:
int VLAN 10
 ip address 192.168.0.1 255.255.255.0
 ip address 10.1.1.1 255.255.255.0 secondary
And VLAN 20 is defined as such:
int VLAN 20
 ip address 192.168.1.1 255.255.255.0
Then the layer 3 switch holds in its routing table all the paths necessary to get to all three networks defined. The PCs in VLAN 10 can have IP addresses in either range and still be routed (assuming the default gateway on each PC is set for the correct subnet).
Answer:
Thx again Curtis,
I think that is what I was trying to get at.
So for my design to work I would need to assign secondary IP addresses to my Student VLAN for every classroom subnet?
Answer:
If you have lots more students than admin staff, I’d probably make a VLSM structure with lots of VLANs, something like this:
admin VLAN: 50 IPs required, give it a /26 mask, say 172.16.0.0-63
then a VLAN per classroom for the students, to cut back on broadcast domain size: /27 mask, 172.16.0.64-95, 172.16.0.96-127, 172.16.0.128-159 and so on (a worked carving of this range is sketched below).
Then you can have DHCP running with ip helper-address configured on the subinterfaces on your router. The trunk links between switches will forward requests that go between classrooms and all admin traffic (via the router of course). You can implement access lists on the router to control the students’ access to the admin LAN, 802.1x can be used to authenticate the admin computers in case some smarty-pants student decides to plug into the admin port, etc. QoS can be applied to give the admin LAN more WAN bandwidth (for pr0n and torrents of course). That’s the way I would do it; others might have better ideas, but I think this might be what your teachers are looking for in terms of your project. Good luck.
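As a rough sketch of that carving, using only Python’s standard ipaddress module (the 172.16.0.0/24 parent block and the room count are just the figures quoted above, not anything mandated):

import ipaddress

parent = ipaddress.ip_network("172.16.0.0/24")     # hypothetical site block
subnets_27 = list(parent.subnets(new_prefix=27))   # eight /27 blocks

# Admin VLAN gets the first /26 (the first two /27s merged): 172.16.0.0 - 172.16.0.63
admin_vlan = ipaddress.ip_network("172.16.0.0/26")

# Each classroom VLAN gets one of the remaining /27s (30 usable hosts each)
classrooms = subnets_27[2:]

print("admin :", admin_vlan, "-", admin_vlan.num_addresses - 2, "usable hosts")
for room, net in enumerate(classrooms, start=1):
    print(f"room {room}:", net, "-", net.num_addresses - 2, "usable hosts")

Each /27 leaves 30 usable host addresses per classroom, which comfortably fits a classroom’s worth of machines while keeping each broadcast domain small.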
Answer:
Thx for the reply Krisso I shall keep your ideas in mind for my next network design
Answer:
Yeah, you could go with a secondary IP address per classroom in the same VLAN, but I wouldn’t recommend it. For one, the security you’re attempting to gain would be wiped out with one smart student picking an IP and gateway in another classroom.
Krisso’s got the right idea. You want multiple VLANs – one per classroom – each with a small slice of a larger address plan.
If you’re not using an allocated IP range (ie, your ISP has provided you with a block of IPs for your use only) and are doing NAT for general internet connectivity, it could be argued that simply using 192.168.0.1/24, 192.168.1.1/24, 192.168.2.1/24, etc would be easier to remember and administer. 802.1x is a pretty advanced concept, and I suspect a little out of the scope of what you’re trying to achieve. 🙂
Answer:
Thx for your help Curtis,
I now know there are better ways of designing such a network,
It was more that I had already handed in my design to be marked (with the multiple subnets on the same VLAN) and my teacher was saying it couldn’t be done, so I wanted to find the answer to how it could be done so I wouldn’t be marked down.
Question: Single VLAN can support multiple subnets
I was reading a Cisco book where it says a single VLAN can support multiple subnets, because switch ports are configured for a VLAN number only and not a network address, so any station connected to a port can present any subnet address range.
Could someone please explain this to me with an example?
Question and Answer: What is a VLAN?
VLANs (Virtual LANs) are logical grouping of devices in the same broadcast domain. VLANs are usually configured on switches by placing some interfaces into one broadcast domain and some interfaces into another. VLANs can be spread across multiple switches.
A VLAN acts like a physical LAN, but it allows hosts to be grouped together in the same broadcast domain even if they are not connected to the same switch.
The following topology shows a network with all hosts inside the same VLAN:
Without VLANs, a broadcast sent from host A would reach all devices on the network. By placing interfaces Fa0/0 and Fa0/1 on both switches in a separate VLAN, a broadcast from host A would reach only host B, since each VLAN is a separate broadcast domain and only host B is inside the same VLAN as host A. This is shown in the picture below:
Creating VLANs offers many advantages. Broadcast traffic will be received and processed only by devices inside the same VLAN. Users can be grouped by a department, and not by a physical location. VLANs also provide some security benefits, since sensitive traffic can be isolated in a separate VLAN.
NOTE – to reach hosts in another VLAN, a router is needed.
Access & trunk ports
Each port on a switch can be configured as either an access or a trunk port. An access port is a port that can be assigned to a single VLAN. This type of interface is configured on switch ports that are connected to devices with a normal network card, for example a host on a network. A trunk interface is an interface that is connected to another switch. This type of interface can carry traffic of multiple VLANs.
A Local Area Network (LAN) was originally defined as a network of computers located within the same area. Today, Local Area Networks are defined as a single broadcast domain. This means that if a user broadcasts information on his/her LAN, the broadcast will be received by every other user on the LAN. Broadcasts are prevented from leaving a LAN by using a router. The disadvantage of this method is routers usually take more time to process incoming data compared to a bridge or a switch. More importantly, the formation of broadcast domains depends on the physical connection of the devices in the network. Virtual Local Area Networks (VLAN’s) were developed as an alternative solution to using routers to contain broadcast traffic.
In Section 2, we define VLAN’s and examine the difference between a LAN and a VLAN. This is followed by a discussion on the advantages VLAN’s introduce to a network in Section 3. Finally, we explain how VLAN’s work based on the current draft standards in Section 4.
In a traditional LAN, workstations are connected to each other by means of a hub or a repeater. These devices propagate any incoming data throughout the network. However, if two people attempt to send information at the same time, a collision will occur and all the transmitted data will be lost. Once the collision has occurred, it will continue to be propagated throughout the network by hubs and repeaters. The original information will therefore need to be resent after waiting for the collision to be resolved, thereby incurring a significant wastage of time and resources. To prevent collisions from traveling through all the workstations in the network, a bridge or a switch can be used. These devices will not forward collisions, but will allow broadcasts (to every user in the network) and multicasts (to a pre-specified group of users) to pass through. A router may be used to prevent broadcasts and multicasts from traveling through the network.
The workstations, hubs, and repeaters together form a LAN segment. A LAN segment is also known as a collision domain since collisions remain within the segment. The area within which broadcasts and multicasts are confined is called a broadcast domain or LAN. Thus a LAN can consist of one or more LAN segments. Defining broadcast and collision domains in a LAN depends on how the workstations, hubs, switches, and routers are physically connected together. This means that everyone on a LAN must be located in the same area (see Figure 1).
Figure 1: Physical view of a LAN.
VLAN’s allow a network manager to logically segment a LAN into different broadcast domains (see Figure 2). Since this is a logical segmentation and not a physical one, workstations do not have to be physically located together. Users on different floors of the same building, or even in different buildings can now belong to the same LAN.
Figure 2: Physical and logical view of a VLAN.
VLAN’s also allow broadcast domains to be defined without using routers. Bridging software is used instead to define which workstations are to be included in the broadcast domain. Routers would only have to be used to communicate between two VLAN’s [Hein et al.].
When a LAN bridge receives data from a workstation, it tags the data with a VLAN identifier indicating the VLAN from which the data came. This is called explicit tagging. It is also possible to determine to which VLAN the data received belongs using implicit tagging. In implicit tagging the data is not tagged, but the VLAN from which the data came is determined based on other information like the port on which the data arrived. Tagging can be based on the port from which it came, the source Media Access Control (MAC) field, the source network address, or some other field or combination of fields. VLAN’s are classified based on the method used. To be able to do the tagging of data using any of the methods, the bridge would have to keep an updated database containing a mapping between VLAN’s and whichever field is used for tagging. For example, if tagging is by port, the database should indicate which ports belong to which VLAN. This database is called a filtering database. Bridges would have to be able to maintain this database and also to make sure that all the bridges on the LAN have the same information in each of their databases. The bridge determines where the data is to go next based on normal LAN operations. Once the bridge determines where the data is to go, it now needs to determine whether the VLAN identifier should be added to the data and sent. If the data is to go to a device that knows about VLAN implementation (VLAN-aware), the VLAN identifier is added to the data. If it is to go to a device that has no knowledge of VLAN implementation (VLAN-unaware), the bridge sends the data without the VLAN identifier.
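As a rough illustration of the filtering-database idea described above, the hypothetical Python sketch below classifies frames by ingress port (implicit tagging) and only attaches an explicit VLAN identifier when the egress device is VLAN-aware; the port numbers and VLAN IDs are invented for the example:

# Hypothetical port-based filtering database: ingress port -> VLAN ID
filtering_db = {1: 10, 2: 10, 3: 20}

# Egress ports that lead to VLAN-aware devices (e.g. another bridge);
# all other ports lead to VLAN-unaware end stations.
vlan_aware_ports = {24}

def forward(ingress_port, egress_port, frame):
    """Classify the frame by the port it arrived on, then decide whether to
    send it with an explicit VLAN identifier or untagged."""
    vlan_id = filtering_db[ingress_port]
    if egress_port in vlan_aware_ports:
        return {"vlan_id": vlan_id, "payload": frame}   # tagged toward a VLAN-aware bridge
    return {"payload": frame}                           # untagged toward a VLAN-unaware host

print(forward(1, 24, "data"))   # {'vlan_id': 10, 'payload': 'data'}
print(forward(1, 2, "data"))    # {'payload': 'data'}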
In order to understand how VLAN’s work, we need to look at the types of VLAN’s, the types of connections between devices on VLAN’s, the filtering database which is used to send traffic to the correct VLAN, and tagging, a process used to identify the VLAN originating the data.
VLAN Standard: IEEE 802.1Q Draft Standard
There has been a recent move towards building a set of standards for VLAN products. The Institute of Electrical and Electronic Engineers (IEEE) is currently working on a draft standard 802.1Q for VLAN’s. Up to this point, products have been proprietary, implying that anyone wanting to install VLAN’s would have to purchase all products from the same vendor. Once the standards have been written and vendors create products based on these standards, users will no longer be confined to purchasing products from a single vendor. The major vendors have supported these standards and are planning on releasing products based on them. It is anticipated that these standards will be ratified later this year.
VLAN membership can be classified by port, MAC address, protocol type, or IP subnet address.
Port            VLAN
1               1
2               1
3               2
4               1

Figure 3: Assignment of ports to different VLAN’s.

MAC Address         VLAN
1212354145121       1
2389234873743       2
3045834758445       2
5483573475843       1

Figure 4: Assignment of MAC addresses to different VLAN’s.

Protocol        VLAN
IP              1
IPX             2

Figure 5: Assignment of protocols to different VLAN’s.

IP Subnet       VLAN
23.2.24         1
26.21.35        2

Figure 6: Assignment of IP subnet addresses to different VLAN’s.
The 802.1Q draft standard defines Layer 1 and Layer 2 VLAN’s only. Protocol type based VLAN’s and higher layer VLAN’s have been allowed for, but are not defined in this standard. As a result, these VLAN’s will remain proprietary.
Devices on a VLAN can be connected in three ways based on whether the connected devices are VLAN-aware or VLAN-unaware. Recall that a VLAN-aware device is one which understands VLAN memberships (i.e. which users belong to a VLAN) and VLAN formats.
Figure 7: Trunk link between two VLAN-aware bridges.
Figure 8: Access link between a VLAN-aware bridge and a VLAN-unaware device.
Figure 9: Hybrid link containing both VLAN-aware and VLAN-unaware devices.
It must also be noted that the network can have a combination of all three types of links.
A bridge on receiving data determines to which VLAN the data belongs either by implicit or explicit tagging. In explicit tagging a tag header is added to the data. The bridge also keeps track of VLAN members in a filtering database which it uses to determine where the data is to be sent. Following is an explanation of the contents of the filtering database and the format and purpose of the tag header [802.1Q].
Figure 10: Active topology of network and VLAN A using spanning tree algorithm.
As we have seen, there are significant advances in the field of networks in the form of VLAN’s, which allow the formation of virtual workgroups, better security, improved performance, simplified administration, and reduced costs. VLAN’s are formed by the logical segmentation of a network and can be classified into Layer 1, 2, 3 and higher layers. Only Layers 1 and 2 are specified in the draft standard 802.1Q. Tagging and the filtering database allow a bridge to determine the source and destination VLAN for received data. VLAN’s, if implemented effectively, show considerable promise in future networking solutions.
A VLAN (virtual LAN) abstracts the idea of the local area network (LAN) by providing data link connectivity for a subnet. One or more network switches may support multiple, independent VLANs, creating Layer 2 (data link) implementations of subnets. A VLAN is associated with a broadcast domain. It is usually composed of one or more Ethernet switches.
VLANs make it easy for network administrators to partition a single switched network to match the functional and security requirements of their systems without having to run new cables or make major changes in their current network infrastructure. Ports (interfaces) on switches can be assigned to one or more VLANs, enabling systems to be divided into logical groups — based on which department they are associated with — and establish rules about how systems in the separate groups are allowed to communicate with each other. These groups can range from the simple and practical (computers in one VLAN can see the printer on that VLAN, but computers outside that VLAN cannot), to the complex and legal (for example, computers in the retail banking departments cannot interact with computers in the trading departments).
Each VLAN provides data link access to all hosts connected to switch ports configured with the same VLAN ID. The VLAN tag is a 12-bit field in the Ethernet header that provides support for up to 4,096 VLANs per switching domain. VLAN tagging is standardized in IEEE (Institute of Electrical and Electronics Engineers) 802.1Q and is often called Dot1Q.
When an untagged frame is received from an attached host, the VLAN ID tag configured on that interface is added to the data link frame header, using the 802.1Q format. The 802.1Q frame is then forwarded toward the destination. Each switch uses the tag to keep each VLAN’s traffic separate from other VLANs, forwarding it only where the VLAN is configured. Trunk links (described below) between switches handle multiple VLANs, using the tag to keep them segregated. When the frame reaches the destination switch port, the VLAN tag is removed before the frame is transmitted to the destination device.
Multiple VLANs can be configured on a single port using a trunk configuration in which each frame sent via the port is tagged with the VLAN ID, as described above. The neighboring device’s interface, which may be on another switch or on a host that supports 802.1Q tagging, will need to support trunk mode configuration in order to transmit and receive tagged frames. Any untagged Ethernet frames are assigned to a default VLAN, which can be designated in the switch configuration.
When a VLAN-enabled switch receives an untagged Ethernet frame from an attached host, it adds the VLAN tag assigned to the ingress interface. The frame is forwarded to the port of the host with the destination MAC address (media access control address). Broadcast, unknown unicast and multicast (BUM traffic) is forwarded to all ports in the VLAN. When a previously unknown host replies to an unknown unicast frame, the switches learn the location of this host and do not flood subsequent frames addressed to that host.
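The ingress tagging, per-VLAN address learning and BUM flooding described above can be modelled in a few lines; the following is a toy Python sketch with invented port numbers and MAC addresses, not a description of any particular switch implementation:

from collections import defaultdict

class ToyVlanSwitch:
    def __init__(self, port_vlan):
        self.port_vlan = port_vlan               # access port -> VLAN ID
        self.mac_table = defaultdict(dict)       # VLAN ID -> {MAC: port}

    def receive(self, in_port, src_mac, dst_mac):
        vlan = self.port_vlan[in_port]           # untagged frame gets the ingress port's VLAN ID
        self.mac_table[vlan][src_mac] = in_port  # learn where the source lives
        if dst_mac in self.mac_table[vlan]:
            return [self.mac_table[vlan][dst_mac]]   # known unicast: forward to one port
        # broadcast / unknown unicast / multicast: flood within the same VLAN only
        return [p for p, v in self.port_vlan.items() if v == vlan and p != in_port]

# Ports 1-2 in VLAN 10, port 3 in VLAN 20 (hypothetical)
sw = ToyVlanSwitch({1: 10, 2: 10, 3: 20})
print(sw.receive(1, "aa:aa", "ff:ff"))  # unknown destination: flood, but only to [2]
print(sw.receive(2, "bb:bb", "aa:aa"))  # aa:aa already learned on port 1: forward to [1]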
The switch-forwarding tables are kept up to date by two mechanisms. First, old forwarding entries are removed from the forwarding tables on a periodic basis, often a configurable timer. Second, any topology change causes the forwarding table refresh timer to be reduced, triggering a refresh.
The Spanning Tree Protocol (STP) is used to create a loop-free topology among the switches in each Layer 2 domain. A per-VLAN STP instance can be used, which enables different Layer 2 topologies, or a multi-instance STP (MISTP) can be used to reduce STP overhead if the topology is the same among multiple VLANs. STP blocks forwarding on links that might produce forwarding loops, creating a spanning tree from a selected root switch. This blocking means that some links will not be used for forwarding until a failure in another part of the network causes STP to make the link part of an active forwarding path.
The figure above shows a switch domain with four switches with two VLANs. The switches are connected in a ring topology. STP causes one port to go into blocking state so that a tree topology is formed (i.e., no forwarding loops). The port on switch D to switch C is blocking, as indicated by the red bar across the link. The links between the switches and to the router are trunking VLAN 10 (orange) and VLAN 20 (green). The hosts connected to VLAN 10 can communicate with server O. The hosts connected to VLAN 20 can communicate with server G. The router has an IPv4 subnet configured on each VLAN to provide connectivity for any communications between the two VLANs.
Disadvantages of VLAN
The limitation of 4,096 VLANs per switching domain creates problems for large hosting providers, which often need to allocate tens or hundreds of VLANs for each customer. To address this limitation, other protocols, like VXLAN (Virtual Extensible LAN), NVGRE (Network Virtualization using Generic Routing Encapsulation) and Geneve, support larger tags and the ability to tunnel Layer 2 frames within Layer 3 (network) packets.
Finally, data communication between VLANs is performed by routers. Modern switches often incorporate routing functionality and are called Layer 3 switches.
Question: Introductory level explanation of VLANs
What’s the basic use case(s) for VLANs?
What are the basic design principles?
I’m looking for something like a two paragraph executive summary style answer so I can determine if I need to learn about VLANs to implement them.
Answer:
A VLAN (Virtual LAN) is a way of creating multiple virtual switches inside one physical switch. So for instance ports configured to use VLAN 10 act as if they’re connected to the exact same switch. Ports in VLAN 20 can not directly talk to ports in VLAN 10. They must be routed between the two (or have a link that bridges the two VLANs).
There are a lot of reasons to implement VLANs. Typically the least of these reasons is the size of the network. I’ll bullet list a few reasons and then break each one open.
Security
Link Utilization
Service Separation
Service Isolation
Subnet Size
Security: Security isn’t itself achieved by creating a VLAN; however, how you connect that VLAN to other subnets could allow you to filter/block access to that subnet. For instance, if you have an office building that has 50 computers and 5 servers, you could create a VLAN for the servers and a VLAN for the computers. For computers to communicate with the servers you could use a firewall to route and filter that traffic. This would then allow you to apply IPS/IDS, ACLs, etc. to the connection between the servers and computers.
Link Utilization: (Edit) I can’t believe I left this out the first time. Brain fart I guess. Link utilization is another big reason to use VLANs. Spanning tree by function builds a single path through your layer 2 network to prevent loops (Oh, my!). If you have multiple redundant links to your aggregating devices then some of these links will go unused. To get around this you can build multiple STP topologies with different VLANs. This is accomplished with Cisco-proprietary PVST or RPVST, or standards-based MST. This allows you to have multiple STP topologies you can play with to utilize your previously unused links. For example, if I had 50 desktops I could place 25 of them in VLAN 10, and 25 of them in VLAN 20. I could then have VLAN 10 take the “left” side of the network and the remaining 25 in VLAN 20 would take the “right” side of the network.
Service Separation: This one is pretty straightforward. If you have IP security cameras, IP phones, and desktops all connecting into the same switch, it might be easier to separate these services out into their own subnets. This would also allow you to apply QoS markings to these services based on VLAN instead of some higher layer service (Ex: NBAR). You can also apply ACLs on the device performing L3 routing to prevent communication between VLANs that might not be desired. For instance, I can prevent the desktops from accessing the phones/security cameras directly.
Service Isolation: If you have a pair of ToR switches in a single rack that has a few VMware hosts and a SAN, you could create an iSCSI VLAN that remains unrouted. This would allow you to have an entirely isolated iSCSI network so that no other device could attempt to access the SAN or disrupt communication between the hosts and the SAN. This is simply one example of service isolation.
Subnet Size: As stated before, if a single site becomes too large you can break that site down into different VLANs, which will reduce the number of hosts that need to see and process each broadcast.
There are certainly more ways VLANs are useful (I can think of several that I use specifically as an Internet Service Provider), but I feel these are the most common and should give you a good idea on how/why we use them. There are also Private VLANs that have specific use cases and are worth mentioning here.
Answer:
As networks grow larger and larger, scalability becomes an issue. In order to communicate, every device needs to send broadcasts, which are sent to all devices in a broadcast domain. As more devices are added to the broadcast domain, more broadcasts start to saturate the network. At this point, multiple issues creep in, including bandwidth saturation with broadcast traffic, increased processing on each device (CPU usage), and even security issues. Splitting this large broadcast domain into smaller broadcast domains becomes increasingly necessary.
Enter VLANs.
A VLAN, or Virtual LAN, creates separate broadcast domains virtually, eliminating the need to create completely separate hardware LANs to overcome the large-broadcast-domain issue. Instead, a switch can contain many VLANs, each one acting as a separate, autonomous broadcast domain. In fact, two VLANs cannot communicate with each other without the intervention of a layer 3 device such as a router, which is what layer 3 switching is all about.
In summary, VLANs, at the most basic level, segment large broadcast domains into smaller, more manageable broadcast domains to increase scalability in your ever-expanding network.
Answer:
VLANs are logical networks created within the physical network. Their primary use is to provide isolation, often as a means to decrease the size of the broadcast domain within a network, but they can be used for a number of other purposes.
They are a tool that any network engineer should be familiar with and like any tool, they can be used incorrectly and/or at the wrong times. No single tool is the correct one in all networks and all situations, so the more tools you can use, the better you are able to work in more environments. Knowing more about VLANs allows you to use them when you need them and to use them correctly when you do.
As one example of how they can be used: I currently work in an environment where SCADA (supervisory control and data acquisition) devices are used widely. SCADA devices typically are fairly simple and have a long history of less-than-stellar software development, often presenting major security vulnerabilities.
We have set the SCADA devices in a separate VLAN with no L3 gateway. The only access into their logical network is through the server they communicate with (which has two interfaces, one in the SCADA VLAN), which can be secured with its own host-based security, something not possible on the SCADA devices. The SCADA devices are isolated from the rest of the network, even while connected to the same physical devices, so any vulnerability is mitigated.
Answer:
In terms of design principles, the most common implementation is to align your VLANs with your organizational structure, ie Engineering folks in one VLAN, Marketing in another, IP phones in another, etc. Other designs include utilizing VLAN’s as “transport” of separate network functions across one (or more) cores. Layer 3 termination of VLANs (‘SVI’ in Cisco parlance, ‘VE’ in Brocade, etc) is also possible on some devices, which eliminates the need of a separate piece of hardware to do inter-VLAN communication when applicable.
VLANs become cumbersome to manage and maintain at scale, as you’ve probably seen cases of already on NESE. In the service provider realm, there’s PB (Provider Bridging – commonly known as “QinQ”, double tagging, stacked tag, etc), PBB (Provider Backbone Bridging – “MAC-in-MAC”) and PBB-TE, which have been designed to try to mitigate the limitation of the number of VLAN ID’s that were available. PBB-TE more aims to eliminate the need for dynamic learning, flooding, and spanning tree. There’s only 12 bits available for use as a VLAN ID in a C-TAG/S-TAG (0x000 and 0xFFF are reserved) which is where the 4,094 limitation comes from.
VPLS or PBB can be used to eliminate the traditional scaling ceilings involved with PB.
The basic use case for VLANs is almost exactly the same as the basic use case for segmentation of the network into multiple data link broadcast domains. The key difference is that with a physical LAN, you need at least one device (typically a switch) for each broadcast domain, whereas with a virtual LAN, broadcast domain membership is determined on a port-by-port basis and is reconfigurable without adding or replacing hardware.
For basic applications, apply the same design principles to VLANs as you would for PLANs. The three concepts you need to know to do this are:
Trunking – Any link that carries frames belonging to more than one VLAN is a trunk link. Typically switch-to-switch and switch-to-router links are configured to be trunk links.
Tagging – When transmitting to a trunk link, the device must tag each frame with the numeric VLAN ID to which it belongs so that the receiving device can properly confine it to the correct broadcast domain. In general, host-facing ports are untagged, while switch-facing and router-facing ports are tagged. The tag is an additional part of the data link encapsulation.
Virtual Interfaces – On a device with one or more trunk link interfaces, it is often necessary to attach, in the logical sense, the device as a link terminal to one or more of the individual VLANs that are present within the trunk. This is particularly true of routers. This logical link attachment is modeled as a virtual interface that acts as a port that is connected to the single broadcast domain associated with the designated VLAN.
If I may offer one more piece of information, which might help.
To understand VLAN’s, you must also understand two key concepts.
-Subnetting – Assuming you want the various devices to be able to talk to one another (servers and clients, for example), each VLAN must be assigned an IP subnet, with a routed interface (the SVI mentioned above) in that subnet. That enables you to begin routing between the VLANs.
-Routing – Once you have each VLAN created, a subnet assigned to the clients on each VLAN, and an SVI created for each VLAN, you will need to enable routing. Routing can be a very simple setup, with a static default route to the internet, and EIGRP or OSPF network statements for each of the subnets.
Once you see how it all comes together, it is actually quite elegant.
Answer:
The original use of a VLAN was to restrict the broadcast area in a network. Broadcasts are limited to their own VLAN. Later, additional functionality was added. However, keep in mind that VLANs are layer 2 in, for example, Cisco switches. You can add layer 3 functionality by assigning an IP address to the VLAN interface on the switch, but this is not mandatory.
additional functionality:
trunking: use multiple VLANs through one physical connection (ex: connecting 2 switches, one physical link is good enough to have a connection for all VLANs; separating the VLANs is done by tagging, see dot1Q for Cisco)
security
easier to manage (ex: a shutdown on one VLAN doesn’t impact the other VLANs’ connectivity…)
…
Question: What is a Virtual LAN (VLAN)?
Answer:
A virtual LAN (Local Area Network) is a logical subnetwork that can group together a collection of devices from different physical LANs. Larger business computer networks often set up VLANs to re-partition their network for improved traffic management.
Several different kinds of physical networks support virtual LANs including both Ethernet and Wi-Fi.
Benefits of a VLAN
When set up correctly, virtual LANs can improve the overall performance of busy networks. VLANs are intended to group together client devices that communicate with each other most frequently. The traffic between devices split across two or more physical networks ordinarily needs to be handled by a network’s core routers, but with a VLAN that traffic can be handled more efficiently by network switches instead.
VLANs also bring additional security benefits on larger networks by allowing greater control over which devices have local access to each other. Wi-Fi guest networks are often implemented using wireless access points that support VLANs.
Static and Dynamic VLANs
Network administrators often refer to static VLANs as “port-based VLANs.” A static VLAN requires an administrator to assign individual ports on the network switch to a virtual network. No matter what device plugs into that port, it becomes a member of that same pre-assigned virtual network.
Dynamic VLAN configuration allows an administrator to define network membership according to characteristics of the devices themselves rather than their switch port location. For example, a dynamic VLAN can be defined with a list of physical addresses (MAC addresses) or network account names.
VLAN Tagging and Standard VLANs
VLAN tags for Ethernet networks follow the IEEE 802.1Q industry standard. An 802.1Q tag consists of 32 bits (4 bytes) of data inserted into the Ethernet frame header. The first 16 bits of this field contain the hardcoded number 0x8100 that triggers Ethernet devices to recognize the frame as belonging to a 802.1Q VLAN. The last 12 bits of this field contain the VLAN number, a number between 1 and 4094.
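Those 32 bits can be packed and unpacked directly. The short Python sketch below (standard library only; the VLAN number and priority are arbitrary example values) shows the 0x8100 marker in the first 16 bits and the 12-bit VLAN ID in the low bits of the remaining 16, alongside the 3-bit priority and 1-bit DEI fields that share them:

import struct

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def build_tag(vlan_id, priority=0, dei=0):
    """Return the 4-byte 802.1Q tag for a 12-bit VLAN ID."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    tci = (priority << 13) | (dei << 12) | vlan_id   # PCP(3) | DEI(1) | VID(12)
    return struct.pack("!HH", TPID, tci)

def parse_tag(tag_bytes):
    """Extract (priority, dei, vlan_id) from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag_bytes)
    assert tpid == TPID, "not an 802.1Q tag"
    return tci >> 13, (tci >> 12) & 0x1, tci & 0x0FFF

tag = build_tag(vlan_id=100, priority=5)
print(tag.hex())        # '8100a064': 0x8100 TPID, then PCP=5, DEI=0, VID=100
print(parse_tag(tag))   # (5, 0, 100)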
Best practices of VLAN administration define several standard types of virtual networks:
Native VLAN: Ethernet VLAN devices treat all untagged frames as belonging to the native VLAN by default. The native VLAN is VLAN 1, although administrators can change this default number.
Management VLAN: Used to support remote connections from network administrators. Some networks use VLAN 1 as the management VLAN, while others set up a special number just for this purpose (to avoid conflicting with other network traffic).
Setting up a VLAN
At a high level, network administrators set up new VLANs as follows:
Choose a valid VLAN number
Choose a private IP address range for devices on that VLAN to use
Configure the switch device with either static or dynamic settings. Static configurations require the administrator to assign a VLAN number to each switch port while dynamic configurations require assigning a list of MAC addresses or usernames to a VLAN number.
Configure routing between VLANs as needed. Configuring two or more VLANs to communicate with each other requires the use of either a VLAN-aware router or a Layer 3 switch.
The administrative tools and interfaces used vary greatly depending on the equipment involved.
Question: Virtual LAN
Answer:
A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a computer network at the data link layer (OSI layer 2).[1][2] LAN is the abbreviation for local area network and in this context virtual refers to a physical object recreated and altered by additional logic. VLANs work by applying tags to network packets and handling these tags in networking systems – creating the appearance and functionality of network traffic that is physically on a single network but acts as if it is split between separate networks. In this way, VLANs can keep network applications separate despite being connected to the same physical network, and without requiring multiple sets of cabling and networking devices to be deployed.
VLANs allow network administrators to group hosts together even if the hosts are not directly connected to the same network switch. Because VLAN membership can be configured through software, this can greatly simplify network design and deployment. Without VLANs, grouping hosts according to their resource needs necessitates the labor of relocating nodes or rewiring data links. VLANs allow networks and devices that must be kept separate to share the same physical cabling without interacting, improving simplicity, security, traffic management, or economy. For example, a VLAN could be used to separate traffic within a business by users or groups of users (such as network administrators), or by type of traffic, so that users or low-priority traffic cannot directly affect the rest of the network’s functioning. Many Internet hosting services use VLANs to separate their customers’ private zones from each other, allowing each customer’s servers to be grouped together in a single network segment while being located anywhere in their data center. Some precautions are needed to prevent traffic “escaping” from a given VLAN, an exploit known as VLAN hopping.
To subdivide a network into VLANs, one configures network equipment. Simpler equipment can partition only per physical port (if at all), in which case each VLAN is connected with a dedicated network cable. More sophisticated devices can mark frames through VLAN tagging, so that a single interconnect (trunk) may be used to transport data for multiple VLANs. Since VLANs share bandwidth, a VLAN trunk can use link aggregation, quality-of-service prioritization, or both to route data efficiently.
In a network utilizing broadcasts for service discovery, address assignment and resolution and other services, as the number of peers on a network grows, the frequency of broadcasts also increases. VLANs can help manage broadcast traffic by forming multiple broadcast domains. Breaking up a large network into smaller independent segments reduces the amount of broadcast traffic each network device and network segment has to bear. Switches may not bridge network traffic between VLANs, as doing so would violate the integrity of the VLAN broadcast domain.
VLANs can also help create multiple layer 3 networks on a single physical infrastructure. VLANs are data link layer (OSI layer 2) constructs, analogous to Internet Protocol (IP) subnets, which are network layer (OSI layer 3) constructs. In an environment employing VLANs, a one-to-one relationship often exists between VLANs and IP subnets, although it is possible to have multiple subnets on one VLAN.
Without VLAN capability, users are assigned to networks based on geography and are limited by physical topologies and distances. VLANs can logically group networks to decouple the users’ network location from their physical location. By using VLANs, one can control traffic patterns and react quickly to employee or equipment relocations. VLANs provide the flexibility to adapt to changes in network requirements and allow for simplified administration.[2]
VLANs can be used to partition a local network into several distinctive segments, for instance:[3]
A common infrastructure shared across VLAN trunks can provide a measure of security with great flexibility for a comparatively low cost. Quality of service schemes can optimize traffic on trunk links for real-time (e.g. VoIP) or low-latency requirements (e.g. SAN). However, VLANs as a security solution should be implemented with great care, as they can be defeated if not configured correctly.[4]
In cloud computing VLANs, IP addresses, and MAC addresses in the cloud are resources that end users can manage. To help mitigate security issues, placing cloud-based virtual machines on VLANs may be preferable to placing them directly on the Internet.[5]
After successful experiments with voice over Ethernet from 1981 to 1984, Dr. W. David Sincoskie joined Bellcore and began addressing the problem of scaling up Ethernet networks. At 10 Mbit/s, Ethernet was faster than most alternatives at the time. However, Ethernet was a broadcast network and there was no good way of connecting multiple Ethernet networks together. This limited the total bandwidth of an Ethernet network to 10 Mbit/s and the maximum distance between nodes to a few hundred feet.
By contrast, although the existing telephone network’s speed for individual connections was limited to 56 kbit/s (less than one hundredth of Ethernet’s speed), the total bandwidth of that network was estimated at 1 Tbit/s[citation needed] (100,000 times greater than Ethernet).
Although it was possible to use IP routing to connect multiple Ethernet networks together, it was expensive and relatively slow. Sincoskie started looking for alternatives that required less processing per packet. In the process he independently reinvented transparent bridging, the technique used in modern Ethernet switches.[6] However, using switches to connect multiple Ethernet networks in a fault-tolerant fashion requires redundant paths through that network, which in turn requires a spanning tree configuration. This ensures that there is only one active path from any source node to any destination on the network. This causes centrally located switches to become bottlenecks, limiting scalability as more networks are interconnected.
To help alleviate this problem, Sincoskie invented VLANs by adding a tag to each Ethernet frame. These tags could be thought of as colors, say red, green, or blue. In this scheme, each switch could be assigned to handle frames of a single color, and ignore the rest. The networks could be interconnected with three spanning trees, one for each color. By sending a mix of different frame colors, the aggregate bandwidth could be improved. Sincoskie referred to this as a multitree bridge. He and Chase Cotton created and refined the algorithms necessary to make the system feasible.[7] This color is what is now known in the Ethernet frame as the IEEE 802.1Q header, or the VLAN tag. While VLANs are commonly used in modern Ethernet networks, they are not used in the manner first envisioned here.
In 2003, Ethernet VLANs were described in the first edition of the IEEE 802.1Q standard.[8]
In 2012, the IEEE approved IEEE 802.1aq (shortest path bridging) to standardize load-balancing and shortest path forwarding of (multicast and unicast) traffic allowing larger networks with shortest path routes between devices. In 802.1aq Shortest Path Bridging Design and Evolution: The Architect’s Perspective David Allan and Nigel Bragg stated that shortest path bridging is one of the most significant enhancements in Ethernet’s history.[9]
Early network designers often segmented physical LANs with the aim of reducing the size of the Ethernet collision domain—thus improving performance. When Ethernet switches made this a non-issue (because each switch port is a collision domain), attention turned to reducing the size of the broadcast domain at the MAC layer. VLANs were first employed to separate several broadcast domains across one physical medium.
A VLAN can also serve to restrict access to network resources without regard to physical topology of the network, although the strength of this method remains debatable as VLAN hopping is a means of bypassing such security measures if not prevented. VLAN hopping can be mitigated with proper switchport configuration.[10]
VLANs operate at Layer 2 (the data link layer) of the OSI model. Administrators often configure a VLAN to map directly to an IP network, or subnet, which gives the appearance of involving Layer 3 (the network layer). In the context of VLANs, the term “trunk” denotes a network link carrying multiple VLANs, which are identified by labels (or “tags”) inserted into their packets. Such trunks must run between “tagged ports” of VLAN-aware devices, so they are often switch-to-switch or switch-to-router links rather than links to hosts. (Note that the term ‘trunk’ is also used for what Cisco calls “channels” : Link Aggregation or Port Trunking). A router (Layer 3 device) serves as the backbone for network traffic going across different VLANs.
A basic switch not configured for VLANs has VLAN functionality disabled or permanently enabled with a default VLAN that contains all ports on the device as members.[2] The default VLAN typically has the ID “1”. Every device connected to one of its ports can send packets to any of the others. Separating ports by VLAN groups separates their traffic very much like connecting each group using a distinct switch for each group.
It is only when the VLAN port group is to extend to another device that tagging is used. Since communications between ports on two different switches travel via the uplink ports of each switch involved, every VLAN containing such ports must also contain the uplink port of each switch involved, and traffic through these ports must be tagged.
Management of the switch requires that the administrative functions be associated with one or more of the configured VLANs. If the default VLAN were deleted or renumbered without first moving the management connection to a different VLAN, it is possible for the administrator to be locked out of the switch configuration, normally requiring physical access to the switch to regain management by either a forced clearing of the device configuration (possibly to the factory default), or by connecting through a console port or similar means of direct management.
Switches typically have no built-in method to indicate VLAN port members to someone working in a wiring closet. It is necessary for a technician to either have administrative access to the device to view its configuration, or for VLAN port assignment charts or diagrams to be kept next to the switches in each wiring closet. These charts must be manually updated by the technical staff whenever port membership changes are made to the VLANs.
Generally, VLANs within the same organization will be assigned different non-overlapping network address ranges. This is not a requirement of VLANs. There is no issue with separate VLANs using identical overlapping address ranges (e.g. two VLANs each use the private network 192.168.0.0/16). However, it is not possible to route data between two networks with overlapping addresses without delicate IP remapping, so if the goal of VLANs is segmentation of a larger overall organizational network, non-overlapping addresses must be used in each separate VLAN.
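A quick way to sanity-check an address plan against the overlap problem mentioned above is with Python’s standard ipaddress module; the prefixes below simply reuse the 192.168.0.0/16 example from the paragraph, plus one invented non-overlapping range:

import ipaddress

vlan_a = ipaddress.ip_network("192.168.0.0/16")   # range reused on one isolated VLAN
vlan_b = ipaddress.ip_network("192.168.0.0/16")   # ...and again on another isolated VLAN
vlan_c = ipaddress.ip_network("10.20.0.0/24")     # hypothetical non-overlapping range

# Overlapping prefixes can coexist on isolated VLANs, but they cannot be routed
# between each other without address remapping, so flag them in a routed design.
print(vlan_a.overlaps(vlan_b))  # True  -> routing between these would need remapping/NAT
print(vlan_a.overlaps(vlan_c))  # False -> safe to route between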
Network technologies with VLAN capabilities include:[citation needed]
The protocol most commonly used today to configure VLANs is IEEE 802.1Q. The IEEE committee defined this method of multiplexing VLANs in an effort to provide multivendor VLAN support. Prior to the introduction of the 802.1Q standard, several proprietary protocols existed, such as Cisco Inter-Switch Link (ISL) and 3Com’s Virtual LAN Trunk (VLT). Cisco also implemented VLANs over FDDI by carrying VLAN information in an IEEE 802.10 frame header, contrary to the purpose of the IEEE 802.10 standard.
Both ISL and IEEE 802.1Q tagging perform “explicit tagging” – the frame itself is tagged with VLAN information. ISL uses an external tagging process that does not modify the Ethernet frame, while 802.1Q uses a frame-internal field for tagging, and therefore does modify the Ethernet frame. This internal tagging is what allows IEEE 802.1Q to work on both access and trunk links: standard Ethernet frames are used and so can be handled by commodity hardware.
Under IEEE 802.1Q, the maximum number of VLANs on a given Ethernet network is 4,094 (4,096 values provided by the 12-bit VID field minus reserved values 0x000 and 0xFFF). This does not impose the same limit on the number of IP subnets in such a network, since a single VLAN can contain multiple IP subnets. IEEE 802.1ad extends 802.1Q by adding support for multiple, nested VLAN tags (‘QinQ’). Shortest Path Bridging (IEEE 802.1aq) expands the VLAN limit to 16 million.
Inter-Switch Link (ISL) is a Cisco proprietary protocol used to interconnect multiple switches and maintain VLAN information as traffic travels between switches on trunk links. This technology provides one method for multiplexing bridge groups (VLANs) over a high-speed backbone. It is defined for Fast Ethernet and Gigabit Ethernet, as is IEEE 802.1Q. ISL has been available on Cisco routers since Cisco IOS Software Release 11.1.
With ISL, an Ethernet frame is encapsulated with a header that transports VLAN IDs between switches and routers. ISL does add overhead to the frame as a 26-byte header containing a 10-bit VLAN ID. In addition, a 4-byte CRC is appended to the end of each frame. This CRC is in addition to any frame checking that the Ethernet frame requires. The fields in an ISL header identify the frame as belonging to a particular VLAN.
A VLAN ID is added only if the frame is forwarded out a port configured as a trunk link. If the frame is to be forwarded out a port configured as an access link, the ISL encapsulation is removed.
IEEE 802.1aq (Shortest Path Bridging SPB) allows all paths to be active with multiple equal cost paths, provides much larger layer 2 topologies (up to 16 million compared to the 4096 VLANs limit), faster convergence times, and improves the use of the mesh topologies through increased bandwidth and redundancy between all devices by allowing traffic to load share across all paths of a mesh network.
The two common approaches to assigning VLAN membership are as follows:
Static VLANs
Dynamic VLANs
Static VLANs are also referred to as port-based VLANs. Static VLAN assignments are created by assigning ports to a VLAN. As a device enters the network, the device automatically assumes the VLAN of the port. If the user changes ports and needs access to the same VLAN, the network administrator must manually make a port-to-VLAN assignment for the new connection.
Dynamic VLANs are created using software or by protocol. With a VLAN Management Policy Server (VMPS), an administrator can assign switch ports to VLANs dynamically based on information such as the source MAC address of the device connected to the port or the username used to log onto that device. As a device enters the network, the switch queries a database for the VLAN membership of the port that device is connected to. Protocol methods include Multiple VLAN Registration Protocol (MVRP) and the somewhat obsolete GARP VLAN Registration Protocol (GVRP).
In a switch that supports protocol-based VLANs, traffic is handled on the basis of its protocol. Essentially, this segregates or forwards traffic from a port depending on the particular protocol of that traffic; traffic of any other protocol is not forwarded on the port.
For example, it is possible to connect the following to a given switch:
If a protocol-based VLAN is created that supports IP and contains all three ports, this prevents IPX traffic from being forwarded to ports 10 and 30, and ARP traffic from being forwarded to ports 20 and 30, while still allowing IP traffic to be forwarded on all three ports.
VLAN Cross Connect (CC) is a mechanism used to create switched VLANs. VLAN CC uses IEEE 802.1ad frames in which the S-tag is used as a label, as in MPLS. IEEE approves the use of such a mechanism in part 6.11 of IEEE 802.1ad-2005.
Question: Differences Between Physical and Virtual LANs
Answer:
Differences Between Physical and Virtual LANs
It is important to understand that a VLAN does not create new devices or attempt to virtually represent new devices. A lot of attention is currently focused on virtualization and the abstraction of services; however, for the purposes of this discussion, we will ignore those technologies and how they operate.
The purpose of a VLAN is simple: It removes the limitation of physically switched LANs with all devices automatically connected to each other. With a VLAN, it is possible to have hosts that are connected together on the same physical LAN but not allowed to communicate directly. This restriction gives us the ability to organize a network without requiring that the physical LAN mirror the logical connection requirements of any specific organization.
To make this concept a bit clearer, let’s use the analogy of a telephone system. Imagine that a company has 500 employees, each with his or her own telephone and dedicated phone number. If the telephones are connected like a traditional residential phone system, anyone has the ability to call any direct phone number within the company, regardless of whether that employee needs to receive direct business phone calls. This arrangement presents a number of problems, from potential wrong number calls to prank or malicious calls that are intended to reduce the organization’s productivity.
Now suppose a more efficient and secure option is offered, allowing the business to install and configure a separate internal phone system. This phone system forces external calls to go through a separate switchboard or operator, or in a more modern phone network an Interactive Voice Response (IVR) system. This new phone system lets internal users connect directly to each other via extensions (typically using shorter numbers), while it limits what the internal users' phones can do and where and whom they can call. This internal phone system allows the organization to virtually separate the internal phones. This is essentially what a VLAN does on a network.
To take this analogy into the networking world, consider the network shown in Figure 1.
Suppose that hosts A and B are together in one department, and hosts C and D are together in another department. With physical LANs, they could be connected in only two ways: either all of the devices are connected together on the same LAN (hoping that the users of the other department hosts will not attempt to communicate), or each of the department hosts could be connected together on separate physical switches. Neither of these is a good solution. The first option opens up many potential security holes, and the second option would become expensive very quickly.
To solve this sort of problem, the concept of a VLAN was developed. With a VLAN, each port on a switch can be configured into a specific VLAN, and then the switch will only allow devices that are configured into the same VLAN to communicate. Using the network in Figure 1, if A and B were grouped together and separated from the C and D group, you could place A and B into VLAN 10 and C and D into VLAN 20. This way, their traffic would be kept isolated on the switch. In this configuration, the traffic between groups would be prevented at Layer 2 because of the difference in assigned VLANs.
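As a minimal Cisco IOS-style sketch of that grouping (assuming hosts A, B, C and D sit on switch ports 1 through 4; the interface names and VLAN names are illustrative, not taken from the figure):
SwitchA(config)#vlan 10
SwitchA(config-vlan)#name dept-one
SwitchA(config-vlan)#vlan 20
SwitchA(config-vlan)#name dept-two
SwitchA(config-vlan)#exit
SwitchA(config)#interface fastethernet 0/1
SwitchA(config-if)#switchport access vlan 10
SwitchA(config-if)#interface fastethernet 0/2
SwitchA(config-if)#switchport access vlan 10
SwitchA(config-if)#interface fastethernet 0/3
SwitchA(config-if)#switchport access vlan 20
SwitchA(config-if)#interface fastethernet 0/4
SwitchA(config-if)#switchport access vlan 20
With this in place, A and B can reach each other, C and D can reach each other, but traffic between the two groups is blocked at Layer 2.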
Question: Difference Between VLAN and LAN
Answer:
VLAN vs LAN
VLAN and LAN are two terms used frequently in the networking field. “LAN” is short for “Local Area Network,” a computer network to which a large number of computers and other peripheral devices are connected within a geographical area. VLAN is an implementation of a private subset of a LAN in which the computers interact with each other as if they were connected to the same broadcast domain, irrespective of their physical locations.
The attributes of a VLAN are the same as those of a LAN; however, end stations can be grouped together regardless of their location. A VLAN is used to create multiple broadcast domains in a switch. This can be explained with a simple illustration. Say, for instance, there is one 48-port layer 2 switch. If two separate VLANs are created on ports 1 to 24 and ports 25 to 48, that single 48-port layer 2 switch can be made to act like two different switches. This is one of the biggest advantages of using VLANs: you don't have to use two different switches for different networks, because different VLANs can be created for each segment using just one big switch. In a company, for example, users working on different floors of the same building can be connected to the same LAN virtually.
VLANs can help to minimize traffic when compared to traditional LANs. For instance, if broadcast traffic is intended only for a group of ten users, those users can be placed on their own VLAN, which in turn reduces broadcast traffic on the rest of the network. The use of VLANs over traditional LANs can also bring down cost, as VLANs reduce the need for expensive routers.
In LANs, routers process the incoming traffic. As traffic volume increases, latency builds up, which in turn results in poor performance. With VLANs, the need for routers is reduced, as VLANs can create broadcast domains through switches instead of routers.
LANs require physical administration: when a user's location changes, recabling, re-addressing the station, and reconfiguring routers and hubs may all be needed, so user mobility adds to network costs. If a user is moved within a VLAN, by contrast, much of this administrative work is eliminated because there is no need for router reconfiguration.
Data broadcast on a VLAN is safer than on a traditional LAN, as sensitive data can be accessed only by the users who are on that VLAN.
Summary:
1. VLAN delivers better performance when compared to traditional LANs.
2. VLAN requires less network administration work when compared to LANs.
3. VLAN helps to reduce costs by eliminating the need for expensive routers unlike LANs.
4. Data transmission on VLAN is safe when compared to traditional LANs.
5. VLANs can help reduce traffic and latency because broadcast domains are created through switches rather than routers, unlike in traditional LANs.
Question: Difference between LAN and VLAN
What is the difference between LAN and VLAN? Which one is suited for broadcasting messages? How do you set up a VLAN? What are their advantages and disadvantages?
Edit:
If I write a program for a VLAN, will it run if I don't have a switch? (Each computer is connected to another just using a cable to form a simple LAN.)
LAN means “Local Area Network” and VLAN stands for “Virtual LAN”. There are no real differences between one and the other, except that a VLAN is used to create multiple broadcast domains in a switch. Say, for example, you have one 48-port layer 2 switch.
If you create two VLANs, one on ports 1 to 24 and one on ports 25 to 48, you can make one switch act like two.
One advantage of using VLANs shows up if you segment your network by department, for example one class C network for Sales, one class C network for IT, and so on.
You don't have to use different switches for different networks, because you can just use one big switch and create a different VLAN for each segment.
How to create a VLAN depends on the switch in question. On a Cisco switch you can create VLANs like this:
SwitchA#configure terminal (enter global configuration mode)
SwitchA(config)#vlan 3 (define VLAN 3)
SwitchA(config-vlan)#name management (assign the name management to VLAN 3)
SwitchA(config-vlan)#exit (exit VLAN configuration mode)
Now assign ports 2 and 3 to VLAN 3:
SwitchA(config)#interface fastethernet 0/2 (select port 2)
SwitchA(config-if)#switchport access vlan 3 (make the port a member of VLAN 3)
SwitchA(config-if)#interface fastethernet 0/3 (select port 3)
SwitchA(config-if)#switchport access vlan 3 (make the port a member of VLAN 3)
SwitchA(config-if)#exit (exit interface configuration mode)
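To verify the result, most Cisco IOS switches support commands along these lines (output details vary by platform):
SwitchA#show vlan brief (lists each VLAN and the ports assigned to it)
SwitchA#show interfaces fastethernet 0/2 switchport (shows the access VLAN configured on the port)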
Question: LAN vs VLAN | Difference between LAN and VLAN
This page compares LAN vs VLAN and describes the difference between LAN and VLAN. LAN stands for Local Area Network, while VLAN stands for Virtual Local Area Network.
Physical LAN-Local Area Network
LAN is the short form of Local Area Network. The hosts are connected on the same ethernet switch on different ports. The common devices used on LAN are Hubs and Switches.
• A hub shares data between computers using broadcast: a frame sent by a host is repeated to the entire network, out of every port. All hosts ignore the frame except the one for which it is intended, as identified by the destination address. This increases traffic on the network to a great extent.
• The other device, a switch, shares data between computers using unicast addresses, so two hosts can communicate directly within the same switch. Two hosts which are not on the same switch communicate through routers.
VLAN is the short form of Virtual Local Area Network. It is also known as a virtual LAN. A VLAN is configured on an Ethernet switch; unlike a single LAN on an Ethernet switch, multiple virtual LANs are implemented on a single switch.
This is done by splitting the ports and assigning groups of them to the different VLANs. Hence broadcast, multicast, and unknown-destination traffic originating in one VLAN, say VLAN-A, is limited to the members of that same VLAN-A. The traffic does not cross into the other VLANs on the switch. This brings down the traffic load on the Ethernet switch.
Tabular difference between LAN and VLAN
The following table lists the similarities and differences between the LAN and VLAN network types.
Full form: LAN stands for Local Area Network; VLAN stands for Virtual Local Area Network.
Devices: A LAN uses hubs and switches; a VLAN uses switches with VLAN tagging capabilities.
Coverage: A LAN provides host-to-host communication within a building; a VLAN allows host-to-host communication between buildings far beyond the LAN limit, since VLANs can span multiple switches located in different office or building premises.
Protocols: A LAN uses normal Ethernet frames; a VLAN uses protocols such as IEEE 802.1Q and VLAN Trunking Protocol (VTP), which help traffic reach the correct interfaces.
Ports-to-subnet mapping: In a LAN, ports cannot be moved between different subnets; in a VLAN, ports can easily be moved between subnets on the same switch, so different VLANs on the same switch can have different numbers of ports.
Number of LANs/VLANs per Ethernet switch: One LAN consists of multiple hosts on one switch; many VLANs can coexist on the same Ethernet switch, each with its own number of ports.
Software configuration: A LAN needs none; for a VLAN you need to know the tagging commands in order to configure it.
Application: A LAN provides sharing of common resources as well as interconnectivity between hosts; a VLAN does the same and, in addition, extends the capabilities of the LAN with easy configurability and less burden on the Ethernet switch.
Question: VLAN Overview
A virtual LAN, or VLAN, is a group of computers, network printers, network servers, and other network devices that behave as if they were connected to a single network.
In its basic form, a VLAN is a broadcast domain. The difference between a traditional broadcast domain and one defined by a VLAN is that a broadcast domain is seen as a distinct physical entity with a router on its boundary. VLANs are similar to broadcast domains because their boundaries are also defined by a router. However, a VLAN is a logical topology, meaning that the VLAN hosts are not grouped within the physical confines of a traditional broadcast domain, such as an Ethernet LAN.
If a network is created using hubs, a single large broadcast domain results, as illustrated in Figure 8-2.
Figure 8-2. Two Broadcast Domains Connected Across a WAN
Because all devices within the broadcast domain see traffic from all other devices within the domain, the network can become congested. Broadcasts are stopped only at the router, at the edge of the broadcast domain, before traffic is sent across the wide-area network (WAN) cloud.
If the network hubs are replaced with switches, you can create VLANs within the existing physical network, as illustrated in Figure 8-3.
Figure 8-3. Two VLANs Connected Across a WAN
When a VLAN is implemented, its logical topology is independent of the physical topology, such as the LAN wiring. Each host on the LAN can be assigned a VLAN identification number (ID), and hosts with the same VLAN ID behave and work as though they are on the same physical network. This means the VLAN traffic is isolated from other traffic, and therefore all communications remain within the VLAN. The VLAN ID assignment made by the switches can be managed remotely with the right network management software.
Depending on the type of switching technology used, VLAN switches can function in different ways; VLANs can be switched at the data link (Open System Interconnection [OSI] model Layer 2) or the network layer (OSI model Layer 3). The main advantage of using a VLAN is that users can be grouped together according to their network communications requirements, regardless of their physical locations, although some limitations apply to the number of nodes per VLAN (500 nodes). This segmentation and isolation of network traffic helps reduce unnecessary traffic, resulting in better network performance because the network is not flooded. Don’t take this advantage lightly, because VLAN configuration takes considerable planning and work to implement; however, almost any network manager will tell you it is worth the time and energy.
note
An end node can be assigned to a VLAN by inspecting its Layer 3 address, but a broadcast domain is a Layer 2 function. If a VLAN is switched based on Layer 3 addressing, it is in essence routed. There are two basic differences between routing and switching: First, the forwarding decision is performed by the application-specific integrated circuit (ASIC) at the port level for switching, versus the reduced instruction set computer (RISC) or main processor for routing; second, the information used to make the decision is located at a different part of the data transfer (packet versus frame).
Question: What is the major difference between LAN and VLAN ?
Answer:
Local Area Network is a computer network to which a large number of computers and other peripheral devices are connected within a geographical area. VLAN is an implementation of a private subset of a LAN in which the computers interact with each other as if they were connected to the same broadcast domain, irrespective of their physical locations. It delivers better performance, less network administration work, and more security than a LAN, and it eliminates the need for expensive routers.
Answer:
1. VLAN delivers better performance when compared to traditional LANs.
2. VLAN requires less network administration work when compared to LANs.
3. VLAN helps to reduce costs by eliminating the need for expensive routers unlike LANs.
4. Data transmission on VLAN is safe when compared to traditional LANs.
Answer:
LAN means “Local Area Network” and VLAN stands for “Virtual LAN”.
Local Area Network is a computer network to which a large number of computers and other peripheral devices are connected within a geographical area. VLAN is an implementation of a private subset of a LAN in which the computers interact with each other as if they were connected to the same broadcast domain, irrespective of their physical locations. It delivers better performance, less network administration work, and more security than a LAN, and it eliminates the need for expensive routers.
1. VLAN delivers better performance when compared to traditional LANs.
2. VLAN requires less network administration work when compared to LANs.
3. VLAN helps to reduce costs by eliminating the need for expensive routers unlike LANs.
4. Data transmission on VLAN is safe when compared to traditional LANs.
Answer:
A LAN (local area network) exists within a building, connected with network devices like switches, routers, etc. A VLAN (virtual local area network) is a concept of virtually separating a switched network into logical domains for connectivity and communication. VLANs are created on a switch to separate groups and join members of the same domain, such as a sales department or a purchase department, so that they can communicate with each other. For example, if there is a VLAN for the sales department, any computer joined to it can communicate only with other computers within that sales department VLAN. This is secure and fast, and it reduces the burden of purchasing more switches.
Answer:
In a LAN Environment VLANs are used to separate Broadcast domains logically. VLAN delivers better performance, requires less network administration and helps to reduce Broadcast traffic.
Answer:
Lan means “Local Area Network” and Vlan stands for “Virtual LAN”. There are no real differences between one and the other except that a vlan is used to create multiple broadcast domains in a switch. Say for example you have one 48 port layer 2 switch.
Answer:
LAN and VLAN are two terms used frequently in the networking field. “LAN” is short for “Local Area Network,” a computer network to which a large number of computers and other peripheral devices are connected within a geographical area. VLAN is an implementation of a private subset of a LAN in which the computers interact with each other as if they were connected to the same broadcast domain, irrespective of their physical locations.
The VLAN is used to create multiple broadcast domains in a switch.
Question: WAN, MAN, LAN, WLAN, VLAN and PAN what are these ?
Wide Area Network, WAN is a collection of computers and network resources connected via a network over a geographic area. Wide-Area Networks are commonly connected either through the Internet or special arrangements made with phone companies or other service providers.
Local-Area Network, LAN has networking equipment or computers in close proximity to each other, capable of communicating, sharing resources and information. For example, most home and business networks are on a LAN.
Metropolitan-Area Network, MAN is a network that is utilized across multiple buildings. A MAN is much larger than the standard Local-Area Network (LAN) but is not as large as a Wide Area Network (WAN) and commonly is used in school campuses and large companies with multiple buildings.
Personal Area Network, PAN, is a local network designed to transmit data between personal computing devices (PCs), personal digital assistants (PDAs) and telephones. Gaming devices, like a game console system, may also be set up on a PAN.
Virtual Local Area Network, VLAN, is a virtual LAN that allows a network administrator to set up separate networks by configuring a network device, such as a router, rather than through cabling. This allows a network to be divided, set up, and changed in software, which lets a network administrator organize and filter data accordingly in a corporate network.
Wireless Local Area Network, WLAN, is a type of local network that utilizes radio waves, rather than wires, to transmit data. Most of today's computers have WLAN support built in, which means no additional WiFi card needs to be installed.
Answer:
WAN – wide area network. This is the network connection between telco companies' equipment, media devices, and routers; it connects country to country and can run multiple routing protocols. If you are interested in learning more about WANs, CCNA training covers them.
MAN – metropolitan area network. This network has a limited implementation, only inside a city between telco facilities, but uses the same connections, devices, and routing protocols as a WAN. Many IT people also consider a long-range wireless network to be a MAN.
LAN – local area network. This network implements the connection from a router to a switch and on to the computers inside your company or home. There are multiple possible configurations in a LAN (VLAN, RSTP, etc.), depending on the project.
WLAN – wireless local area network. This capability is now built into most laptops, and there are many WLAN devices that can be inserted into a USB port (you may need to install a driver if your operating system does not recognize the device). A WLAN client can connect to a wireless router, with or without Internet access, as long as you know the SSID, the encryption type, and the security key.
PAN – personal area network. This was first implemented as a small RF transmitter in a laptop or computer, called a Bluetooth device. A security key is used to connect to another Bluetooth device, such as a mobile phone, to transfer files.
Answer:
WAN – Wide Area Network (connects multiple smaller networks, such as local area networks (LANs) or metro area networks (MANs))
MAN – Metropolitan Area Network (a network spanning a physical area larger than a LAN but smaller than a WAN, such as a city)
LAN – Local Area Network (connects network devices over a relatively short distance)
WLAN – Wireless Local Area Network (a LAN based on WiFi wireless network technology)
VLAN – Virtual Local Area Network (a local area network with a definition that maps workstations on some basis other than geographic location)
PAN – Personal Area Network (networks typically involving a mobile computer, a cell phone, and/or a handheld computing device such as a PDA)
Answer:
WAN: Wide Area Networks cover a broad area, with communication links that cross metropolitan, regional, or national boundaries.
MAN: Metropolitan Area Networks are very large networks that cover an entire city.
LAN: Local Area Networks cover a small physical area, like a home or office.
WLAN: Wireless Local Area Networks enable users to move around within a larger coverage area
VLAN: A virtual local area network is a logical group of workstations, servers and network devices that appear to be on the same LAN despite their geographical distribution
PAN: Personal Area Networks are used for communication among various devices, such as telephones, personal digital assistants, fax machines, and printers
Answer:
These all are the Networks.
Personal area network, or PAN
Local area network, or LAN
Metropolitan area network, or MAN
Wide area network, or WAN
Storage area network, or SAN
Enterprise private network, or EPN
Virtual private network, or VPN
Most popular network types are LAN and WAN.
One broadcast domain is called LAN.
A network implemented in large numbers of devices over the Internet is called WAN.
Answer:
These are the network types; each has a different purpose and a different structure. The most popular network types are LAN and WAN.
One broadcast domain is called a LAN.
A network implemented across a large number of devices over the Internet is called a WAN.
What is the difference between VLAN and VPN?
※ VLAN stands for Virtual Local Area Network. It is a set of hosts that communicate with each other as if they were connected to the same switch (as if they were in the same domain), even if they are not.
※ VPN stands for Virtual Private Network. It provides a secure method for connecting to a private network through a public network that is not secure, such as the internet from a remote location.
※ Both allow creating a smaller sub-network using the hosts of an underlying larger network. The main purpose of a VPN, however, is to provide a secure method for connecting to a private network from remote locations.
Virtual LANs are core to enterprise networking. This guide covers VLAN trunks, VLAN planning, and basic VLAN configuration.
If you're just getting started in the world of network administration and architecture, there's no better place to begin than with a solid understanding of virtual LANs (VLANs).
In order to understand the purpose of VLANs, it’s best to look at how Ethernet networks previously functioned. Prior to VLANs and VLAN-aware switches, Ethernet networks were connected using Ethernet hubs. A hub was nothing more than a multi-port repeater. When an end device sent information onto the Ethernet network toward a destination device, the hub retransmitted that information out all other ports as a network-wide broadcast.
The destination device would receive the information sent, but so would all other devices on the network, which would simply ignore what they heard. And while this method worked in small environments, the architecture suffered greatly from scalability issues: so much time was spent discarding received messages and waiting for a turn to transmit that Ethernet networks built on hubs became congested.
A layer 2 aware switch solves this problem using two different methods. First, the switch has the ability to learn and keep track of devices by their MAC address. By maintaining a dynamic table of MAC address to switch port number, the switch has the ability to send messages directly from a source device to the destination device in a unicast transmission as opposed to a broadcast transmission that is sent to all devices. This is known as the switch forwarding table.
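As a purely illustrative sketch (the addresses and port numbers below are invented), the forwarding table is simply a mapping from learned source MAC addresses to the ports they were learned on:
MAC address          Switch port
aa:bb:cc:00:00:01    1
aa:bb:cc:00:00:02    7
On many Cisco IOS switches the current table can be displayed with the show mac address-table command.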
While the forwarding table does a great deal to limit broadcast messages, and thus reduce the amount of broadcast overhead, it does not completely eliminate it. Broadcast messages are still required in many situations. And as such, the more devices on a physical network, the more broadcast messages are going to be clogging up the network.
That leads us to the second method that layer 2 switches use to streamline Ethernet communication. Instead of having one large layer 2 network, VLANs are used to segment a switch, or a network of switches, into multiple logical layer 2 networks. Broadcast messages sent and received are contained within each smaller VLAN. Thus, if you have a network of 1,000 end devices and create 4 VLANs of 250 devices each, each logical network only has to deal with the broadcast overhead of 250 devices, as opposed to all 1,000 if they were on the same layer 2 network.
VLAN trunks
Now that you have an understanding of the purpose of VLANs, the next skill to acquire is the understanding of VLAN trunks. Large networks often contain more than one switch. And if you want to span virtual LANs across two or more switches, a VLAN trunk can be used. VLAN information is local to each switch database, so the only way to pass VLAN information between switches is to use a trunk.
A VLAN trunk can be configured to pass VLAN data for one or all VLANs configured on a switch. The trunk keeps track of which VLAN that the data belongs to by adding a VLAN tag to each Ethernet frame that is passed between switches. Once the receiving switch receives the frame, it strips the VLAN tag off and places the frame onto the proper local VLAN.
Inter-VLAN routing
The last basic skill regarding VLANs on enterprise networks is the concept of inter-VLAN routing. While devices on the same VLAN can communicate with other devices in the same VLAN, the same cannot be done when the devices belong to different VLANs. This is where inter-VLAN routing is necessary.
As we have learned, a VLAN breaks up a physical layer 2 network into multiple, logical layer 2 networks. In order to move between these layer 2 networks, traffic needs to be routed at layer 3. So while switches send data from source devices to destination devices using layer 2 MAC addresses, inter-VLAN routing uses IP addressing. This can be either IP version 4 or IPv6, although most enterprise networks still use IPv4 on internal networks.
On enterprise networks that are well planned, each configured VLAN is its own unique IPv4 subnet. For example, devices on VLAN 10 will be configured to use IPv4 addresses in the 10.10.10.X IP space, while devices on VLAN 99 will be configured to use IPv4 addresses in the 10.10.99.X space. In addition to each device having its own IP address and subnet mask, a default gateway IP address is required. Every device in VLAN 10 will be configured to use the same default gateway IP address, such as 10.10.10.1, and every device configured for VLAN 99 will use the gateway 10.10.99.1. The default gateway IP address is a router interface (either physical or virtual) that is responsible for routing traffic to other IP networks.
So if a device in VLAN 10 needs to communicate with a device in VLAN 99, the VLAN 10 device will forward the data to its default gateway. Layer 3 routing will occur and forward the data to the default gateway of VLAN 99. Once on the correct destination VLAN, the data is then forwarded at layer 2 to the destination endpoint.
Planning a VLAN strategy
Depending on the size of the network, planning a VLAN strategy can be either fairly easy or somewhat complex. Remember, because each VLAN is also its own sub-network, we have to come up with a VLAN strategy that makes the most sense in terms of grouping devices. In today's modern networks with virtualized layer 2 and layer 3 networks, the number of VLANs and layer 3 interfaces that can be configured on enterprise hardware is in the multiple thousands. Additionally, since inter-VLAN routing can now be performed at wire speed, there is no noticeable difference between sending/receiving traffic from devices on the same VLAN vs. different VLANs.
That being said, due to broadcast overhead, it's typically advisable that a single VLAN not have any more than 500 or so devices. Any more than this and you begin to have network congestion problems due to a significant increase in broadcast traffic on the layer 2 segment. Most network designs call for subnet sizes that have no more than 250 devices.
In terms of how to segment devices onto different VLANs, security is the primary factor today. From a security standpoint, it's best to place similar devices onto the same subnets. For example, put all employee computers on VLAN 10, printers on VLAN 20, servers on VLAN 50, and IP phones on VLAN 100. By doing this, you can easily apply layer 3 filters or firewall rules that target specific devices and control how traffic in and out of that VLAN is treated.
Configuring a VLAN and adding a switch port
Let's now move on to how to configure VLAN basics using a Cisco switch. In this example, we will configure VLAN 80 as our server VLAN. We will then configure switch port 10 to use this new VLAN. Keep in mind that out of the box, only VLAN 1 is configured on the switch and all switch ports are configured to use this VLAN.
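As a minimal Cisco IOS-style sketch of this step (the interface name gigabitethernet 1/0/10 is an assumption standing in for switch port 10, and the VLAN name is illustrative):
Switch(config)#vlan 80
Switch(config-vlan)#name servers
Switch(config-vlan)#exit
Switch(config)#interface gigabitethernet 1/0/10
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 80
Switch(config-if)#end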
Configuring a VLAN trunk
In this next example, let's assume that we have two switches that are connected by a single Ethernet interface: port 20 on both switches. Each switch has been configured with VLANs 1, 2 and 3. The goal is to trunk only these three VLANs between the two switches. To accomplish this, configure both switches along the lines of the sketch below.
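A minimal Cisco IOS-style sketch (the interface name gigabitethernet 1/0/20 is an assumption standing in for port 20; the encapsulation command is only needed on platforms that also support ISL):
Switch1(config)#interface gigabitethernet 1/0/20
Switch1(config-if)#switchport trunk encapsulation dot1q
Switch1(config-if)#switchport mode trunk
Switch1(config-if)#switchport trunk allowed vlan 1,2,3
The same commands would then be applied to port 20 on the second switch so that both ends of the link agree on the trunk and the allowed VLANs.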
Configuring a SVI for inter-VLAN routing
A switched virtual interface (SVI) is the name of a virtual router interface on a layer 3 switch. The virtual interface is the VLAN's default gateway used for routing traffic between networks. In this example, we will configure an SVI for VLAN 10 and VLAN 20. VLAN 10 will use the IPv4 subnetwork 10.10.10.X/24 with a default gateway of 10.10.10.1, and VLAN 20 will use the subnetwork 10.10.20.X/24 with a default gateway of 10.10.20.1. Once complete, the switch will be able to route traffic between the two VLANs via layer 3 routing, as sketched below.
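A minimal Cisco IOS-style sketch of the SVI configuration described above (enabling ip routing is assumed to be required on the layer 3 switch in question):
Switch(config)#ip routing
Switch(config)#interface vlan 10
Switch(config-if)#ip address 10.10.10.1 255.255.255.0
Switch(config-if)#no shutdown
Switch(config-if)#interface vlan 20
Switch(config-if)#ip address 10.10.20.1 255.255.255.0
Switch(config-if)#no shutdown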
Advanced VLAN topics to research
If you're looking to learn some more advanced skills related to VLANs, I recommend researching the following topics:
Spanning Tree Protocol (STP)
VLAN Trunking Protocol (VTP)
Private VLANs
Dynamic VLANs
VLAN security weaknesses
Question: Vlan
Answer:
LAN
- A Local Area Network (LAN) was originally defined as a network of computers located within the same area.
- Local Area Networks are defined as a single broadcast domain. This means that if a user broadcasts information on his/her LAN, the broadcast will be received by every other user on the LAN.
- Broadcasts are prevented from leaving a LAN by using a router. The disadvantage of this method is that routers usually take more time to process incoming data compared to a bridge or a switch.
VLAN
- A VLAN is a logical group of network devices that appears to be on the same LAN.
- The devices are configured as if they were attached to the same physical connection even if they are located on a number of different LAN segments.
- VLANs logically segment a LAN into different broadcast domains.
- VLANs can logically segment users into different subnets (broadcast domains).
- Broadcast frames are only switched within the same VLAN ID.
- This is a logical segmentation and not a physical one; workstations do not have to be physically located together. Users on different floors of the same building, or even in different buildings, can now belong to the same LAN.
LAN VS VLAN
- By using switches, we can assign computers on different floors to VLAN1, VLAN2, and VLAN3.
- Now, logically, a department is spread across 3 floors even though its members are physically located on different floors.
STATIC VLANS
- Static membership VLANs are called port-based and port-centric membership VLANs.
- This is the most common method of assigning ports to VLANs.
- As a device enters the network, it automatically assumes the VLAN membership of the port to which it is attached.
- There is a default VLAN; on Cisco switches that is VLAN 1.
DYNAMIC VLANS
- Dynamic membership VLANs are created through network management software.
- Dynamic VLANs allow for membership based on the MAC address of the device connected to the switch port.
- As a device enters the network, it queries a database within the switch for a VLAN membership.
CONFIGURING PORTS
- Access ports are used when:
  - only a single device is connected to the port;
  - multiple devices (e.g. via a hub) are connected to the port, all belonging to the same VLAN;
  - another switch is connected to this interface, but the link is only carrying a single VLAN (non-trunk link).
- Trunk ports are used when:
  - another switch is connected to this interface, and the link is carrying multiple VLANs (trunk link).
- Switch(config-if)#switchport mode [access|trunk]
- An access port means that the port (interface) can only belong to a single VLAN.
VLAN TRUNKING
- In a switched network, a trunk is a point-to-point link that supports several VLANs.
- The purpose of a trunk is to conserve ports when a link is created between two devices that implement VLANs.
VLAN TECHNIQUES
- Two techniques:
  - Frame filtering – examines particular information about each frame (MAC address or layer 3 protocol type).
  - Frame tagging – places a unique identifier in the header of each frame as it is forwarded throughout the network backbone.
FRAME FILTERING
- Users can be logically grouped via software based on:
  - port number
  - MAC address
  - IP subnet
  - protocol being used
Membership by Port
  Port  VLAN
  1     1
  2     1
  3     2
  4     1
- Disadvantage: this method does not allow for user mobility.
Membership by MAC Address
  MAC address      VLAN
  1212354145121    1
  2389234873743    1
  3045834758445    2
  5483573475843    1
- Advantage: no reconfiguration needed.
- Disadvantages: VLAN membership must be assigned initially; performance degrades as members of different VLANs coexist on a single switch port.
Membership by IP Subnet Address
  IP subnet   VLAN
  23.2.24     1
  26.21.35    2
- Advantages: good for an application-based VLAN strategy; users can move workstations; eliminates the need for frame tagging.
VLAN TAGGING
- VLAN frame tagging was specifically developed for switched communications.
- Frame tagging places a unique identifier in the header of each frame as it is forwarded throughout the network backbone.
- The identifier is understood and examined by each switch before any broadcasts or transmissions are made to other switches, routers, or end stations.
- When the frame exits the network backbone, the switch removes the identifier before the frame is transmitted to the target end station.
- The two most common tagging schemes for Ethernet segments are:
  - ISL (Inter-Switch Link)
  - 802.1Q – an IEEE standard
ISL (Frame Encapsulation)
- An Ethernet frame is encapsulated with a header that transports VLAN IDs.
- The ISL encapsulation is added by the switch before the frame is sent across the trunk.
- The switch removes the ISL encapsulation before sending the frame out a non-trunk link.
- ISL adds overhead to the frame as a 26-byte header containing a 10-bit VLAN ID.
- In addition, a 4-byte cyclic redundancy check (CRC) is appended to the end of each frame. This CRC is in addition to any frame checking that the Ethernet frame requires.
IEEE 802.1Q
- Significantly less overhead than ISL: 802.1Q inserts only an additional 4 bytes into the Ethernet frame.
- The 802.1Q tag is inserted by the switch before the frame is sent across the trunk.
- The switch removes the 802.1Q tag before sending the frame out a non-trunk link.
VLAN TRUNKING PROTOCOLS
- Trunking protocols were developed to effectively manage the transfer of frames from different VLANs on a single physical link.
- The trunking protocols establish agreement for the distribution of frames to the associated ports at both ends of the trunk.
- VLAN tagging information is added by the switch before a frame is sent across the trunk and removed by the switch before the frame is sent down a non-trunk link.
SwitchA(config-if)#switchport mode trunk
SwitchB(config-if)#switchport trunk encapsulation dot1q
SwitchB(config-if)#switchport mode trunk
- If SwitchA can only be an 802.1Q trunk and SwitchB can be either an ISL or 802.1Q trunk, configure SwitchB for 802.1Q.
- On switches that support both 802.1Q and ISL, the switchport trunk encapsulation command must be entered BEFORE the switchport mode trunk command.
VLAN Configuration (Linux)
- Configuring VLANs under Linux is a process similar to configuring regular Ethernet interfaces. The main difference is that you first must attach each VLAN to a physical device. This is accomplished with the vconfig utility. If the trunk device itself is configured, it is treated as native. For example, these commands define VLANs 2-4 on device eth0:
vconfig add eth0 2
vconfig add eth0 3
vconfig add eth0 4
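Note that vconfig has been deprecated on most current Linux distributions; assuming the iproute2 package is installed, the same VLAN sub-interfaces can be created with the ip utility, for example:
ip link add link eth0 name eth0.2 type vlan id 2
ip link add link eth0 name eth0.3 type vlan id 3
ip link add link eth0 name eth0.4 type vlan id 4
ip link set dev eth0.2 up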
Switch Configuration
- Before you begin configuration, make sure the IP address of the switch falls within the new management subnet. The IP configuration is associated with a virtual interface; this is normally VLAN 1.
interface VLAN1
 ip address 10.0.0.2 255.255.255.224
Moving the Ports
interface FastEthernet0/2
 switchport access vlan 2
interface FastEthernet0/3
 switchport access vlan 2
interface FastEthernet0/4
 switchport access vlan 3
interface FastEthernet0/5
 switchport access vlan 3
- Once your changes are complete, you can see which ports are in which VLAN by using the show vlan command.
BENEFITS OF VLAN
- Performance
- Formation of virtual workgroups
- Simplified administration
- Reduced cost
- Security
Fields of the IEEE 802.1Q tag:
- TPID – a defined value of 8100 in hex. When a frame has an EtherType equal to 0x8100, the frame carries an IEEE 802.1Q / 802.1p tag.
- TCI – Tag Control Information field, comprising the user priority, the Canonical Format Indicator, and the VLAN ID.
- User Priority – defines the user priority, giving eight (2^3) priority levels. IEEE 802.1p defines the operation of these 3 user-priority bits.
- CFI – the Canonical Format Indicator is always set to zero for Ethernet switches. CFI is used for compatibility between Ethernet-type networks and Token Ring-type networks. If a frame received at an Ethernet port has CFI set to 1, that frame should not be forwarded as-is to an untagged port.
- VID – the VLAN ID identifies the VLAN and is the field used by the 802.1Q standard. It has 12 bits, allowing the identification of 4096 (2^12) VLANs. Of the 4096 possible VIDs, a VID of 0 is used to identify priority frames and the value 4095 (FFF) is reserved, so the maximum number of possible VLANs is 4,094. (A worked example of a complete tag follows below.)
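As a worked example (the priority and VLAN values are chosen arbitrarily for illustration): a frame tagged with user priority 5, CFI 0 and VLAN ID 10 carries the four tag bytes 81 00 A0 0A. The first two bytes are the TPID 0x8100; the TCI is priority 101, CFI 0, and VID 000000001010, which concatenates to the 16-bit value 1010 0000 0000 1010 = 0xA00A.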
A VLAN is a set of end stations and the switch ports that connect them. You can have different reasons for the logical division, such as department or project membership. The only physical requirement is that the end station and the port to which it is connected both belong to the same VLAN.
Adding virtual LAN (VLAN) support to a Layer 2 switch offers some of the benefits of both bridging and routing. Like a bridge, a VLAN switch forwards traffic based on the Layer 2 header, which is fast. Like a router, it partitions the network into logical segments, which provides better administration, security, and management of multicast traffic.
Each VLAN in a network has an associated VLAN ID, which appears in the IEEE 802.1Q tag in the Layer 2 header of packets transmitted on a VLAN. An end station might omit the tag, or the VLAN portion of the tag, in which case the first switch port to receive the packet can either reject it or insert a tag using its default VLAN ID. A given port can handle traffic for more than one VLAN, but it can support only one default VLAN ID.
The Private Edge VLAN feature lets you set protection between ports located on the switch. This means that a protected port cannot forward traffic to another protected port on the same switch. The feature does not provide protection between ports located on different switches.
The diagram in this article shows a switch with four ports configured to handle the traffic for two VLANs. Port 1/0/2 handles traffic for both VLANs, while port 1/0/1 is a member of VLAN 2 only, and ports 1/0/3 and 1/0/4 are members of VLAN 3 only. The script following the diagram shows the commands you would use to configure the switch as shown in the diagram.
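The original script is not reproduced here; purely as an illustration of the same port layout, an equivalent configuration in generic Cisco IOS-style syntax (not necessarily the CLI of the platform being described) could look like this:
vlan 2
vlan 3
interface gigabitethernet 1/0/1
 switchport mode access
 switchport access vlan 2
interface gigabitethernet 1/0/3
 switchport mode access
 switchport access vlan 3
interface gigabitethernet 1/0/4
 switchport mode access
 switchport access vlan 3
interface gigabitethernet 1/0/2
 switchport mode trunk
 switchport trunk allowed vlan 2,3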
In the simplest of LAN topologies, you have a single physical network and everything on that LAN can communicate with any other device. In an IP network, on a simple private LAN you have a single IP subnet (e.g. 192.168.1.0/24). In this simple network, all devices are all part of the same physical LAN (‘wiring’) and logical LAN (IP network).
A Virtual LAN (‘VLAN’) is a method of segmenting different devices according to their location, function or security clearance.
For example, you may wish to separate departments (sales, accounts, R&D) or separate company traffic/data from guests using WiFi in your premises. The rules set for VLANs can set whether each VLAN can or cannot communicate with any other. A VLAN can also provide additional security by ensuring that physical networks only carry necessary data, perhaps omitting more sensitive data. A VLAN can be physically separated or separated by differential labelling of datagrams.
VLANs vs. Subnets
It’s important to remember that a VLAN is not the same as a different subnet (e.g. 192.168.1.0 vs. 10.0.0.0). Subnets provide IP addressing space, or logical departmental or network numbering but do not separate the networks or provide any security. If you just have multiple subnets, any device could have more than one IP address or connect to either subnet as both are available on the same physical network. VLANs and subnets can be used together – each subnet can be within a different VLAN. This is a common application as it makes it easier to keep track of your VLANs.
Types of VLAN
There are two main types of VLAN: port based and tag based. They can be used in combination with each other. VLANs can increase both network efficiency and security.
Port Based VLANs
A port based VLAN is one where the physical ports of an Ethernet switch (such as the one built into your router) are separated so that traffic does not pass between chosen ports. You can choose which ports can and can’t communicate with each other.
For example, suppose you have one PC plugged directly into each port on your router, and all PCs have access to the Internet. You set up two VLANs (VLAN0 and VLAN1). The PCs on ports 1, 2 & 3 are in VLAN0 and can communicate with each other but not with the PCs/devices on the other ports. Ports 5 & 6 are in the other VLAN and cannot communicate with ports 1, 2 & 3. Port 4 is set to be in both VLANs, so the PC on that port can communicate with all other devices. That is a port based VLAN – the physical port is isolated or common to a group:
In the example below, within the setup of the router, we have set up two VLANs that are each a member of the Subnet LAN1, operating in the same IP range but separated. VLAN0 has Ethernet ports 1-4 in it, and VLAN1 contains Ports 4-6. See how Port 4 is in both VLANs, so the device (PC) connected to port 4 will be able to communicate with all devices in VLAN0 and VLAN1 but all other devices will be restricted to devices within their own VLAN:
If a port is common to more than one VLAN, your router will allow that port to communicate with the ports in each VLAN that it is a member of.
The VLANs are not able to communicate directly but the device connected to that port, such as a printer, would be accessible by each of the VLANs.
A port doesn't have to connect to a PC directly; it can feed a secondary Ethernet switch. In that case, the switch will inherit the VLAN characteristics and receive only data which is part of that port's VLAN.
Tag Based VLANs
A Tag-based VLAN is one where an identifier label (or ‘tag’) is added to the Ethernet frame to identify it as belonging to a specific VLAN group. This has the advantage over port based VLAN in that multiple tagged VLANs can be sent over the same physical network/cable and split only once required; making it inherently scalable. The most common protocol for defining VLAN tags is 802.1q. Remember that VLAN tags exist at Layer 2 – not the IP layer so even if you have multiple IP subnets, they can all belong to the same VLAN structures.
In the diagram below, we have 3 VLANs (IDs 10, 11 and 12), all of which are available on port 2 of the router. The router connects to a larger switch which in turn splits the VLANs up so that each goes only to specific onward ports on the switch:
The most common distinction between tagged-VLAN data is to separate IP subnets, but tags can also be used departmentally or for specific devices or services. Tag-based VLANs provide much more scalability than port-based VLANs. Whether they provide any additional security will depend entirely on your topology.
To make use of tagged VLANs, all networking components must recognise and support VLAN tags. The device, for example, might be a secondary Ethernet switch with 24 ports and is set to split one VLAN to be distributed onto ports 1-12 and another VLAN onto ports 13-24. The device may instead be a wireless access point which supports multiple SSIDs. It takes data with one VLAN tag to serve SSID1, and another VLAN to serve SSID2. That way, the wireless access point is fed by only one Ethernet cable but can serve two completely separated wireless networks.
In the example, we have three VLANs set up and we have given each a unique VLAN tag; that can be anything you like, but in our case we have chosen 10, 11 and 12 for VLANs 1, 2 and 3 respectively. Vigor 2860 Port 2 is included in VLANs 1, 2 and 3, which means that it is able to send and receive traffic for these VLANs. A switch such as the P2261 would then be connected to Vigor 2860 Port 2, and the corresponding port on the switch would also be configured with the same VLAN tags. Other ports on the P2261 switch can be configured with a VLAN tag to allow a device connected to the port to communicate with the VLAN matching the tag.
In our example P2261:
Ports 3, 4, 5, 6 have a tag of 10 so would be able to communicate with VLAN1.
Ports 7, 8, 9, 10 have a tag of 11 so would be in VLAN2 and port 11 and 12 have a tag of 12 to associate them with VLAN3.
The “Permit untagged device in P1 to access router” box is ticked, which means that a PC can also be directly connected to Vigor 2860 port 1 without needing to be configured to be VLAN aware and still communicate with the router. Devices connected directly to ports P3, P4, P5 and P6 would need to be VLAN aware.
Combining tags, ports and Wireless SSIDs
DrayTek routers allow you to combine port-based VLANs, tagged VLANs, physical Ethernet ports and wireless SSIDS (for wireless equipped routers), allowing much flexibility. The actual VLAN setup page therefore looks like this:
Devices which do not support tags
Not all networking equipment supports tagged VLANs, so to accommodate those, you can have tagged data and untagged data running on the same network, perhaps physically isolated by port-based VLANs, or your switch can remove the VLAN tag before forwarding the data onto the connected device. A feature of most tag-capable Ethernet switches is that they can add, remove, change or forward VLAN tags.
Note : The capability of any particular product will vary; please refer to specifications of each product for feature support.
Question: VLAN
Stands for “Virtual Local Area Network,” or “Virtual LAN.” A VLAN is a custom network created from one or more existing LANs. It enables groups of devices from multiple networks (both wired and wireless) to be combined into a single logical network. The result is a virtual LAN that can be administered like a physical local area network.
In order to create a virtual LAN, the network equipment, such as routers and switches, must support VLAN configuration. The hardware is typically configured using a software admin tool that allows the network administrator to customize the virtual network. The admin software can be used to assign individual ports or groups of ports on a switch to a specific VLAN. For example, ports 1-12 on switch #1 and ports 13-24 on switch #2 could be assigned to the same VLAN.
Say a company has three divisions within a single building — finance, marketing, and development. Even if these groups are spread across several locations, VLANs can be configured for each one. For instance, each member of the finance team could be assigned to the “finance” network, which would not be accessible by the marketing or development teams. This type of configuration limits unnecessary access to confidential information and provides added security within a local area network.
VLAN Protocols
Since traffic from multiple VLANs may travel over the same physical network, the data must be mapped to a specific network. This is done using a VLAN protocol, such as IEEE 802.1Q, Cisco’s ISL, or 3Com’s VLT. Most modern VLANs use the IEEE 802.1Q protocol, which inserts an additional header or “tag” into each Ethernet frame. This tag identifies the VLAN to which the sending device belongs, preventing data from being routed to systems outside the virtual network. Data is sent between switches using a physical link called a “trunk” that connects the switches together. Trunking must be enabled in order for one switch to pass VLAN information to another.
Up to 4,094 VLANs can be created within an Ethernet network using the 802.1Q protocol, but in most network configurations only a few VLANs are needed. Wireless devices can be included in a VLAN, but they must be routed through a wireless access point or router that is connected to the LAN.
Question: VLAN Basics
Answer:
Virtual Local Area Networks (VLANs) divide a single existing physical network into multiple logical networks. Thereby, each VLAN forms its own broadcast domain. Communication between two different VLANs is only possible through a router that has been connected to both VLANs. VLANs behave as if they had been constructed using switches that are independent of each other.
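One common way to provide that router is a “router on a stick”: a single router interface carries an 802.1Q trunk, with one subinterface per VLAN acting as that VLAN's gateway. A minimal Cisco IOS-style sketch, assuming VLANs 10 and 20 with the illustrative subnets 192.168.10.0/24 and 192.168.20.0/24 (interface names and addresses are assumptions, not taken from this text):
Router(config)#interface gigabitethernet 0/0.10
Router(config-subif)#encapsulation dot1q 10
Router(config-subif)#ip address 192.168.10.1 255.255.255.0
Router(config-subif)#interface gigabitethernet 0/0.20
Router(config-subif)#encapsulation dot1q 20
Router(config-subif)#ip address 192.168.20.1 255.255.255.0
The switch port facing the router would be configured as an 802.1Q trunk carrying both VLANs.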
Types of VLANs
In principle, there are two approaches to implementing VLANs:
as port-based VLANs (untagged)
as tagged VLANs
Port-based VLANs
With regard to port-based VLANs, a single physical switch is simply divided into multiple logical switches. The following example divides an eight-port physical switch (Switch A) into two logical switches.
Eight-port switch with two port-based VLANs
Switch A
Switch port    VLAN ID       Connected device
1              1 (green)     PC A-1
2              1 (green)     PC A-2
3              1 (green)     (not used)
4              1 (green)     (not used)
5              2 (orange)    PC A-5
6              2 (orange)    PC A-6
7              2 (orange)    (not used)
8              2 (orange)    (not used)
Although all of the PCs have been connected to one physical switch, only the following PCs can communicate with each other due to the configuration of the VLAN:
PC A-1 with PC A-2
PC A-5 with PC A-6
Assume that there are also four PCs in the neighboring room. PC B-1 and PC B-2 should be able to communicate with PC A-1 and PC A-2 in the first room. Likewise, communication between PC B-5 and PC B-6 in Room 2 and PC A-5 and PC A-6 should be possible.
There is another switch in the second room.
Switch B
Switch-Port   VLAN ID      Connected device
1             1 (green)    PC B-1
2             1 (green)    PC B-2
3             1 (green)    (not used)
4             1 (green)    (not used)
5             2 (orange)   PC B-5
6             2 (orange)   PC B-6
7             2 (orange)   (not used)
8             2 (orange)   (not used)
Two cables will be required for connecting both VLANs.
One cable from Switch A Port 4 to Switch B Port 4 (for VLAN 1)
One from Switch A Port 8 to Switch B Port 8 (for VLAN 2)
Connection of both VLANs to the physical switch. Two cables are required for port-based VLANs.
Note on PVID: For some switches it is necessary to set the PVID (Port VLAN ID) on untagged ports in addition to the VLAN ID of the port. This specifies which VLAN any untagged frames should be assigned to when they are received on this untagged port. The PVID should therefore match the configured VLAN ID of the untagged port.[1][2]
Tagged VLANs
With tagged VLANs, multiple VLANs can share a single switch port. A tag containing the identifier of the VLAN to which the frame belongs is attached to each individual Ethernet frame. If both switches in the example above understand tagged VLANs, the reciprocal connection can be accomplished using one single cable.
Connection of both VLANs to both physical switches using a single cable. VLAN tags (IEEE 802.1Q) are used on this cable (or trunk).
Structure of an Ethernet Frame
The VLAN tag is inserted into the Ethernet frame directly after the source MAC address, before the EtherType/length field.
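To make the tag placement concrete, here is a minimal Python sketch (standard struct module only) that builds the first 18 bytes of a tagged frame: destination MAC, source MAC, the 4-byte 802.1Q tag (TPID 0x8100 plus priority and VLAN ID), and the EtherType. The tagged_header helper, the MAC addresses, and the VLAN ID are made-up example values, not anything defined elsewhere in this document.

import struct

def tagged_header(dst_mac: bytes, src_mac: bytes, vlan_id: int,
                  ethertype: int = 0x0800, priority: int = 0) -> bytes:
    # Layout: dst MAC (6) | src MAC (6) | TPID 0x8100 (2) | PCP/DEI/VID (2) | EtherType (2)
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # 3-bit priority, 1-bit DEI (0), 12-bit VLAN ID
    return dst_mac + src_mac + struct.pack("!HHH", 0x8100, tci, ethertype)

# Example: broadcast destination, a made-up source MAC, VLAN 10
header = tagged_header(bytes.fromhex("ffffffffffff"), bytes.fromhex("021122334455"), 10)
print(header.hex())  # ffffffffffff021122334455 8100 000a 0800 (spaces added here for readability)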
At a high level, subnets and VLANs are analogous in that they both deal with segmenting or partitioning a portion of the network. However, VLANs are data link layer (OSI layer 2) constructs, while subnets are network layer (OSI layer 3) IP constructs, and they address (no pun intended) different issues on a network. Although it’s a common practice to create a one-to-one relationship between a VLAN and subnet, the fact that they are independent layer 2 and layer 3 constructs adds flexibility when designing a network.
Subnets (IPv4 implementation)
An IP address can be logically split (a.k.a. subnetting) into two parts: a network/routing prefix and a host identifier. Network devices that belong to a subnet share a common network/routing prefix in their IP address. The network prefix is determined by applying a bitwise AND operation between the IP address and subnet mask (typically 255.255.255.0). Using an example address of 192.168.5.130, the network prefix (subnet) is 192.168.5.0, while the host identifier is 0.0.0.130.
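As a quick check of that arithmetic, here is a minimal Python sketch (standard library only) that reproduces the 192.168.5.130 example with a plain bitwise AND and with the ipaddress module.

import ipaddress

ip   = int(ipaddress.ip_address("192.168.5.130"))
mask = int(ipaddress.ip_address("255.255.255.0"))

print(ipaddress.ip_address(ip & mask))                 # 192.168.5.0  (network prefix)
print(ipaddress.ip_address(ip & ~mask & 0xFFFFFFFF))   # 0.0.0.130    (host identifier)

# The same network derived directly by the ipaddress module:
print(ipaddress.ip_interface("192.168.5.130/255.255.255.0").network)  # 192.168.5.0/24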
Traffic is exchanged or routed between subnetworks via routers (many modern switches also include router functionality) when the routing/subnet prefixes of the source address and the destination address differ. A router constitutes the logical and/or physical boundary between subnets.
The benefits of subnetting a network vary with each deployment scenario. In large organizations or those using Classless Inter-Domain Routing (CIDR), it’s necessary to allocate address space efficiently. It may also enhance routing efficiency, or have advantages in network management when subnetworks are administered by different internal groups. Subnets can be arranged logically in a hierarchical architecture, partitioning an organization’s network address space into a tree-like routing structure.
VLANs
A VLAN has the same attributes as a physical local area network, but it allows for devices to be grouped together more easily, even if they are not connected on the same network switch. Separating ports by VLAN groups separates their traffic in a similar fashion to connecting the devices to a separate, distinct switch of their own. VLANs can provide a very high level of security with great flexibility for a comparatively low cost.
Network architects use VLANs to segment traffic for issues such as scalability, security, and network management. Switches can’t (or at least shouldn’t) bridge IP traffic between VLANs because doing so would violate the integrity of the VLAN broadcast domain, so if one VLAN becomes compromised in some fashion, the remainder of the network will not be impeded. Quality of Service schemes can optimize traffic on VLANs for real-time (VoIP) or low-latency requirements (SAN).
Without VLANs, a switch considers all devices on the switch to be in the same broadcast domain, so VLANs can essentially create multiple layer 3 networks on a single physical infrastructure. For example, if a DHCP server is plugged into a switch it will serve any host on that switch that is configured for DHCP. By using VLANs, the network can be easily split up so some hosts will not use that DHCP server and will obtain link-local addresses, or obtain an address from a different DHCP server.
Additional Thoughts
You can have one physical network and configure two or more logical networks by simply assigning different subnets, like 192.168.0.0 and 192.168.1.0. The problem, though, is that both subnets transmit data through the same switch. Traffic going through the switch can be seen by all other hosts, no matter which subnet they’re on. The result is that security is low and there will be less bandwidth available since all traffic uses the same backbone.
As an alternative, you can create a VLAN for each logical network. Bandwidth availability for each VLAN (or logical network) is no longer shared, and security is improved because the switch that connects each VLAN network (in theory…) will not allow traffic to cross between the VLANs.
Usually VLANs are the better choice for many applications, including audio, but there are times when subnetting makes sense. The main reasons are:
Mitigating performance problems because LANs can’t scale indefinitely. Excessive broadcasts or flooding of frames to unknown destinations will limit their scale. Either of these conditions can be caused by making a single broadcast domain in an Ethernet LAN too big. Bandwidth exhaustion (unless it’s caused by broadcast packets or flooding of frames) is not typically solved with VLANs and subnetting, though, since they won’t increase the amount of bandwidth available. It usually happens because of a lack of physical connectivity (too few NICs on a server, too few ports in a group, the need to move up to a faster port speed, etc.). The first step is to monitor network traffic and identify trouble spots. Once you know how traffic moves around on your LAN, you can begin to think about subnetting for performance reasons.
A desire to limit / control traffic moving between hosts at layer 3 or above. If you want to control IP (or TCP, or UDP, etc.) traffic between hosts, rather than attacking the problem at layer 2, you might consider subnetting and adding firewalls / routers with ACLs between the subnets.
I’ve seen this a few times, more often on Vista, and it’s annoying.
The easiest thing I’ve found that ‘fixed it’ in many cases (not all) was to merge and erase all the various network entries/profiles (wired and/or wireless), until there were none.
I’m NOT talking about the networking devices/drivers themselves. Just the various “Home”, “Work”, and “Public” network entries representing your networks.
Reboot, let it rediscover and reconnect to the network(s) (it should ask you which ‘type’ again).
Hopefully it will be less confused after that. 🙂
To do this:
Open “Control Panel”
Select and open “Network and Sharing Center”
Click the “Icon” (like the House icon) under “View your active networks”. This will open the “Set Network Properties” dialog. Here you can rename a network connection or change the icon for that network connection.
Click “Merge or Delete Network Locations” to see a list of stored network connections. You can merge or delete connections here as well as see if a network connection is in use and managed or unmanaged.
Answer:
Check your network-card drivers. I’ve run into this with older network cards/drivers several times. More than likely, you need to go to the manufacturer’s website to get the correct driver. Many network adapters will “work”, but because they don’t have the proper bits to tell Windows 7/Vista that they are indeed Ethernet adapters, they aren’t treated like normal Ethernet network adapters and are instead treated more like a generic network interface that could be virtual or some form of tunneling adapter.
Question: Subnetting, netmasks and slash notation
Answer:
Netmasks are used in ACLs (access control lists), firewalls, routing, and subnetting. They group IP addresses into ranges: each range contains a power-of-two (1, 2, 4, 8, 16, etc.) number of addresses and starts on a multiple (0, 1, 2, 3, etc.) of that number of addresses.
127.0.0.1 is reserved for the loopback, with network address 127.0.0.0, netmask 255.0.0.0 and 127.255.255.255 as its broadcast address.
0.0.0.0 is the entire Internet with netmask 0.0.0.0 and 255.255.255.255 as its broadcast address.
0.0.0.0 with netmask 255.255.255.255 is an unconfigured interface.
224.0.0.0 … 239.255.255.255 is used for multicast. 240.0.0.0 … 255.255.255.255 is reserved.
CIDR does not link the number of hosts to the network address, at least not in the strict way that ‘classic’ A, B and C networks do. Furthermore, it doesn’t limit the size to 16 M, 64 k or 256 IP numbers. Instead, any power of 2 can be used as the size of the network (number of hosts + network address + broadcast address). In other words, CIDR sees an IP address as a 32-bit value rather than a 4-byte address.
Netmasks
The following table shows the netmasks in binary form. The ‘CIDR’ column is the number of ‘1’s from left to right. This is also known as ‘slash notation’.
Binary Hex Quad Dec 2ⁿ CIDR Number of addresses
00000000000000000000000000000000 00000000 0.0.0.0 2³² /0 4,294,967,296 4 G
10000000000000000000000000000000 80000000 128.0.0.0 2³¹ /1 2,147,483,648 2 G
11000000000000000000000000000000 C0000000 192.0.0.0 2³⁰ /2 1,073,741,824 1 G
11100000000000000000000000000000 E0000000 224.0.0.0 2²⁹ /3 536,870,912 512 M
11110000000000000000000000000000 F0000000 240.0.0.0 2²⁸ /4 268,435,456 256 M
11111000000000000000000000000000 F8000000 248.0.0.0 2²⁷ /5 134,217,728 128 M
11111100000000000000000000000000 FC000000 252.0.0.0 2²⁶ /6 67,108,864 64 M
11111110000000000000000000000000 FE000000 254.0.0.0 2²⁵ /7 33,554,432 32 M
11111111000000000000000000000000 FF000000 255.0.0.0 2²⁴ /8 16,777,216 16 M
11111111100000000000000000000000 FF800000 255.128.0.0 2²³ /9 8,388,608 8 M
11111111110000000000000000000000 FFC00000 255.192.0.0 2²² /10 4,194,304 4 M
11111111111000000000000000000000 FFE00000 255.224.0.0 2²¹ /11 2,097,152 2 M
11111111111100000000000000000000 FFF00000 255.240.0.0 2²⁰ /12 1,048,576 1 M
11111111111110000000000000000000 FFF80000 255.248.0.0 2¹⁹ /13 524,288 512 k
11111111111111000000000000000000 FFFC0000 255.252.0.0 2¹⁸ /14 262,144 256 k
11111111111111100000000000000000 FFFE0000 255.254.0.0 2¹⁷ /15 131,072 128 k
11111111111111110000000000000000 FFFF0000 255.255.0.0 2¹⁶ /16 65,536 64 k
11111111111111111000000000000000 FFFF8000 255.255.128.0 2¹⁵ /17 32,768 32 k
11111111111111111100000000000000 FFFFC000 255.255.192.0 2¹⁴ /18 16,384 16 k
11111111111111111110000000000000 FFFFE000 255.255.224.0 2¹³ /19 8,192 8 k
11111111111111111111000000000000 FFFFF000 255.255.240.0 2¹² /20 4,096 4 k
11111111111111111111100000000000 FFFFF800 255.255.248.0 2¹¹ /21 2,048 2 k
11111111111111111111110000000000 FFFFFC00 255.255.252.0 2¹⁰ /22 1,024 1 k
11111111111111111111111000000000 FFFFFE00 255.255.254.0 2⁹ /23 512
11111111111111111111111100000000 FFFFFF00 255.255.255.0 2⁸ /24 256
11111111111111111111111110000000 FFFFFF80 255.255.255.128 2⁷ /25 128
11111111111111111111111111000000 FFFFFFC0 255.255.255.192 2⁶ /26 64
11111111111111111111111111100000 FFFFFFE0 255.255.255.224 2⁵ /27 32
11111111111111111111111111110000 FFFFFFF0 255.255.255.240 2⁴ /28 16
11111111111111111111111111111000 FFFFFFF8 255.255.255.248 2³ /29 8
11111111111111111111111111111100 FFFFFFFC 255.255.255.252 2² /30 4
11111111111111111111111111111110 FFFFFFFE 255.255.255.254 2¹ /31 2
11111111111111111111111111111111 FFFFFFFF 255.255.255.255 2⁰ /32 1
What used to be class A is now ‘/8’, B is ‘/16’, C is ‘/24’ and ‘/32’ is the ‘netmask’ for a single host.
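The mapping in the table can be reproduced programmatically; the following sketch (Python standard library only) converts between a prefix length and its dotted-decimal netmask and checks one of the address counts from the table.

import ipaddress

def prefix_to_mask(prefix: int) -> str:
    # Dotted-decimal netmask for a given /prefix length (0-32)
    return str(ipaddress.ip_network(f"0.0.0.0/{prefix}").netmask)

def mask_to_prefix(mask: str) -> int:
    # /prefix length for a given dotted-decimal netmask
    return ipaddress.ip_network(f"0.0.0.0/{mask}").prefixlen

print(prefix_to_mask(24))                  # 255.255.255.0
print(mask_to_prefix("255.255.255.252"))   # 30
print(2 ** (32 - 26))                      # 64 addresses in a /26, matching the table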
Netmasks are used by routers to make routing decisions. For instance:
if ( (Address & Netmask) == Network ) {
// Belongs to network
...
} else {
// Does not belong to network
...
}
Which, for 192.168.0.1 on network 192.168.0.0/24, yields:
if ( (0xC0A80001 & 0xFFFFFF00) == 0xC0A80000 ) {
// Belongs to network
...
} else {
// Does not belong to network
...
}
(The parentheses around the AND are needed in C-like languages, where == binds more tightly than &.)
Bitwise operators are implemented directly in processor hardware and are therefore very efficient.
Networks
The bits in the ‘host’ part of a network address are all ‘0’. Bits to the left of the host bits can be either ‘0’ or ‘1’ (this is rather like subnetting a classic A, B or C network).
A network can be split into two smaller networks, then four, then eight, then 16, and so on, by lengthening the netmask one bit at a time. In the table above the smallest network shown is four successive IP addresses (/30). Even smaller ranges are possible: a ‘248’ netmask (/29, eight addresses) can be split into two /30 blocks of four addresses each, and then into four /31 blocks of two. The same principle applies to IPv6, where the prefix length runs from /0 to /128:
Netmask / 2ⁿ Number of addresses Number of /64s
0000:0000:0000:0000:0000:0000:0000:0000 /0 2¹²⁸ 340,282,366,920,938,463,463,374,607,431,768,211,456 16 E
ffff:0000:0000:0000:0000:0000:0000:0000 /16 2¹¹² 5,192,296,858,534,827,628,530,496,329,220,096 256 T
ffff:ffff:0000:0000:0000:0000:0000:0000 /32 2⁹⁶ 79,228,162,514,264,337,593,543,950,336 4 G
ffff:ffff:ffff:0000:0000:0000:0000:0000 /48 2⁸⁰ 1,208,925,819,614,629,174,706,176 1 Y 64 k
ffff:ffff:ffff:ffff:0000:0000:0000:0000 /64 2⁶⁴ 18,446,744,073,709,551,616 16 E 1
ffff:ffff:ffff:ffff:ffff:0000:0000:0000 /80 2⁴⁸ 281,474,976,710,656 256 T
ffff:ffff:ffff:ffff:ffff:ffff:0000:0000 /96 2³² 4,294,967,296 4 G
ffff:ffff:ffff:ffff:ffff:ffff:ffff:0000 /112 2¹⁶ 65,536 64 k
ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff /128 2⁰ 1 1
‘:0000:’ can be written as ‘:0:’. And the longest sequence of zeros as ‘::’.
Since the IPv6 internet is 2000::/3 (2000:0000:0000:0000:0000:0000:0000:0000 to 3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff), the number of available addresses is 2¹²⁵ = 42,535,295,865,117,307,932,921,825,928,971,026,432.
/56 and /60
Some ISPs provide a /56 or a /60 instead of a /48;
Netmask / 2ⁿ Number of addresses Number of /64s
ffff:ffff:ffff:0000:0000:0000:0000:0000 /48 2⁸⁰ 1,208,925,819,614,629,174,706,176 65,536
ffff:ffff:ffff:ff00:0000:0000:0000:0000 /56 2⁷² 4,722,366,482,869,645,213,696 256
ffff:ffff:ffff:fff0:0000:0000:0000:0000 /60 2⁶⁸ 295,147,905,179,352,825,856 16
ffff:ffff:ffff:ffff:0000:0000:0000:0000 /64 2⁶⁴ 18,446,744,073,709,551,616 1
A /48 is 2¹⁶ = 65,536 successive /64s. A /56 is 2⁸ = 256 successive /64s. A /60 is 2⁴ = 16 successive /64s.
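These /64 counts can be checked with the Python ipaddress module; the sketch below uses the 2001:db8:: documentation prefix purely as an example parent block.

import ipaddress

for prefix in (48, 56, 60):
    block = ipaddress.ip_network(f"2001:db8::/{prefix}")
    count = 2 ** (64 - prefix)                 # number of /64 networks inside the block
    print(f"{block} contains {count} /64s")    # 65536, 256, 16
    # Equivalent, by enumeration: sum(1 for _ in block.subnets(new_prefix=64))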
/120
Some advocate the use of /120s. A /120 is the same size as an IPv4 /24: 256 addresses.
Netmask / 2ⁿ Number of addresses
ffff:ffff:ffff:ffff:ffff:ffff:ffff:ff00 /120 2⁸ 256
The idea is only to use 256 addresses out of a /64 and firewall the rest in order to avoid NDP (Neighbour Discovery Protocol) exhaustion attacks.
Combine host and network in one statement
Suppose I have a host ‘2001:db8:1234:1::1/128’ and a network ‘2001:db8:1234:1::/64’. One can combine both statements (e.g., in ifconfig) into one statement: ‘2001:db8:1234:1::1/64’.
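That combined host-plus-network notation is what the Python ipaddress module calls an interface; a minimal sketch:

import ipaddress

iface = ipaddress.ip_interface("2001:db8:1234:1::1/64")
print(iface.ip)       # 2001:db8:1234:1::1    (the host address)
print(iface.network)  # 2001:db8:1234:1::/64  (the network it belongs to)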
Determining the proper mask value to assign to router and client IP addresses is sometimes difficult. You are usually pretty safe using 255.255.255.0 for your IPNetRouter gateway’s private subnet, especially if you never intend to have more than 254 unique LAN clients on your LAN. The approved private LAN network ranges are described in RFC-1918.
In the simple case, the lower the subnet mask number, the greater the number of valid IP addresses in a subnetwork. Let’s start with the standard, typical mask for a home LAN, 255.255.255.0. It typically permits 254 clients on a LAN connected to the IPNetRouter gateway (e.g., x.y.z.1 through x.y.z.254 are good IPs to use on the x.y.z subnet with mask 255.255.255.0; x.y.z.0 and x.y.z.255 generally are not, because of the way IP routing works). If you raise the last number of the subnet mask, you lower the number of clients permitted on your LAN. For instance, if you set it to 255.255.255.252, only three LAN clients and the gateway (four IP addresses) will be permitted to communicate with one another on that particular subnet. To route properly, the router should be one of the IP addresses in the same subnet as the clients.
If you understand binary operations the above will make more sense since the number of clients on a subnet is limited by performing a binary AND operation between the subnet mask and a given IP address.
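Here is a small Python sketch of that effect, using the private 192.168.2.0 range purely as an example: each extra mask bit halves the number of addresses on the subnet.

import ipaddress

for prefix in (24, 25, 26, 30):
    net = ipaddress.ip_network(f"192.168.2.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses} addresses "
          f"({net.network_address} network, {net.broadcast_address} broadcast)")
# /24: 256 addresses ... /25: 128 ... /26: 64 ... /30: 4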
Using the Subnet Calculator Tool
Using the Subnet Calculator tool in IPNetRouter or IPNetMonitor, you can see how many clients can be supported on an IP subnet based on a particular subnet mask. The prefix length set in the subnet calculator is equivalent to the shorthand value in the following table:
IP address       Net mask          Mask shorthand       Resulting network number
192.168.222.15   255.255.255.0     /24 (254 hosts)      192.168.222.0
24.157.68.5      255.255.0.0       /16 (65,534 hosts)   24.157.0.0
10.1.15.12       255.255.255.255   /32 (1 host)         10.1.15.12 (the identity mask)
192.168.56.129   255.255.255.128   /25 (128 hosts)      192.168.56.128
172.16.73.5      255.255.255.252   /30 (4 hosts)        172.16.73.4
192.168.73.6     255.255.255.252   /30 (4 hosts)        192.168.73.4
192.168.73.82    255.255.255.252   /30 (4 hosts)        192.168.73.80
By experimenting with the last IP address in the example, you can see how the subnet and client ID change when the mask is altered while the IP address remains constant. The network number is what determines whether a client is on the same or a different subnet, and therefore whether an IP packet is sent directly onto the local network or not.
For each increase of one in the shorthand mask number, the number of available clients on your local LAN is halved. For each decrease of one in the mask (again, using the “/” syntax), the number of permitted clients on the LAN is doubled. This is a simplistic explanation, good enough for handling a subnet like 192.168.0.1 with a mask shorthand value of /24 through /32 (longhand 255.255.255.0 through 255.255.255.255). The subnet calculator can determine the range of a client’s local network from its IP address and network mask. Shorthand “/30” represents a subnet of four machines (hosts) with a network number determined by the machine’s IP address; shorthand “/31” is for a subnet of two clients; shorthand “/29” is for a network of eight clients, and so on.
Some of the interfaces in IPNetRouter support the “/” syntax for masks, others support the “255.255.255.0” type syntax. Using the Subnet Calculator, you can automatically do the conversion between the two without much hassle.
For filtering of IP packets, the net mask is used to designate a range of IP addresses to apply the filter to. In the last example, 192.168.73.80 through .83 would be filtered if a “/30” mask was applied to 192.168.73.82.
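The filtering range in that last example can be checked directly; a minimal Python sketch that tests which addresses fall inside the /30 block around 192.168.73.82:

import ipaddress

# strict=False lets us pass a host address and get back the enclosing /30 network
block = ipaddress.ip_network("192.168.73.82/30", strict=False)
print(block)                                   # 192.168.73.80/30
for last_octet in (79, 80, 83, 84):
    addr = ipaddress.ip_address(f"192.168.73.{last_octet}")
    print(addr, addr in block)                 # True only for .80 through .83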
If you want to know more about network masks, RFC-950 is a good starting point. See the help text for the Subnet Calculator for more information on how it works.
Binary Subnet Masks and Routing–the Short Version
(The Internet was designed by mathematicians and people with strong mathematics backgrounds. If you are not well-versed in binary number theory but are interested in how routing really works, the best thing is to find an easy guide to the Internet; your local librarian or bookstore may be able to recommend such a book (we hope). Maybe someday it will be easier. For now…)
If any 32-bit IP address is ANDed with 255.255.255.0 (the equivalent of 24 “1” bits followed by eight “0” bits), you are left with 256 possible client IDs in a given subnet (actually 254 usable, since the all-ones and all-zeros host numbers are typically reserved). ANDing 255.255.255.252 with an IP address, only four addresses will be valid for the local subnet. Doesn’t make sense? Well, think of it this way. For any packet sent, the destination address and the origination IP address are each ANDed with the origination IP’s mask. The masks zero out the client IDs (which are still kept in the packet header), and the two resulting network numbers are compared with one another. The following two examples take place on the originating host.
Destination of an IP datagram is on the same LAN
Origination is 192.168.2.4, mask is 255.255.255.0, the AND operation gives 192.168.2.0
Destination is 192.168.2.17, mask is 255.255.255.0, the AND operation gives 192.168.2.0
Since both addresses fall on the same subnet, the machine sends the packet out on the LAN without asking the router what to do; it’s a local neighborhood destination. (Yep, you don’t need a router if you use the same network and masks for a local LAN when using straight IP addressing.)
Destination and originating hosts are on different LANs
Origination is 192.168.14.3, mask is 255.255.255.0, the AND operation gives 192.168.14.0
Destination is 24.156.22.45; using the origination mask 255.255.255.0, the AND operation gives 24.156.22.0*
Since the source and destination networks are different the packet is sent to the router for further handling. (*NOTE: the origination mask is used for mask calculations to avoid problems when using different masks on the same subnetwork; if the sending host determines that the IP packet it is about to send is not on its subnet, it should send the packet to a router/gateway for handling.)
In the instance of an address with a mask of 255.255.255.252, there are only four local host IPs that are within the same subnetwork. All other addresses will result in the packet being sent to the local router for handling. The last number, 252, is equivalent to 11111100 in binary.
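Those two cases can be expressed as a small Python sketch; note that, as the footnote above explains, the comparison is made with the originating host’s own mask.

import ipaddress

def stays_on_lan(src: str, dst: str, src_mask: str) -> bool:
    # True if dst is on the same subnet as src, judged with src's own mask
    mask = int(ipaddress.ip_address(src_mask))
    return (int(ipaddress.ip_address(src)) & mask) == (int(ipaddress.ip_address(dst)) & mask)

print(stays_on_lan("192.168.2.4",  "192.168.2.17", "255.255.255.0"))  # True  -> send directly on the LAN
print(stays_on_lan("192.168.14.3", "24.156.22.45", "255.255.255.0"))  # False -> send to the router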
Question: Do all the subnets in a network have to have the same subnet mask?
In your example you specified host addresses, not networks, since the host part of the IP addresses is not zero, and obviously /192 was meant to be /26. If we round the IP addresses to networks we get 192.168.0.0/30, 192.168.0.0/28 and 192.168.0.0/26 – they overlap.
Overlapping subnets can be present in the routing table at the same time if their prefix length (netmask) is different. The router will select the matching route with the longest prefix when deciding where to route a packet.
So destination IP 192.168.0.0-3 will match the first route, 192.168.0.4-15 will match the second and 192.168.0.16-63 will match the third.
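A minimal Python sketch of that longest-prefix-match selection over the three overlapping routes:

import ipaddress

routes = [ipaddress.ip_network(n) for n in
          ("192.168.0.0/30", "192.168.0.0/28", "192.168.0.0/26")]

def best_route(dst: str):
    # Return the matching route with the longest prefix (most specific match)
    addr = ipaddress.ip_address(dst)
    matches = [r for r in routes if addr in r]
    return max(matches, key=lambda r: r.prefixlen) if matches else None

print(best_route("192.168.0.2"))   # 192.168.0.0/30
print(best_route("192.168.0.10"))  # 192.168.0.0/28
print(best_route("192.168.0.40"))  # 192.168.0.0/26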
Answer:
192.168.0.0 255.255.255.252 i.e. /30
192.168.0.4 255.255.255.252 i.e. /30
192.168.0.8 255.255.255.248 i.e. /29
The above mask assignment is fine because none of the subnets overlap, and the addresses you mentioned are included as well.
It depends on the block size you choose; overlapping does not work. A subnet’s network ID should be a multiple of its block size; starting from anywhere in the middle won’t work.
A classful network is a network addressing architecture used in the Internet from 1981 until the introduction of Classless Inter-Domain Routing in 1993. The method divides the IP address space for Internet Protocol version 4 (IPv4) into five address classes based on the leading four address bits. Classes A, B, and C provide unicast addresses for networks of three different network sizes. Class D is for multicast networking and the class E address range is reserved for future or experimental purposes.
Since its discontinuation, remnants of classful network concepts have remained in practice only in limited scope in the default configuration parameters of some network software and hardware components, most notably in the default configuration of subnet masks.
In the original address definition, the most significant eight bits of the 32-bit IPv4 address was the network number field which specified the particular network a host was attached to. The remaining 24 bits specified the local address, also called rest field (the rest of the address), which uniquely identified a host connected to that network.[1] This format was sufficient at a time when only a few large networks existed, such as the ARPANET (network number 10), and before the wide proliferation of local area networks (LANs). As a consequence of this architecture, the address space supported only a low number (254) of independent networks. It became clear early in the growth of the network that this would be a critical scalability limitation.[citation needed]
Expansion of the network had to ensure compatibility with the existing address space and the IPv4 packet structure, and avoid the renumbering of the existing networks. The solution was to expand the definition of the network number field to include more bits, allowing more networks to be designated, each potentially having fewer hosts. Since all existing network numbers at the time were smaller than 64, they had only used the 6 least-significant bits of the network number field. Thus it was possible to use the most-significant bits of an address to introduce a set of address classes while preserving the existing network numbers in the first of these classes.[citation needed]
The new addressing architecture was introduced by RFC791 in 1981 as a part of the specification of the Internet Protocol.[2] It divided the address space into primarily three address formats, henceforth called address classes, and left a fourth range reserved to be defined later.
The first class, designated as Class A, contained all addresses in which the most significant bit is zero. The network number for this class is given by the next 7 bits, therefore accommodating 128 networks in total, including the zero network, and including the IP networks already allocated. A Class B network was a network in which all addresses had the two most-significant bits set to 1 and 0 respectively. For these networks, the network address was given by the next 14 bits of the address, thus leaving 16 bits for numbering hosts on the network, for a total of 65,536 addresses per network. Class C was defined with the 3 high-order bits set to 1, 1, and 0, and designating the next 21 bits to number the networks, leaving each network with 256 local addresses.
The leading bit sequence 111 designated an at-the-time unspecified addressing mode (“escape to extended addressing mode“),[2] which was later subdivided as Class D (1110) for multicast addressing, while leaving as reserved for future use the 1111 block designated as Class E.[3]
The number of addresses usable for addressing specific hosts in each network is always 2ⁿ − 2, where n is the number of rest-field bits, and the subtraction of 2 adjusts for the use of the all-bits-zero host portion as the network address and the all-bits-one host portion as the broadcast address. Thus, for a Class C address with 8 bits available in the host field, the maximum number of hosts is 254.
Today, IP addresses are associated with a subnet mask. This was not required in a classful network because the mask was implicitly derived from the IP address itself; any network device would inspect the first few bits of the IP address to determine the class of the address.
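A small Python sketch of that implicit derivation, mapping the first octet to the class and the implied default mask (following the class definitions above):

def classful_info(ip: str):
    # Return (class, implied default mask) based on the leading bits of the first octet
    first = int(ip.split(".")[0])
    if first < 128:  return "A", "255.0.0.0"       # leading bit  0
    if first < 192:  return "B", "255.255.0.0"     # leading bits 10
    if first < 224:  return "C", "255.255.255.0"   # leading bits 110
    if first < 240:  return "D (multicast)", None  # leading bits 1110
    return "E (reserved)", None                    # leading bits 1111

print(classful_info("10.1.2.3"))     # ('A', '255.0.0.0')
print(classful_info("172.16.0.1"))   # ('B', '255.255.0.0')
print(classful_info("192.168.1.1"))  # ('C', '255.255.255.0')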
The blocks numerically at the start and end of classes A, B and C were originally reserved for special addressing or future features, i.e., 0.0.0.0/8 and 127.0.0.0/8 are reserved in former class A; 128.0.0.0/16 and 191.255.0.0/16 were reserved in former class B but are now available for assignment; 192.0.0.0/24 and 223.255.255.0/24 are reserved in former class C. While the 127.0.0.0/8 network is a Class A network, it is designated for loopback and cannot be assigned to a network.[4]
Class D is reserved for multicast and cannot be used for regular unicast traffic.
Class E is reserved and cannot be used on the public Internet. Many older routers will not accept using it in any context.[citation needed]
The first architecture change extended the addressing capability in the Internet, but did not prevent IP address exhaustion. The problem was that many sites needed larger address blocks than a Class C network provided, and therefore they received a Class B block, which was in most cases much larger than required. In the rapid growth of the Internet, the pool of unassigned Class B addresses (2¹⁴, or about 16,000) was rapidly being depleted. Classful networking was replaced by Classless Inter-Domain Routing (CIDR), starting in 1993 with the specification of RFC 1518 and RFC 1519, to attempt to solve this problem.
Before the introduction of address classes, the only address blocks available were what later became known as Class A networks.[5] As a result, some organizations involved in the early development of the Internet received address space allocations far larger than they would ever need.
Question: What happens when the IP addresses of two computers are the same but their subnet masks are different?
Answer:
The question is simple but the answer is tricky and lengthy:
If you are using DHCP on your router for address assignment, the router will never assign a bad or out-of-the-subnet IP to any host in that particular subnet. The address assignment by the router will be correct.
If a person intentionally assigns a bad IP that belongs to another subnet, then:
A packet destined for the computer with the bad IP will not reach your subnet; the router will route it to the proper subnet, because all routing protocols use the longest-match-first rule, in which the router looks for the largest CIDR value. This is logically sound: a /28 subnet is smaller than a /27 subnet, so the router finds the smallest (most specific) matching subnet first and routes the packet to it.
Any packet originating from the “bad IP” PC will reach the Internet server, but any reply from it will not reach you, because, as said above, the router will forward it to the other subnet, not yours.
Answer
In practice, both systems cannot keep the same IP address. The subnet masks could be the same, but two PCs cannot communicate while using the same IP; a duplicate-address conflict will occur in every case.
Answer:
There will be a conflict between the two. If the two computers are on the same LAN segment, you will be prompted with a duplicate IP message or an “IP address already exists in network” message. If the two computers are on different LAN segments, the two won’t be able to communicate with each other. When data is destined for the same IP, the computer will think that it is its own address and will not forward the data to the gateway. It does not look at the subnet mask, because the destination IP address is its own NIC address.
Answer:
When two systems have the same IP with different subnet masks, those two systems cannot communicate with each other, but each can communicate with systems that have the same subnet mask.
Question: Is there any way two computers in two different subnets can communicate?
Answer:
There has to be a router or a Layer 3 switch that does inter-VLAN routing.
Subnetworks, or subnets, are created by taking a single private address range and dividing it into multiple separate networks using a subnet mask. Such division is often used in large companies to help network administrators divide access to sensitive network resources. Computers located on different subnets may need to communicate directly with one another. Accomplishing this requires that the two machines be connected to a router, which can forward information based on routable IP addresses.
Step 1
Connect the computers to the network. Ensure that each connection eventually reaches a router or a routable switch.
Step 2
Connect the routers to each other. This step is only necessary if the two separate subnets are connected to two physically separate routers. If the two routers do not have an available, routable interface, they must be connected to a third, interim “core” router, designed to handle routing between the other routers and anything outside of those networks.
Step 3
Enable a routing protocol in each subnet’s router. Options include Routing Information Protocol (RIP), Open Shortest Path First (OSPF) or, on Cisco-based switches, Interior Gateway Routing Protocol (IGRP).
Step 4
Allow time for the routing tables to update. Routing protocols advertise to neighboring routers the networks to which they are directly connected. In this way, each router gets a picture of the networks to which it is indirectly connected (i.e., networks reachable through a router to which it is connected). When all directly attached routers have up-to-date information about neighboring routers and their attached networks, this is referred to as “convergence.” The more complex the network, the longer it takes for convergence to occur.
Step 5
Log into one of the computers on a subnet and issue a trace route command to the computer on the other subnet. This will show you that communication is functioning properly and that the information is taking the appropriate path (each routed interface, or “hop,” will be listed as part of the route the packet took). To issue a traceroute in Windows, open the command prompt and type “tracert [IP address]”, where [IP address] is the address of the computer on the other subnet.