Computer

A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, makes computers very flexible and useful. Such computers are used as control systems for a very wide variety of industrial and consumer devices. These range from simple special-purpose devices like microwave ovens and remote controls, through factory devices such as industrial robots and computer-aided design systems, to general-purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects millions of other computers.

Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers have increased continuously and dramatically since then.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.

Etymology

According to the Oxford English Dictionary, the first known use of the word “computer” was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: “I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number.” This usage of the term referred to a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.[1]

The Online Etymology Dictionary gives the first attested use of “computer” in the 1640s, meaning “one who calculates”; this is an “agent noun from compute (v.)”. The same dictionary states that the use of the term to mean “calculating machine” (of any type) is from 1897, and that the “modern use” of the term, to mean “programmable digital electronic computer”, dates from “1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine”.

History

Pre-20th century

-The earliest counting device was probably a form of tally stick.

-Later record-keeping aids included calculi (clay spheres, cones, etc.), which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers.[3][4] The use of counting rods is one example.

-The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC.

-In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.

-The Antikythera mechanism is believed to be the earliest mechanical analog “computer”, according to Derek J. de Solla Price.[5] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.

-The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.

-The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus.

-A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy.

-An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235.

-Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[10] an early fixed-wired knowledge processing machine[11] with a gear train and gear-wheels,[12] circa 1000 AD.

-The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.

-The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. To save space and for ease of reading, these are typically circular devices rather than the classic linear slide rule shape. A popular example is the E6B.

-In the 1770s Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically “programmed” to read instructions. Along with two other complex machines, the doll is at the Musée d’Art et d’Histoire of Neuchâtel, Switzerland, and still operates.

-The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.

-The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 Lord Kelvin had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[14] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.

First computing device

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the “father of the computer”,[15] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.

The machine was about a century ahead of its time. All the parts for his machine had to be made by hand, which was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage’s failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine’s computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

Analog computers

-During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[18] The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. 

-The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.

-The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. 

-This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (control systems) and aircraft (slide rule).

Digital computers

-Electromechanical

-By 1938 the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.

-Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.

-Vacuum tubes and digital electronic circuits

-The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.

-In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[26] the first “automatic electronic digital computer”.[27] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.

-During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[28] He spent eleven months from early February 1943 designing and building the first Colossus.[29] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[30] and attacked its first message on 5 February.

-Colossus was the world’s first electronic digital programmable computer.[18] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.

-The U.S.-built ENIAC[33] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. Like the Colossus, a “program” on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC’s development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.

Modern computers

-Concept of modern computer

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper,[35] On Computable Numbers. Turing proposed a simple device that he called “Universal Computing machine” and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing’s design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[36] Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

-Stored programs

Early computing machines had fixed programs. Changing the function of such a machine required re-wiring and re-structuring of the machine.[28] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report “Proposed Electronic Calculator” was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.

-Transistors

The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the “second generation” of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.

-Integrated circuits

The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[46]

The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[47] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[48] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated”.[49][50] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[51] His chip solved many practical problems that Kilby’s had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby’s chip was made of germanium.

This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term “microprocessor”, it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[52] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[53]

-Mobile computers become dominant

With the continued miniaturization of computing resources, and advancements in portable battery life, portable computers grew in popularity in the 2000s.[54] The same developments that spurred the growth of laptop computers and other portable computers allowed manufacturers to integrate computing resources into cellular phones. These so-called smartphones and tablets run on a variety of operating systems and have become the dominant computing device on the market, with manufacturers reporting having shipped an estimated 237 million devices in 2Q 2013.

Ref: https://en.wikipedia.org/wiki/Computer

OpenSSH

OpenSSH (also known as OpenBSD Secure Shell) is a suite of security-related network-level utilities based on the Secure Shell (SSH) protocol, which help to secure network communications by encrypting network traffic, supporting multiple authentication methods, and providing secure tunneling capabilities.

OpenSSH started as a fork of the free SSH program developed by Tatu Ylönen; later versions of Ylönen’s SSH were proprietary software, offered by SSH Communications Security. OpenSSH was first released as part of the OpenBSD operating system in 1999.

OpenSSH is not a single computer program, but rather a suite of programs that serve as alternatives to unencrypted network communication protocols like FTP and rlogin. Active development primarily takes place within the OpenBSD source tree. OpenSSH is integrated into the base system of several other BSD projects, while the portable version is available as a package in other Unix-like systems.

OpenSSH was created by the OpenBSD team as an alternative to the original SSH software by Tatu Ylönen, which is now proprietary software. Although source code is available for the original SSH, various restrictions are imposed on its use and distribution. OpenSSH was created as a fork of Björn Grönvall’s OSSH, which itself was a fork of Tatu Ylönen’s original free SSH 1.2.12 release, the last one with a license suitable for forking. The OpenSSH developers claim that their application is more secure than the original, due to their policy of producing clean and audited code and because it is released under the BSD license, the open source license to which the word open in the name refers.

OpenSSH first appeared in OpenBSD 2.6. The first portable release was made in October 1999. Developments since then have included the addition of ciphers (e.g., chacha20-poly1305 in version 6.5 of January 2014), cutting the dependency on OpenSSL (6.7, October 2014) and an extension to facilitate public key discovery and rotation for trusted hosts (for transition from DSA to Ed25519 public host keys, version 6.8 of March 2015).

The OpenSSH suite includes the following command-line utilities and daemons:

  • ssh, a replacement for rlogin, rsh and telnet to allow shell access to a remote machine.
  • scp, a replacement for rcp
  • sftp, a replacement for ftp to copy files between computers
  • sshd, the SSH server daemon
  • ssh-keygen, a tool to inspect and generate the RSA, DSA and elliptic-curve keys that are used for user and host authentication
  • ssh-agent and ssh-add, utilities to ease authentication by holding keys ready and avoid the need to enter passphrases every time they are used
  • ssh-keyscan, which scans a list of hosts and collects their public keys
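
As a minimal usage sketch of how these utilities fit together (user@example.com and notes.txt are placeholders, and ssh-copy-id assumes password login is still allowed on the remote host):

ssh-keygen -t ed25519                  # generate a key pair under ~/.ssh
ssh-copy-id user@example.com           # install the public key on the remote host
ssh user@example.com                   # log in, now authenticated by the key
scp notes.txt user@example.com:/tmp/   # copy a file over the same secure channel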

https://en.wikipedia.org/wiki/OpenSSH

Telnet

FYI: the commands below are for Ubuntu.

-Check status:

sudo /etc/init.d/xinetd status

or

sudo service telnet status

-Restart telnet server:

sudo /etc/init.d/inetd restart

or

sudo /etc/init.d/openbsd-inetd restart

If neither xinetd, inetd nor openbsd-inetd is found (i.e., all three don’t exist), then a telnet server is not installed.

Look for telnet-related files with:

sudo find / -name telnet

or

apt-cache policy telnet

In Ubuntu 16.04 this will show that the telnet client is installed, but if all of the above don’t work, it means the telnet service wasn’t enabled.

or 

telnet -V

If you don’t have the inetd daemon, you won’t be able to use the telnet service on your system. When you install the telnetd daemon, it also installs the inetd daemon.

The telnet client is pre-installed in Ubuntu, but because a default Ubuntu system has no listening port, the telnet service is not usable yet.

That also means that one does not need to disable telnet unless one has installed a telnet daemon.
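
As a quick check of that claim, you can look for anything listening on the telnet port (23); a sketch for a modern Ubuntu system:

sudo ss -tlnp | grep ':23'        # no output means nothing is listening for telnet
sudo netstat -tlnp | grep ':23'   # older alternative, requires the net-tools package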

Reference:

https://wiki.ubuntu.com/SecurityTeam/Policies#No_Open_Ports

https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.znetwork/znetwork_112.htm

http://www.ibm.com/support/knowledgecenter/ssw_aix_72/com.ibm.aix.cmds5/telnetd.htm

http://www.cyberciti.biz/faq/how-do-i-turn-on-telnet-service-on-for-a-linuxfreebsd-system/

http://askubuntu.com/questions/389298/how-do-i-disable-telnet-on-ubuntu-server

Kernel -Part 3

What is a kernel?  If you spend any time reading Android forums, blogs, how-to posts or online discussion you’ll soon hear people talking about the kernel.  A kernel isn’t something unique to Android – iOS and MacOS have one, Windows has one, BlackBerry’s QNX has one, in fact all high level operating systems have one.  The one we’re interested in is Linux, as it’s the one Android uses. Let’s try to break down what it is and what it does.

Android devices use the Linux kernel, but every phone uses its own version of it. Linux kernel maintainers keep everything tidy and available, contributors (like Google) add or alter things to better meet their needs, and the people making the hardware contribute as well, because they need to develop hardware drivers for the parts they’re using for the kernel version they’re using.  This is why it takes a while for independent Android developers and hackers to port new versions to older devices and get everything working.  Drivers written to work with one version of the kernel for a phone might not work with a different version of software on the same phone. And that’s important, because one of the kernel’s main functions is to control the hardware.  It’s a whole lot of source code, with more options while building it than you can imagine, but in the end it’s just the intermediary between the hardware and the software.

When software needs the hardware to do anything, it sends a request to the kernel.  And when we say anything, we mean anything.  From the brightness of the screen, to the volume level, to initiating a call through the radio, even what’s drawn on the display is ultimately controlled by the kernel.  For example – when you tap the search button on your phone, you tell the software to open the search application.  What happens is that you touched a certain point on the digitizer, which tells the software that you’ve touched the screen at those coordinates.  The software knows that when that particular spot is touched, the search dialog is supposed to open.  The kernel is what tells the digitizer to look (or listen, events are “listened” for) for touches, helps figure out where you touched, and tells the system you touched it.  In turn, when the system receives a touch event at a specific point from the kernel (through the driver) it knows what to draw on your screen.  Both the hardware and the software communicate both ways with the kernel, and that’s how your phone knows when to do something.  Input from one side is sent as output to the other, whether it’s you playing Angry Birds, or connecting to your car’s Bluetooth.  

It sounds complicated, and it is.  But it’s also pretty standard computer logic — there’s an action of some sort generated for every event, and depending on that action things happen to the running software. Without the kernel to accept and send information, developers would have to write code for every single event for every single piece of hardware in your device. With the kernel, all they have to do is communicate with it through the Android system API’s, and hardware developers only have to make the device hardware communicate with the kernel. The good thing is that you don’t need to know exactly how or why the kernel does what it does, just understanding that it’s the go-between from software to hardware gives you a pretty good grasp of what’s happening under the glass.  
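
If you want to watch this input path for yourself on a Linux machine, one rough sketch (assuming the evtest utility is installed and event0 happens to be your input device) is:

sudo evtest /dev/input/event0   # prints each touch/key event the kernel delivers

On an Android device connected over adb, "adb shell getevent" shows the same kind of raw kernel input events.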

Reference

http://www.androidcentral.com/android-z-what-kernel

Kernel -Part 3B

This is a term for the computing elite, so proceed at your own risk. To understand what a kernel is, you first need to know that today’s operating systems are built in “layers.” Each layer has different functions such as serial port access, disk access, memory management, and the user interface itself. The base layer, or the foundation of the operating system, is called the kernel. The kernel provides the most basic “low-level” services, such as the hardware-software interaction and memory management. The more efficient the kernel is, the more efficiently the operating system will run.

When referring to an operating system, the kernel is the first section of the operating system to load into memory. As the center of the operating system, the kernel needs to be small, efficient and loaded into a protected area in memory, so as not to be overwritten. It can be responsible for such things as disk drive management, interrupt handling, file management, memory management, process management, etc.

http://techterms.com/definition/kernel

http://www.computerhope.com/jargon/k/kernel.htm

Kernel -Part 2

The kernel is a program that constitutes the central core of a computer operating system. It has complete control over everything that occurs in the system.

A kernel can be contrasted with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost part of an operating system and a program that interacts with user commands. The kernel itself does not interact directly with the user, but rather interacts with the shell and other programs as well as with the hardware devices on the system, including the processor (also called the central processing unit or CPU), memory and disk drives.

The kernel is the first part of the operating system to load into memory during booting (i.e., system startup), and it remains there for the entire duration of the computer session because its services are required continuously. Thus it is important for it to be as small as possible while still providing all the essential services needed by the other parts of the operating system and by the various application programs.

When a computer crashes, it actually means the kernel has crashed. If only a single program has crashed but the rest of the system remains in operation, then the kernel itself has not crashed. A crash is the situation in which a program, either a user application or a part of the operating system, stops performing its expected function(s) and responding to other parts of the system. The program might appear to the user to freeze. If such a program is critical to the operation of the kernel, the entire computer could stall or shut down.

The kernel provides basic services for all other parts of the operating system, typically including memory management, process management, file management and I/O (input/output) management (i.e., accessing the peripheral devices). These services are requested by other parts of the operating system or by application programs through a specified set of program interfaces referred to as system calls.
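
On Linux you can watch these system calls being made from a shell; a small sketch using strace (the file name is just an example):

strace -e trace=openat,read,write cat /etc/hostname
# each printed line is one request to the kernel: openat() for file management,
# read() and write() for I/O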

Process management, possibly the most obvious aspect of a kernel to the user, is the part of the kernel that ensures that each process obtains its turn to run on the processor and that the individual processes do not interfere with each other by writing to their areas of memory. A process, also referred to as a task, can be defined as an executing (i.e., running) instance of a program.

The contents of a kernel vary considerably according to the operating system, but they typically include (1) a scheduler, which determines how the various processes share the kernel’s processing time (including in what order), (2) a supervisor, which grants use of the computer to each process when it is scheduled, (3) an interrupt handler, which handles all requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel’s services and (4) a memory manager, which allocates the system’s address spaces (i.e., locations in memory) among all users of the kernel’s services.

The kernel should not be confused with the BIOS (Basic Input/Output System). The BIOS is an independent program stored in a chip on the motherboard (the main circuit board of a computer) that is used during the booting process for such tasks as initializing the hardware and loading the kernel into memory. Whereas the BIOS always remains in the computer and is specific to its particular hardware, the kernel can be easily replaced or upgraded by changing or upgrading the operating system or, in the case of Linux, by adding a newer kernel or modifying an existing kernel.

Most kernels have been developed for a specific operating system, and there is usually only one version available for each operating system. For example, the Microsoft Windows 2000 kernel is the only kernel for Microsoft Windows 2000 and the Microsoft Windows 98 kernel is the only kernel for Microsoft Windows 98. Linux is far more flexible in that there are numerous versions of the Linux kernel, and each of these can be modified in innumerable ways by an informed user.

A few kernels have been designed with the goal of being suitable for use with any operating system. The best known of these is the Mach kernel, which was developed at Carnegie-Mellon University and is used in the Macintosh OS X operating system.

The term kernel is frequently used in books and discussions about Linux, whereas it is used less often when discussing some other operating systems, such as the Microsoft Windows systems. The reasons are that the kernel is highly configurable in the case of Linux and users are encouraged to learn about and modify it and to download and install updated versions. With the Microsoft Windows operating systems, in contrast, there is relatively little point in discussing kernels because they cannot be modified or replaced.

Categories of Kernels

Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid kernels and exokernels. Each has its own advocates and detractors.

Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel’s capabilities as required, while helping to minimize the amount of code running in kernel space.

A microkernel usually provides only minimal services, such as defining memory address spaces, interprocess communication (IPC) and process management. All other functions, such as hardware management, are implemented as processes running independently of the kernel. Examples of microkernel operating systems are AIX, BeOS, Hurd, Mach, Mac OS X, MINIX and QNX.

Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would were it in user space. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. Hybrid kernels should not be confused with monolithic kernels that can load modules after booting (such as Linux).

Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000 and XP. DragonFly BSD, a recent fork (i.e., variant) of FreeBSD, is the first non-Mach based BSD operating system to employ a hybrid kernel architecture.

Exokernels are a still experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, and they provide no hardware abstractions on top of which applications can be constructed. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.

Exokernels in themselves are extremely small. However, they are accompanied by library operating systems, which provide application developers with the conventional functionalities of a complete operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API (application programming interface), such as one for Linux and one for Microsoft Windows, thus making it possible to simultaneously run both Linux and Windows applications.

The Monolithic Versus Micro Controversy

In the early 1990s, many computer scientists considered monolithic kernels to be obsolete, and they predicted that microkernels would revolutionize operating system design. In fact, the development of Linux as a monolithic kernel rather than a microkernel led to a famous flame war (i.e., a war of words on the Internet) between Andrew Tanenbaum, the developer of the MINIX operating system, and Linus Torvalds, who originally developed Linux based largely on MINIX.

Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Although this sounds sensible, it is questionable how important it is in reality, because operating systems with monolithic kernels such as Linux have become extremely stable and can run for years without crashing.

Another disadvantage cited for monolithic kernels is that they are not portable; that is, they must be rewritten for each new architecture (i.e., processor type) that the operating system is to be used on. However, in practice, this has not appeared to be a major disadvantage, and it has not prevented Linux from being ported to numerous processors.

Monolithic kernels also appear to have the disadvantage that their source code can become extremely large. Source code is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human readable alphanumeric characters) and before it is converted by a compiler into object code that a computer’s processor can directly read and execute.

For example, the source code for the Linux kernel version 2.4.0 is approximately 100MB and contains nearly 3.38 million lines, and that for version 2.6.0 is 212MB and contains 5.93 million lines. This adds to the complexity of maintaining the kernel, and it also makes it difficult for new generations of computer science students to study and comprehend the kernel. However, the advocates of monolithic kernels claim that in spite of their size such kernels are easier to design correctly, and thus they can be improved more quickly than can microkernel-based systems.

Moreover, the size of the compiled kernel is only a tiny fraction of that of the source code, for example roughly 1.1MB in the case of Linux version 2.4 on a typical Red Hat Linux 9 desktop installation. Contributing to the small size of the compiled Linux kernel is its ability to dynamically load modules at runtime, so that the basic kernel contains only those components that are necessary for the system to start itself and to load modules.

The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems (i.e., computer circuitry built into other products).

Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency.

Reference

http://www.linfo.org/kernel.html

Kernel

A kernel is the core component of an operating system. Using interprocess communication and system calls, it acts as a bridge between applications and the data processing performed at the hardware level.

When an operating system is loaded into memory, the kernel loads first and remains in memory until the operating system is shut down again. The kernel is responsible for low-level tasks such as disk management, task management and memory management.

A computer kernel interfaces between the three major computer hardware components, providing services between the application/user interface and the CPU, memory and other hardware I/O devices.

The kernel provides and manages computer resources, allowing other programs to run and use these resources. The kernel also sets up memory address space for applications, loads files with application code into memory, sets up the execution stack for programs and branches out to particular locations inside programs for execution.

The kernel is responsible for:

  • Process management for application execution
  • Memory management, allocation and I/O
  • Device management through the use of device drivers
  • System call control, which is essential for the execution of kernel services

There are five types of kernels:

  1. Monolithic Kernels: All operating system services run along the main kernel thread in a monolithic kernel, which also resides in the same memory area, thereby providing powerful and rich hardware access.
  2. Microkernels: Define a simple abstraction over hardware that use primitives or system calls to implement minimum OS services such as multitasking, memory management and interprocess communication.
  3. Hybrid Kernels: Run a few services in the kernel space to reduce the performance overhead of traditional microkernels where the kernel code is still run as a server in the user space.
  4. Nano Kernels: Simplify the memory requirement by delegating services, including basic ones like interrupt controllers or timers, to device drivers.
  5. Exo Kernels: Allocate physical hardware resources such as processor time and disk block to other programs, which can link to library operating systems that use the kernel to simulate operating system abstractions.

Reference

https://www.techopedia.com/definition/3277/kernel

OS vs Kernel -Part 1

The Operating System is a generic name given to all of the elements (user interface, libraries, resources) which make up the system as a whole.

The kernel is the “brain” of the operating system, which controls everything from access to the hard disk to memory management. Whenever you want to do anything, it goes through the kernel.

A kernel is part of the operating system. It is the first thing that the boot loader loads onto the CPU (for most operating systems), it is the part that interfaces with the hardware, and it also manages what programs can do what with the hardware. It is really the central part of the OS. It is made up of drivers; a driver is a program that interfaces with a particular piece of hardware. For example, if I made a digital camera for computers, I would need to make a driver for it. The drivers are the only programs that can control the input and output of the computer.

The Kernel is the core piece of the operating system. It is not necessarily an operating system in and of itself.

Everything else is built around it.

In computing, the ‘kernel’ is the central component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel’s responsibilities include managing the system’s resources (the communication between hardware and software components). 

Reference

http://stackoverflow.com/questions/2013937/what-is-an-os-kernel-how-does-it-differ-from-an-operating-system

What is the difference between a kernel and shell?

  • What happens if we have ONLY the kernel BUT NO shell?
    You then have a machine with the actual OS but there is NO way you can use it. There is no “interface” for the human to interact with the OS and hence the machine. (Assuming GUIs don’t exist, for simplicity 🙂)
  • What happens if we have ONLY the shell BUT NO kernel?
    This is impossible. The shell is a program provided by the OS so that you can interact with it. Without the kernel/OS nothing can execute (in a sense; not 100% true, but you get the idea).

A shell is just a program that offers some functionality that runs on the OS. The kernel is the “essence/core” of the OS. The words can be confusing so here’s the dictionary definition of kernel:

Kernel: a softer, usually edible part of a nut, seed, or fruit stone contained within its hard shell.

See how the words kernel/shell relate? That’s the origin of it and its borrowed use in computing. The kernel is the essence/core of the OS. You access the machine via the OS and the OS via a “shell” that seems to “contain” the kernel.

Hope this clarifies your confusion 🙂

The core inner part of the OS is the kernel (the Linux kernel, Windows kernel or FreeBSD kernel), but users interact with this by using the outer part or shell (e.g. the bash shell, cmd.exe or the Korn shell).

Users cannot directly control hardware like printers or monitors. Users cannot directly control virtual memory or process scheduling. While the kernel takes care of such matters, the user uses the UI or shell to communicate with the kernel. The UI can be a CLI (bash shell or DOS shell) or a GUI (KDE for Linux or the Metro UI for Windows).

The kernel is the part of the operating system that runs in privileged mode. It does all sorts of things like interact with hardware, do file I/O, and spawn off processes.

The shell (e.g., bash), by contrast, is a specific program which runs in user-space (i.e., unprivileged mode). Whenever you try to start a process with the shell, the shell has to ask the kernel to make it. In Linux, it would probably do this with the system calls fork and execve. Furthermore, the shell will forward its input (usually, from your own key presses) to the running program’s stdin, and it will forward the program’s output (stdout and stderr) to its own output (usually displayed on your screen). Again, it does all this with the help of the kernel.

Basically the kernel is the center of the operating system that manages everything. The shell is just a particular program, a friendly interface that translates your commands into some low-level calls to the kernel.

By analogy, the kernel is the chef who prepares the food, and the shell is the waiter who takes the order and delivers it to the user.

Technically, the shell is a software program that understands what the user wants and conveys it to the kernel. The kernel performs the work according to those instructions and returns the result to the user via the shell.

If you are really enthusiastic about how the shell and kernel work together, feel free to browse the code at Jitendra-khasdev/NEKTech-Linux-Shell.

  • A shell is a command interpreter, i.e. the program that either processes the commands you enter in your terminal emulator (interactive mode) or processes shell scripts (text files containing commands) (batch mode). In early Unix times, it used to be the only way for users to interact with their machines. Nowadays, graphical environments are replacing the shell for most casual users.

  • A kernel is a low-level program interfacing with the hardware (CPU, RAM, disks, network, …) on top of which applications are running. It is the lowest-level program running on computers, although with virtualization you can have multiple kernels running on top of virtual machines which themselves run on top of another operating system.

  • An API is a generic term defining the interface developers have to use when writing code using libraries and a programming language. Kernels have no APIs as they are not libraries. They do have an ABI, which, beyond other things, defines how applications interact with them through system calls. Unix application developers use the standard C library (e.g. libc, glibc) to build ABI-compliant binaries. printf(3) and fopen(3) are not wrappers to system calls but (g)libc standard facilities. The low-level system calls they eventually use are write(2) and open(2), and possibly others like brk and mmap. The number in parentheses is a convention to tell in what manual the command is to be found.

The first volume of the Unix manual pages contains the shell commands.

The second one contains the system call wrappers like write and open. They form the interface to the kernel.

The third one contains the standard library (including the Unix standard API) functions (excluding system calls) like fopen and printf. These are not wrappers to specific system calls but just code using system calls when required.
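
You can browse this division in the manual itself:

man 1 ls      # section 1: shell commands
man 2 write   # section 2: system call wrappers
man 3 fopen   # section 3: standard library functions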

A KERNEL is the part of the Operating System that communicates between the hardware and software of a computer and manages how hardware resources are used to meet software requirements.

A SHELL is the user interface that allows users to request specific tasks from the computer. Two types of user interfaces are the text based “Command Line Interface”, (CLI), or the Icon based “Graphical User Interface”, (GUI).

Note that the “CLI” uses less resources and is a more stable interface.

However, those of us using home routers will use a “GUI”. Note that the Operating System, (OS), of a home router is actually called “Firmware”.

Reference

https://www.quora.com/What-is-the-difference-between-a-kernel-and-shell

What is Shell and Kernel

Both the shell and the kernel are parts of the operating system, and both are used for performing operations on the system. When a user gives a command to perform an operation, the request goes to the shell. The shell is also called the interpreter, because it translates the user’s commands into machine language and then transfers the request to the kernel. So the shell is the interpreter of commands, converting the user’s request into machine language.

The kernel is also called the heart of the operating system, and every operation is performed using the kernel. When the kernel receives the request from the shell, it processes the request and displays the result on the screen.

As we have learned, there are many programs or functions performed by the kernel, but those functions are never shown to the user; the functions of the kernel are transparent to the user.

Reference

http://ecomputernotes.com/fundamental/disk-operating-system/what-is-shell-and-kernel

What is “the shell”?

Simply put, the shell is a program that takes your commands from the keyboard and gives them to the operating system to perform. In the old days, it was the only user interface available on a Unix computer. Nowadays, we have graphical user interfaces (GUIs) in addition to command line interfaces (CLIs) such as the shell.

On most Linux systems a program called bash (which stands for Bourne Again SHell, an enhanced version of the original Bourne shell program, sh, written by Steve Bourne) acts as the shell program. There are several additional shell programs available on a typical Linux system. These include: ksh, tcsh and zsh.

http://linuxcommand.org/lts0010.php

The Superuser

In computing, the superuser is a special user account used for system administration. Depending on the operating system (OS), the actual name of this account might be root, administrator, admin or supervisor. In some cases, the actual name of the account is not the determining factor; on Unix-like systems, for example, the user with a user identifier (UID) of zero is the superuser, regardless of the name of that account;[1] and in systems which implement a role-based security model, any user with the role of superuser (or its synonyms) can carry out all actions of the superuser account.

The principle of least privilege recommends that most users and applications run under an ordinary account to perform their work, as a superuser account is capable of making unrestricted, potentially adverse, system-wide changes.
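
On a Unix-like system you can verify the UID rule from a shell:

id -u        # prints your own UID
sudo id -u   # prints 0, the UID of the superuser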

https://en.wikipedia.org/wiki/Superuser

The Shell

In computing, a shell is an operating system’s user interface for access to that operating system. It is named a shell because it is a layer around the operating system kernel.

Most operating system shells are not direct interfaces to the underlying kernel, even if a shell communicates with the user via peripheral devices attached to the computer directly. Shells are actually special applications that use the kernel API in just the same way as it is used by other application programs. A shell manages the user–system interaction by prompting users for input, interpreting their input, and then handling an output from the underlying operating system (much like a read–eval–print loop, REPL).[1] Since the operating system shell is actually an application, it may easily be replaced with another similar application, for most operating systems.

Most operating system shells fall into one of two categories – command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). Other possibilities, although not so common, include voice user interface and various implementations of a text-based user interface (TUI) that are not CLI. The relative merits of CLI- and GUI-based shells are often debated.

Text (CLI) shells

A command-line interface (CLI) is an operating system shell that uses alphanumeric characters typed on a keyboard to provide instructions and data to the operating system, interactively. For example, a teletypewriter can send codes representing keystrokes to a command interpreter program running on the computer; the command interpreter parses the sequence of keystrokes and responds with an error message if it cannot recognize the sequence of characters, or it may carry out some other program action such as loading an application program, listing files, logging in a user and many others.

A feature of many command-line shells is the ability to save sequences of commands for re-use. Such batch files (script files) can be used repeatedly to automate routine operations such as initializing a set of programs when a system is restarted. Batch mode use of shells usually involves structures, conditionals, variables, and other elements of programming languages; some have the bare essentials needed for such a purpose, others are very sophisticated programming languages in and of themselves. Conversely, some programming languages can be used interactively from an operating system shell or in a purpose-built program.
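
As a small illustration of such a batch file (the paths and service names here are made up), a script with a variable, a conditional and a loop might look like:

#!/bin/sh
# startup.sh: hypothetical routine run when the system is restarted
LOGDIR=/var/log/myapp                   # made-up log location
if [ ! -d "$LOGDIR" ]; then
    mkdir -p "$LOGDIR"                  # create it on first run
fi
for svc in web worker scheduler; do     # made-up set of programs to initialize
    echo "starting $svc" >> "$LOGDIR/startup.log"
done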

The command-line shell may offer features such as command-line completion, where the interpreter expands commands based on a few characters input by the user. A command-line interpreter may offer a history function, so that the user can recall earlier commands issued to the system and repeat them, possibly with some editing. Since all commands to the operating system had to be typed by the user, short command names and compact systems for representing program options were common. Short names were sometimes hard for a user to recall, and early systems lacked the storage resources to provide a detailed on-line user instruction guide.

Graphical shells

Graphical shells provide means for manipulating programs based on graphical user interface (GUI), by allowing for operations such as opening, closing, moving and resizing windows, as well as switching focus between windows. Graphical shells may be included with desktop environments or come separately, even as a set of loosely coupled utilities.

Reference

https://en.wikipedia.org/wiki/Shell_(computing)

You haven’t heard about… curl?

What’s curl used for?

curl is used in command lines or scripts to transfer data. It is also used in cars, television sets, routers, printers, audio equipment, mobile phones, tablets, set-top boxes, and media players, and is the internet transfer backbone for thousands of software applications affecting billions of humans daily.

Supports…

DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, HTTP/2, cookies, user+password authentication (Basic, Plain, Digest, CRAM-MD5, NTLM, Negotiate and Kerberos), file transfer resume, proxy tunneling and more.

https://curl.haxx.se/
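As a quick illustration of a few of the features listed above (the URL and credentials here are placeholders, not real endpoints):

$ curl https://example.com/                      # plain HTTPS GET
$ curl -u alice:secret https://example.com/api   # Basic user+password authentication
$ curl -C - -O https://example.com/big.iso       # resume an interrupted file transfer

Each line maps directly onto a capability from the list.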

FYI -Apache file: httpd.conf

HTTP Daemon is a software program that runs in the background on a web server and waits for incoming requests. The daemon answers the requests automatically and serves hypertext and multimedia documents over the internet using HTTP.

httpd stands for Hypertext Transfer Protocol Daemon (i.e. web server).

httpd.conf is a configuration file used by the Apache HTTP Server. It is the file the Apache server looks at for its various configuration properties. Properties can be edited directly in the file using superuser permissions.

The httpd.conf file can be located on any Unix-based system that complies with the Filesystem Hierarchy Standard under the following path: /etc/httpd/httpd.conf.

This file, httpd.conf, was also once used by Microsoft’s Internet Information Services (IIS).
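For a sense of what lives in this file, here is a minimal, illustrative fragment (the directive values are examples only, not a recommended configuration):

# httpd.conf - example directives
Listen 80                            # port the server accepts connections on
ServerAdmin webmaster@example.com    # address shown on error pages
DocumentRoot "/var/www/html"         # directory the server serves files from

Directives like these are what you edit (with superuser permissions) to change the server's behavior.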

Reference:

https://en.wikipedia.org/wiki/Httpd.conf

https://en.wikipedia.org/wiki/Httpd

From https://help.ubuntu.com/lts/serverguide/httpd.html it is worth knowing that httpd.conf no longer exists in newer versions of Apache as packaged on Ubuntu.

Therefore these commands won’t work for checking whether Apache is running:

sudo service httpd status

or

/etc/init.d/httpd status 

You should also know that the path /etc/init.d/ exists, but the file httpd does not.

To start Apache use: sudo systemctl start apache2.service

or

How to start Apache 2:

 sudo service apache2 start

How to stop Apache 2:

sudo service apache2 stop

How to restart Apache 2:

sudo service apache2 restart

How to reload Apache 2:

sudo service apache2 reload

Also:

/etc/init.d/apache2 restart - to restart your main site’s Apache service.

sudo systemctl status apache2.service - to check the status of Apache, i.e. whether it is running.

or

sudo service apache2 status

or

/etc/init.d/apache2 status

Reference:

http://askubuntu.com/questions/718589/httpd-is-not-present-in-init-d

http://www.webhostingtalk.com/showthread.php?t=712742

http://www.cyberciti.biz/faq/ubuntu-linux-start-restart-stop-apache-web-server/


It is good to note that even if Apache is running, that does not mean the HTTP connection is working. To test Apache’s HTTP connection you can read about telnet.
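A common way to run that test by hand is to telnet to the HTTP port and type a bare request; a sketch, assuming Apache is listening on port 80 of the local machine:

$ telnet localhost 80
GET / HTTP/1.1
Host: localhost

(Press Enter on an empty line to finish the request.) If the connection is healthy, the server answers with a status line such as HTTP/1.1 200 OK, followed by headers and the page body.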

Terminal Commands

1. Some directories can’t be entered with just cd Folder; if the directory is not inside your current working directory, you need its absolute path:

cd /Folder

This is true for the usr directory, which sits at the root of the filesystem and so is reached with cd /usr.

2. To find a file whose location you have forgotten, use: sudo find / -name file_name

3. apache2ctl -V is used to view information about Apache 2 if it is installed, including the location and name of its configuration file.
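For example, to pull just the configuration-file details out of that output, you might filter it with grep (a sketch; the exact define names can vary between builds):

$ apache2ctl -V | grep -i config
 -D SERVER_CONFIG_FILE="apache2.conf"

Combined with the HTTPD_ROOT define from the same output, this gives the full path of the main configuration file.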

Sending Output to a File – Terminal

1. Using the redirection operator (>)

In addition to redirecting the output from one process and sending it to another process, we can also write that output to a file using the > operator.

$ ls -a ~ | grep _ > underscores.txt

http://conqueringthecommandline.com/book/basics

2. If the output comes from the network via curl, use the -o or -O option to send the output to a file:

Write output to a file instead of stdout. If you are using {} or [] to fetch multiple documents, you can use '#' followed by a number in the file specifier; that variable will be replaced with the current string for the URL being fetched. Like in:

curl http://{one,two}.site.com -o "file_#1.txt"

or use several variables, like:

curl http://{site,host}.host[1-5].com -o "#1_#2"

You may use this option as many times as the number of URLs you have. See also the --create-dirs option to create the local directories dynamically. Specify '-' to force the output to stdout.

https://www.tutorialspoint.com/unix_commands/curl.htm

If you want to save the content of the page as part of your request, add the -o option along with a file name.

$ curl -o msg http://quiet-waters-1228.herokuapp.com/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    40    0    40    0

http://www.computerworld.com/article/2992017/operating-systems/the-joy-of-curl.html

3. Download a Single File

The following command will get the content of the URL and display it on STDOUT (i.e. on your terminal).

$ curl http://www.centos.org

To store the output in a file, you can redirect it as shown below. This will also display some additional download statistics.

$ curl http://www.centos.org > centos-org.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 27329    0 27329    0     0   104k      0 --:--:-- --:--:-- --:--:--  167k

4. Save the cURL Output to a file

We can save the result of the curl command to a file by using -o/-O options.

  • -o (lowercase o) the result will be saved in the filename provided in the command line
  • -O (uppercase O) the filename in the URL will be taken and it will be used as the filename to store the result
$ curl -o mygettext.html http://www.gnu.org/software/gettext/manual/gettext.html

Now the page gettext.html will be saved in the file named 'mygettext.html'. You will also notice that when running curl with the -o option, it displays the progress meter for the download as follows.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 66 1215k   66  805k    0     0  33060      0  0:00:37  0:00:24  0:00:13 45900
100 1215k  100 1215k    0     0  39474      0  0:00:31  0:00:31 --:--:-- 68987

When you use curl -O (uppercase O), it will save the content in a file named 'gettext.html' itself on the local machine.

$ curl -O http://www.gnu.org/software/gettext/manual/gettext.html

Note: When curl has to write the data to the terminal, it disables the progress meter, to avoid confusion in printing. We can use the '>', '-o', or '-O' options to send the result to a file instead.

Similar to cURL, you can also use wget to download files. Refer to wget examples to understand how to use wget effectively.

http://www.thegeekstuff.com/2012/04/curl-examples/?utm_source=feedburner

5. curl http://www.google.com | pbcopy

Here, we’re feeding the response retrieved by curl into another command, pbcopy (the macOS clipboard utility). This is a little nicer on the eyes and the brain, since it puts the curl results straight onto your clipboard, which allows you to paste straight into your favorite text editor. No code will be printed in your Terminal, only curl’s download progress meter.

We can also use redirection with curl to copy it straight to a file, skipping the middleman.

6. curl http://www.google.com >> ~/google.txt

This will append the response into google.txt, located in your home directory. You could also use a single '>' to obliterate what’s in that file, leaving only Google’s source in the file.

https://quickleft.com/blog/command-line-tutorials-curl/

LAMP Project

1. Setting up a web server and developing a simple PHP app, maybe one that connects to another web service somewhere.

2. You should use VMware to install an Ubuntu Server VM on your computer. You can download it here: http://www.ubuntu.com/download/server Try to get a web server running on your VM. Then see if you can install WordPress on that machine, with the VM networked via NAT (Network Address Translation). Then you can learn how accessing something over NAT is different from accessing it over a VPN.
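On the Ubuntu guest itself, the usual way to get the web stack up is through apt; a sketch using the standard Ubuntu package names:

$ sudo apt-get update
$ sudo apt-get install apache2          # the web server alone
$ sudo apt-get install lamp-server^     # or the whole LAMP stack via the tasksel metapackage

After installation, browsing to the guest’s address should show Apache’s default page if the server is running.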

3.  Now I have:

a. I installed LAMP during the installation process.

b. I can confirm that my Apache is running; however, I cannot display the Apache web page on my host when I go to localhost/ in my browser client (Chrome).

c. I can also confirm that telnet is installed on my Ubuntu server but not running, because I have no daemon installed. My research tells me that without the daemon the telnet service is useless.

d. Now, should I install the daemon in order to use telnet to test my HTTP connection, as regards my Apache connection from host to guest? Meanwhile, my NAT setup has the host port and guest port specified to be the same port, 80, although I did try using a guest port of 8080 too, all to no avail. But I think my NAT settings, specifically the port forwarding, are correct, since I left the host and guest IP addresses empty.

4. Okay, I was able to:

a. Use telnet to confirm that my Apache connection is positive on my guest.

b. I was able to confirm that the host port I set in my VBox is listened to by my host. However, there doesn’t seem to be any port being listened on, nor traffic sent to, my guest. Because of this, I will be using the host terminal to set up my NAT port forwarding, by utilizing the VBoxManage commands on my host terminal:

- I want to use VBoxManage to display my Ubuntu server info;

- also VBoxManage to display the current port forwarding setup, or rather the NAT settings, of my Ubuntu. From there I will be able to find out if I have open ports at all in my Ubuntu server (although I didn’t find any when I checked from the guest). If I can find my Apache port, then there won’t be any need to do the last step below; I will skip that step and dive into setting up my SSH connection;

- after completing the two steps above, I will then reset the NAT forwarding using VBoxManage commands on my host.
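For reference, the relevant VBoxManage invocations might look like this sketch (the VM name "Ubuntu Server" is a placeholder for whatever the VM is called in VirtualBox):

$ VBoxManage list vms                            # list VM names and UUIDs
$ VBoxManage showvminfo "Ubuntu Server"          # full VM info, including NAT forwarding rules
$ VBoxManage modifyvm "Ubuntu Server" --natpf1 "http,tcp,,8080,,80"    # forward host port 8080 to guest port 80
$ VBoxManage modifyvm "Ubuntu Server" --natpf1 delete "http"           # remove that rule again

Note that modifyvm rules apply to a powered-off VM; for a running VM the controlvm subcommand accepts the same natpf1 rule. Inside the guest, sudo ss -tlnp (or netstat -tlnp on older systems) shows which ports are actually being listened on.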

c. Now, I haven’t set up my SSH yet; I will get there once I exhaust all my other possible ways of finding out what is preventing the Apache connection from host to guest. In addition, I did confirm that my Ubuntu firewall is turned off, and that my host firewall is not blocking any connection (because I turned off my Norton firewall on my host, but still no Apache connection).

5. If you think your NAT forwarding is set up correctly (and it might be), it’s also possible that your guest has a firewall turned on. I see you’re sending these emails late at night. I hope you’re not working yourself too hard.
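On an Ubuntu guest, a quick way to check the firewall state (assuming the default ufw front end) is:

$ sudo ufw status
Status: inactive

Status: inactive means ufw is not filtering anything; if it reports active, sudo ufw allow 80/tcp would open the web port.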

6. a. If I want to be able to connect from my guest (Ubuntu Server) to my host (Mac OS), I believe I can do so by setting up NAT with port forwarding from guest to host, just as I did for connecting from host to guest?

b. Also, after setting up my connection from host (Mac OS) to guest (Ubuntu Server), how do I set up port forwarding from my router to my host, to be able to connect from outside my home network, such as from work?

c. I haven’t researched much about PuTTY, but I believe PuTTY can have a part to play here, since you make use of it a lot. Can you please direct me on where to start researching its usage?

7. a. Item 1 will be a good one to discuss.

b. Item 2 will depend on your router. The first thing you’ll have to do is find out how to connect to your router. See if you can figure out how to do that.

c. For 2b, I would like you to do some research about PuTTY before asking me about it. I think you know where I’ll direct you to find out more about PuTTY; try that first :). If you have questions about the things you read, we can talk about those.