1. Using redirection operator (>)
In addition to piping the output of one process into another, we can also write that output to a file using the > operator.
$ ls -a ~ | grep _ > underscores.txt
http://conqueringthecommandline.com/book/basics
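The same pattern can be sketched in a self-contained way, using printf to stand in for the `ls -a ~` listing so it runs anywhere:

```shell
# Write some sample filenames to stand in for `ls -a ~` output, then filter
# the names containing an underscore and redirect the matches to a file.
printf '%s\n' .bash_profile .vimrc my_notes.txt > names.txt
grep _ names.txt > underscores.txt
cat underscores.txt
```

Only the names containing an underscore (.bash_profile and my_notes.txt) end up in underscores.txt; nothing is printed until the final cat.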
2. If the output comes from the network via curl, use '-o' to send it to a file
Write output to a file instead of stdout. If you are using {} or [] to fetch multiple documents, you can use '#' followed by a number in the file specifier. That variable will be replaced with the current string for the URL being fetched. Like in:

$ curl http://{one,two}.site.com -o "file_#1.txt"

or use several variables like:

$ curl http://{site,host}.host[1-5].com -o "#1_#2"

You may use this option as many times as the number of URLs you have. See also the --create-dirs option to create the local directories dynamically. Specify '-' to force the output to stdout.
https://www.tutorialspoint.com/unix_commands/curl.htm
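This globbing can be tried offline with file:// URLs standing in for a web server (the filenames below are made up for the demo):

```shell
# Sketch of curl's {}-globbing with -o and the #1 placeholder. The braces
# are quoted so curl, not the shell, expands them; #1 is replaced with the
# current glob string, producing copy_one.txt and copy_two.txt.
printf 'alpha\n' > one.txt
printf 'beta\n'  > two.txt
curl -s -o 'copy_#1.txt' "file://$PWD/{one,two}.txt"
cat copy_one.txt copy_two.txt
```

Note that the -o pattern is single-quoted as well, since '#' starts a comment in some interactive shells.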
If you want to save the content of the page as part of your request, add the -o option along with a file name.
$ curl -o msg http://quiet-waters-1228.herokuapp.com/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    40    0    40    0     0
http://www.computerworld.com/article/2992017/operating-systems/the-joy-of-curl.html
3. Download a Single File
The following command will get the content of the URL and display it on STDOUT (i.e., on your terminal).
$ curl http://www.centos.org
To store the output in a file, you can redirect it as shown below. This will also display some additional download statistics.
$ curl http://www.centos.org > centos-org.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 27329    0 27329    0     0   104k      0 --:--:-- --:--:-- --:--:--  167k
4. Save the cURL Output to a file
We can save the result of the curl command to a file by using -o/-O options.
- -o (lowercase o): the result will be saved in the filename provided on the command line
- -O (uppercase O): the filename from the URL will be used as the filename to store the result
$ curl -o mygettext.html http://www.gnu.org/software/gettext/manual/gettext.html
Now the page gettext.html will be saved in the file named 'mygettext.html'. Note also that when running curl with the -o option, it displays a progress meter for the download, as follows.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
66 1215k 66 805k 0 0 33060 0 0:00:37 0:00:24 0:00:13 45900
100 1215k 100 1215k 0 0 39474 0 0:00:31 0:00:31 --:--:-- 68987
When you use curl -O (uppercase O), it will save the content in a file named 'gettext.html' on the local machine, taking the name from the URL.
$ curl -O http://www.gnu.org/software/gettext/manual/gettext.html
Note: When curl has to write the data to the terminal, it disables the progress meter to avoid garbling the output. We can use the '>', '-o', or '-O' options to send the result to a file instead.
Similar to cURL, you can also use wget to download files. Refer to wget examples to understand how to use wget effectively.
http://www.thegeekstuff.com/2012/04/curl-examples/
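The -O naming behavior can also be seen offline, with a file:// URL standing in for the web server (the paths below are illustrative):

```shell
# -O saves the download under the remote file's own basename, here the
# gettext.html taken from the URL. No network access is needed.
mkdir -p src dest
printf 'hello\n' > src/gettext.html
cd dest
curl -s -O "file://$(cd ../src && pwd)/gettext.html"
cat gettext.html
```

The downloaded copy lands in dest/gettext.html, named after the last path segment of the URL.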
5. curl http://www.google.com | pbcopy
Here, we're feeding the response retrieved by curl into another command, pbcopy (available on macOS). This is a little easier on the eyes and the brain, since it puts the curl results straight onto your clipboard, which allows you to paste straight into your favorite text editor. No source will be printed in your Terminal, only curl's progress meter for the download.
We can also use redirection with curl to copy it straight to a file, skipping the middleman.
6. curl http://www.google.com >> ~/google.txt
This will append the response to google.txt, located in your home directory. You could also use a single '>' to obliterate what's in that file, leaving only Google's source in the file.
https://quickleft.com/blog/command-line-tutorials-curl/
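The difference between '>>' and '>' can be sketched without curl at all, since redirection behaves the same for any command's stdout:

```shell
# Contrast '>>' (append) with '>' (truncate) using echo in place of curl.
echo "first"  > log.txt
echo "second" >> log.txt   # '>>' appends: log.txt now has two lines
echo "third"  > log.txt    # '>' truncates first, leaving only this line
cat log.txt
```

After the final echo, only "third" remains in log.txt; the first two lines were wiped by the truncating '>'.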