
printing a given set of consecutive lines from a text file

On Programmer » Unix & Linux

3,704 words with 7 Comments; publish: Sat, 17 Nov 2007 03:08:00 GMT

I guess it isn't a script question per se, but it could be used in a script. What I want to do is to print to the screen (and ultimately pipe to a file) a particular number of lines from a text file, starting at a given line. The "more" command starts from a specified line with the +num option, but then it goes right to the end of the file. I have specified a page length, but when you pipe to a file that doesn't make any difference. This might be confusing, so I'll give a simple example.

Say file1.txt is:






and I want file2.txt to be:



How can I do this? Would I have to write a script or is there a single line command I haven't thought of?
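For what it's worth, there is a single-line answer using sed (a sketch, not from the thread; the line numbers 3 and 4 are just placeholders for whatever range you need):

```shell
# Print only lines 3 through 4 of file1.txt, redirected into file2.txt.
# -n suppresses sed's default printing of every line;
# the address '3,4' with the p command prints only that range.
sed -n '3,4p' file1.txt > file2.txt
```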


All Comments


    • Could you do something like this? (Very basic, might not be what you need.)

      tail -3 file > tmp_file

      head -2 tmp_file > final_file


      #1; Sat, 17 Nov 2007 17:28:00 GMT
    • I only know the number of lines from the beginning of the file, and the end of the file is being added to by another program (the example file is very much smaller than the files I am dealing with). I actually ended up doing it by hand, but I will take note of the "head" command, of which I was previously unaware, for future use. Thanks for the suggestion.
      #2; Sat, 17 Nov 2007 17:29:00 GMT
    • I figured out exactly what I had to do, now that I know about the "head" command. The command is:

      more +3 file1.txt | head -2
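The same range can also be selected in one awk call (a sketch, not from the thread; NR is awk's built-in current line number):

```shell
# Equivalent of "more +3 file1.txt | head -2":
# print only the lines whose number is between 3 and 4.
awk 'NR >= 3 && NR <= 4' file1.txt
```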

      #3; Sat, 17 Nov 2007 17:30:00 GMT
    • #!/bin/sh

      # dumpln
      # $1 = filename
      # $2 = start line
      # $3 = end line

      count=0
      while read rec
      do
          count=`expr $count + 1`
          if [ $count -ge $2 ] && [ $count -le $3 ]; then
              echo "$rec"
          fi
      done < $1


      Save the code in a file called dumpln, then:

      chmod +x dumpln

      dumpln filename start_line end_line

      For example:

      dumpln oldfile 4 10 > newfile

      #4; Sat, 17 Nov 2007 17:31:00 GMT
    • I thought I would do it in perl:

      #!/usr/bin/perl
      $| = 1;

      $file = shift;
      $startNum = shift;
      $endNum = shift;

      open (FILE, $file) or die "could not open file: $!\n";

      my $x = 0;
      while (<FILE>){
          $x++;
          if ($x >= $startNum && $x <= $endNum){ print };
      }
      close FILE;

      It takes the file you want to work on as an argument, along with the start and end lines you want.

      #5; Sat, 17 Nov 2007 17:32:00 GMT
    • Do the lines you want contain any strings that are common to them but absent from the lines you would like to omit? If so, you can use the grep command to select the lines with the matching strings, or use the -v option to invert the match and select the lines without them. This will probably let you be more general in your line-selection technique.

      grep "matching string" filename > __filename


      grep -v "matching string" filename > __filename

      Earlier you were mentioning pipes, but I think you meant to say "redirect" the output into another file, which is what the > operator does. So, among the many suggestions here, this one would apply if your lines weren't always consecutive and there is matching text in the lines you want to keep.


      #6; Sat, 17 Nov 2007 17:33:00 GMT
    • tac and head ...

      ...an unusual question from someone at work...

      How can I remove the first and last n lines of a file? "tail" can, but "head" can't.

      Elegant answer, using tac - cat in reverse!

      tail +3 foo | tac | tail +3 | tac

      Whoa! Don't ya just luuuurve them pipes!!!

      Now if I knew how to do a <EDIT> I Do Now!! (I think)

      for i in /home/andy/pr0n/*
      do
          tail +3 "$i" | tac | tail +3 | tac > "$i.tmp" && mv "$i.tmp" "$i"
      done

      construct... (redirecting straight back to $i would truncate the file before it is read, hence the temp file) </EDIT>



      #7; Sat, 17 Nov 2007 17:34:00 GMT
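As a side note (this relies on GNU coreutils, an assumption not stated in the thread): modern tail spells "start at line 3" as -n +3, and GNU head accepts a negative count, which drops the last N lines without the double-tac trick:

```shell
# Drop the first 2 and the last 2 lines of a file.
# 'tail -n +3' starts output at line 3;
# 'head -n -2' (GNU extension) stops 2 lines before the end.
tail -n +3 foo | head -n -2
```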