
eppur_se_muova

(36,247 posts)
Sun Oct 18, 2015, 12:18 AM Oct 2015

Need some really Unix-y help w/basic commands ...

OK, I'm not a Unix guru, but I did enough with Unix in my earlier days to learn the basics of stdin, stdout, pipes, etc. Now I'm trying to extract data from (hundreds of) files that were recovered from a partition whose directory was overwritten. I'm frustrated to find basic programs like grep aren't functioning as described on some Linux distros. What I really need to do is extract hex data from the 17th-20th and 57th-60th bytes of each file and print out a list of the data together with the name of the appropriate file. To view just one file I can use

hexdump <filename> | head -4

and from there visually extract the bytes. But to do this on a large scale, I tried

hexdump * | head -4 > <filename>

and only got data from the first file. I was able to locate data with grep but the -H option (print file name) isn't implemented in some Linux distros. Any suggestions? The least I need is a list of those bytes in the same order as the filenames (with none omitted); I could patch those together in a GUI word processor. At best, I'd like to list filename, long int, long int after interpreting those bytes, but again that's something I can kluge if it's too hard to do with the CLI.
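One way to get the exact "filename, long int, long int" listing described above is to pull the two 4-byte fields directly with dd and od instead of eyeballing hexdump output. This is only a sketch: it assumes the byte counts are 1-based (so the 17th byte sits at offset 16), that the fields are native-endian unsigned 32-bit integers, and that every file of interest is in the current directory.

```shell
# For each file: print its name plus the two 4-byte fields as unsigned ints.
# Offsets 16 and 56 correspond to the "17th-20th" and "57th-60th" bytes,
# assuming 1-based counting; adjust skip= values if the counting differs.
for f in *; do
  [ -f "$f" ] || continue
  a=$(dd if="$f" bs=1 skip=16 count=4 2>/dev/null | od -An -tu4 | tr -d ' ')
  b=$(dd if="$f" bs=1 skip=56 count=4 2>/dev/null | od -An -tu4 | tr -d ' ')
  printf '%s %s %s\n' "$f" "$a" "$b"
done
```

`od -tu4` interprets the four bytes in the machine's native byte order; if the recovered data is big-endian, the values will need swapping.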

BTW, I think I've already found the most important files by trial and error but I'd like to be sure, and be prepared if this should happen again. I.e., it's not all that urgent and I wouldn't want anyone to knock themselves out over it.
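On the grep -H point: a portable workaround that doesn't depend on -H at all is to pass /dev/null as an extra file, since grep prints the filename whenever it is searching more than one file. A quick sketch (demo.txt is just a made-up sample file):

```shell
printf 'needle in here\n' > demo.txt   # hypothetical sample file
# Adding /dev/null as a second file forces grep to prefix each match
# with the filename, even on greps that lack the -H option.
grep 'needle' demo.txt /dev/null       # prints: demo.txt:needle in here
```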


PoliticAverse

(26,366 posts)
1. The reason hexdump * | head -4 > <filename> didn't work...
Sun Oct 18, 2015, 12:54 AM
Oct 2015

is that it piped the output of the entire "hexdump *" command into "head -4", not the output of each hexdump one at a time.

You need a short shell script, like:

#!/bin/bash
# Use "$@" (not $*) and quote "$f" so filenames with spaces survive.
for f in "$@"
do
    echo -n "$f " >> output.txt
    hexdump "$f" | head -4 >> output.txt
done
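If only specific bytes are needed, the loop can be tightened further: util-linux hexdump has -s (skip) and -n (length) options plus a -e format string, so each 4-byte field can be printed as a single unsigned integer without head at all. A sketch, assuming native byte order (sample.bin is just made-up test data):

```shell
printf 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' > sample.bin   # hypothetical test data
# -s 16 skips 16 bytes, -n 4 reads four of them, and the -e format
# '1/4 "%u\n"' prints that one 4-byte unit as an unsigned decimal.
hexdump -s 16 -n 4 -e '1/4 "%u\n"' sample.bin
```

The printed value depends on the machine's endianness, so treat it as a raw native-order interpretation of those four bytes.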

eppur_se_muova

(36,247 posts)
3. Thanks, I've started to read a book on shell scripting, oh, about ten times now ...
Sun Oct 18, 2015, 09:31 AM
Oct 2015

will give that a shot.

w0nderer

(1,937 posts)
2. well one problem (not at a nix system right now and i always want to doublecheck)
Sun Oct 18, 2015, 12:58 AM
Oct 2015

could be that you are using > instead of >> to redirect output

> #for each iteration it'll overwrite the outputfile

>> #appends data to output file
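A quick demonstration of the difference:

```shell
echo one   > out.txt   # '>' truncates: out.txt now holds just "one"
echo two   > out.txt   # truncated again: only "two" remains
echo three >> out.txt  # '>>' appends: file now holds "two" then "three"
cat out.txt            # prints: two, then three
```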

rough and fast 'hackup' could look like

(checked on busybox) and dumps errors to stdout

for i in *; do echo "$i" >> output.out && hexdump "$i" | head -4 >> output.out; done

output.out will be
filename
hexoutput 4 lines
filename
hexoutput 4 lines

and so on

hope this helps (like i said, only tested on w32 system with busybox ash/sh) (all i had access to fast and dirty)
