Wednesday, July 31, 2013

Gmail pdf attachments to Blogger embedded jpeg

I maintain a blog for a bunch of old people who don't know what blogging is, so they send emails instead of just posting to the blog themselves...  These people send me attachments... PDF attachments.

What I HATE doing is downloading the file only to upload it back again.  It's even worse to download, convert to JPG, and then upload just to embed the JPEG version of the PDF in the post!

I found a workaround today.

[Add Gmail attachments to gDrive]

UPDATE: Gmail rolled out the ability to simply click the attachment and preview or move it in your gDrive!
OLD INFO: First we need to get the PDFs into google drive.  When googling "gmail attachments to drive" I came across this fancy script that can be used to pull all gmail attachments into your google drive: http://www.labnol.org/internet/send-gmail-to-google-drive/21236/

Once the PDF attachments are in gDrive, open one in the gDrive viewer (in most cases that means clicking the title/link and choosing "Open" in the bottom right of the page).


Viewing the PDF via the gDrive viewer gives you a JPEG for each page of the PDF (Google's best effort at what it should look like).

Here is where the magic happens...

[Use the gDrive PDF Preview image as the image in your blog]

Simply right-click on the image (jpeg), choose "Copy Image URL", go to your blog and add an image via URL... BLAMO!



All this together lets you take an attachment from an email and post it (well, a representation of it) inline in a blog post, all without downloading a single thing!

I go the extra mile of setting the Share permissions of the PDF to "Anyone who has the link can view" and adding a link to the PDF along with the inline jpeg.

And if the attachment is generic and might be used in future posts (like a parent permission slip), I make a separate post that is just the inline JPEG and PDF link, and link other posts to it.

Friday, July 26, 2013

Linux - GNU Screen instructions

"GNU Screen is a command line application that can be used to multiplex several virtual consoles, allowing a user to access multiple separate terminal sessions inside a single terminal window or remote terminal session. It is useful for dealing with multiple programs from a command line interface, and for separating programs from the shell that started the program." Remember, Screen was designed in 1987 but is still quite useful.

For a full HTML manual use the following link (http://www.gnu.org/software/screen/manual/screen.html)

There are a few real-world use cases that make Screen very valuable.  You may want to start a command-line application or process, disconnect from it without closing or interrupting it, and come back to it later. The disconnection could be due to an SSH session closing, or because you needed to start the process but someone else needs to finish it... more on this to follow.

There are a few "Quick Reference Guides" out there but for most cases the following* (http://aperiodic.net/screen/quick_reference) reference guide will do the trick (*edited here):

Getting in

start a new screen session named <name>:  screen -S <name>
list running sessions/screens:  screen -ls
reattach to a running session:  screen -r
... to the session named <name>:  screen -r <name>
the "ultimate attach":  screen -dRR  (attaches to a session; if the session is attached elsewhere, detaches that other display; if no session exists, creates one; if multiple sessions exist, uses the first one)
attach in multi-user mode:  screen -x  (if only one session is running this connects to it in multi-user mode; with multiple sessions you must specify the session by name or number)
connect to a tty/serial device:  screen <device> <baud_rate>
  Mac example (USB-serial adapter):  screen /dev/cu.usbserial 115200
  Linux example (serial port):  screen /dev/ttyS0 115200
  Additional windows can be created as normal; however, to close this window (the serial window) use C-a k

Escape key

All screen commands are prefixed by an escape key, by default C-a (that's Control+a, sometimes written ^a). To send a literal C-a to the programs in screen, use C-a a.

Getting out

detach:  C-a d
detach and logout (quick exit):  C-a D D
exit screen:  C-a : quit  (or exit all of the programs running in screen)
force-exit screen:  C-a C-\ (not recommended)

Help

See help:  C-a ?  (lists keybindings)

The man page is the complete reference, but it's very long.

Window Management

create a new window:  C-a c
change to the last-visited active window:  C-a C-a  (commonly used to flip-flop between two windows)
change to a window by number:  C-a <number>  (only for windows 0 to 9)
change to a window by number or name:  C-a ' <number or title>
change to the next window in the list:  C-a n  or  C-a <space>
change to the previous window in the list:  C-a p  or  C-a <backspace>
see the window list:  C-a "  (allows you to select a window to change to)
show the window bar:  C-a w  (if you don't have a window bar)
close the current window:  close all applications in the current window (including the shell)
kill the current window:  C-a k  (not recommended, except when connected to a tty/serial port)
kill all windows:  C-a \ (not recommended)
rename the current window:  C-a A

Split screen

split display horizontally:  C-a S
split display vertically:  C-a |  or  C-a V  (for the vanilla vertical-split patch)
jump to the next display region:  C-a tab
remove the current region:  C-a X
remove all regions but the current one:  C-a Q

Scripting

send a command to a named session:  screen -S <name> -X <command>
create a new window and run ping example.com:  screen -S <name> -X screen ping example.com
stuff characters into the input buffer, using bash to expand a newline character:  screen -S <name> [-p <page>] -X stuff $'quit\r'
a full example:
# run bash within screen
screen -AmdS bash_shell bash
# run top within that bash session
screen -S bash_shell -p 0 -X stuff $'top\r'
 
# ... some time later
 
# stuff 'q' to tell top to quit
screen -S bash_shell -X stuff 'q'
# stuff 'exit\n' to exit bash session
screen -S bash_shell -X stuff $'exit\r'

Misc

redraw the window:  C-a C-l
enter copy mode:  C-a [  or  C-a <esc>  (also used for viewing the scrollback buffer)
paste:  C-a ]
monitor the window for activity:  C-a M
monitor the window for silence:  C-a _
enter a digraph (for producing non-ASCII characters):  C-a C-v
lock (password protect) the display:  C-a x
enter a screen command:  C-a :

There are more options to make screen more visually functional, which can be found at the following link (http://www.debian-administration.org/articles/560). I have also incorporated some of these options into the following screen configuration file.

More can be configured through a screen config file.  In the user's home directory, create a file named .screenrc containing the options:

#CODE: example .screenrc file
defscrollback 5000
altscreen on
shell -/bin/bash
hardstatus alwayslastline "%{= g} %{= w}%-Lw%{=r}%n%f* %t%{-}%+LW"
  1. defscrollback 5000 = 5000 lines of scrollback memory
  2. altscreen on = allows screen to act like a standard virtual terminal and clear the screen on exit of programs like vim, less, or more
  3. shell -/bin/bash = keeps the standard shell prompt prefix/title instead of "bash-3.2$"
  4. hardstatus alwayslastline ... = always keeps the last line as a list of the current windows, using "-" to mark the last window used and highlighting the currently selected window with an "*"

For a more complete walkthrough visit (https://docs.loni.org/wiki/The_Door_to_Screen/Table_of_Contents)

Create an EXT2 FS in Linux via CLI (CentOS 6.3 64-bit)

1. Open a terminal (Applications > System Tools > Terminal)

2. Within that terminal, type fdisk -l to list all the disks the system currently sees. For ease of use, make sure your system doesn't have any external drives connected other than the one you want to format. See below for an example of fdisk -l output:

[root@localhost ~]# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c8ffe

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64       60802   487873536   8e  Linux LVM

Disk /dev/mapper/VolGroup-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/VolGroup-lv_swap: 7834 MB, 7834959872 bytes
255 heads, 63 sectors/track, 952 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/VolGroup-lv_home: 438.1 GB, 438057304064 bytes
255 heads, 63 sectors/track, 53257 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdb: 16.0 GB, 16013852672 bytes
64 heads, 32 sectors/track, 15272 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x42c383bf

In this case, the last disk listed (/dev/sdb, 16.0 GB) is the one I want.

3. You can also verify this is the correct disk by checking /var/log/messages right after you plug in the drive (after boot, and right before formatting).

The command is tail /var/log/messages. The output will look something like this:

[root@localhost ~]# tail /var/log/messages
Jun 26 14:26:17 localhost kernel: scsi7 : SCSI emulation for USB Mass Storage devices
Jun 26 14:26:18 localhost kernel: scsi 7:0:0:0: Direct-Access              Patriot Memory   PMAP PQ: 0 ANSI: 0 CCS
Jun 26 14:26:18 localhost kernel: sd 7:0:0:0: Attached scsi generic sg2 type 0
Jun 26 14:26:19 localhost kernel: sd 7:0:0:0: [sdb] 31277056 512-byte logical blocks: (16.0 GB/14.9 GiB)
Jun 26 14:26:19 localhost kernel: sd 7:0:0:0: [sdb] Write Protect is off
Jun 26 14:26:19 localhost kernel: sd 7:0:0:0: [sdb] Assuming drive cache: write through
Jun 26 14:26:19 localhost kernel: sd 7:0:0:0: [sdb] Assuming drive cache: write through
Jun 26 14:26:19 localhost kernel: sdb:
Jun 26 14:26:19 localhost kernel: sd 7:0:0:0: [sdb] Assuming drive cache: write through
Jun 26 14:26:19 localhost kernel: sd 7:0:0:0: [sdb] Attached SCSI removable disk

The vendor information (e.g., "Patriot Memory") tells you what the drive is, and the [sdb] tells you the drive is located at /dev/sdb

4. Once you have verified where the drive is (/dev/XXX) now you need to format it. The base command is mke2fs, and there are a ton of options you can use. Here is an example:

mke2fs -b 4096 -L New-Volume /dev/XXX

The -b creates a filesystem with a blocksize of 4k and the -L labels the volume as New-Volume.

Verify with your customer if they want any special parameters (like blocksize, Inode size, etc) set on the deliverable drive. Then include the options wanted when running this command. Type man mke2fs for the additional options and brief description of what they do.
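If you want to rehearse the command without risking a real disk, mke2fs will happily format a plain file (a sketch; the image path and label are just examples, and -F is needed because the target is not a block device):

```shell
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null   # 16 MB scratch image
mke2fs -F -b 4096 -L New-Volume /tmp/demo.img                 # same -b and -L as above
e2label /tmp/demo.img                                         # prints the label back
```

Once it looks right, point the same options at the real /dev/XXX device.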

Enable TFTP server

RHEL (Red Hat Enterprise Linux) 5.3 based MDCs (Meta Data Controllers)

(from RHEL 5 documentation)

The TFTP config file should already be configured as follows (disable = no, server_args = <path>):

#CODE
[root@sbf ~]# vim /etc/xinetd.d/tftp

#
# File /etc/xinetd.d/tftp
#
service tftp
{
...
...
       server_args             = -s /tftpboot
       disable                 = no
}

You must then restart xinetd for the new configuration to take effect:

[root@sbf ~]# chkconfig tftp on
[root@sbf ~]# service xinetd restart

To check if it is functioning (listening on the default port) use netstat:

#CODE
[root@sbf ~]# netstat -nulp | grep 69
udp        0      0 0.0.0.0:69                  0.0.0.0:*                               22418/xinetd        
udp        0      0 0.0.0.0:69                  0.0.0.0:*                               22379/in.tftpd

To turn it back off:

#CODE
[root@sbf ~]# chkconfig tftp off
[root@sbf ~]# service xinetd restart

To get files from the server using command line (Mac/Linux):

#CODE
[user@client ~]$ tftp 10.0.0.2
tftp> get testFile.txt

Connecting Samba or NFS from Mac

Samba (SMB/CIFS)

To connect to an SMB server:

  1. With the Finder active, from the Go menu, select Connect to Server... .  Alternatively, with the Finder active, press Command-k 
  2. In the Connect to Server window that opens, in the "Address:" field, type cifs:// , followed by the fully qualified domain name (FQDN) or IP address of the server, a forward slash, and then the name of the shared volume (e.g., cifs://10.0.0.4/sharename).
  3. Click Connect
  4. In the authentication window that appears, click OK.

NFS

To connect to an NFS server:

  1. With the Finder active, from the Go menu, select Connect to Server... .  Alternatively, with the Finder active, press Command-k 
  2. In the Connect to Server window that opens, next to the "Address:" field, type nfs:// , followed by the FQDN or IP address of the server, a forward slash, and then the path of the exported share (e.g., nfs://10.0.0.4/media/spycer-vol0). 
  3. Click Connect.

Linux - Connect VNC to root desktop without console login

This is NOT the same screen as when logging in from a keyboard/video/mouse "console".  For remote console access, x11vnc must be running and the console must be logged in first.  This is intended to give remote access to the root (virtual) desktop when a console is unavailable.
from: http://wiki.neddix.com/VNC_Server_Installation_on_CentOS

1. Introduction

This document describes how to install the VNC service on CentOS 5 with X Server.
* This currently requires manually starting the vncserver service *

2. Installation of the VNC Package

#CODE
yum install vnc-server

3. Configure Persistent Desktop Sessions

Make an entry in /etc/sysconfig/vncservers for each user account you want to give VNC access, e.g.
#CODE
VNCSERVERS="1:root 2:dvssan"
VNCSERVERARGS[1]="-geometry 1024x768"
VNCSERVERARGS[2]="-geometry 1024x768"

4. Set VNC Passwords

Run switch user for each account and set the VNC password
#CODE
su - <username>
vncpasswd
exit

5. Configure Service Startup

#CODE
chkconfig vncserver on
service vncserver start

6. Configure Window Manager

By default the twm window manager is used in a VNC session. To start the default window manager instead, uncomment these two lines in $HOME/.vnc/xstartup:
#CODE
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc
Restart the VNC Service
#CODE
service vncserver restart

7. Testing

Run the VNC Viewer (e.g., on a Windows PC) and connect to the server. Append the display number to the hostname, separated by a colon. E.g., with this sample config:
#CODE
10.0.0.2:1
connects to display :1 as user root.

Xsan: Client cannot access certain folders on Xsan volume, or cannot access an entire volume

Xsan: Client cannot access certain folders on Xsan volume, or cannot access an entire volume (http://support.apple.com/kb/TS2742)

The solution is to run the following from terminal logged in as an administrative user:

#CODE
sudo chflags nouchg /Volumes/<volume_name>

QLogic Blade license from 4Gb to 8Gb

The following are the CLI (command line interface) instructions for licensing a 4Gb blade to 8Gb
Command Line Interface Instructions:

  1. Log into the SANbox 5000 Series Fibre Channel switch Command Line Interface via telnet, ssh, or the serial console. 
  2. Enter the CLI command admin start, then the CLI command feature add [license_key], where [license_key] is the key without brackets. 
  3. To list the licenses installed on the SANbox 5000 Series Fibre Channel switch, enter the CLI command log.

Enterprise Fabric Suite or QuickTools Instructions:

  1. From the switch "Faceplate" view, click the "Switch" pull-down menu and select "Features".
  2. In the "Feature Licenses" dialog, select "Add", enter the Authorization Code/License Key, and select "Add Key". 
  3. Four additional ports will become active on the switch. Select "Close" to return to the Faceplate view.

Configure a RHEL MDS as a syslogd loghost (syslog collection server)

(from http://lonesysadmin.net/2011/01/13/how-to-configure-remote-syslogd-on-red-hatcentos-5/)

1. This setup collects the logs from any number of external hosts into /var/log/messages and the other standard logs.

2. Edit /etc/sysconfig/syslog. Add “-r” to the SYSLOGD_OPTIONS line:

#CODE
SYSLOGD_OPTIONS="-m 0 -r"

Restart syslogd with:

#CODE
/usr/bin/sudo /sbin/service syslog restart

(note that the service is ‘syslog’ and not ‘syslogd’)

3. Verify that syslogd is listening on port 514 using netstat:

#CODE
$ sudo netstat -anp | grep 514
udp        0      0 0.0.0.0:514      0.0.0.0:*       5332/syslogd

4. Change another host to use the syslogd host. On another Linux box, the format in /etc/syslog.conf is something like:

#CODE
*.info;cron.!=info;mail.none;local0.notice          @logs.company.com

where logs.company.com is the machine you just set up to listen to syslog messages.

You’ll need to restart that host’s syslog to make the change take effect. If you “tail -f /var/log/messages” on the log host you should be able to use /usr/bin/logger on the client host to make messages appear.

Client:

#CODE
$ logger hey

Syslog server:

#CODE
$ sudo tail -f /var/log/messages
…
Jan 13 15:33:37 clienthost plankers: hey

Keep in mind that syslog will sort messages into the categories it already has defined in /etc/syslog.conf. So if you send mail log data (mail.*) they’ll end up in /var/log/maillog by default.

5. You may wish to change your log rotation schedule to prevent large files. You can do this in /etc/logrotate.conf.

6. Searching, etc. can be done with standard UNIX tools like grep, tail, less, etc. in /var/log.

DotHill Gracefully Power-off

When powering off a DotHill you can log into the WebUI and use the following steps:

1. Login (default username / password = manage / !manage)

2. Click the "Manage" button

 

[Image: DotHill_shutdown1]

 

3. Click the "RESTART SYSTEM" link

4. Choose "Shut Down Both RAID Controllers" from the drop-down box

5. Click the "Shut Down" button

 

[Image: DotHill_shutdown2]

 

6. You will then need to physically power the unit off.

Applying a new StorNext license for HA/failover MDS


The StorNext filesystems must be restarted to pick up the updated licenses; this will cause a short service disruption during the restart.

Do this on both MDSs:

   0) cd /usr/cvfs/config

   1) Rename the existing "license.dat" to something else in case we might need it later.

   2) Copy the new license file to the directory.

   3) Rename the new license file to "license.dat".

On the standby MDS (not necessarily MDS2, check cvadmin to see who is primary and who is standby):

   4) Make sure this is the standby MDS.  Use cvadmin to stop the snfs volumes.  (This will pick up the new licenses.)

On the primary MDS:

   5) Use cvadmin to stop and then start the snfs volumes.

Back on the standby MDS:

   6) Use cvadmin to start the snfs volumes.  (This will pick up the new licenses.)
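Steps 0 through 3 amount to a careful file swap; here is a rehearsal in a scratch directory (on a real MDS, CONFIG_DIR would be /usr/cvfs/config and the new license file comes from Quantum; all paths and contents below are stand-ins):

```shell
CONFIG_DIR=$(mktemp -d)                          # stand-in for /usr/cvfs/config
echo "old key" > "$CONFIG_DIR/license.dat"       # pretend existing license
echo "new key" > /tmp/new_license.dat            # pretend replacement license

cd "$CONFIG_DIR"                                 # step 0
mv license.dat "license.dat.$(date +%F).bak"     # step 1: keep the old file around
cp /tmp/new_license.dat license.dat              # steps 2-3: new file into place
ls -l                                            # dated backup plus new license.dat
```

Keeping a dated backup means you can swap the old license back with a single mv if the new one is rejected.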

StorNext (cvfs) filesystem started / activated but won't mount on MDS (RHEL 5/6)

Be sure the multipath daemon (multipathd) is disabled and /etc/multipath.conf is renamed; otherwise StorNext will not be able to mount the file system and will return:

  • mount.cvfs: Can't mount filesystem 'snfs1': Device or resource busy—System log may contain additional information

dmesg / messages log will display:

  • CvOpenOnePath: Open Failed, disk <test123> error 6 device </dev/sdan>—Could not mount filesystem snfs1, cvfs error ‘File is busy’ (6)

 

* It is also suggested to verify that the Linux Device-Mapper Multipath (DM-Multipath) RPMs are removed from the system.

Thursday, July 25, 2013

Quantum/StorNext/SNFS/CVFS - Windows Permissions using an Active Directory Domain

Quantum/StorNext/SNFS/CVFS - Windows Permissions using an Active Directory Domain information provided by Quantum Support

From the StorNext "Operating Guidelines and Limitations" section of the release notes (beginning with version 3.5.3), as well as in the associated user's guides*:

In StorNext releases prior to 3.5, the StorNext Windows client attempted to keep the UNIX uid, gid and mode bits synchronized with similar fields in the Windows security descriptor. However, these Windows and UNIX fields were often not synchronized correctly due to mapping and other problems. One consequence of this problem was that changing the owner in Windows incorrectly changed the UNIX uid and file permissions and propagated these errors into sub-directories. Beginning with release 3.5, the StorNext Windows client sets the UNIX uid, gid and mode bits only when Windows creates a file. The StorNext Windows client will no longer change the Unix uid, gid or mode bits when a Windows user changes the Windows security descriptor or Read-Only file attribute.

If you change the UNIX mode bits and the file is accessible from Windows, you must change the Windows security descriptor (if Windows Security is configured On) or Read-Only file attribute to ensure the change is reflected on both Windows and UNIX.

 

Below is the description of the settings related to Windows and Active Directory usage in the StorNext file system config files.

The options that are related to Active Directory in a windows environment are:

The "windowsSecurity" variable is passed back to a Microsoft Windows client.

The WindowsSecurity variable enables or disables the use of the Windows Security Reference Monitor (ACLs) on Windows clients. This makes use of an Active Directory domain in which LDAP services must be enabled.

The "unixIdFabricationOnWindows" variable is passed back to a Microsoft Windows client.

The client uses this information to turn on/off "fabrication" of uid/gids from a Microsoft Active Directory obtained GUID for a given Windows user.

A value of yes will cause the client for this file system to fabricate the uid/gid and possibly override any specific uid/gid already in Microsoft Active Directory for the Windows user.

This setting should only be enabled if it is necessary for compatibility with Apple MacOS clients.

The default is false, unless the meta-data server is running on Apple MacOS, in which case it is true.

The "unixNobodyGidOnWindows" variable instructs the FSM to pass this value back to a Microsoft Windows client.

The Windows SNFS clients will then use this value as the gid for a Windows user when no gid can be found using Microsoft Active Directory.

The default value is 60001. This value must be between 0 and 2147483647, inclusive.

The "unixNobodyUidOnWindows" variable instructs the FSM to pass this value back to Microsoft Windows clients.

The Windows SNFS clients will then use this value as the uid for a Windows user when no uid can be found using Microsoft Active Directory.

The default value is 60001. This value must be between 0 and 2147483647, inclusive.

The remaining options for a unix to Windows conversion have to do with files and directories:

The "unixFileCreationModeOnWindows" variable instructs the FSM to pass this value back to Microsoft Windows clients.

The Windows SNFS clients will then use this value as the permission mode when creating a file.

The default  value is 0644.  This value must be between 0 and 0777, inclusive.

The "unixDirectoryCreationModeOnWindows" variable instructs the FSM to pass this value back to Microsoft Windows clients.

The Windows SNFS clients will then use this value as the permission mode when creating a directory.   

The default value is 0755.  This value must be between 0 and 0777, inclusive.
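Pulling the descriptions above together, the corresponding entries in a StorNext file system config file look roughly like this (element names follow the XML-style snippet used in the SAN-permissions notes on this blog; the values shown are the defaults quoted above, and the exact layout varies by StorNext version, so treat this as illustrative):

```xml
<!-- Illustrative fragment; windowsSecurity is site-specific, the rest are defaults -->
<windowsSecurity>true</windowsSecurity>
<unixIdFabricationOnWindows>false</unixIdFabricationOnWindows>
<unixNobodyUidOnWindows>60001</unixNobodyUidOnWindows>
<unixNobodyGidOnWindows>60001</unixNobodyGidOnWindows>
<unixFileCreationModeOnWindows>0644</unixFileCreationModeOnWindows>
<unixDirectoryCreationModeOnWindows>0755</unixDirectoryCreationModeOnWindows>
```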

 

In conclusion, and to try to make this understandable:

If the "windowsSecurity" option is set to false, then the uid/gid for any files created by any Windows client will be the default settings established in the "unixNobodyGidOnWindows" and "unixNobodyUidOnWindows" options.

If the "windowsSecurity" option is set to true, then the StorNext client makes a call to the Windows API DsGetDcName(), which either returns error 1355 (ERROR_NO_SUCH_DOMAIN) or successfully connects to the domain controller.  If an error is returned from the query, the defaults described earlier are used.  If the AD is reached and the logged-in user is a member of that domain, then StorNext expects the uid/gid information under the Unix Attributes tab to be populated and uses it instead of the default settings established in the file system configuration file(s).

 

Warning: Once the "windowsSecurity" option is enabled (configuration option set to true and the file system restarted), the only way to disable it is to rebuild the file system with the "windowsSecurity" setting set to false.  Quantum Support has stated there is no additional metadata overhead, and the only "down side" would have to be inferred from the *first section.

SAN permissions work around

Permissions on a SAN can become very complex.  The most flexible and widely adopted solution is a unified permissions system (e.g., LDAP).  LDAP requires an infrastructure and is outside the scope of this document but, depending on implementation, can offer user accountability and file traceability in addition to permission/access control.

One workaround is to manually synchronize the UIDs for all users of the SAN.  For example, set all users' UIDs to 501 across *nix (Mac, Linux, etc.) and within the StorNext configuration file for Windows clients.

First we need an understanding of the current permissions.  By default, the UID configured for use on DVS SAN equipment is 500.  Mac creates the first user as UID 501.  Windows does not have a *nix-translatable UID by default and requires leveraging services (StorNext, NFS, CIFS/SMB, etc.) to interpret a *nix UID.

The idea of manually synchronizing UIDs is simple, change the UID of all clients connecting to the SAN, or, change the UID that is used by a service allowing access to the SAN Volume.

  • *nix (Mac/Linux) clients with block level (fiber channel) access have direct access to the file system and therefore need to change their local (client) UID to match what is used by the SAN.
  • Windows clients with block level access will need a change made to the StorNext configuration file for each file system it connects to.
  • All clients that connect via service (NFS, CIFS/SMB) will be utilizing the UID defined in the config file of that service.
 
When choosing a UID, take into consideration the number of clients of each type: if you have more Mac clients than Windows/Linux, then using a UID of 501 might be simpler to implement.  Conversely, if you are adding a few Mac systems to an established SAN using UID 500, then changing the UID of the Mac systems makes more sense.
 
Assuming we will use UID 501 across a new SAN making full use of NFS, CIFS/SMB:
Change the StorNext file system configuration (ex. sncfgedit dvs-rt0) file updating:
<unixNobodyUidOnWindows>500</unixNobodyUidOnWindows>
to
<unixNobodyUidOnWindows>501</unixNobodyUidOnWindows>
and restart the file system or cvfs service (service cvfs restart)
 
Change the NFS configuration (ex. /etc/exports) file updating:
/media/spycer-vol0      *(rw,async,no_root_squash,all_squash,insecure,anonuid=500,anongid=500)
to 
/media/spycer-vol0      *(rw,async,no_root_squash,all_squash,insecure,anonuid=501,anongid=500)
and restart the NFS service (service nfs restart)
 
Change the CIFS/SMB configuration (/etc/samba/smb.conf) file updating:
guest account = dvssan
to a user with the correct UID
guest account = smbsan
and restart the Samba service (service smb restart)
or update the dvssan UID to 501
usermod -u 501 dvssan
*Note: this would then require all local filesystem files created/used by dvssan (UID 500) to be updated to UID 501
The permissions on all existing files within the volume would also need to be synchronized.  As root, run the following command:
chown -R 501:501 /path/to/SAN/mountPoint
Once complete, all files within the SAN volume will belong to UID 501, and new files will belong to the current user of whichever client you are connecting with.
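Before and after running that chown, it can help to audit which files still carry the old IDs. A find sketch (the directory and UID here are scratch stand-ins so it can be tried safely; on the SAN you would point it at the mount point with the old UID 500):

```shell
MOUNT_POINT=$(mktemp -d)                  # stand-in for /path/to/SAN/mountPoint
touch "$MOUNT_POINT/clip.mov"             # pretend SAN file
OLD_UID=$(id -u)                          # stand-in for the old UID 500

# -xdev keeps find from wandering onto other mounted filesystems
find "$MOUNT_POINT" -xdev \( -uid "$OLD_UID" -o -gid "$OLD_UID" \) -print
```

An empty result after the chown means nothing on the volume is still owned by the old IDs.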

Commands I always forget...

I always seem to be forgetting stuff.  My sister uses recall as her benchmark of how our aging mother is "doing," brain-wise.  When I hear some of the stuff she expects Mom to remember, I think I must be losing it too, and I'm 45 years younger than my Mom!

>Rsync

rsync is a cool little app that has been around for ages; it allows you to copy stuff from one system to another using ssh.
rsync -avhPn --del -e 'ssh -p xxxxx' --exclude 'Dir_of_BigFiles' ~/Personal/ user@remote_host:Personal/
The -n means "dry run," so it pretends to copy stuff but doesn't. -a means archive (most likely what you want for files like personal photos), -v is verbose in case something goes wrong, -h gives human-readable sizes, -P shows progress, and --del deletes stuff on the destination that is not on the source (so be careful with paths, hence -n).

I run this on my work laptop to sync the personal files (mostly pictures of my kid) from there to my home server.  My home server is allowing ssh incoming on a high port (to keep the script kiddies away) and I keep forgetting the syntax to rsync over ssh on a non standard port.  Oh, I also exclude the directory with big files (movies) since my bandwidth at home is kinda limited.

>SSH tricks

ssh is one of the best nerd tools around!  I use it for connecting to my home when on the road and as a poor man's VPN for web browsing:

ssh -DN 8080 user@host -p xxxxx
The -D dynamically links the local port 8080 (on my laptop) through ssh to the host (my home server).  Then I configure my browser to use a SOCKS proxy on 8080 and BLAM, I'm on my local network surfing the internet or any other web-based service I have at home (XBMC, DD-WRT, development web pages, etc.).
The -N means "don't bother giving me an interactive shell," so the terminal prompt on my laptop can't run commands through this connection (i.e., although I have an ssh connection open I can't do normal ssh stuff through this window), which is nice because it keeps the tunnel from closing because of a time-out.

This also works for connecting to any TCP port within the network of the host (home), so if I want to use VNC I can:
ssh user@host -NL 5900:additional_host:5900
The -L links a single port: it passes traffic from my laptop's port 5900 through "host" into "additional_host" port 5900, allowing my laptop to VNC to localhost:5900 (screen 0) and see the remote system on the network accessible via "host".

Another common use is to remotely connect and use screen for continued administration/development

ssh -t user@host -p xxxxx screen -dR Ivan
This tells ssh to make a connection to the host system (home) as "user" (me) on port xxxxx and launch screen, connecting to or creating a session called Ivan.  The -t forces the creation of a "terminal"; without -t, having ssh launch some applications fails.  Lots of awesome tricks with ssh; I'll put more here in the future.
If you need command substitution to run on the remote system (say, to add the current time to the name of a file you are creating), put an escape character before the $ or you will get the command substitution of the local system.  [ex: ssh root@ "zip -r /path/to/dir/\$(date +%F-%H_%M)-\$(hostname | grep -i something | awk '{print \$2;}')-logs.zip /path/to/things/to/zip"] [hint: notice the \ in front of all the $'s]
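You can see the difference without a remote host by letting sh -c stand in for the remote shell: the child shell plays the role of the remote end ("outer"/"inner" are just marker strings):

```shell
LOCAL=$(sh -c "echo $(echo outer)")     # $( ) expands in the local shell first
REMOTE=$(sh -c "echo \$(echo inner)")   # \$( ) survives quoting and expands in the child
echo "$LOCAL $REMOTE"                   # prints: outer inner
```

The same rule applies to ssh: anything you want the far side to expand needs the backslash.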

>GNU Screen

GNU Screen gives a server admin the ability to connect to a multi-user terminal/shell, or just a persistent shell session that can be disconnected from and reconnected to later without interrupting the commands being run.  Let's say I am parsing a bunch of files looking for duplicates, I'm working on bonding a few NICs together for additional functionality, and I have a little loop running to monitor uptime on a switch, etc, etc, all within a network I access through an ssh server.  I connect, launch a screen session, and use a bunch of screen windows (or multiple sessions) to manage all those tasks... then, when I'm heading home and want to pick it all up once I'm there, I disconnect before I leave and reconnect to that same screen session(s) later as if I never left!

Of course screen is all configurable with options for making life a little easier:

#CODE: .screenrc
   #All single lines
   #shutoff the start up message
startup_message off
   #allow a big scroll back buffer so you don't "lose" anything
defscrollback 5000
   #allows screen redraw for some apps like VIM or LESS
altscreen on
   #give me my normal prompt
shell -/bin/bash
   #give me some "tabs" at the bottom of the screen to help me visualize where I am
caption splitonly "%{= wK}%-w%?%F%{= bw}%:%{= Wk}%? %n %t %{-}%+w %-= "
   #similar, but always shown on the last line (not just when split)
hardstatus alwayslastline "%{= B}%H (ctl-a) %{= r}[ %{= d}%Y-%m-%d %c:%s %{= r}]%{= d} - %{= wk}%-Lw%{= Bw} %n$f*  %t %{-}%+Lw %-= :)"
   #pre-launch screen windows with different options#
   #ssh to my home and launch screen there
screen -t ssh-home 0 ssh -t -D 8080 user@host -p xxxxxx screen -dR Ivan
   #ssh to a VM on my laptop
screen -t ssh-VM 1 ssh -t user@192.168.10.101 screen -dR Ivan
   #ssh to a work system in the lab as a common host but use my .screenrc config
screen -t ssh-lab 2 ssh -t user@host screen -dR Ivan -c .screenrc.Ivan
   #ssh to a freelance location to work on their systems
screen -t freelance 4 ssh -t user@host screen -dR Service
   #give me a normal shell prompt
screen -t bash 5

So I can just launch screen and let it connect to all the places I "usually" go via ssh.  It could also be running scripts or other apps so learn about the .screenrc file.

>Read a text file in a zip in place


Use less to read a text file inside a zip:
unzip -p [archive.zip] [inner/zip/path/to/file.txt] | less
unzip -c [archive.zip] [inner/zip/path/to/file.txt] | less
In a tar.gz:
tar --to-stdout -zxf file.tar.gz | less
or just a file that is gzipped (note the -d; gzip -c alone would compress, not decompress):
gzip -dc file.gz | less
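To pull just one member out of a tarball instead of streaming the whole thing, tar takes the member path and -O (capital o) sends it to stdout. The archive and member names here are examples:

```shell
# One member of a tar.gz, straight to less, no extraction to disk
tar -xzOf archive.tar.gz path/inside/file.txt | less
```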
I think this is a good nerd start!  I'm gonna try and keep this up to date to concatenate all the useful nerd info I find out there.

Lion (OSX 10.7 Xsan 2.3) client configuration for StorNext (step by step)



(this procedure applies to Mountain Lion also)

Lion (Xsan 2.3) client configuration for StorNext MDSs. (includes SpycerBoxes)

On the Mac you wish to attach to StorNext, make sure a Fibre Channel card is installed, the Fibre Channel switch is zoned, and Fibre cables are attached.

Verify that you can "ping" the MDS and that "cvlabel -l" sees all the expected LUNs.
1. Enable Xsan from System Preferences. There must be a Fibre card installed for this option to be available.
2. Start a terminal session, then login as admin: sudo su
3. Change to the following directory: cd /Library/Preferences/Xsan/
4. Create or update the fsnameservers file to point to the MDS Meta IP address(es).
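fsnameservers is just a list of metadata IPs, one per line; the addresses below are placeholders for your MDS Meta IPs:

```shell
# Write the MDS metadata address list (placeholder IPs)
printf '10.0.0.1\n10.0.0.2\n' > /Library/Preferences/Xsan/fsnameservers
cat /Library/Preferences/Xsan/fsnameservers
```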
5. Create or update the config.plist (update the metadataNetwork string as needed):
#CODE: config.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict> 
 <key>computers</key> 
 <array/>
 <key>metadataNetwork</key>
 <string>10.0.0.0/24</string>
 <key>ownerEmail</key>
 <string></string>
 <key>ownerName</key>
 <string></string>
 <key>role</key> 
 <string>CLIENT</string>
 <key>sanName</key>
 <string>DVS Test SAN</string>
</dict>
</plist>


6. Create or update the automount.plist (update the filesystem name, here "spycerbox", as needed):
#CODE: automount.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
 <dict>
 <key>spycerbox</key> 
 <dict>
 <key>AutoMount</key>
 <string>rw</string>
 <key>MountOptions</key>
 <dict/>
 </dict>
 </dict>
</plist>


7. Stop and start Xsan; the volume should show up under the Devices list in the Finder.
        launchctl unload /System/Library/LaunchDaemons/com.apple.xsan.plist
        launchctl load /System/Library/LaunchDaemons/com.apple.xsan.plist


Notes:
Tested mounting and copying against StorNext 3.5.1, 4.0.1.1, and 4.2.

Quantum officially supports Xsan 2.3 clients starting from StorNext 4.1.2.

Apple officially supports Xsan 2.3 clients as of StorNext 4.1.1.

Attached is the ATTO Celerity 8X driver set. This appears to clear some issues where not all LUNs are seen by 10.7 and 10.8 systems.


#CODE: example Automount.plist (4 filesystems)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
 <dict>
 <key>VADER</key>
 <dict>
 <key>AutoMount</key>
 <string>rw</string>
 <key>MountOptions</key>
 <dict/>
 </dict>
 <key>YODA</key>
 <dict>
 <key>AutoMount</key>
 <string>rw</string>
 <key>MountOptions</key>
 <dict/>
 </dict>
 <key>WOOKIEE</key>
 <dict>
 <key>AutoMount</key>
 <string>rw</string>
 <key>MountOptions</key>
 <dict/>
 </dict>
 <key>X-WING</key>
 <dict>
 <key>AutoMount</key>
 <string>rw</string>
 <key>MountOptions</key>
 <dict/>
 </dict>
 </dict>
</plist>

Multiple output files with dd utility

From: http://joshhead.wordpress.com/2011/08/04/multiple-output-files-with-dd-utility/


Multiple output files with dd utility


This is a note for personal reference and in case anybody finds this while searching some day.
Did you know that it is possible to redirect output not just to multiple files, but multiple commands in a shell? I speak in the context of Bash on Linux but this probably applies to some other environments too.
A few days ago I wondered if I could write an image file dumped with dd to two devices at once.
Copying to one is easy, and goes something like this:
# >   dd if=image.bin of=/dev/sdc
The “if” and “of” arguments specify input file and output file. /dev/sdc might point to a hard drive or USB stick (on my system today, it was a Compact Flash card). Please don’t try to run it if you don’t understand what it’s doing, or you might destroy some data :) .
So what about two outputs? This won’t work:
# >   dd if=image.bin of=/dev/sdc of=/dev/sdh
It will run, but not the way I want it to. dd will ignore all “of” arguments except for the last one.
I can do it like this, by just running two copies at the same time:
# >   dd if=image.bin of=/dev/sdc & dd if=image.bin of=/dev/sdh &
The ampersands put the commands into the background so that multiple commands can run at once. This works well, but it feels like a waste to read the input file from disk twice!
I am guessing that thanks to lots of RAM and some caching, it is fast enough, but it made me wonder… can I reuse the same disk reads by way of some output redirection in the shell?
Already having some understanding of standard in, standard out, and pipes, I knew I could do this much:
# >   dd if=image.bin | dd of=/dev/sdh
dd will write to standard output if not given an “of” argument, and it reads from standard input with no “if” argument. The pipe character, |, connects the standard output of one command to the standard input of another.
I even knew that the tee command could be used to split standard input into multiple outputs.
# >   echo "hello" | tee file1.txt file2.txt
The above will write hello to file1.txt, file2.txt, and standard out. I might even be able to output from tee directly to a block device like /dev/sdc, but I’m not sure it will treat all those bits and blocks as nicely as dd will. I want to send the output to several more dd processes. So how do I do it?
The key is the >() construct, which is a shell feature that allows an output command to be used instead of an output file. The shell creates a temporary file name and substitutes it in place of that expression. The command sending the output will write to the temporary file, and the output will be redirected to the standard input of whatever command is inside the parenthesis. I knew this would be possible somehow!
That explanation probably got a little confusing, so here is an attempt that I came up with:
# >   dd if=image.bin | tee >(dd of=/dev/sdc) >(dd of=/dev/sdh)
You could do it that way, but since tee writes to standard output in addition to any files it is given, you will dump an extra copy of all the bits right into your terminal output, and it will make a big mess. Better to catch tee’s standard output in a nice pipe for the last dd.
# >   dd if=image.bin | tee >(dd of=/dev/sdc) | dd of=/dev/sdh
And that’s it! The >() is something I just picked up today, so if you understand it better than I do, please correct me on any mistakes you noticed.
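The pattern scales to more outputs too, and it's easy to dry-run with plain files before aiming it at real block devices. The filenames below are stand-ins:

```shell
# Three copies from a single read of the input: two via process
# substitution, the third from tee's own stdout
dd if=image.bin | tee >(dd of=copy1.bin) >(dd of=copy2.bin) | dd of=copy3.bin
```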

One Response to “Multiple output files with dd utility”

  1. greg Says:
    You can also use the dcfldd command, it lets you use as many “of=” as you want on the command line. I wish they named it dddcfl or something else entirely as I don’t use dd or dcfldd very often and it’s hard to remember dcfldd. I found this blog post because I did a google search because I couldn’t remember how to spell it. I have no idea what kind of overhead tee causes, but I’d assume dcfldd is more efficient since it’s all one binary executable.

Mac OS X Security Part 2: The Mac Forensic Toolkit By Ryan Faas

From: http://www.peachpit.com/articles/article.aspx?p=707908&amp;seqNum=2
I didn't wanna lose this info!


Unix Tools Included with Mac OS X

Several Unix tools are included with Mac OS X that can be useful in forensic investigations. The first of these, the dd command, was discussed in part 1 of this series as a method for acquiring a forensic disk image.


While many Mac utilities can create disk images, dd is an optimal choice for forensic use because it can create a disk image without mounting the drive (which would contaminate it). dd can also be used with a variety of arguments to modify how the disk image is created, including an option to split the image into multiple segments, which can be a useful tool if you are asked to present the image to another party (such as a law enforcement agency or attorney) because it enables you to create segments that can easily be burned onto CD/DVD.
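Splitting an image into burnable segments can be sketched with dd piped into split; the device name, block size, and segment size below are illustrative:

```shell
# Image a disk and cut the stream into ~4 GB pieces
# (named image.part.aa, image.part.ab, ...)
dd if=/dev/disk2 bs=64k | split -b 4000m - image.part.

# Reassemble when it's time to analyze or restore
cat image.part.* > image.bin
```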


To use dd effectively, however, you need to be able to identify which disk connected to your forensic Mac is the suspect disk (as well as any other disks connected to the system). You can use ls /dev/disk? to see a list of connected drives. Likewise, you can use the ioreg command with the -c "IOMedia" argument to get additional information about available drives.


If you want to examine the partition tables of either the connected but not mounted suspect drive or a copy or image of the drive, you might also find the pdisk command useful. For examining the partition table of a drive image, the hdiutil pmap command can also be helpful. Also, as mentioned in part 1 of the series, you can use the mount command to mount connected disks to a forensic system, including the argument to mount the suspect drive as read-only for inspection prior to imaging or copying, and you can use the -shadow argument to mount a disk image using a shadow file with the hdiutil attach command. This enables you to work with the disk as if it were writable, but preserve its contents by writing any changes to a shadow file that will be destroyed when the disk image is unmounted.


Finally, the command line grep utility as well as the command-line variations of Spotlight can assist you in locating data from a forensic image. You can also use the GUI version of Spotlight and the Finder to search for data on a forensic image.

dcfldd

dcfldd is an open source Unix tool that is based on dd but has been expanded to improve its use in forensic investigations. Although not included with Mac OS X, dcfldd can be downloaded and compiled to run under Mac OS X. One of the major advantages of dcfldd over dd is that it supports the hashing of data when disk images are created, allowing for verification that the contents of the image have not been modified since the image was acquired. In a situation with legal consequences, this provides another item in your chain of evidence to prove that the evidence you acquired has not been tampered with.
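With stock dd you can approximate that one-pass hashing by teeing the stream through a digest tool; openssl is used here as a stand-in, and the device/file names are examples:

```shell
# Hash the image while writing it, reading the source only once
dd if=/dev/disk2 | tee >(openssl dgst -sha256 > image.sha256) | dd of=image.bin

# Later: recompute and compare against the recorded digest
openssl dgst -sha256 < image.bin
```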


dcfldd also includes some other features, including the capability to output to multiple disks/images in a single operation. This is a useful timesaver if you are creating multiple copies of the suspect disk to be stored as evidence or used for investigative purposes. dcfldd can also provide progress updates during copy and image operations so that you have an idea how long they will take, something that dd doesn’t provide. Other features that both dd and dcfldd share are also more configurable under dcfldd, which can make it a better choice in many circumstances.


Sleuth Kit and Autopsy

Sleuth Kit is an open source forensic suite available for Unix that has been verified to run effectively under Mac OS X. Autopsy is a web-based GUI for the commands included in Sleuth Kit. Sleuth Kit includes both analysis tools and case management tools. The analysis tools enable you to examine suspect disks/images in a variety of ways, whereas the case management tools provide a solution for recording your notes and evidence.


Among Sleuth Kit’s analysis tools are tools for listing files and directories, tools for examining and sorting files based on type and content, a tool for developing a timeline of actions performed while the suspect drive was in use, search tools, tools for analyzing the metadata and data structures on a suspect disk, and tools for examining the disk images and the partition tables they contain. Sleuth Kit’s case management features include a tool for organizing multiple investigations, a tool for taking notes, and a tool for establishing a timeline of events based on file activity and logs. Sleuth Kit can also be used to verify image integrity and generate reports of your findings.


Black Bag Technologies Mac Forensic Software

BlackBag Technologies is a company that specializes in data forensic tools and consulting. Its CTO, Derrick Donnelly, is considered the foremost expert on Mac forensic analysis. As a result, it is not surprising that the BlackBag Technologies Mac Forensic Software (BlackBag MFS) suite is a comprehensive, Mac OS X–specific set of tools covering every facet of Mac OS X forensic investigation for acquiring and analyzing a forensic image.


BlackBag MFS includes 19 utilities to aid in forensic investigations including browsing and scanning directories, investigating suspect files, examining file header information and metadata, searching for hidden files, discerning the type and creator codes of files, sorting files by all manner of criteria, viewing image files, searching comment data for files, and breaking up large collections of files in manageable chunks. It also provides easy-to-use GUI tools for disabling disk automounting and for mounting drives as read-only. BlackBag MFS is also designed to work well with many of the Unix command-line tools discussed earlier as well as forensic tools for other platforms.


MacQuisition

MacQuisition is a tool also developed by Black Bag Technologies. It is designed to make the process of acquiring a forensic image much simpler. MacQuisition is a bootable Mac OS X DVD that can be used to boot a suspect computer and acquire a forensic image, saving it either to a locally mounted external drive or to a network storage location. While MacQuisition doesn’t provide tools for analyzing that image, it does provide a very simple method for acquiring an image.


MacForensicsLab

SubRosaSoft’s MacForensicsLab is the second commercial Mac OS X–specific forensic suite on the market. Like BlackBag MFS, MacForensicsLab includes a number of analysis features as well as tools to make acquiring a forensic disk image much simpler (including the ability to dynamically turn auto-disk mounting on or off). Like Sleuth Kit and Autopsy, MacForensicsLab also includes built-in tools for notetaking and case management and for organizing your evidence as you find it. MacForensicsLab can then combine all this information into a variety of easy-to-format reports.


One of the excellent features of MacForensicsLab is that it is a completely self-contained environment. From the process of initially creating and detailing a case/investigation through image acquisition and analysis, notetaking, and final reporting, the investigator never has to leave the application’s interface. It even includes a special terminal feature for running command-line tools. This provides several advantages, most notably the fact that everything is easily and automatically recorded for later use as evidence and that there is a consistency to not only the interface but also to the actions and methods used during investigation.


MacForensicsLab’s interface is very straightforward and user-friendly, but it also provides a powerful set of tools for searching, sorting, and notating data and evidence. MacForensicsLab can also be used to recover deleted or lost data. It also includes features specifically designed for examining image files for "skin tones" to make identifying pornographic content simpler, as well as to search for potential credit card and social security number strings within files—two major focuses of criminal or inappropriate activity investigations.