GNU Parallel prints a citation notice on first use; silence it by running 'parallel --bibtex' once or by passing '--no-notice'.
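Copy every file under the current directory to an EFS mount, running 30 rsync jobs at once (note: {} expands to each file's relative path, and rsync -a copies it into the destination root, flattening the tree; use rsync -aR instead to preserve the directory structure):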
find -L . -type f | parallel -j 30 rsync -a {} /DESTINATION_EFS_FILESYSTEM
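List remote branches sorted by most recent commit: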
for branch in $(git branch -r | grep -v HEAD); do echo -e "$(git show --format='%ci %cr' $branch | head -n 1)\t$branch"; done | sort -r
So you can put your data into Glacier in two different ways:
1) directly to Glacier via the API
2) store it in S3, then a lifecycle policy moves it to Glacier (see the sketch below)
Warning: downloading from Glacier is expensive, as is deleting objects stored for less than 3 months.
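A minimal sketch of the second approach via the CLI (BUCKET_NAME is a placeholder and the 30-day transition delay is an arbitrary example):
aws s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration '{"Rules": [{"ID": "to-glacier", "Status": "Enabled", "Filter": {"Prefix": ""}, "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]}]}'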
CloudFormer: creates AWS CloudFormation templates from existing AWS resources.
tmux <3
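Run a monitor script every 30 seconds and color the tmux window status green or red according to its exit code (window-status-bg is the option name in tmux before 2.9; newer versions use window-status-style):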
while true; do python monitor_beanstalk.py; bg_color=$([ $? -eq 0 ] && echo "green" || echo "red"); tmux set-window-option -t${TMUX_PANE} window-status-bg $bg_color; sleep 30; clear; done
aws efs describe-file-systems| jq '.FileSystems|.[]|[.Name, .SizeInBytes.Timestamp, .SizeInBytes.Value]' -c
Returns one line per EFS.
Each line is an array with:
[0] = the EFS name
[1] = the timestamp at which the size was computed
[2] = the size in bytes
To get the size in GiB:
aws efs describe-file-systems| jq '.FileSystems|.[]|[.Name, .SizeInBytes.Timestamp, .SizeInBytes.Value / 1024 /1024 / 1024]' -c
aws efs describe-file-systems| jq '.FileSystems|.[]|[.Name, .SizeInBytes.Value / 1024 /1024 / 1024]' -c
aws s3api list-objects --bucket BUCKET_NAME --output json --query "[sum(Contents[].Size), length(Contents[])]"
Returns the size in bytes and the number of objects.
With a file containing the bucket names (get-bucket-location returns null for us-east-1, hence the fallback):
for BUCKET_NAME in $(cat s3list); do echo -n "$BUCKET_NAME "; region=$(aws s3api get-bucket-location --bucket $BUCKET_NAME | jq '.LocationConstraint' -r); [ "$region" = "null" ] && region=us-east-1; aws s3api list-objects --region $region --bucket $BUCKET_NAME --output json --query "sum(Contents[].Size)"; done
Use mount's noresvport option so that mount does not use a source port below 1024.
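For example, mounting an EFS file system with noresvport could look like this (the file-system DNS name and mount point are placeholders):
sudo mount -t nfs4 -o nfsvers=4.1,noresvport fs-12345678.efs.eu-west-1.amazonaws.com:/ /mnt/efs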
Why do you have to? Tradition, mostly. Once upon a time, restricting NFS to privileged ports (below 1024) was considered a security measure. Back when people were using mainframe computers, this made sure that the NFS software on the client side was part of the OS / approved by the administrator, since a program can only use a privileged port if it is run by the root user. Today this makes no sense, because anyone can own a computer and have root access, so it no longer means anything in terms of security.
By default, many NFS servers don't allow non-privileged source ports. Some NFS clients (such as Ubuntu's) default to using a privileged source port unless otherwise specified, which is why your Linux client works without issue. Clearly, the OS X client doesn't do this. I don't know if that was an Apple design choice or something inherited from BSD. I know that Solaris also defaults to a non-privileged port.
The two ways of avoiding this problem are telling the OS X client to use a privileged port, as you discovered, or configuring your NFS server to allow non-privileged ports (look it up in your server's documentation).
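On a Linux NFS server, for instance, non-privileged source ports are allowed per export with the insecure option in /etc/exports (the export path and network are placeholders):
/export 192.168.1.0/24(rw,insecure)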
How do you get OS X to use a privileged port using a GUI? As far as I know, you can't on versions > 10.6. One used to be able to mount NFS shares in Disk Utility and type in extra options, but that was removed. It was never a simple button or anything. NFS is hardly something most of the "non-techy" crowd need, so I guess it wasn't a priority, and there are reasons routinely using privileged ports isn't a great idea.
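From the command line it is still possible: a sketch, with the server and paths as placeholders (resvport requires root, hence sudo):
sudo mount -t nfs -o resvport server:/export /mnt/nfs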
I haven't tried it, but http://www.bresink.com/osx/NFSManager.html seems to allow configuration of OS X's NFS features without the command line.