Apache Bench – Doing ab Performance Tests The Right Way
Apache Bench (ab) basics:
- Run each test for about 5 minutes to get realistic numbers.
- Step through increasing concurrency levels: 1/10/50/100/200/300/400/500/800/1000 (see the sketch below).
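A minimal sketch of how such a series of runs could be scripted; the target URL, the 300-second run length and the log file names are placeholders, not part of the original post:

# hypothetical target URL; -t limits each run to ~5 minutes, -c sets the concurrency level
for c in 1 10 50 100 200 300 400 500 800 1000; do
    ab -k -t 300 -c $c http://www.example.com/ > ab_c${c}.log 2>&1
done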
a@macbook:~/$ ls
ls: illegal option -- -
usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]
# macOS: use -G instead of the GNU-style --color=auto
alias ls='ls -G'

# check ~/.bashrc, ~/.bash_aliases and ~/.profile for the Linux-style alias:
a@macbook:~$ grep -Es "ls\s+--color" ~/.bash* ~/.profile
/Users/a/.bash_aliases:#alias ls='ls --color=auto'
/Users/a/.bashrc: #alias ls='ls --color=auto'
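One way to keep a single alias file working on both Linux and macOS is to probe which ls flavor is installed; this is just a sketch for ~/.bash_aliases:

# GNU ls understands --color, BSD ls (macOS) needs -G instead
if ls --color=auto -d . >/dev/null 2>&1; then
    alias ls='ls --color=auto'
else
    alias ls='ls -G'
fi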
# sudo netstat -lnptu | grep :<port>
# check what is running on port 80
sudo netstat -lnptu | grep :80
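On newer systems the same information is available via ss, or via lsof if you want to see the owning process right away; both shown here for port 80 as an example:

sudo ss -lnptu | grep :80     # ss is the modern netstat replacement
sudo lsof -i :80              # which process owns the socket on port 80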
vmstat 1
sudo apt-get install -y htop
sudo htop
sudo apt-get install atop -y
sudo atop
sudo apt-get purge -y firefox firefox-*
cd ~
sudo rm -rf .mozilla/firefox/ .macromedia/ /etc/firefox/ /usr/lib/firefox/ /usr/lib/firefox-addons/
sudo apt-get purge -y firefox firefox-*
sudo apt-get install -y firefox
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_PAPER = "de_DE.UTF-8",
    LC_ADDRESS = "de_DE.UTF-8",
    LC_MONETARY = "de_DE.UTF-8",
    LC_NUMERIC = "de_DE.UTF-8",
    LC_TELEPHONE = "de_DE.UTF-8",
    LC_IDENTIFICATION = "de_DE.UTF-8",
    LC_MEASUREMENT = "de_DE.UTF-8",
    LC_CTYPE = "UTF-8",
    LC_TIME = "de_DE.UTF-8",
    LC_NAME = "de_DE.UTF-8",
    LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale-gen en_US.UTF-8
sudo localedef -i en_US -f UTF-8 en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales
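A quick sanity check afterwards that the new settings are actually picked up:

locale            # should list en_US.UTF-8 everywhere, without "Cannot set LC_*" warnings
perl -e exit      # should no longer print the locale warning shown above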
#!/bin/bash
fullfilepath=$1
filename="${fullfilepath%.*}"
echo $filename

# determine file extension from the audio codec ffmpeg reports
extension=$(ffmpeg -i "$fullfilepath" 2>&1 | grep Audio | sed 's/.*Audio: \([a-z0-9]\+\).*/\1/ig')
echo $extension

audiofilename="$filename.$extension"
echo $audiofilename

ffmpeg -i "$fullfilepath" -vn -acodec copy "$audiofilename"
#ffmpeg -i "$fullfilepath" -vn -ab 160k -ac 2 -ar 44100 "$filename.mp3"
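Assuming the script above is saved as extract-audio.sh (the video filename below is just an example), usage looks like this:

chmod +x extract-audio.sh
./extract-audio.sh ~/Videos/talk.mp4
# writes ~/Videos/talk.aac (or talk.mp3, talk.vorbis, ... depending on the codec of the audio stream)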
# remove the old postgres version (not data)
sudo apt-get remove -y postgresql postgresql-9.3

# add package sources list, key and update
sudo sh -c "echo 'deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main' > /etc/apt/sources.list.d/pgdg.list"
wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update -y --fix-missing
sudo apt-get install -y libpq-dev postgresql-9.4
Setting up postgresql-common (165) ...
Can't exec "insserv": No such file or directory at /usr/sbin/update-rc.d line 203.
update-rc.d: error: insserv rejected the script header
dpkg: error processing package postgresql-common (--configure):
sudo ln -s /usr/lib/insserv/insserv /sbin/insserv
# Add current user to vboxsf group in Ubuntu guest OS
sudo usermod -aG vboxsf $USER

# You can access the share by making the user, or group id of 1000, a member of group vboxsf.
# This is done by changing the vboxsf line in the /etc/group file. May require reboot.
# File permission issues with shared folders under VirtualBox (Ubuntu Guest, Windows Host)

# add the following lines to .bashrc
mkdir -p ~/host/share
sudo mount -t vboxsf -o uid=1000,gid=1000 share ~/host/share
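As an alternative to mounting from .bashrc, an /etc/fstab entry can make the mount permanent; the share name, mount point and uid/gid 1000 below are assumptions, and the vboxsf module from the Guest Additions has to be available at boot:

echo 'share /home/a/host/share vboxsf uid=1000,gid=1000 0 0' | sudo tee -a /etc/fstab
sudo mount -a   # test the new entry without rebooting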
if sudo grep -q "$USER ALL=NOPASSWD: ALL" /etc/sudoers; then
    echo "passwordless sudo already active"
else
    echo "setting sudo without password for $USER"
    sudo sh -c 'echo "'$USER' ALL=NOPASSWD: ALL" >> /etc/sudoers'
fi
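A slightly safer variant is a drop-in file under /etc/sudoers.d, validated with visudo before logging out; this is a sketch of the same NOPASSWD rule as above:

echo "$USER ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/$USER
sudo chmod 0440 /etc/sudoers.d/$USER
sudo visudo -c   # syntax check of all sudoers files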
find . -type f -printf "%C@ %p\n" | sort -rn | head -n 10
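The same idea with human-readable timestamps instead of raw epoch seconds (GNU find assumed):

find . -type f -printf "%CY-%Cm-%Cd %CH:%CM %p\n" | sort -r | head -n 10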
ssh -A -t <jump-user>@<jump-server> ssh -A -X <destination-server>

# example to go directly to far-away-server via jump.server:
ssh -A -t jump-user@jump.server.net ssh -A -X far-user@far-away-server.net
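The hop can also be made permanent in ~/.ssh/config so a plain "ssh far" goes through the jump server automatically; the host alias and names below are placeholders, not from the original post:

cat >> ~/.ssh/config <<'EOF'
Host far
    HostName far-away-server.net
    User far-user
    ForwardAgent yes
    ProxyCommand ssh -W %h:%p jump-user@jump.server.net
EOF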
# installation
sudo apt-get install -y ncdu
ncdu -x /

# Since scanning a large directory may take a while, you can scan a directory
# and export the results for later viewing:
ncdu -1xo- / | gzip > /tmp/export.gz
# ...some time later:
zcat /tmp/export.gz | ncdu -f-

# To export from a cron job, make sure to replace -1 with -0 to suppress any unnecessary output.
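A possible crontab entry along those lines (root's crontab, arbitrary schedule and output path):

# nightly export of the root filesystem at 03:00; -0 keeps the job silent
0 3 * * * ncdu -0xo- / | gzip > /var/cache/ncdu-root.gz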
> man ncdu

NAME
    ncdu - NCurses Disk Usage

SYNOPSIS
    ncdu [options] dir

DESCRIPTION
    ncdu (NCurses Disk Usage) is a curses-based version of the well-known 'du',
    and provides a fast way to see what directories are using your disk space.

OPTIONS

  Mode Selection
    -h      Print a short help message and quit.
    -v      Print ncdu version and quit.
    -f FILE Load the given file, which has earlier been created with the -o option.
            If FILE is equivalent to -, the file is read from standard input.
            For the sake of preventing a screw-up, the current version of ncdu will
            assume that the directory information in the imported file does not
            represent the filesystem on which the file is being imported. That is,
            the refresh and file deletion options in the browser will be disabled.
    dir     Scan the given directory.
    -o FILE Export all necessary information to FILE instead of opening the browser
            interface. If FILE is -, the data is written to standard output. See the
            examples section below for some handy use cases. Be warned that the
            exported data may grow quite large when exporting a directory with many
            files. 10.000 files will get you an export in the order of 600 to 700 KiB
            uncompressed, or a little over 100 KiB when compressed with gzip. This
            scales linearly, so be prepared to handle a few tens of megabytes when
            dealing with millions of files.

  Interface options
    -0      Don't give any feedback while scanning a directory or importing a file,
            other than when a fatal error occurs. Ncurses will not be initialized
            until the scan is complete. When exporting the data with -o, ncurses will
            not be initialized at all. This option is the default when exporting to
            standard output.
    -1      Similar to -0, but does give feedback on the scanning progress with a
            single line of output. This option is the default when exporting to a
            file. In some cases, the ncurses browser interface which you'll see after
            the scan/import is complete may look garbled when using this option. If
            you're not exporting to a file, -2 is probably a better choice.
    -2      Provide a full-screen ncurses interface while scanning a directory or
            importing a file. This is the only interface that provides feedback on
            any non-fatal errors while scanning.
    -q      Quiet mode. While scanning or importing the directory, ncdu will update
            the screen 10 times a second by default, this will be decreased to once
            every 2 seconds in quiet mode. Use this feature to save bandwidth over
            remote connections. This option has no effect when -0 is used.
    -r      Read-only mode. This will disable the built-in file deletion feature.
            This option has no effect when -o is used, because there will not be a
            browser interface in that case. It has no effect when -f is used, either,
            because the deletion feature is disabled in that case anyway.

  Scan Options
    These options affect the scanning progress, and have no effect when importing
    directory information from a file.

    -x      Do not cross filesystem boundaries, i.e. only count files and directories
            on the same filesystem as the directory being scanned.
    --exclude PATTERN
            Exclude files that match PATTERN. The files will still be displayed by
            default, but are not counted towards the disk usage statistics. This
            argument can be added multiple times to add more patterns.
    -X FILE, --exclude-from FILE
            Exclude files that match any pattern in FILE. Patterns should be
            separated by a newline.
    --exclude-caches
            Exclude directories containing CACHEDIR.TAG. The directories will still
            be displayed, but not their content, and they are not counted towards the
            disk usage statistics. See http://www.brynosaurus.com/cachedir/

KEYS
    ?                 Show help + keys + about screen
    up, down, j, k    Cycle through the items
    right, enter, l   Open selected directory
    left, <, h        Go to parent directory
    n                 Order by filename (press again for descending order)
    s                 Order by filesize (press again for descending order)
    C                 Order by number of items (press again for descending order)
    a                 Toggle between showing disk usage and showing apparent size.
    d                 Delete the selected file or directory. An error message will be
                      shown when the contents of the directory do not match or do not
                      exist anymore on the filesystem.
    t                 Toggle dirs before files when sorting.
    g                 Toggle between showing percentage, graph, both, or none.
                      Percentage is relative to the size of the current directory,
                      graph is relative to the largest item in the current directory.
    c                 Toggle display of child item counts.
    e                 Show/hide 'hidden' or 'excluded' files and directories. Please
                      note that even though you can't see the hidden files and
                      directories, they are still there and they are still included
                      in the directory sizes. If you suspect that the totals shown at
                      the bottom of the screen are not correct, make sure you haven't
                      enabled this option.
    i                 Show information about the current selected item.
    r                 Refresh/recalculate the current directory.
    q                 Quit

EXAMPLES
    To scan and browse the directory you're currently in, all you need is a simple:
        ncdu

    If you want to scan a full filesystem, your root filesystem, for example, then
    you'll want to use -x:
        ncdu -x /

    Since scanning a large directory may take a while, you can scan a directory and
    export the results for later viewing:
        ncdu -1xo- / | gzip >export.gz
        # ...some time later:
        zcat export.gz | ncdu -f-

    To export from a cron job, make sure to replace -1 with -0 to suppress any
    unnecessary output.

    You can also export a directory and browse it once scanning is done:
        ncdu -o- | tee export.file | ./ncdu -f-

    The same is possible with gzip compression, but is a bit kludgey:
        ncdu -o- | gzip | tee export.gz | gunzip | ./ncdu -f-

    To scan a system remotely, but browse through the files locally:
        ssh -C user@system ncdu -o- / | ./ncdu -f-

    The -C option to ssh enables compression, which will be very useful over slow
    links. Remote scanning and local viewing has two major advantages when compared
    to running ncdu directly on the remote system: You can browse through the scanned
    directory on the local system without any network latency, and ncdu does not keep
    the entire directory structure in memory when exporting, so you won't consume
    much memory on the remote system.

HARD LINKS
    Every disk usage analysis utility has its own way of (not) counting hard links.
    There does not seem to be any universally agreed method of handling hard links,
    and it is even inconsistent among different versions of ncdu. This section
    explains what each version of ncdu does.

    ncdu 1.5 and below does not support any hard link detection at all: each link is
    considered a separate inode and its size is counted for every link. This means
    that the displayed directory sizes are incorrect when analyzing directories which
    contain hard links.

    ncdu 1.6 has basic hard link detection: When a link to a previously encountered
    inode is detected, the link is considered to have a file size of zero bytes. Its
    size is not counted again, and the link is indicated in the browser interface
    with a 'H' mark. The displayed directory sizes are only correct when all links to
    an inode reside within that directory. When this is not the case, the sizes may
    or may not be correct, depending on which links were considered as "duplicate"
    and which as "original". The indicated size of the topmost directory (that is,
    the one specified on the command line upon starting ncdu) is always correct.

    ncdu 1.7 and later has improved hard link detection. Each file that has more than
    two links has the "H" mark visible in the browser interface. Each hard link is
    counted exactly once for every directory it appears in. The indicated size of
    each directory is therefore, correctly, the sum of the sizes of all unique inodes
    that can be found in that directory. Note, however, that this may not always be
    same as the space that will be reclaimed after deleting the directory, as some
    inodes may still be accessible from hard links outside it.

BUGS
    Directory hard links are not supported. They will not be detected as being hard
    links, and will thus be scanned and counted multiple times.

    Some minor glitches may appear when displaying filenames that contain multibyte
    or multicolumn characters.

    All sizes are internally represented as a signed 64bit integer. If you have a
    directory larger than 8 EiB minus one byte, ncdu will clip its size to 8 EiB
    minus one byte.

    Please report any other bugs you may find at the bug tracker, which can be found
    on the web site at http://dev.yorhel.nl/ncdu

AUTHOR
    Written by Yoran Heling <projects@yorhel.nl>.

SEE ALSO
    du(1)
wget -erobots=off -r http://www.guguncube.com
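For larger sites a gentler variant with delays and a bandwidth cap is probably wiser; the limits below are arbitrary examples:

wget -e robots=off --mirror --wait=1 --limit-rate=200k http://www.guguncube.com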
sudo update-alternatives --config editor
# There are 5 choices for the alternative editor (providing /usr/bin/editor).
#
#   Selection    Path                 Priority   Status
# ------------------------------------------------------------
# * 0            /bin/nano             40        auto mode
#   1            /bin/ed              -100       manual mode
#   2            /bin/nano             40        manual mode
#   3            /usr/bin/mcedit       25        manual mode
#   4            /usr/bin/vim.basic    30        manual mode
#   5            /usr/bin/vim.tiny     10        manual mode
#
# Press enter to keep the current choice[*], or type selection number: 4
# update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/editor (editor) in manual mode
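The same switch can be done non-interactively, which is handy in provisioning scripts:

sudo update-alternatives --set editor /usr/bin/vim.basic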
# install numfmt through coreutils
sudo apt-get install -y coreutils

alias ducks='sudo du -cbs * | sort -rn| head -11|numfmt --field 1 --to=iec|column -t'
> cat ~/.bash_aliases
alias ducks='sudo du -cbs * | sort -rn| head -11|numfmt --field 1 --to=iec|column -t'
..
> ducks
10TB  this
5GB   is
2MB   hello
10KB  world
# remove and purge old skype versions
sudo apt-get remove -y skype skype-bin:i386 skype:i386
sudo apt-get purge -y skype skype-bin:i386 skype:i386

# remove and purge old sni-qt versions
sudo apt-get remove -y sni-qt:i386
sudo apt-get purge -y sni-qt:i386

# install new sni-qt version
sudo apt-get install -y sni-qt:i386

# download and install latest skype version
wget http://www.skype.com/go/getskype-linux-beta-ubuntu-64 -O /tmp/skype-ubuntu-latest_i386.deb
sudo dpkg -i /tmp/skype-ubuntu-latest_i386.deb
sudo apt-get install -f
# -*- coding: utf-8 -*-
"""
# Install xvfb
sudo apt-get install -y xvfb

# create build directory
mkdir -p ~/build/selenium
cd ~/build/selenium

# Install API for browsermob-proxy and selenium
sudo pip install selenium browsermob-proxy --upgrade

# download browsermob proxy
wget https://github.com/downloads/webmetrics/browsermob-proxy/browsermob-proxy-2.0-beta-6-bin.zip
unzip browsermob-proxy-2.0-beta-6-bin.zip

# copy browsermob-proxy to /var/lib
sudo cp -r browsermob-proxy /var/lib/
sudo chown -R a:a /var/lib/browsermob-proxy

# create log directory
mkdir -p log

# download selenium-server
wget http://selenium-release.storage.googleapis.com/2.41/selenium-server-standalone-2.41.0.jar

# start selenium-server
/usr/bin/java -jar selenium-server-standalone-2.41.0.jar >> ./log/selenium.$(date +"%Y%d%m").log 2>&1 &

# download chrome driver
wget http://chromedriver.storage.googleapis.com/2.9/chromedriver_linux64.zip
unzip chromedriver_linux64.zip  # chromedriver
"""
import sys
import os


def main():
    from xvfbwrapper import Xvfb
    with Xvfb() as xvfb:
    #if True:
        open_page_in_selenium()


def open_page_in_selenium(browser='chrome'):
    import os
    from selenium import webdriver

    browsermob_proxy_filepath = "/var/lib/browsermob-proxy/bin/browsermob-proxy"

    # Create Proxy Server
    from browsermobproxy import Server
    server = Server(browsermob_proxy_filepath)
    server.start()
    proxy = server.create_proxy()

    # Create Webdriver
    #driver = webdriver.Firefox()

    # Create Chrome Driver - unused
    chromedriver = "./chromedriver"
    os.environ["webdriver.chrome.driver"] = chromedriver
    #driver = webdriver.Chrome(chromedriver)
    #chrome_options = webdriver.ChromeOptions()
    #chrome_options.add_argument("--proxy-server={0}".format(proxy.proxy))
    #driver = webdriver.Chrome(chrome_options=chrome_options)

    # Create Profile using Proxy Server
    profile = webdriver.FirefoxProfile()
    profile.set_proxy(proxy.selenium_proxy())
    driver = webdriver.Firefox(firefox_profile=profile)

    # Create HAR
    proxy.new_har("myhar")

    url = "http://www.python.org"
    try:
        from datetime import datetime
        print "%s: Go %s" % (datetime.now(), url)
        driver.get(url)
        print "%s: Finish %s" % (datetime.now(), url)

        #from selenium.webdriver.common.keys import Keys
        # submit query
        #elem = driver.find_element_by_name("q")
        #elem.send_keys("selenium")
        #elem.send_keys(Keys.RETURN)

        web_har = proxy.har  # returns a HAR JSON blob
        print web_har

        # Get Additional Performance Data
        performance = driver.execute_script("return window.performance")
        print performance

        print "%s: Complete %s" % (datetime.now(), url)
    finally:
        driver.close()
        server.stop()


if __name__ == "__main__":
    main()
    pass
# use a custom directory for download and installation
mkdir -p ~/build/selenium
cd ~/build/selenium

# Install Google Chrome
wget -q -O- https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list'
sudo apt-get update
sudo apt-get install -y google-chrome-stable

# Download Selenium
wget http://selenium-release.storage.googleapis.com/2.41/selenium-server-standalone-2.41.0.jar

# Download Chrome Driver for Selenium
wget http://chromedriver.storage.googleapis.com/2.9/chromedriver_linux64.zip
unzip chromedriver_linux64.zip  # chromedriver

# Install Selenium Python Bindings
sudo pip install selenium

# Create Python Test File for Selenium with a Google Chrome Driver
cat > chrome-selenium-test.py <<"_EOF_"
# -*- coding: utf-8 -*-
import os
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

chromedriver = "./chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver)
#driver = webdriver.Firefox()

driver.get("http://www.python.org")
print driver.title
assert "Python" in driver.title

# submit query
elem = driver.find_element_by_name("q")
elem.send_keys("selenium")
elem.send_keys(Keys.RETURN)

# get performance data
performance = driver.execute_script("return window.performance")
print performance

driver.close()
_EOF_

python chrome-selenium-test.py
# Welcome to Python.org
# {u'webkitClearResourceTimings': {}, u'memory': {u'totalJSHeapSize': 12700000, u'usedJSHeapSize': 10000000, u'jsHeapSizeLimit': 1620000000}, u'webkitGetEntries': {}, u'removeEventListener': {}, u'webkitSetResourceTimingBufferSize': {}, u'getEntries': {}, u'clearMeasures': {}, u'webkitGetEntriesByType': {}, u'addEventListener': {}, u'measure': {}, u'webkitGetEntriesByName': {}, u'getEntriesByName': {}, u'mark': {}, u'clearMarks': {}, u'onwebkitresourcetimingbufferfull': None, u'getEntriesByType': {}, u'dispatchEvent': {}, u'timing': {u'secureConnectionStart': 1398625289930, u'redirectStart': 0, u'domContentLoadedEventStart': 1398625291520, u'responseEnd': 1398625291392, u'redirectEnd': 0, u'loadEventStart': 1398625292024, u'unloadEventStart': 1398625291395, u'domainLookupEnd': 1398625289457, u'connectEnd': 1398625290252, u'unloadEventEnd': 1398625291395, u'requestStart': 1398625290252, u'loadEventEnd': 1398625292048, u'navigationStart': 1398625289453, u'domLoading': 1398625291407, u'domInteractive': 1398625291520, u'fetchStart': 1398625289453, u'domComplete': 1398625292023, u'domContentLoadedEventEnd': 1398625291571, u'responseStart': 1398625291225, u'connectStart': 1398625289457, u'domainLookupStart': 1398625289457}, u'now': {}, u'navigation': {u'TYPE_RELOAD': 1, u'redirectCount': 0, u'TYPE_RESERVED': 255, u'TYPE_NAVIGATE': 0, u'type': 0, u'TYPE_BACK_FORWARD': 2}}
sudo apt-get -f install
sudo apt-get update      # to update your package list
sudo apt-get autoclean   # to clean up any partial packages
sudo apt-get clean       # to clean up the apt cache
sudo apt-get autoremove  # will clean up any unneeded dependencies

# If while doing this you can identify the broken package, this command will very forcefully remove it.
# reconfigure software
sudo dpkg --configure -a
sudo dpkg --remove --force-remove-reinstreq package-name  # change package-name to the real name of course

# Last resort
sudo apt-get -u dist-upgrade

# If it shows any held packages, it is best to eliminate them.
# Packages are held because of dependency conflicts that apt cannot resolve.
# Try this command to find and repair the conflicts:
sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade

# If it cannot fix the conflicts, it will exit with an "unmet dependencies / held broken packages" error (see below).
# Delete the held packages one by one, running dist-upgrade each time, until there are no more held packages.
# Then reinstall any needed packages. Be sure to use the --dry-run option,
# so that you are fully informed of consequences:
sudo apt-get remove --dry-run package-name
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 package1 : Depends: package2 (>= 1.8) but 1.7.5-1ubuntu1 is to be installed
E: Unable to correct problems, you have held broken packages.
The following packages have unmet dependencies:
 libavcodec-extra-54:i386 : Conflicts: libavcodec54:i386 but 6:9.11-2ubuntu2 is to be installed
 libavcodec54:i386 : Conflicts: libavcodec-extra-54:i386 but 6:9.11-2ubuntu2 is to be installed
E: Unable to correct problems, you have held broken packages.
check host ftp.redhat.com with address ftp.redhat.com
    if failed icmp type echo with timeout 15 seconds then alert
    if failed port 21 protocol ftp then
        exec "/usr/X11R6/bin/xmessage -display :0 ftp connection failed"
    alert foo@bar.com
check host elastic_health_check with address 0.0.0.0
    if failed url http://0.0.0.0:9200/_cluster/health for 2 cycles then alert
# http://stackoverflow.com/questions/1115816/monit-and-apache-site-behind-http-basic-auth
# It seems to be possible to include the credentials in the URL, have you tried this?:
# (from http://mmonit.com/monit/documentation/monit.html#connection_testing)
# If a username and password is included in the URL, Monit will attempt to log in
# at the server using Basic Authentication.
# http://user:password@www.foo.bar:8080/document/?querystring#ref

check host hacker_news with address news.ycombinator.com
    if failed url http://username:password@www.myserver.com/search?q=123
        and content = "successfully logged in"
    then alert
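After adding any of these host checks it is worth validating and reloading the monit control file; a minimal sketch:

sudo monit -t        # syntax check of the monit control file
sudo monit reload    # re-read the configuration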