* Sort folders, sub-folders and files by size:
du -sh * | sort -h #[ sort -h, --human-numeric-sort ]
* High memory & CPU usage processes: [ ps aux ]
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
Sort by Mem%:
ps aux | sort -nrk4 | head -5 |awk '{print $1, $4}' # I would use this.
ps aux | sort -nk4 | tail -5 |awk '{print $1, $4}'
Sort by RSS:
ps aux | sort -nrk6 | head -5 |awk '{print $1, $6}'
ps aux | sort -nrk6 | head -5 |awk '{print "Process Name = "$1, "Memory % = ", $4, "RSS = ", $6}'
Process Name = mysql Memory % = 46.5 RSS = 829792
Process Name = hadoop Memory % = 6.0 RSS = 108592
Process Name = hadoop Memory % = 5.5 RSS = 99688
Process Name = hadoop Memory % = 5.0 RSS = 90036
Process Name = hadoop Memory % = 0.7 RSS = 12828
Sort By CPU:
ubuntu@domU-12-31-38-01-79-24:/etc/nagios$ ps aux | grep -v USER |sort -nrk3 | head -5 |awk '{print "Process Name = "$1, "cpu = ", $3, "Memory % = ", $4, "RSS = ", $6}'
Process Name = mysql cpu = 4.2 Memory % = 46.5 RSS = 829760
Process Name = ubuntu cpu = 0.0 Memory % = 0.0 RSS = 940
Process Name = ubuntu cpu = 0.0 Memory % = 0.0 RSS = 640
Process Name = ubuntu cpu = 0.0 Memory % = 0.0 RSS = 836
Process Name = ubuntu cpu = 0.0 Memory % = 0.0 RSS = 1080
# ps -elFL
F S UID PID PPID LWP C NLWP PRI NI ADDR SZ WCHAN RSS PSR STIME TTY TIME CMD
$ getconf LONG_BIT [ To see the system arch [ 32 / 64 bit ] ]
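The same "top 5 by memory" report can be produced from Python with subprocess (a minimal sketch, standard library only; the column positions assume the usual ps aux layout shown above):
#!/usr/bin/env python
import subprocess
out = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE).communicate()[0]
rows = [line.split(None, 10) for line in out.splitlines()[1:]] # skip the USER/PID header
rows.sort(key=lambda r: float(r[3]), reverse=True) # column 4 is %MEM
for r in rows[:5]:
    print "Process Name = %s Memory %% = %s RSS = %s" % (r[0], r[3], r[5])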
Sunday, 30 March 2014
selenium_python
Web application testing with Selenium and Python:
1: How to install selenium for python:
1a: check whether you have easy_install [ for python pkg installation ] on your system; if not, install it.
1b: sudo easy_install selenium
[or] you can use pip to install it [ pip install -U selenium ]
[or] download the package from PyPI and run: python setup.py install
2: [ Start Firefox and visit www.google.com ]
In [1]: from selenium import webdriver
In [2]: browser = webdriver.Firefox()
In [3]: browser.get('http://www.google.com/')
====== Trying for auto login ==========
from selenium import webdriver
# The following imports are optional (only needed for some of the steps below)
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
baseurl = "http://www.irctc.co.in/"
username = "yourUsername"
password = "yourPassword"
xpaths = { 'usernameTxtBox' : "//input[@name='username']",
           'passwordTxtBox' : "//input[@name='password']",
           'submitButton'   : "//input[@name='login']"
         }
mydriver = webdriver.Firefox()
mydriver.get(baseurl)
#mydriver.maximize_window()
#Clear Username TextBox if already allowed "Remember Me"
#mydriver.find_element_by_xpath(xpaths['usernameTxtBox']).clear()
#Write Username in Username TextBox
mydriver.find_element_by_xpath(xpaths['usernameTxtBox']).send_keys(username)
#Clear Password TextBox if already allowed "Remember Me"
mydriver.find_element_by_xpath(xpaths['passwordTxtBox']).clear()
#Write Password in password TextBox
mydriver.find_element_by_xpath(xpaths['passwordTxtBox']).send_keys(password)
#Click Login button
mydriver.find_element_by_xpath(xpaths['submitButton']).click()
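The NoSuchElementException import above is never actually used in the snippet; here is a minimal sketch (my own addition, not part of the original script) of how it can wrap the same lookup so a changed page layout fails cleanly:
try:
    mydriver.find_element_by_xpath(xpaths['submitButton']).click()
except NoSuchElementException:
    print "Login button not found - the page layout may have changed."
    mydriver.quit()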
########## External #####
Youtube video:
Selenium For Pythonistas: https://www.youtube.com/watch?v=2OA941RLbmU
http://selenium-python.readthedocs.org/
json_xml_yaml_data_format
Data formats: JSON, XML, YAML:
------------------------------
Text expressed as JSON:
{"menu": {
"id": "file",
"value": "File",
"popup": {
"menuitem": [
{"value": "New", "onclick": "CreateNewDoc()"},
{"value": "Open", "onclick": "OpenDoc()"},
{"value": "Close", "onclick": "CloseDoc()"}
]
}
}}
The same text expressed as XML:
<menu id="file" value="File">
  <popup>
    <menuitem value="New" onclick="CreateNewDoc()" />
    <menuitem value="Open" onclick="OpenDoc()" />
    <menuitem value="Close" onclick="CloseDoc()" />
  </popup>
</menu>
YAML: Examples:
---
menu:
  id: file
  value: File
  popup:
    menuitem:
      - value: New
        onclick: CreateNewDoc()
      - value: Open
        onclick: OpenDoc()
      - value: Close
        onclick: CloseDoc()
...
http://docs.ansible.com/YAMLSyntax.html
http://demono.ru/Utilities/onlineSDConverter.aspx [ converter: JSON, XML, YAML ]
External Links:
--------------
YAML is better for writing configuration files.
JSON is better for transferring data between applications.
XML is better for structured data in a widely supported format that does not have to be easily human-readable.
Google Protocol Buffers is an alternative to XML that is more compact, but not human-readable.
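To see the equivalence in code: a minimal Python sketch that builds the menu structure once and prints it as both JSON and YAML. json is in the standard library; yaml assumes the third-party PyYAML package (pip install pyyaml):
import json
import yaml  # third-party PyYAML
menu = {"menu": {"id": "file", "value": "File",
                 "popup": {"menuitem": [
                     {"value": "New", "onclick": "CreateNewDoc()"},
                     {"value": "Open", "onclick": "OpenDoc()"},
                     {"value": "Close", "onclick": "CloseDoc()"}]}}}
print json.dumps(menu, indent=2)  # the JSON form
print yaml.safe_dump(menu)        # the YAML form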
Thursday, 27 March 2014
ssh config generator for aws instance using python boto
import boto.ec2
# NOTE: if you don't pass the keys to connect_to_region, boto will look for
# your profile file, then /etc/boto.cfg, then the file named in your BOTO_CONFIG variable
conn=boto.ec2.connect_to_region('us-east-1')
#conn=boto.ec2.connect_to_region('us-east-1',aws_access_key_id='key' , aws_secret_access_key='key')
# Following is the prod key: [ Full access ]
#AWS_ACCESS_KEY_ID = 'key'
#AWS_SECRET_ACCESS_KEY = 'key'
reservations = conn.get_all_instances()
for res in reservations:
    for inst in res.instances:
        print "%s \t %s" % ("Host", inst.tags['Name'])
        print "%s %s" % ("HostName", inst.public_dns_name)
        print "%s" % ("StrictHostKeyChecking no")
        print "%s %s" % ("Port", "1717")
        print "%s %s" % ("User", "ubuntu")
        print "%s %s%s.%s\n" % ("IdentityFile", "/path/of/folder/", inst.key_name, "pem" )
        #print "%s" % (inst.tags['Name'])
pulling aws instance details using python boto 1
########### Example #########
import boto.ec2
# NOTE: if you don't pass the keys to connect_to_region, boto will look for
# your profile file, then /etc/boto.cfg, then the file named in your BOTO_CONFIG variable
conn=boto.ec2.connect_to_region('us-east-1')
#conn=boto.ec2.connect_to_region('us-east-1',aws_access_key_id='access_key' , aws_secret_access_key='secret_key')
# Following is the prod key: [ Full access ]
#AWS_ACCESS_KEY_ID = 'access_key'
#AWS_SECRET_ACCESS_KEY = 'secret_key'
reservations = conn.get_all_instances()
for res in reservations:
    for inst in res.instances:
        print "%s " % (inst.tags['Name'])
        print "%s %s " % ("publicDnsName:", inst.public_dns_name)
        print "%s %s " % ("internalDnsName:", inst.private_dns_name)
        print "%s %s " % ("publicIP:", inst.ip_address)
        print "%s %s " % ("internalIP:", inst.private_ip_address)
        print "%s %s " % ("architecture:", inst.architecture)
        print "%s %s " % ("image_id:", inst.image_id)
        print "%s %s " % ("instance_type", inst.instance_type)
        print ""
###################################################
import boto.ec2
conn=boto.ec2.connect_to_region('us-east-1')
reservations = conn.get_all_instances()
for res in reservations:
    for inst in res.instances:
        print "%s " % (inst.tags['Name'])
        print "%s %s " % ("publicDnsName:", inst.public_dns_name)
        print "%s %s " % ("internalDnsName:", inst.private_dns_name)
        print "%s %s " % ("publicIP:", inst.ip_address)
        print "%s %s " % ("internalIP:", inst.private_ip_address)
        print "%s %s " % ("architecture:", inst.architecture)
        print "%s %s " % ("image_id:", inst.image_id)
        print "%s %s " % ("instance_type", inst.instance_type)
        print ""
##### To find all the details #####
import boto.ec2
conn=boto.ec2.connect_to_region('us-east-1')
reservations = conn.get_all_instances()
for res in reservations:
    for inst in res.instances:
        print(inst.__dict__)
        break # remove this to list all instances
### Output: ###
{'_in_monitoring_element': False,
'ami_launch_index': u'0',
'architecture': u'x86_64',
'block_device_mapping': {},
'connection': EC2Connection:ec2.amazonaws.com,
'dns_name': u'ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com',
'id': u'i-xxxxxxxx',
'image_id': u'ami-xxxxxxxx',
'instanceState': u'\n ',
'instance_class': None,
'instance_type': u'm1.large',
'ip_address': u'xxx.xxx.xxx.xxx',
'item': u'\n ',
'kernel': None,
'key_name': u'FARM-xxxx',
'launch_time': u'2009-10-27T17:10:22.000Z',
'monitored': False,
'monitoring': u'\n ',
'persistent': False,
'placement': u'us-east-1d',
'previous_state': None,
'private_dns_name': u'ip-10-xxx-xxx-xxx.ec2.internal',
'private_ip_address': u'10.xxx.xxx.xxx',
'product_codes': [],
'public_dns_name': u'ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com',
'ramdisk': None,
'reason': '',
'region': RegionInfo:us-east-1,
'requester_id': None,
'rootDeviceType': u'instance-store',
'root_device_name': None,
'shutdown_state': None,
'spot_instance_request_id': None,
'state': u'running',
'state_code': 16,
'subnet_id': None,
'vpc_id': None}
NOTE: You can use any of the attribute names above on its own to get just that piece of information.
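For example, a short sketch (reusing conn from above) that uses individual attributes to list only the running instances with their zone and launch time:
for res in conn.get_all_instances():
    for inst in res.instances:
        if inst.state == 'running':
            print "%s %s %s" % (inst.tags.get('Name', inst.id),
                                inst.placement, inst.launch_time)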
Wednesday, 26 March 2014
Auto monitoring of aws instance using python boto
Here we discuss monitoring automation in a generic way, which should work for anyone using AWS with Nagios for monitoring.
Features:
[1]: automatically adds a new host to monitoring whenever we add a new system to AWS.
[2]: automatically removes a system from monitoring when we terminate it in AWS.
[3]: reads group information from custom tags.
NOTE: Since this is automated monitoring, a few rules must be followed, or the monitoring will fail.
Rule 1: An AWS instance may have only two tags [ 1. the default: Name, 2. groups ]. NOTE: these are case sensitive, so please keep them exactly like this.
Rule 2: For now, only the following keywords may appear in the groups custom tag. [ Note: if you need a new one, you have to let me know before setting the value; this is also case sensitive. ] [ You can update the nagios hostgroup config file to add a new hostgroup before adding it to the groups custom tag. ]
hostgroup_name hadoop
hostgroup_name db
hostgroup_name http
Following is the python boto script:
#!/usr/bin/env python
import boto.ec2
import subprocess
#import os, subprocess
conn = boto.ec2.connect_to_region('us-east-1')
reservations = conn.get_all_instances()
for res in reservations:
    for inst in res.instances:
        print ("define host{")
        print "%s \t %s" % ("use", "generic-host") # \t for tab
        print "%s %s" % ("host_name", inst.tags['Name'])
        if inst.tags['Name'] == 'qa1':
            print "%s \t%s" % ("check_command", "check_ssh")
            # a different check for qa1, as it is a fedora system.
        print "%s \t %s: %s" % ("alias", inst.tags['Name'], inst.public_dns_name)
        print "%s %s" % ("address", inst.private_ip_address)
        # Swapped the alias and address values to keep the checks on the internal IP (cheaper).
        ## The following block looks for a custom tag known as "groups";
        ## if it finds one, this host becomes part of those hostgroups.
        alltags = inst.tags      # all the tags on this instance
        alltagsC = str(alltags)  # convert to a string so we can search it
        isgroup = alltagsC.find('groups')
        if isgroup > 0:
            sp = isgroup + 11 # index just past "groups': u'", i.e. the start of the group list
            #global otherGroups
            otherGroups = alltagsC[sp:-2]
            #print "%s %s %s" % ("hostgroups", inst.instance_type, otherGroups)
            print "%s %s" % ("hostgroups", otherGroups)
        #else:
        #    print "%s %s" % ("hostgroups", inst.instance_type)
        print ("}\n")
NOTE: At the time of writing I didn't know the clean way to read a custom tag's value, so the script above does some string hacks.
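In fact inst.tags behaves like a Python dictionary, so the string-slicing hack above can be replaced with a plain lookup; a small sketch that prints the same hostgroups line:
otherGroups = inst.tags.get('groups') # None when the tag is not set
if otherGroups:
    print "%s %s" % ("hostgroups", otherGroups)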
NOTE: The instance type is no longer used as a hostgroup, because monitoring will fail if a hostgroup is defined for an instance type and no host is part of that group.
Then put the following script into a file and add that file to root's crontab:
#!/bin/bash
sudo /path/to/getInstanceDetails.py > /path/to/all_hosts.cfg
sleep 2
sudo service nagios3 restart
##Added this above script in cron as root user: sudo crontab -e
## */15 * * * * sudo /path/to/aboveScrptName.sh
## Now, where do I define what to check, and where? ##
define service{
        hostgroup_name          db ; <- NOTE: here you only have to put the hostgroup.
        service_description     MYSQL
        check_command           check_nrpe_1arg!check_mysql
        use                     generic-service-after-15 ; Name of service template to use
        notification_interval   0 ; set > 0 if you want to be renotified
}
NOTE: you can create generic-service-xxx templates with their own properties and use them here.
Tuesday, 25 March 2014
aws hosts information for auto monitoring using python boto 1
import boto.ec2
conn=boto.ec2.connect_to_region('us-east-1')
reservations = conn.get_all_instances()
for res in reservations:
    for inst in res.instances:
        if 'Name' in inst.tags:
            print ("define host{")
            print ("use\tgeneric-host") # \t for tab
            print "%s %s " % ("host_name", inst.tags['Name'])
            #print "%s " % (inst.tags['Name'])
            print "%s \t %s " % ("alias", inst.tags['Name'])
            print "%s %s " % ("address", inst.public_dns_name)
            print(inst.instance_type)
            print ("}\n")
            #print "%s (%s) [%s] [%s]" % (inst.tags['Name'], inst.id, inst.state, inst.public_dns_name)
        else:
            print "%s [%s]" % (inst.id, inst.state)
###### NOTE #########
In the attribute dump shown below you can see the attribute names, for example instance_type, and you can use the same name to fetch just that piece of information. For the script above:
inst.public_dns_name gives the public_dns_name
inst.region gives the region details, and so on.
NOTE: To get the details of only the instances that carry a particular custom tag:
reservations = conn.get_all_instances(filters={'tag-key': 'groups'})
############ external script ##############
from pprint import pprint
from boto import ec2
AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXX'
AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
ec2conn = ec2.connection.EC2Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
reservations = ec2conn.get_all_instances()
instances = [i for r in reservations for i in r.instances]
for i in instances:
    pprint(i.__dict__)
    break # remove this to list all instances
{'_in_monitoring_element': False,
'ami_launch_index': u'0',
'architecture': u'x86_64',
'block_device_mapping': {},
'connection': EC2Connection:ec2.amazonaws.com,
'dns_name': u'ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com',
'id': u'i-xxxxxxxx',
'image_id': u'ami-xxxxxxxx',
'instanceState': u'\n ',
'instance_class': None,
'instance_type': u'm1.large',
'ip_address': u'xxx.xxx.xxx.xxx',
'item': u'\n ',
'kernel': None,
'key_name': u'FARM-xxxx',
'launch_time': u'2009-10-27T17:10:22.000Z',
'monitored': False,
'monitoring': u'\n ',
'persistent': False,
'placement': u'us-east-1d',
'previous_state': None,
'private_dns_name': u'ip-10-xxx-xxx-xxx.ec2.internal',
'private_ip_address': u'10.xxx.xxx.xxx',
'product_codes': [],
'public_dns_name': u'ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com',
'ramdisk': None,
'reason': '',
'region': RegionInfo:us-east-1,
'requester_id': None,
'rootDeviceType': u'instance-store',
'root_device_name': None,
'shutdown_state': None,
'spot_instance_request_id': None,
'state': u'running',
'state_code': 16,
'subnet_id': None,
'vpc_id': None}
External Links:
http://www.saltycrane.com/blog/2010/03/how-list-attributes-ec2-instance-python-and-boto/
Wednesday, 19 March 2014
ansible note1
Most important:
You need to have the "ssh-agent bash" and "ssh-add private_key(s)" first.
Edit (or create) /etc/ansible/hosts and put one or more remote systems in it, for which you have your SSH key in authorized_keys:
192.168.1.50
aserver.example.org
bserver.example.org
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
$ ansible all -m ping
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --sudo --sudo-user batman
It is ok to put systems in more than one group, for instance a server could be both a webserver and a dbserver.
If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon. Ports listed in your SSH config file won’t be used, so it is important that you set them if things are not running on the default port:
badwolf.example.com:5309
Suppose you have just static IPs and want to set up some aliases that don’t live in your host file, or you are connecting through tunnels. You can do things like this:
jumper ansible_ssh_port=5555 ansible_ssh_host=192.168.1.50
######### Example of /etc/ansible/hosts file: #############
cat /etc/ansible/hosts
# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or ip addresses
# - A hostname/ip can be a member of multiple groups
# Ex 1: Ungrouped hosts, specify before any group headers.
#green.example.com
#blue.example.com
#192.168.100.1
#192.168.100.10
# Ex 2: A collection of hosts belonging to the 'webservers' group
#[webservers]
#alpha.example.org
#beta.example.org
#192.168.1.100
#192.168.1.110
[group1]
host1 ansible_ssh_host=host1.example.com ansible_ssh_port=port ansible_ssh_user=user
# If you have multiple hosts following a pattern you can specify
# them like this:
#www[001:006].example.com
# Ex 3: A collection of database servers in the 'dbservers' group
#[dbservers]
#
#db01.intranet.mydomain.net
#db02.intranet.mydomain.net
#10.25.1.56
#10.25.1.57
# Here's another example of host ranges, this time there are no
# leading 0s:
#db-[99:101]-node.example.com
#127.0.0.1
######### Notes #########
sudo nano /etc/ansible/hosts
[group_name]
alias ansible_ssh_host=server_ip_address
[droplets]
host1 ansible_ssh_host=111.111.111.111
host2 ansible_ssh_host=222.222.222.222
host3 ansible_ssh_host=333.333.333.333
We can put our configuration in here. YAML files start with "---", so make sure you don't forget that part.
ansible -m ping all
ansible -m ping droplets
host1:1717 ansible_ssh_user=ubuntu
host2:1717 ansible_ssh_user=ubuntu
[somegroup]
foo ansible_ssh_port=1234
bar ansible_ssh_port=1235
amit@amitAsus:~$ ansible -m ping group1
host1 | success >> {
"changed": false,
"ping": "pong"
}
host2 | success >> {
"changed": false,
"ping": "pong"
}
- hosts: h1:h2
  user: admin
  tasks:
    - name: update package list
      action: command /usr/bin/apt-get update
    - name: upgrade packages
      action: command /usr/bin/apt-get -u -y dist-upgrade

- hosts: h3
  user: sysadmin
  tasks:
    - name: update package list
      action: command /usr/bin/apt-get update
    - name: upgrade packages
      action: command /usr/bin/apt-get -u -y dist-upgrade
NOTE: you can add as many ssh keys as you want:
1. ssh-agent bash
2. ssh-add /data/aws-keys/one-private-key
3. ssh-add /data/aws-keys/another-private-key
For multiple host group: [group1:group2]
amit@amitAsus:~$ ansible -m ping group1:group2
we could also specify an individual host:
ansible -m ping host1
We can specify multiple hosts by separating them with colons:
ansible -m ping host1:host2
###
The -m ping portion of the command is an instruction to Ansible to use the "ping" module. These are basically commands that you can run on your remote hosts. The ping module operates in many ways like the normal ping utility in Linux, but instead it checks for Ansible connectivity.
The ping module doesn't really take any arguments, but we can try another command to see how that works. We pass arguments into a script by typing -a.
The "shell" module lets us send a terminal command to the remote host and retrieve the results. For instance, to find out the memory usage on our host1 machine, we could use:
ansible -m shell -a 'free -m' host1
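These ad-hoc runs are also easy to drive from a script; a small Python sketch (standard library only) that runs the same check through subprocess and reports a failure:
import subprocess
rc = subprocess.call(["ansible", "-m", "shell", "-a", "free -m", "host1"])
if rc != 0:
    print "ansible run failed with exit code %d" % rc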
## If you start an ansible command and interrupt it with ^C, its ssh process can remain on the system, looking like this:
amit 17178 0.0 0.0 45716 2972 ? S 15:35 0:00 ssh -tt -q -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/home/amit/.ansible/cp/ansible-ssh-%h-%p-%r -o Port=1717 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 host2.example.com /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1395223494.41-7224770274317 && chmod a+rx $HOME/.ansible/tmp/ansible-1395223494.41-7224770274317 && echo $HOME/.ansible/tmp/ansible-1395223494.41-7224770274317'
##
[droplets]
host1 ansible_ssh_host=111.111.111.111
host2 ansible_ssh_host=222.222.222.222
host3 ansible_ssh_host=333.333.333.333
and then we can use the host aliases too:
amit@amitAsus:~/test$ ansible
ansible ansible-doc ansible-galaxy ansible-playbook ansible-pull
amit@amitAsus:~/test$ ansible-doc
Usage: ansible-doc [options] [module...]
Show Ansible module documentation
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-M MODULE_PATH, --module-path=MODULE_PATH
Ansible modules/ directory
-l, --list List available modules
-s, --snippet Show playbook snippet for specified module(s)
-v Show version number and exit
amit@amitAsus:~/test$ ansible-doc -l
accelerate Enable accelerated mode on remote node
acl Sets and retrieves file ACL information.
add_host add a host (and alternatively a group) to the ansible-playbo
airbrake_deployment Notify airbrake about app deployments
apt Manages apt-packages
apt_key Add or remove an apt key
apt_repository Add and remove APT repositores
arista_interface Manage physical Ethernet interfaces
arista_l2interface Manage layer 2 interfaces
arista_lag Manage port channel (lag) interfaces
arista_vlan Manage VLAN resources
assemble Assembles a configuration file from fragments
async_status Obtain status of asynchronous task
authorized_key Adds or removes an SSH authorized key
bigip_monitor_http Manages F5 BIG-IP LTM http monitors
bigip_monitor_tcp Manages F5 BIG-IP LTM tcp monitors
bigip_node Manages F5 BIG-IP LTM nodes
bigip_pool Manages F5 BIG-IP LTM pools
bigip_pool_member Manages F5 BIG-IP LTM pool members
boundary_meter Manage boundary meters
bzr Deploy software (or files) from bzr branches
campfire Send a message to Campfire
cloudformation create a AWS CloudFormation stack
command Executes a command on a remote node
copy Copies files to remote locations.
cron Manage cron.d and crontab entries.
datadog_event Posts events to DataDog service
debug Print statements during execution
digital_ocean Create/delete a droplet/SSH_key in DigitalOcean
django_manage Manages a Django application.
dnsmadeeasy Interface with dnsmadeeasy.com (a DNS hosting service).
docker manage docker containers
easy_install Installs Python libraries
ec2 create or terminate an instance in ec2, return instanceid...
ec2_ami create or destroy an image in ec2, return imageid
ec2_eip associate an EC2 elastic IP with an instance.
ec2_elb De-registers or registers instances from EC2 EL*s*
ec2_facts Gathers facts about remote hosts within ec2 (aws)
ec2_group maintain an ec2 VPC security group.
ec2_tag create and remove tag(s) to ec2 resources.
ec2_vol create and attach a volume, return volume id and device map.
ec2_vpc configure AWS virtual private clouds
ejabberd_user Manages users for ejabberd servers
elasticache Manage cache clusters in Amazon Elasticache. - Returns infor
facter Runs the discovery program `facter' on the remote system...
fail Fail with custom message
fetch Fetches a file from remote nodes
file Sets attributes of files
filesystem Makes file system on block device
fireball Enable fireball mode on remote node
firewalld Manage arbitrary ports/services with firewalld
flowdock Send a message to a flowdock
gc_storage This module manages objects/buckets in Google Cloud Storage.
gce create or terminate GCE instances
gce_lb create/destroy GCE load-balancer resources
gce_net create/destroy GCE networks and firewall rules
gce_pd utilize GCE persistent disk resources
gem Manage Ruby gems
get_url Downloads files from HTTP, HTTPS, or FTP to node
git Deploy software (or files) from git checkouts
github_hooks Manages github service hooks.
glance_image Add/Delete images from glance
group Add or remove groups
group_by Create Ansible groups based on facts
grove Sends a notification to a grove.io channel
hg Manages Mercurial (hg) repositories.
hipchat Send a message to hipchat
homebrew Package manager for Homebrew
hostname Manage hostname
htpasswd manage user files for basic authentication
include_vars Load variables from files, dynamically within a task.
ini_file Tweak settings in INI files
irc Send a message to an IRC channel
jabber Send a message to jabber user or chat room
jboss deploy applications to JBoss
kernel_blacklist Blacklist kernel modules
keystone_user Manage OpenStack Identity (keystone) users, tenants and role
lineinfile Ensure a particular line is in a file, or replace an existin
linode create / delete / stop / restart an instance in Linode Publi
lvg Configure LVM volume groups
lvol Configure LVM logical volumes
macports Package manager for MacPorts
mail Send an email
modprobe Add or remove kernel modules
mongodb_user Adds or removes a user from a MongoDB database.
monit Manage the state of a program monitored via Monit
mount Control active and configured mount points
mqtt Publish a message on an MQTT topic for the IoT
mysql_db Add or remove MySQL databases from a remote host.
mysql_replication Manage MySQL replication
mysql_user Adds or removes a user from a MySQL database.
mysql_variables Manage MySQL global variables
nagios Perform common tasks in Nagios related to downtime and notif
netscaler Manages Citrix NetScaler entities
newrelic_deployment Notify newrelic about app deployments
nova_compute Create/Delete VMs from OpenStack
nova_keypair Add/Delete key pair from nova
npm Manage node.js packages with npm
ohai Returns inventory data from `Ohai'
open_iscsi Manage iscsi targets with open-iscsi
openbsd_pkg Manage packages on OpenBSD.
openvswitch_bridge Manage Open vSwitch bridges
openvswitch_port Manage Open vSwitch ports
opkg Package manager for OpenWrt
osx_say Makes an OSX computer to speak.
ovirt oVirt/RHEV platform management
pacman Package manager for Archlinux
pagerduty Create PagerDuty maintenance windows
pause Pause playbook execution
ping Try to connect to host and return `pong' on success.
pingdom Pause/unpause Pingdom alerts
pip Manages Python library dependencies.
pkgin Package manager for SmartOS
pkgng Package manager for FreeBSD >= 9.0
pkgutil Manage CSW-Packages on Solaris
portinstall Installing packages from FreeBSD's ports system
postgresql_db Add or remove PostgreSQL databases from a remote host.
postgresql_privs Grant or revoke privileges on PostgreSQL database objects...
postgresql_user Adds or removes a users (roles) from a PostgreSQL database..
quantum_floating_ip Add/Remove floating IP from an instance
quantum_floating_ip_associate Associate or disassociate a particular floating IP with an i
quantum_network Creates/Removes networks from OpenStack
quantum_router Create or Remove router from openstack
quantum_router_gateway set/unset a gateway interface for the router with the specif
quantum_router_interface Attach/Dettach a subnet's interface to a router
quantum_subnet Add/Remove floating IP from an instance
rabbitmq_parameter Adds or removes parameters to RabbitMQ
rabbitmq_plugin Adds or removes users to RabbitMQ
rabbitmq_user Adds or removes users to RabbitMQ
rabbitmq_vhost Manage the state of a virtual host in RabbitMQ
raw Executes a low-down and dirty SSH command
rax create / delete an instance in Rackspace Public Cloud
rax_clb create / delete a load balancer in Rackspace Public Cloud...
rax_clb_nodes add, modify and remove nodes from a Rackspace Cloud Load Bal
rax_facts Gather facts for Rackspace Cloud Servers
rax_network create / delete an isolated network in Rackspace Public Clou
rds create or delete an Amazon rds instance
redhat_subscription Manage Red Hat Network registration and subscriptions using
redis Various redis commands, slave and flush
rhn_channel Adds or removes Red Hat software channels
rhn_register Manage Red Hat Network registration using the `rhnreg_ks' co
riak This module handles some common Riak operations
route53 add or delete entries in Amazons Route53 DNS service
rpm_key Adds or removes a gpg key from the rpm db
s3 idempotent S3 module putting a file into S3.
script Runs a local script on a remote node after transferring it..
seboolean Toggles SELinux booleans.
selinux Change policy and state of SELinux
service Manage services.
set_fact Set host facts from a task
setup Gathers facts about remote hosts
shell Execute commands in nodes.
slurp Slurps a file from remote nodes
stat retrieve file or file system status
subversion Deploys a subversion repository.
supervisorctl Manage the state of a program or group of programs running v
svr4pkg Manage Solaris SVR4 packages
swdepot Manage packages with swdepot package manager (HP-UX)
synchronize Uses rsync to make synchronizing file paths in your playbook
sysctl Manage entries in sysctl.conf.
template Templates a file out to a remote server.
unarchive Copies archive to remote locations and unpacks them
uri Interacts with webservices
urpmi Urpmi manager
user Manage user accounts
virt Manages virtual machines supported by libvirt
wait_for Waits for a condition before continuing.
xattr set/retrieve extended attributes
yum Manages packages with the `yum' package manager
zfs Manage zfs
zypper Manage packages on SuSE and openSuSE
zypper_repository Add and remove Zypper repositories
http://docs.ansible.com/intro.html
https://www.digitalocean.com/community/articles/how-to-install-and-configure-ansible-on-an-ubuntu-12-04-vps
Wednesday, 12 March 2014
understanding svn hook 1
# svnlook is one of the most useful commands for writing svn hooks, since it lets you get more information about your checkin...
# Here is an example hook script, for a Unix /bin/sh interpreter.
# For more examples and pre-written hooks, see those in
# the Subversion repository at
# http://svn.collab.net/repos/svn/trunk/tools/hook-scripts/ and
# http://svn.collab.net/repos/svn/trunk/contrib/hook-scripts/
REPOS="$1"
TXN="$2"
# Make sure that the log message contains some text.
SVNLOOK=/usr/bin/svnlook
$SVNLOOK log -t "$TXN" "$REPOS" | \
grep "[a-zA-Z0-9]" > /dev/null || exit 1
# Check that the author of this commit has the rights to perform
# the commit on the files and directories being modified.
commit-access-control.pl "$REPOS" "$TXN" commit-access-control.cfg || exit 1
# All checks passed, so allow the commit.
exit 0
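The same "non-empty log message" check can also be written as a Python pre-commit hook; a minimal sketch (Subversion passes REPOS and TXN as the two arguments):
#!/usr/bin/env python
import subprocess
import sys

repos, txn = sys.argv[1], sys.argv[2]
log = subprocess.Popen(["/usr/bin/svnlook", "log", "-t", txn, repos],
                       stdout=subprocess.PIPE).communicate()[0]
if not log.strip():
    sys.stderr.write("Commit blocked: empty log message.\n")
    sys.exit(1)
sys.exit(0)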
svnlook --help
general usage: svnlook SUBCOMMAND REPOS_PATH [ARGS & OPTIONS ...]
Note: any subcommand which takes the '--revision' and '--transaction'
options will, if invoked without one of those options, act on
the repository's youngest revision.
Type 'svnlook help <subcommand>' for help on a specific subcommand.
Type 'svnlook --version' to see the program version and FS modules.
Available subcommands:
author
cat
changed
date
diff
dirs-changed
help (?, h)
history
info
lock
log
propget (pget, pg)
proplist (plist, pl)
tree
uuid
youngest
svnlook help cat
# The above command will give you how to use the sub-command:
cat: usage: svnlook cat REPOS_PATH FILE_PATH
Print the contents of a file. Leading '/' on FILE_PATH is optional.
Valid options:
-r [--revision] ARG : specify revision number ARG
-t [--transaction] ARG : specify transaction name ARG
Tuesday, 11 March 2014
how to find what methods an object has in python, when you import it
>>> import boto
>>> dir(boto)
Out[9]:
['BotoConfigLocations',
'BucketStorageUri',
'Config',
'FileStorageUri',
'InvalidUriError',
'NullHandler',
'UserAgent',
'Version',
'__builtins__',
'__doc__',
'__file__',
'__name__',
'__package__',
'__path__',
'__version__',
'_aws_cache',
'_get_aws_conn',
'boto',
'check_extensions',
'config',
'connect_autoscale',
'connect_cloudformation',
'connect_cloudfront',
'connect_cloudwatch',
'connect_dynamodb',
'connect_ec2',
'connect_ec2_endpoint',
'connect_elb',
'connect_emr',
'connect_euca',
'connect_fps',
'connect_gs',
'connect_ia',
'connect_iam',
'connect_mturk',
'connect_rds',
'connect_route53',
'connect_s3',
'connect_sdb',
'connect_ses',
'connect_sns',
'connect_sqs',
'connect_sts',
'connect_swf',
'connect_vpc',
'connect_walrus',
'exception',
'handler',
'init_logging',
'log',
'logging',
'lookup',
'os',
'plugin',
'pyami',
're',
'resultset',
'set_file_logger',
'set_stream_logger',
'storage_uri',
'storage_uri_for_key',
'sys',
'urlparse']
>>> hasattr(boto,"ec2_connect")
False
>>> hasattr(boto,"connect_ec2")
True
>>> help(boto.connect_ec2)
connect_ec2(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.ec2.connection.EC2Connection`
:return: A connection to Amazon's EC2
(END)
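Going one step further than dir(): you can keep only the callable attributes, which filters out the modules and constants and leaves just the functions and classes (a small sketch):
>>> [name for name in dir(boto) if callable(getattr(boto, name))]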
link:
http://www.diveintopython.net/power_of_introspection/index.html
http://stackoverflow.com/questions/34439/finding-what-methods-an-object-has
http://en.wikipedia.org/wiki/Python_%28programming_language%29
Monday, 10 March 2014
python script with system command 1
NOTE: sudo apt-get install ipython [ To install ipython ]
#!/usr/bin/env python
# System Information Gathering Script
import subprocess
# Example:
# subprocess.call(["ls","-l","/tmp/"])
# Example: You can also use as following for the above command:
# subprocess.call("df -h", shell=True)
#Command 1
uname = "uname"
uname_arg = "-a"
print "Gethering system information with %s command:\n" %uname
subprocess.call([uname,uname_arg])
#Command 2
diskspace = "df"
diskspace_arg = "-h"
print "Gathering diskspace information %s command:\n" %diskspace
subprocess.call([diskspace,diskspace_arg])
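If you want the command's output back in a variable instead of printed straight to the terminal, subprocess.check_output does that (Python 2.7+; it raises CalledProcessError on a non-zero exit status). A short sketch in the same style:
#Command 3
meminfo = subprocess.check_output(["free", "-m"])
print "Gathering memory information with free command:\n"
print meminfo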
Friday, 7 March 2014
How to update the media wiki navigation page
Go to the following wiki page location, log in as the admin user, and update the navigation page: http://mediawiki-server/mediaw
Admin username: WikiSysop
Wednesday, 5 March 2014
nagios monitoring and amazon [ aws ] internal/external dns name vs ip address
NOTE:
How do you monitor an AWS EC2 host when it is a spot instance, its internal IP keeps changing, and you are on the EC2-Classic network [ not VPC ]?
When you monitor a remote host, that host's nrpe cfg file must allow your Nagios server. Normally you can monitor a server over its internal IP and it works fine, but in the case above, even if your security group allows it, monitoring via the external IP or the elastic IP will fail. In that case you need to use the public DNS name.
Example:
NOTE: The following [ using the public DNS name ] will work.
Let's say your Nagios server belongs to "security-group-x", which is allowed on your Nagios communication port [ default 5666 ], and your Nagios server's address is in your nrpe.cfg allowed-hosts list.
define host{
        use             generic-host
        host_name       spot-ec2-in-classic-network
        alias           spot-ec2-in-classic-network
        address         ec2-23-10-100-200.compute-1.amazonaws.com
}
NOTE: The following example will not work.
Why? In the example above I gave the address as the public DNS name provided by AWS, so AWS can derive further information, such as which security group the connection is coming from. In the example below it might be doing a reverse DNS lookup and getting a different DNS name [ if you have one set ], or perhaps not doing one at all.
define host{
        use             generic-host
        host_name       spot-ec2-in-classic-network
        alias           spot-ec2-in-classic-network
        address         23.10.100.200
}
NOTE: So, if you have this kind of requirement, I suggest using the Amazon [ AWS ] DNS names. Sometimes, I believe, you should use the Amazon DNS name for all communication, even for internal-IP traffic :)
Nagios Active and Passive checks with nsca and nrpe
Here are a few notes on Nagios active and passive checks with nsca and nrpe:
Ref link: http://nsclient.org/nscp/wiki/doc/usage/nagios/nsca
$ dpkg --listfiles nsca-client
/.
/etc
/etc/send_nsca.cfg
/usr
/usr/share
/usr/share/doc
/usr/share/doc/nsca-client
/usr/share/doc/nsca-client/copyright
/usr/share/doc/nsca-client/NEWS.Debian.gz
/usr/share/doc/nsca-client/changelog.Debian.gz
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/send_nsca.1.gz
/usr/sbin
/usr/sbin/send_nsca
$ send_nsca --help
NSCA Client 2.7.2
Copyright (c) 2000-2007 Ethan Galstad (www.nagios.org)
Last Modified: 07-03-2007
License: GPL v2
Encryption Routines: AVAILABLE
Usage: send_nsca -H <host_address> [-p port] [-to to_sec] [-d delim] [-c config_file]
Options:
<host_address> = The IP address of the host running the NSCA daemon
[port] = The port on which the daemon is running - default is 5667
[to_sec] = Number of seconds before connection attempt times out.
(default timeout is 10 seconds)
[delim] = Delimiter to use when parsing input (defaults to a tab)
[config_file] = Name of config file to use
Note:
This utility is used to send passive check results to the NSCA daemon. Host and
Service check data that is to be sent to the NSCA daemon is read from standard
input. Input should be provided in the following format (tab-delimited unless
overriden with -d command line argument, one entry per line):
Service Checks:
<host_name>[tab]<svc_description>[tab]<return_code>[tab]<plugin_output>[newline]
Host Checks:
<host_name>[tab]<return_code>[tab]<plugin_output>[newline]
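So a passive service check result can be submitted from a script by piping one of those tab-delimited lines into send_nsca; a minimal Python sketch (the host, service and server names here are just examples):
#!/usr/bin/env python
import subprocess
line = "host1\tMYSQL\t0\tMySQL OK\n" # <host_name> <svc_description> <return_code> <plugin_output>
p = subprocess.Popen(["/usr/sbin/send_nsca", "-H", "nagios-server.example.com",
                      "-c", "/etc/send_nsca.cfg"], stdin=subprocess.PIPE)
p.communicate(line)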
$ dpkg --listfiles nsca
/.
/etc
/etc/nsca.cfg
/etc/init.d
/etc/init.d/nsca
/usr
/usr/share
/usr/share/doc
/usr/share/doc/nsca
/usr/share/doc/nsca/examples
/usr/share/doc/nsca/examples/nsca.xinetd
/usr/share/doc/nsca/README.gz
/usr/share/doc/nsca/copyright
/usr/share/doc/nsca/README.Debian
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/nsca
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/nsca.1.gz
/usr/sbin
/usr/sbin/nsca
/usr/share/doc/nsca/NEWS.Debian.gz
/usr/share/doc/nsca/changelog.Debian.gz