Can I automatically add a new host to known_hosts?



Here's my situation: I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via ssh. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client.

The problem I'm having is that the first ssh command run against a new virtual instance always comes up with an interactive prompt:

The authenticity of host '[hostname] ([IP address])' can't be established.
RSA key fingerprint is [key fingerprint].
Are you sure you want to continue connecting (yes/no)?

Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.


Posted 2010-04-16T04:15:11.227

Reputation: 5 782

1For a test environment which is self-contained and physically secure, automated key acceptance may work just fine. But automatically accepting public keys in a production environment or across an untrusted network (such as the Internet) completely bypasses any protection against man-in-the-middle attacks that SSH would otherwise afford. The only valid way to make sure you're secure against MITM attacks is to verify the host's public key through some out-of-band trusted channel. There is no secure way to automate it without setting up a moderately complicated key signing infrastructure. – Eil – 2018-02-15T20:04:59.810



Set the StrictHostKeyChecking option to no, either in the config file or via the -o command-line option:

ssh -o StrictHostKeyChecking=no
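(An aside not in the original answer: if your client is OpenSSH 7.6 or newer, accept-new is a less dangerous variant of this option. It auto-adds keys of unknown hosts but still refuses to connect when a stored key has changed, which is the actual MITM symptom.)

```shell
# Usage (OpenSSH >= 7.6):
#   ssh -o StrictHostKeyChecking=accept-new user@host
# accept-new adds unknown host keys automatically, but unlike "no" it
# still aborts when a key already in known_hosts has CHANGED.

# Confirm the local client understands the option; -G resolves the
# client configuration without opening a connection:
ssh -G -o StrictHostKeyChecking=accept-new localhost | grep -i '^stricthostkeychecking'
```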

Ignacio Vazquez-Abrams


Reputation: 38 539

@IgnacioVazquez-Abrams, just add a disclaimer about the security. This answer is perfectly acceptable with knowledge of the impact on security. Many environments, like those behind corporate firewalls, make the security issue not as dramatic. – deepelement – 2017-05-30T15:21:33.063

Deactivating security features to suppress 'irritating' warnings is a very bad practice! – blafasel – 2017-10-17T12:17:36.773

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:37:52.343

This answer should mention the risk of MiTM. – Limbo Peng – 2018-06-22T03:47:22.157

52This leaves you open to man in the middle attacks, probably not a good idea. – JasperWallace – 2013-09-23T07:23:31.870

8@JasperWallace, while this is usually good advice, the specific use case (deploying test VMs and sending commands to them) should be safe enough. – Massimo – 2014-10-21T17:33:19.273

8This gives a Warning: Permanently added 'hostname,' (RSA) to the list of known hosts. To avoid the warning, and to avoid the entry being added to any known_hosts file, I do: ssh -o StrictHostKeyChecking=no -o LogLevel=ERROR -o UserKnownHostsFile=/dev/null – Peter V. Mørch – 2015-05-21T09:19:01.060

Note that this must make the connection, so if you need to connect to a port, use -p [port], and if you're automating the process and need to disconnect immediately, send any command you're allowed to execute on the target machine, e.g. ssh -o StrictHostKeyChecking=no -p 2020 'echo hello' – DanielM – 2015-07-16T14:08:40.713

8Downvoting as this does not answer the question and opens to serious security vulnerabilities. – marcv81 – 2016-01-20T04:48:14.830

I also downvoted this. I want to securely connect, not defeat my own ssh. – Mnebuerquo – 2016-06-15T17:16:32.220

11@Mnebuerquo: If you were worried about security then you wouldn't have anything at all to do with this question. You'd have the correct host key in front of you, gathered from the console of the system you wanted to connect to, and you would manually verify it upon first connecting. You certainly wouldn't do anything "automatically". – Ignacio Vazquez-Abrams – 2016-06-15T17:31:54.337


IMO, the best way to do this is the following:

ssh-keygen -R [hostname]
ssh-keygen -R [ip_address]
ssh-keygen -R [hostname],[ip_address]
ssh-keyscan -H [hostname],[ip_address] >> ~/.ssh/known_hosts
ssh-keyscan -H [ip_address] >> ~/.ssh/known_hosts
ssh-keyscan -H [hostname] >> ~/.ssh/known_hosts

That will make sure there are no duplicate entries, that you are covered for both the hostname and the IP address, and that the output is hashed, an extra security measure.



Reputation: 2 274

Make sure you use the right hostname, or else the key will not be valid – – 2017-08-15T14:34:46.173

This is the right answer if you want to update a host. – Rogers Sampaio – 2017-11-04T22:50:57.137

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:38:12.563

4Why do you need all 3 ssh-keyscan's? Can't you get by with just the first one since it works for both hostname and ip? – Robert – 2013-05-24T22:00:10.670

6Can you be sure that the machine replying to the ssh-keyscan request is really the one you want to talk to? If not you've opened yourself to a man in the middle attack. – JasperWallace – 2013-09-23T07:24:05.530

2@JasperWallace Yes, for that you need at least the fingerprint or even better the public key, in which case you can add it directly to known_hosts, turning this question moot. If you only have the fingerprint, you will have to write an extra step which verifies the downloaded public key with your fingerprint... – None – 2014-04-28T21:57:01.693

1Calls to ssh-keyscan were failing for me because my target host doesn't support the default version 1 key type. Adding -t rsa,dsa to the command fixed this. – phasetwenty – 2014-08-06T18:11:43.830


This is probably a bad idea. You are opening yourself to a man-in-the-middle attack by updating these keys.

To avoid duplicate entries, check the return status of ssh-keygen -F [address] instead.

– retrohacker – 2015-09-29T03:04:50.280
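Following that suggestion, a hedged sketch of the check ("testhost.example" is a placeholder): ssh-keygen -F exits 0 when the host already has an entry, so a scan is only needed when it exits non-zero.

```shell
host=testhost.example   # hypothetical target
mkdir -p ~/.ssh && touch ~/.ssh/known_hosts

# ssh-keygen -F exits 0 if known_hosts already has an entry for $host;
# only run ssh-keyscan (and append) when it does not.
if ! ssh-keygen -F "$host" > /dev/null; then
    ssh-keyscan -H "$host" 2>/dev/null >> ~/.ssh/known_hosts \
      || echo "scan failed for $host" >&2
fi
```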

This is only slightly better than the accepted answer. If someone does a MITM attack on you when you do this, then you store their fake keys, and you permanently accept their MITM attack. – Mnebuerquo – 2016-06-15T17:17:56.600


For the lazy ones:

ssh-keyscan <host> >> ~/.ssh/known_hosts



Reputation: 919

6@Mnebuerquo You say what to do but not how, which would be helpful. – Jim – 2017-03-24T21:20:17.350

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:38:22.290

2@jameshfisher Yes, it's vulnerable to MITM attacks, but have you ever compared the RSA fingerprint shown to you with the actual one of the server when you were doing this manually? No? Then this answer is the way to do it for you. If yes, you shouldn't use this answer; do it manually or implement other security measures... – fivef – 2017-11-30T09:07:16.447

1@Mnebuerquo I would be really glad if you also let us know a better way to handle this, when we need to clone a repo using batch scripts un-attended and we want to by-pass this prompt. Please shed some light on a real solution if you think this is not the right one! – Waqas Shah – 2018-01-04T04:22:44.900

@WaqasShah See the answer by BradChesney79 ... This is a better way to do it. First verify the keys and fingerprints ahead of time, then upload them to your machine where you want to be able to do ssh connections. You can upload a complete known_hosts file first thing upon your script starting. If you're creating virtual machines, you can create the key when you create the image, so you know it before you start them up.

– Mnebuerquo – 2018-01-04T11:43:58.663

@Mnebuerquo Thanks for the link, it's such a coincidence that today I used this same approach and to me too this seems like the best one. Thanks anyways! – Waqas Shah – 2018-01-04T14:52:45.250

1The answer by BradChesney79 is not in any sense a better way to do it. Literally all he is doing there is using nmap to get the SSH host key fingerprint and then comparing it to what ssh-keyscan says the fingerprint is. In both cases, the fingerprint comes from the same place. It's just as vulnerable to MITM as any other of these automated solutions. The only secure and valid way to verify an SSH public key is over some trusted out-of-band channel. (Or set up some kind of key-signing infrastructure.) – Eil – 2018-02-15T19:52:25.207

Key signing infrastructure +1

For out-of-band now-a-days, it often suffices to grab the fingerprint from your cloud dashboard (where you create your instances), which almost always have a direct console into the instance. Add these to your list of trusted fingerprints. Then simply compare with the ssh-keyscan result.

If you have physical access to the machine, even better! – DylanYoung – 2018-11-28T17:04:11.073

10+1 for being guilty as charged. Thanks. – SaxDaddy – 2014-10-28T17:59:10.910

"ssh-keyscan -H <host> >> ~/.ssh/known_hosts" produces an entry more like what ssh does with user interaction. (The -H hashes the name of the remote host.) – Sarah Messer – 2015-09-04T20:19:35.947

3Vulnerable to MITM attacks. You're not checking the key fingerprint. – Mnebuerquo – 2016-06-15T17:20:12.220


As mentioned, using ssh-keyscan would be the right and unobtrusive way to do it.

ssh-keyscan -t rsa,dsa HOST 2>&1 | sort -u - ~/.ssh/known_hosts > ~/.ssh/tmp_hosts
mv ~/.ssh/tmp_hosts ~/.ssh/known_hosts

The above will add a host only if it has not yet been added. Note that it is not concurrency-safe: do not execute the snippet on the same origin machine more than once at the same time, as the tmp_hosts file can get clobbered, ultimately leading to a bloated known_hosts file.
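If concurrent runs are a real possibility, the race can be closed with flock(1). This is a sketch under assumptions not in the original answer (Linux with util-linux flock available; "somehost" is a placeholder):

```shell
host=somehost                         # placeholder target
mkdir -p ~/.ssh && touch ~/.ssh/known_hosts

# Serialize the read-modify-write of known_hosts with an exclusive lock,
# so concurrent runs on the same client cannot clobber tmp_hosts.
(
  flock -x 9                          # lock held for the whole subshell
  ssh-keyscan -t rsa "$host" 2>/dev/null \
    | sort -u - ~/.ssh/known_hosts > ~/.ssh/tmp_hosts
  mv ~/.ssh/tmp_hosts ~/.ssh/known_hosts
) 9> ~/.ssh/known_hosts.lock
```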



Reputation: 499

@utapyngo ssh-keygen -F will give you the current fingerprint. If it comes back blank with return code of 1, then you don't have it. If it prints something and return code is 0, then it's already present. – Rich L – 2017-08-17T13:58:56.327

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:38:41.210

1If you care that much about MITM, deploy DNSSEC and SSHFP records or use some other secure means of distributing the keys and this kludge solution will be irrelevant. – Zart – 2017-11-28T17:09:39.047

Is there a way to check whether the key is in known_hosts before ssh-keyscan? The reason is that it requires some time and additional network connection. – utapyngo – 2014-05-23T07:49:50.103

1The original poster's version of this file had cat ~/.ssh/tmp_hosts > ~/.ssh/known_hosts, but a subsequent edit changed it to >>. Using >> is an error. It defeats the purpose of the uniqueness in the first line, and causes it to dump new entries into known_hosts every time it runs. (Just posted an edit to change it back.) – paulmelnikow – 2015-07-27T17:46:20.303

1This is subject to the same MITM attacks as the others. – Mnebuerquo – 2016-06-15T17:18:59.073


You could use ssh-keyscan command to grab the public key and append that to your known_hosts file.



Reputation: 5 743

3@Mnebuerquo Fair point in the general context, but why would someone be trying to programmatically gather keys if they already knew what the correct key was? – Brian Cline – 2017-08-25T19:27:36.687

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:39:05.873

2Make sure you check the fingerprint to ensure it is the correct key. Otherwise you open yourself up to MITM attacks. – Mnebuerquo – 2016-06-15T17:20:54.943


This is how you can incorporate ssh-keyscan into your play:

# ansible playbook that adds ssh fingerprints to known_hosts
- hosts: all
  connection: local
  gather_facts: no
  tasks:
    - command: /usr/bin/ssh-keyscan -T 10 {{ ansible_host }}
      register: keyscan
    - lineinfile: name=~/.ssh/known_hosts create=yes line={{ item }}
      with_items: '{{ keyscan.stdout_lines }}'



Reputation: 284

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:39:23.513

1@jameshfisher I would be really glad if you also let us know a better way to handle this, when we need to clone a repo using batch scripts un-attended and we want to by-pass this prompt. Please shed some light on a real solution if you think this is not the right one! Please let us know "how" to do it, if you think this is not the right way to do it! – Waqas Shah – 2018-01-04T04:25:46.820

It is a perfectly valid method of adding values to known_hosts, but yes it is susceptible to MITM. However, for internal use it is fine. – Cameron Lowell Palmer – 2018-11-23T14:15:26.860

@WaqasShah Just validate against your repos' possible fingerprints. For github they can be found here: Now why is 'here' any more secure? Why PKI of course :)

– DylanYoung – 2018-11-28T17:11:39.980

1Are you uploading a known valid known_hosts file, or are you doing ssh-keyscan and dumping the output into known_hosts without verifying fingerprints? – Mnebuerquo – 2016-06-15T17:22:21.667

1This simply dumps the output of a keyscan, yes. So in effect it's the same as StrictHostKeyChecking=no, just with silent known_hosts updating and without fiddling with ssh options. This solution also doesn't work well because ssh-keyscan returns multiple lines, which causes this task to always be flagged as 'changed' – Zart – 2016-06-16T21:42:33.733


I had a similar issue and found that some of the provided answers only got me part way to an automated solution. The following is what I ended up using, hope it helps:

ssh -o "StrictHostKeyChecking no" -o PasswordAuthentication=no 10.x.x.x

It adds the key to known_hosts and doesn't prompt for the password.



Reputation: 239

3Nobody checks the fingerprint. – Brendan Byrd – 2017-06-02T03:11:13.613

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:40:33.600

1Vulnerable to MITM attacks. You're not checking the fingerprint. – Mnebuerquo – 2016-06-15T17:23:08.593


This would be a complete solution, accepting the host key for the first time only:

#!/usr/bin/env ansible-playbook
- name: accept ssh fingerprint automatically for the first time
  hosts: all
  connection: local
  gather_facts: False

  tasks:
    - name: "check if known_hosts contains server's fingerprint"
      command: ssh-keygen -F {{ inventory_hostname }}
      register: keygen
      failed_when: keygen.stderr != ''
      changed_when: False

    - name: fetch remote ssh key
      command: ssh-keyscan -T5 {{ inventory_hostname }}
      register: keyscan
      failed_when: keyscan.rc != 0 or keyscan.stdout == ''
      changed_when: False
      when: keygen.rc == 1

    - name: add ssh-key to local known_hosts
      lineinfile:
        name: ~/.ssh/known_hosts
        create: yes
        line: "{{ item }}"
      when: keygen.rc == 1
      with_items: '{{ keyscan.stdout_lines|default([]) }}'



Reputation: 51

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:40:09.180


So, I was searching for a mundane way to bypass the unknown-host manual interaction of cloning a git repo, as shown below:

brad@computer:~$ git clone
Cloning into 'viperks-api'...
The authenticity of host ' (' can't be established.
RSA key fingerprint is 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40.
Are you sure you want to continue connecting (yes/no)?

Note the RSA key fingerprint...

So, this is an SSH thing; this will work for git over SSH and just SSH-related things in general...

brad@computer:~$ nmap --script ssh-hostkey

Starting Nmap 7.01 ( ) at 2016-10-05 10:21 EDT
Nmap scan report for (
Host is up (0.032s latency).
Other addresses for (not scanned): 2401:1d80:1010::150
Not shown: 997 filtered ports
22/tcp  open  ssh
| ssh-hostkey:
|   1024 35:ee:d7:b8:ef:d7:79:e2:c6:43:9e:ab:40:6f:50:74 (DSA)
|_  2048 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40 (RSA)
80/tcp  open  http
443/tcp open  https

Nmap done: 1 IP address (1 host up) scanned in 42.42 seconds

First, install nmap on your daily driver. nmap is highly helpful for certain things, like detecting open ports and this-- manually verifying SSH fingerprints. But, back to what we are doing.

Good. Either I'm compromised at the multiple places and machines where I've checked it, or the more plausible explanation of everything being hunky-dory is what is happening.

That 'fingerprint' is just the string shortened with a one-way algorithm for our human convenience, at the risk of more than one string resolving into the same fingerprint. It happens; such cases are called collisions.

Regardless, back to the original string which we can see in context below.

brad@computer:~$ ssh-keyscan
# SSH-2.0-conker_1.0.257-ce87fba app-128
no hostkey alg
# SSH-2.0-conker_1.0.257-ce87fba app-129 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
# SSH-2.0-conker_1.0.257-ce87fba app-123
no hostkey alg

So, ahead of time, we have a way of asking for a form of identification from the original host.

At this point, doing it manually is as vulnerable as doing it automatically: the strings match, we have the base data that produces the fingerprint, and we could ask for that base data (preventing collisions) in the future.

Now to use that string in a way that prevents asking about a host's authenticity...

The known_hosts file in this case does not use plaintext entries. You'll know hashed entries when you see them; they look like hashes with random characters instead of a hostname or IP address.

brad@computer:~$ ssh-keyscan -t rsa -H
# SSH-2.0-conker_1.0.257-ce87fba app-128
|1|yr6p7i8doyLhDtrrnWDk7m9QVXk=|LuKNg9gypeDhfRo/AvLTAlxnyQw= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==

The first comment line infuriatingly shows up-- but you can get rid of it with a simple redirect via the ">" or ">>" convention.

As I've done my best to obtain untainted data to be used to identify a "host" and trust, I will add this identification to my known_hosts file in my ~/.ssh directory. Since it will now be identified as a known host, I will not get the prompt mentioned above.

Thanks for sticking with me, here you go. I'm adding the bitbucket RSA key so that I can interact with my git repositories there in a non-interactive way as part of a CI workflow, but do whatever you want.

cp ~/.ssh/known_hosts ~/.ssh/known_hosts.old && echo "|1|yr6p7i8doyLhDtrrnWDk7m9QVXk=|LuKNg9gypeDhfRo/AvLTAlxnyQw= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==" >> ~/.ssh/known_hosts

So, that's how you stay a virgin for today. You can do the same with github by following similar directions on your own time.

I saw so many stack overflow posts telling you to programmatically add the key blindly without any kind of checking. The more you check the key from different machines on different networks, the more trust you can have that the host is the one it says it is-- and that is the best you can hope from this layer of security.

WRONG ssh -oStrictHostKeyChecking=no hostname [command]

WRONG ssh-keyscan -t rsa -H hostname >> ~/.ssh/known_hosts

Don't do either of the above things, please. You're given the opportunity to reduce the chance of someone eavesdropping on your data transfers via a man-in-the-middle attack -- take that opportunity. The difference is literally verifying that the RSA key you have belongs to the bona fide server, and now you know how to get that information so you can compare them and trust the connection. Just remember that more comparisons from different computers and networks will usually increase your ability to trust the connection.
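The comparison this answer argues for can be sketched in a few lines. Everything here is a placeholder standing in for values you verified out-of-band; `ssh-keygen -lf -` (OpenSSH 7.2+) computes a fingerprint from a key read on stdin.

```shell
host=bitbucket.org                                   # placeholder target
expected="SHA256:replace-with-your-verified-print"   # verified out-of-band (placeholder)

# Scan the key, compute its fingerprint locally, and only trust the key
# if the fingerprint matches the one you verified through another channel.
scanned=$(ssh-keyscan -t rsa "$host" 2>/dev/null)
actual=$(printf '%s\n' "$scanned" | ssh-keygen -lf - 2>/dev/null | awk '{print $2}')

if [ -n "$scanned" ] && [ "$actual" = "$expected" ]; then
    printf '%s\n' "$scanned" >> ~/.ssh/known_hosts
    echo "added $host (fingerprint matched)"
else
    echo "fingerprint mismatch or scan failed for $host -- NOT adding" >&2
fi
```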



Reputation: 217

I think this is the best solution to this problem. However, be very careful while using Nmap on something like Amazon EC2, I got a warning about the port scanning that Nmap does! Fill in their form before doing portscanning! – Waqas Shah – 2018-01-05T04:37:00.380

...well, yeah. I don't know why you would do the port scanning from EC2. If you are logged in to your account, you can just get the keys from the actual machines.

This is more for machines you don't have control over. I would assume you would have a local machine not subject to AWS port scanning restrictions to use.

But, if you are in that edge case situation where you must run nmap with AWS, I suppose this warning would be helpful. – BradChesney79 – 2018-01-15T19:51:14.827

Using nmap to read the SSH host key from your workstation and then trusting that value is no different than connecting via SSH with StructHostKeyChecking turned off. Its just as vulnerable to a man-in-the-middle attack. – Micah R Ledbetter – 2018-02-07T17:40:06.750

...@MicahRLedbetter which is why I suggested that "more comparisons from different computers & networks will usually increase your ability to trust the connection".

But, that is my point. If you only ever check your target host from one set of environment conditions then how would you ever know of any discrepancies?

Did you have any better suggestions? – BradChesney79 – 2018-02-07T19:06:21.180

This is security theater. Doing something complicated to create the appearance of greater security. It doesn't matter how many different methods you use to ask the host for its key. Like asking the same person multiple times if you can trust them (maybe you call, email, text, and snail mail) . They'll always say yes, but if you're asking the wrong person, it doesn't matter. – vastlysuperiorman – 2018-06-26T17:28:54.963

@vastlysuperiorman, from the same computer on the same network or even a different computer on the same network-- yes, it would be a waste of time. Different machine on a completely separate network-- no. If you achieve the same results from asking via different paths with isolated risks of a compromised request and tainted connection, you can increase the confidence your connection to the remote resource has not been unusually tampered with. In most cases, overkill. Just depends on the minimum security requirements. – BradChesney79 – 2018-06-27T14:19:46.343


Here is a one-liner, a bit long but useful for performing this task for hosts with multiple IPs, using dig and bash:

(; ssh-keyscan -H $host; for ip in $(dig @ +short); do ssh-keyscan -H $host,$ip; ssh-keyscan -H $ip; done) 2> /dev/null >> .ssh/known_hosts

Felipe Alcacibar


Reputation: 146


The following avoids duplicate entries in ~/.ssh/known_hosts:

if ! grep "$(ssh-keyscan 2>/dev/null)" ~/.ssh/known_hosts > /dev/null; then
    ssh-keyscan >> ~/.ssh/known_hosts
fi

Amadu Bah


Reputation: 171

This is not the way to do it. MITM. – jameshfisher – 2017-11-28T16:42:13.243


This whole

  • ssh-key-scan
  • ssh-copy-id
  • ECDSA key warning

business kept annoying me so I opted for

One script to rule them all

This is a variant of the script at with Amadu Bah's answer in a loop.

example call

./sshcheck somedomain site1 site2 site3

The script will loop over the named sites, modify the .ssh/config and .ssh/known_hosts files, and do ssh-copy-id on request - for the last feature, just let the ssh test calls fail, e.g. by hitting enter 3 times at the password prompt.

sshcheck script

#!/bin/bash
# WF 2017-08-25
# check ssh access to bitplan servers

#ansi colors
# (red, blue and endColor were missing from the posted copy; assumed defaults)
red='\033[0;31m'
blue='\033[0;34m'
green='\033[0;32m' # '\e[1;32m' is too bright for white bg.
endColor='\033[0m'

# a colored message 
#   params:
#     1: l_color - the color of the message
#     2: l_msg - the message to display
color_msg() {
  local l_color="$1"
  local l_msg="$2"
  echo -e "${l_color}$l_msg${endColor}"
}

# error
#   show an error message and exit
#   params:
#     1: l_msg - the message to display
error() {
  local l_msg="$1"
  # use ansi red for error
  color_msg $red "Error: $l_msg" 1>&2
  exit 1
}

# show the usage
usage() {
  echo "usage: $0 domain sites"
  exit 1
}

# check known_hosts entry for server
checkknown() {
  local l_server="$1"
  #echo $l_server
  local l_sid="$(ssh-keyscan $l_server 2>/dev/null)"
  #echo $l_sid
  if ! grep "$l_sid" $sknown > /dev/null
  then
    color_msg $blue "adding $l_server to $sknown"
    ssh-keyscan $l_server >> $sknown 2>&1
  fi
}

# check the given server
checkserver() {
  local l_server="$1"
  grep $l_server $sconfig > /dev/null
  if [ $? -eq 1 ]
  then
    color_msg $blue "adding $l_server to $sconfig"
    today=$(date "+%Y-%m-%d")
    echo "# added $today by $0"  >> $sconfig
    echo "Host $l_server" >> $sconfig
    echo "   StrictHostKeyChecking no" >> $sconfig
    echo "   userKnownHostsFile=/dev/null" >> $sconfig
    echo "" >> $sconfig
    checkknown $l_server
  else
    color_msg $green "$l_server found in $sconfig"
  fi
  ssh -q $l_server id > /dev/null
  if [ $? -eq 0 ]
  then
    color_msg $green "$l_server accessible via ssh"
  else
    color_msg $red "ssh to $l_server failed"
    color_msg $blue "shall I ssh-copy-id credentials to $l_server?"
    read answer
    case $answer in
      y|yes) ssh-copy-id $l_server;;
    esac
  fi
}

# check all servers
checkservers() {
  me=$(hostname -f)
  for server in $(echo $* | sort)
  do
    case $os in
      # Mac OS X
      Darwin) pingoption=" -t1";;
      *) ;;
    esac
    pingresult=$(ping $pingoption -i0.2 -c1 $server)
    echo $pingresult | grep 100 > /dev/null
    if [ $? -eq 1 ]
    then
      checkserver $server
      checkserver $server.$domain
    else
      color_msg $red "ping to $server failed"
    fi
  done
}

# check configuration
checkconfig() {
  if [ -f $sconfig ]
  then
    color_msg $green "$sconfig exists"
    ls -l $sconfig
  fi
}

# defaults for variables referenced above (assumed; lost in the posted copy)
sconfig=~/.ssh/config
sknown=~/.ssh/known_hosts
os=$(uname)

case $# in
  0) usage ;;
  1) usage ;;
  *)
    domain=$1
    shift
    color_msg $blue "checking ssh configuration for domain $domain sites $*"
    checkconfig
    checkservers $*
    #for server in $(echo $* | sort)
    #do
    #  checkknown $server
    #done
    ;;
esac
Wolfgang Fahl


Reputation: 257


How are you building these machines? Can you run a DNS update script? Can you join an IPA domain?

FreeIPA does this automatically, but essentially all you need is SSHFP DNS records and DNSSEC on your zone (FreeIPA provides these as configurable options; DNSSEC is disabled by default).

You can get the existing SSHFP records for your host by running:

ssh-keygen -r [hostname]

[hostname] IN SSHFP 1 1 4d8589de6b1a48e148d8fc9fbb967f1b29f53ebc
[hostname] IN SSHFP 1 2 6503272a11ba6d7fec2518c02dfed88f3d455ac7786ee5dbd72df63307209d55
[hostname] IN SSHFP 3 1 5a7a1e8ab8f25b86b63c377b303659289b895736
[hostname] IN SSHFP 3 2 1f50f790117dfedd329dbcf622a7d47551e12ff5913902c66a7da28e47de4f4b

then once published, you'd add VerifyHostKeyDNS yes to your ssh_config or ~/.ssh/config

If/when Google decides to flip on DNSSEC, you could ssh in without a host-key prompt.


BUT my domain is not signed yet, so for now you'd see....

debug1: Server host key: ecdsa-sha2-nistp256 SHA256:H1D3kBF9/t0ynbz2IqfUdVHhL/WROQLGan2ijkfeT0s

debug1: found 4 insecure fingerprints in DNS

debug1: matching host key fingerprint found in DNS

The authenticity of host ' (2605:6400:10:434::10)' can't be established.
ECDSA key fingerprint is SHA256:H1D3kBF9/t0ynbz2IqfUdVHhL/WROQLGan2ijkfeT0s.
Matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? no

Jacob Evans


Reputation: 5 293


To do this properly, what you really want to do is collect the host public keys of the VMs as you create them and drop them into a file in known_hosts format. You can then use the -o GlobalKnownHostsFile=..., pointing to that file, to ensure that you're connecting to the host you believe you should be connecting to. How you do this depends on how you're setting up the virtual machines, however, but reading it off the virtual filesystem, if possible, or even getting the host to print the contents of /etc/ssh/ during configuration may do the trick.

That said, this may not be worthwhile, depending on what sort of environment you're working in and who your anticipated adversaries are. Doing a simple "store on first connect" (via a scan or simply during the first "real" connection) as described in several other answers above may be considerably easier and still provide some modicum of security. However, if you do this I strongly suggest you change the user known hosts file (-o UserKnownHostsFile=...) to a file specific for this particular test installation; this will avoid polluting your personal known hosts file with test information and make it easy to clean up the now useless public keys when you delete your VMs.
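The collection step described above can be sketched as follows. All the names here are invented for the example (`keys/<vm>.pub`, `vm_hosts.txt`, `test_known_hosts`), and a freshly generated key stands in for the host public key you would really capture from each VM's /etc/ssh directory at build time:

```shell
# Hypothetical layout: keys/<vm>.pub holds each VM's host public key,
# captured at image-build time; vm_hosts.txt lists the VM names.
mkdir -p keys
printf 'testvm1\n' > vm_hosts.txt                  # demo inventory
ssh-keygen -q -t ed25519 -N '' -f keys/testvm1     # stand-in for a real host key

# Build a known_hosts-format file from the collected keys.
while read -r name; do
  printf '%s %s\n' "$name" "$(cat "keys/$name.pub")"
done < vm_hosts.txt > test_known_hosts

# Then connect with:
#   ssh -o GlobalKnownHostsFile=test_known_hosts \
#       -o UserKnownHostsFile=/dev/null  testvm1 ...
```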

Curt J. Sampson


Reputation: 811


Here is how to do it for a collection of hosts:

define a collection of hosts


Then define two tasks to add the keys to known hosts:

- command: "ssh-keyscan {{item}}"
  register: known_host_keys
  with_items: "{{ssh_hosts}}"
  tags:
    - "ssh"

- name: Add ssh keys to known hosts
  known_hosts:
    name: "{{item.item}}"
    key: "{{item.stdout}}"
    path: ~/.ssh/known_hosts
  with_items: "{{known_host_keys.results}}"

Vackar Afzal


Reputation: 11