[Verifying my OpenPGP key: openpgp4fpr:FED82F1C73FF53FB1EE9926336615E0FD12833CF]

  • 1 Post
  • 14 Comments
Joined 4 years ago
Cake day: February 18th, 2021

  • I start a separate ssh-agent for every connection group, each with different ssh keys in it. And I regularly connect from my laptop to my desktop machine and forward the agent to the desktop. This is a setup I need.

    And I have a script which, via a Match section in my ssh config, chooses the ssh-agent I need for this connection group. The script automatically starts an ssh-agent, loads the identities (private keys, hardware tokens…) into it, and via the config file it is chosen as IdentityAgent.
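    Roughly, a per-group IdentityAgent setup like that could look like this in ~/.ssh/config (the host patterns, socket paths, and helper script name here are invented for illustration, not my actual config):

    ```
    # Hypothetical example: one agent socket per connection group
    Match host *.work.example.com
        IdentityAgent ~/.ssh/agents/work.sock

    Match host *.home.example.net
        IdentityAgent ~/.ssh/agents/home.sock

    # Match exec can also run a script that starts the agent on demand
    Match host *.lab.example.org exec "~/.ssh/start-agent.sh lab"
        IdentityAgent ~/.ssh/agents/lab.sock
    ```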

    When I’m connected to my desktop from my laptop and I work on the desktop, I use the forwarded agent, because I have some keys only on my laptop which I want to use from my desktop as well. So I symlink the forwarded agent socket to the IdentityAgent path that is configured in my ssh config for this connection… When there is no forwarded ssh-agent, the symlink is deleted and a new agent is started with a socket file on the same path.

    It sounds a bit complicated… and yes, it is.

    And I don’t get why sometimes the socket file is deleted and sometimes it remains. Now I tested it from home over the remote connection. The temporary, forwarded agent socket is a symlink to my regular socket file, and I killed the running ssh-agent… and the symlink was removed as well.

    It is strange behaviour… a process unlinks a socket file which does not belong to it, only the name is the same… and not every time.
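    For what it’s worth, the link-or-start logic can be sketched roughly like this (the function name and paths are my own invention, not the actual script):

    ```shell
    #!/bin/sh
    # Hypothetical sketch: point a fixed agent path at a forwarded agent
    # socket when one exists, otherwise start a fresh agent on that path.
    link_or_start_agent() {
        fwd="$1"    # forwarded socket, e.g. "$SSH_AUTH_SOCK" (may be empty)
        fixed="$2"  # path referenced by IdentityAgent in the ssh config

        if [ -n "$fwd" ] && [ -S "$fwd" ]; then
            ln -sf "$fwd" "$fixed"    # reuse the forwarded agent
            echo "linked"
        else
            rm -f "$fixed"            # drop a stale symlink
            ssh-agent -a "$fixed" >/dev/null 2>&1
            echo "started"
        fi
    }
    ```

    Called as `link_or_start_agent "$SSH_AUTH_SOCK" ~/.ssh/agents/work.sock`, it reuses a forwarded agent when one is there and otherwise starts a local one on the same path.
    
    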



  • The services should be able to talk to each other via ssh?

    Or do you have groups of servers?

    How many are we talking about?

    Are they all virtual servers?

    Where is the hub located?

    In our company we have many services and many servers. We are talking about hundreds of services and servers. And they are very secure.

    So we have the servers on big ESXi hosts (more than one) in 3 datacenters.

    There is one jumphost (highly available… several instances). A direct connection from our workstations to a server is not possible; we have to use this jumphost. Logging in on the jumphost itself is not possible, it is only for jumping (ssh option -J).
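    On the client side, that forced jump can be put into ~/.ssh/config instead of typing -J every time (host names here are invented):

    ```
    # Hypothetical example: always reach internal servers via the jumphost
    Host jump
        HostName jump.example.com
        User alice

    Host *.internal.example.com
        ProxyJump jump
    ```

    This is the config-file equivalent of `ssh -J jump server01.internal.example.com`.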

    On the jumphost, each user has the public key from a hardware token (YubiKey, eToken, Nitrokey, you name it) in their authorized_keys file. Only one pubkey.

    So you are not able to jump over the jumphost to a server without a valid hardware token.

    A NAT rule gives each user an individual source IP…

    Then you can see in the audit log on each server who did the shit… even if he did sudo su… the source IP is individualized for each user.
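    One way such per-user source IPs could be implemented on a Linux jumphost is an owner-matched SNAT rule per user; this is only a sketch under that assumption, with interface, user, and address invented:

    ```
    # Hypothetical: outgoing ssh traffic from user "alice" leaves the
    # jumphost with her personal source address
    iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 22 \
        -m owner --uid-owner alice -j SNAT --to-source 192.0.2.101
    ```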

    And services run in different subnets and VLANs without connections to each other. So only the services that must talk can talk together.

    Another server is an Ansible machine. This can connect to every single server too and do good and really bad things… so this Ansible machine and the jumphost are in a physically secured zone in the datacenter.

    You need an extra permission and an extra physical key to get to these machines…

    And if one service gets compromised, at most the servers in the same VLAN or subnet can be affected too. Plus the servers which got an extra firewall hole.

    So… if you are afraid of using ssh in your environment…

    Use hardware keys for the ssh private key. No software keys! If machines need to talk together via ssh, build the smallest possible jails around them with subnets or VLANs. Think about allowed commands in the ssh config/authorized_keys file! Think about a jumphost, and allow each user only the machines they need. Think about physical protection of the jumphost. Think about server-initiated backups…
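    The “allowed commands” point means the options you can put in front of a key in authorized_keys; for example (key shortened, script path invented):

    ```
    # Hypothetical: this key may only run the backup script, nothing else,
    # and only from one source address
    restrict,from="192.0.2.10",command="/usr/local/bin/run-backup" ssh-ed25519 AAAAC3... backup@server
    ```

    With `restrict` the key also loses port, agent, and X11 forwarding, so even a stolen key can only trigger that one command.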

    👍