No, unfortunately it’s still producing the same error: it requests a password when attempting to connect to an allocation. My Unige password does not work.
In this case cpu205 is where I have a job, and the following config in ~/.ssh/config
```
Host baobab
    HostName login2.baobab.hpc.unige.ch
    User gercek
    RequestTTY force

Host cpu*
    HostName %h
    User gercek
    ForwardX11 yes
    ProxyJump baobab
```
results in a successful proxy jump through baobab but still prompts for a password for gercek@cpu205.
The issue is that ProxyJump only tunnels the connection through the jump host: authentication to the final target still happens directly from the local machine, so the login node, and any keys that live only there, is bypassed. The only way around this is the approach from my previous post, which executes ssh again on the jump host. That, however, breaks VS Code and is really not great.
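For reference, the workaround I mean looks roughly like this (a sketch; cpu205 is just the node from my example, and it relies on the keys stored on the login node being accepted by the compute nodes):

```
# Run a second ssh *on* the login node, so authentication to the
# compute node uses the keys on baobab rather than my local agent.
# -t allocates a TTY so the inner interactive session works.
ssh -t baobab ssh cpu205
```

Because the inner ssh runs on the jump host rather than locally, tools like VS Code Remote, which expect a single end-to-end connection, cannot use it.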
Edit: Potential solution:
Is there a reason you don’t propagate our public keys to the compute nodes as well? I believe this would solve the issue completely: we wouldn’t need the keys stored on the baobab login node to connect, and the nodes would still be inaccessible to anyone without access to the login node.
Prior to the switch to LDAP, the ~/.ssh/authorized_keys file was shared among the login and compute nodes. That allowed a ProxyJump config to work, since sshd on the node would use that file to validate the user’s public key. Now that LDAP is used, the LDAP-stored public keys apparently only get served on the login node, which is why this broke recently.
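As a sketch of what I imagine the fix would be on the compute nodes: have their sshd look up public keys from LDAP the same way the login node presumably does. The exact lookup helper and its path below are assumptions on my part, not something I’ve seen on the cluster:

```
# /etc/ssh/sshd_config on the compute nodes (sketch, helper path assumed)
# Serve per-user public keys from LDAP instead of a local
# ~/.ssh/authorized_keys file, matching the login node's setup.
AuthorizedKeysCommand /usr/libexec/openssh/ssh-ldap-helper -s %u
AuthorizedKeysCommandUser nobody
```

With that in place, sshd on a node would validate the same public key we use to reach the login node, and a plain ProxyJump config would authenticate end to end.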
If you want to keep people from running remote VS Code processes on the login node, allowing this to work on the compute nodes would be a good way to encourage that.