ProxyJump ssh not working on Baobab


I have resolved the LDAP issue I was having earlier, but I now have a problem using ProxyJump to ssh into the node I have reserved. My ~/.ssh/config is set up following this post:

With the actual contents being:

Host baobab
    User gercek

#Host cpu*
#    HostName %h
#    User gercek
#    ProxyJump baobab

Host cpu327
    HostName cpu327
    User gercek
    ProxyJump baobab

When I attempt to connect to node cpu327 (where I have a running reservation), I am prompted for a password, and my usual account password is not accepted. This happens both with the configuration above and with the wildcard Host block in place of the cpu327-specific one.

Any advice on a workaround?


For what it’s worth, this scheme works on Yggdrasil (even with the wildcard Host entry).


Please check this post:

This is weird, we’ll check what is going on.

Thanks! For the moment, if anyone else is facing similar issues, this setup in my ssh config file works:

Host baobab
    User gercek

Host cpu*
    User gercek
    RequestTTY force
    RemoteCommand ssh %n

I think it works because HostBasedAuthentication isn’t used with ProxyJump, so instead I define the host cpu* as the login node and manually run a second ssh command there to reach the given node name.
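The same workaround can also be run as a one-off command with no config changes; a sketch, assuming the `baobab` alias and node `cpu327` from the configs above:

```shell
# -t forces a TTY on the jump host (like RequestTTY force), and the trailing
# "ssh cpu327" is the per-connection equivalent of RemoteCommand ssh %n.
ssh -t baobab ssh cpu327
```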

@Yann.Sagon is there any news on this front? The workaround above is fine for plain ssh, but when I try to use it to connect VS Code to a compute allocation it fails, and the VS Code instance ends up running on the login node.


We had an issue: it wasn’t possible to connect to a node from login2 without a key. This is now solved, and HostBasedAuthentication is working again. Not sure if this helps you?

We’ll apply the fix to Yggdrasil during the maintenance next week.

No, unfortunately it still produces the same error: it requests a password when I attempt to connect to an allocation, and my UNIGE password does not work.

In this case cpu205 is where I have a job, and the following config in ~/.ssh/config

Host baobab
    User gercek
    RequestTTY force

Host cpu*
    Hostname %h
    User gercek
    ForwardX11 yes
    ProxyJump baobab

results in a successful jump to baobab but then prompts for a password for gercek@cpu205.

The issue is that with ProxyJump, ssh authenticates against the final target directly from the local machine, so the login node, and the host-based keys it holds for connecting to nodes, are bypassed. The only way around this is the workaround from my previous post, which runs ssh again on the jump host. That, however, breaks VS Code and is really not great.

Edit: Potential solution:
Is there a reason you don’t propagate our public keys to the nodes as well? This would solve the issue completely I believe, since we wouldn’t need the baobab login node host key to connect, but the nodes would still be inaccessible to those who don’t have access to the login node.

Prior to the switch to LDAP, the ~/.ssh/authorized_keys file was shared among the login and compute nodes. A ProxyJump config worked because sshd on the node would use that file to validate the user’s public key. Now that we use LDAP, the LDAP-stored public keys apparently live only on the login node, which is why this broke recently.
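As a per-user stopgap, one can install one’s own key into the shared file from the local machine; a sketch, assuming home directories are still shared between login and compute nodes (as the pre-LDAP behaviour suggests), the `baobab` host alias used earlier, and a key at the default path:

```shell
# Append the local public key to ~/.ssh/authorized_keys on the cluster.
# Because $HOME is shared, sshd on the compute nodes will see it too.
ssh-copy-id -i ~/.ssh/id_ed25519.pub baobab
```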

If you want to keep people from running remote VS Code processes on the login node, allowing them to work on compute nodes is a good way to encourage that.

I guess we still allow that, don’t we? I’m able to connect to cpu001.baobab directly from my laptop using ProxyJump:

[ysagon@localhost ~]$ ssh cpu001
Last login: Mon Sep 11 16:00:25 2023 from
Installed: Thu Aug 17 14:28:26 CEST 2023

This is my ssh config on my laptop:

[ysagon@localhost ~]$ cat .ssh/config
Host bao
   User sagon
Host cpu*
   HostName %h
   User sagon
   ProxyJump bao

I’ve also added my public key to Baobab’s .ssh/authorized_keys, and the corresponding private key is loaded in my laptop’s ssh-agent.
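For anyone checking the same setup, the agent side can be verified locally; a minimal sketch, assuming the key lives at the default `~/.ssh/id_ed25519` path:

```shell
eval "$(ssh-agent -s)"      # start an agent for this shell if none is running
ssh-add ~/.ssh/id_ed25519   # load the private key (asks for its passphrase)
ssh-add -l                  # list loaded keys; the fingerprint should appear
```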

For whatever reason the issue seems to have resolved itself! I changed nothing since my last post but it now works directly using SSH to a node where I have an allocation. Thank you!

Computing is magical :stuck_out_tongue_winking_eye:

Hi @Yann.Sagon,

Despite having followed all the suggested changes to the ssh config etc., I am still facing the problem described by @Berk.Gercek. Is there anything else I could try or check to solve this?

Thanks in advance,

@Yann.Sagon @Adrien.Albert @Gael.Rossignol would you have any ideas why the proxy jump doesn’t work for @Tomke.Schroeer ?

Running ssh -J -vvv cpu212, the ssh key is accepted by the login node, but the same key is rejected when it is sent to the worker node (receive packet: type 51).
It then falls back to asking for a password, but the ISIS+ password is not accepted either.
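To see where exactly the key is refused, the verbose output can be filtered; a sketch, assuming the `cpu*`/`baobab` aliases from the configs in this thread:

```shell
# -vvv logs every key offered to each hop on stderr; type 51 is
# SSH2_MSG_USERAUTH_FAILURE, i.e. the server rejected that auth attempt.
ssh -vvv cpu212 2>&1 | grep -E 'Offering|Server accepts|type 51'
```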

Unfortunately I cannot reproduce the issue with my ssh config and user account.


Dear Johnny,

I made a typo, please see the edited config.

Anyway, it seems there is a real issue: it is working for me but not for my colleague @Adrien.Albert. I’m testing from Rocky 9. @Adrien.Albert will investigate tomorrow.


Hi @Tomke.Schroeer

Which OS and kernel version do you use?

Hi @Tomke.Schroeer

I did the procedure again from the beginning and it’s working.

  1. On your local machine, save your old ssh key and create a new one (e.g. with ssh-keygen):
mkdir ~/.ssh/old
mv ~/.ssh/*  ~/.ssh/old

  2. On the cluster, make sure you have no id_rsa key (make a backup of it too). Copy the key, then wait 5-10 minutes until the synchronisation with AD is done.

  3. On your local machine, configure the ProxyJump:

[alberta@localhost .ssh]$ cat ~/.ssh/config

host bao
   User alberta
Host cpu*
   HostName %h
   User alberta
   ProxyJump bao
  4. Copy your public key into the authorized_keys file by running:
[alberta@localhost .ssh]$ ssh-copy-id -f
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/alberta/.ssh/"

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh ''"
and check to make sure that only the key(s) you wanted were added.

I checked, and my authorized_keys file contains my latest key (and others).

  5. Allocate a test job, then open a new tab on your local machine and try to connect to the allocated node:

:warning: Make sure your test is on the Baobab cluster

On baobab:

(baobab)-[alberta@login2 ~]$ salloc
salloc: Pending job allocation 5574654
salloc: job 5574654 queued and waiting for resources
salloc: job 5574654 has been allocated resources
salloc: Granted job allocation 5574654
salloc: Waiting for resource configuration
salloc: Nodes cpu001 are ready for job

On your local machine:
(My first test was on cpu026; this is the message I got)

[alberta@localhost .ssh]$ ssh cpu026
The authenticity of host 'cpu026 (<no hostip for proxy command>)' can't be established.
RSA key fingerprint is SHA256:tKqp4nljL+EGVKl8T0VF2nS36DkHVFMpLxQOPg/gKvg.
RSA key fingerprint is MD5:8f:75:c4:18:8a:75:f1:f1:19:4d:85:92:3b:b6:2a:e1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cpu026' (RSA) to the list of known hosts.
Last login: Tue Oct 24 10:49:29 2023
Installed: Thu Aug 17 14:40:08 CEST 2023

But it works on cpu001 too:

[alberta@localhost ~]$ ssh cpu001
Last login: Mon Oct 23 16:43:34 2023
Installed: Thu Aug 17 14:28:26 CEST 2023
(baobab)-[alberta@cpu001 ~]$

Hi @Adrien.Albert

Thanks for the comprehensive instructions! Unfortunately I still get asked for a password when trying to connect to the cpu node directly. I am using macOS Sonoma and kernel version 23.0.0.

Is there anything else I can try? Could there be a general problem with my account?


PS: Also, when I try to connect to the cpu node on Baobab from Yggdrasil, I get asked for a password for schroeer@cpu212.

Hello @Tomke.Schroeer ,

I am sorry, but I do not have any more ideas about your actual issue.

Are you available for a Zoom meeting to look in more detail at what is going wrong?


Yes, sure, let me know which room.

Following our Zoom meeting, we managed to connect directly to the compute node from a local machine.


Make sure to back up your .ssh directory on both your local machine and the clusters.

You should not have an id_rsa{,.pub} file in your ssh directory on the cluster. By default, SSH tries this key first, and if it differs from the one on your local machine it can cause conflicts.

However, by configuring an SSH profile in .ssh/config and specifying the keyfile to use, you may be able to make it work (not tested).
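A sketch of such a profile, assuming the host aliases used earlier in this thread and an ed25519 key at the default path (adjust both to your own setup):

```
Host baobab
    User gercek

Host cpu*
    HostName %h
    User gercek
    ProxyJump baobab
    # Offer only the key below, avoiding conflicts with a stale id_rsa
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes
```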