Chown command is taking too long and container does not start

Hi

We are running Gitea via Docker on Rancher, and we are having issues with the container starting due to the chown command (for git/data, session data, etc.) that is run when the container starts.

Our container runs on one host and all the data is mounted on an NFS volume. Over the weekend the NFS server was rebooted, so the container went down; it had been running for about 3 months. The result is that the chown command now takes over an hour to run, so the container doesn’t start in time, fails the health check, and is restarted, and this happens over and over. So I disabled the health check to get it up and running. There’s a similar comment in this post.

I see our session config is just:

[session]
PROVIDER_CONFIG = /data/gitea/sessions
PROVIDER        = file

So I can update it to include the GC option, along the lines of the sketch below.
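
If I’m reading the config cheat sheet right, the relevant keys are GC_INTERVAL_TIME and SESSION_LIFE_TIME, both in seconds (so 86400 is one day); this is just my reading of the docs, not a tested config:

[session]
PROVIDER_CONFIG   = /data/gitea/sessions
PROVIDER          = file
; how often the session GC runs, in seconds (one day)
GC_INTERVAL_TIME  = 86400
; how long a session lives before it is eligible for GC, in seconds
SESSION_LIFE_TIME = 86400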

However, what I’d like to do is delete all the session data and perhaps update the ini so that it uses in-memory sessions (sketched after the questions below). Can someone please tell me if it’s

  1. Safe to delete the session data? Any implications?
  2. Other than losing the user sessions, what are the risks of using in-memory? We are using Gitea internally, so I don’t think keeping user sessions is vital, and I would have thought deleting them would be OK?
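
For reference, the in-memory variant I have in mind would just be the memory provider; as far as I can tell, PROVIDER_CONFIG shouldn’t be needed then (again, untested, just my reading of the docs):

[session]
PROVIDER = memory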

Cheers

Hi Support

Please could I get some advice on the above questions?

Thanks

Hi @Jonny147,

As you mentioned, users may be logged out when you switch from file-based sessions to in-memory sessions, and the same goes for when the session data is deleted. If you are going to delete the sessions or switch to in-memory, I recommend that 1. you have a backup, and 2. you do it during a non-peak usage time.

As in-memory sessions only last as long as the binary is running, I also suggest you check out the redis session connector: it keeps the sessions in memory and persists that data to disk, without running into the many-files problem you are experiencing.
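
For example, something like this; the address, db, and pool values are placeholders you would adapt to your own Redis instance:

[session]
PROVIDER        = redis
; placeholder connection string, adjust host/port/db for your setup
PROVIDER_CONFIG = network=tcp,addr=127.0.0.1:6379,db=0,pool_size=100,idle_timeout=180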

Hope this helps,
@techknowlogick

Hi @techknowlogick

Thanks for the reply. From what you say, I think I won’t use the in-memory option: the container previously ran with no downtime for 3 months, and if session data lives as long as the binary is running, memory usage would eventually go through the roof!

Therefore, I think what I’ll do is:

  • stop the container
  • back up the current sessions
  • delete them from the configured location
  • keep using the file provider but set the garbage collection option to one day (the GC_INTERVAL_TIME/SESSION_LIFE_TIME keys sketched above), which isn’t set in our current config
  • restart the container
  • then monitor the GC process to check that sessions are removed after a day and don’t keep growing

Cheers,
Jonny