I set up a new instance a week or two ago, and have some subscriptions to communities on lemmy.world. The logs for the Lemmy process are currently showing a constant flood of warnings for what looks like ordinary activity (likes, creates, undos, etc.). Here are two recent entries:
lemmy-1 | 2024-05-13T21:41:48.243217Z WARN lemmy_server::root_span_builder: Unknown:
lemmy-1 | 0: lemmy_apub::insert_received_activity
lemmy-1 | with ap_id=Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("lemmy.world")), port: None, path: "/activities/announce/like/3a96d6df-3229-4e5a-a6d1-ce2f6f3ca3d5", query: None, fragment: None }
lemmy-1 | at crates/apub/src/lib.rs:198
lemmy-1 | 1: lemmy_apub::activities::community::announce::receive
lemmy-1 | at crates/apub/src/activities/community/announce.rs:153
lemmy-1 | 2: lemmy_server::root_span_builder::HTTP request
lemmy-1 | with http.method=POST http.scheme="http" http.host=leftopia.org http.target=/inbox otel.kind="server" request_id=43212a7e-1c3c-4b6a-84d6-4cd6082af392
lemmy-1 | at src/root_span_builder.rs:16
lemmy-1 | 2024-05-13T21:41:48.356594Z WARN lemmy_server::root_span_builder: Unknown:
lemmy-1 | 0: lemmy_apub::insert_received_activity
lemmy-1 | with ap_id=Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("lemmy.world")), port: None, path: "/activities/announce/like/9f70cd5b-175a-4ac3-a852-322494f01981", query: None, fragment: None }
lemmy-1 | at crates/apub/src/lib.rs:198
lemmy-1 | 1: lemmy_apub::activities::community::announce::receive
lemmy-1 | at crates/apub/src/activities/community/announce.rs:153
lemmy-1 | 2: lemmy_server::root_span_builder::HTTP request
lemmy-1 | with http.method=POST http.scheme="http" http.host=leftopia.org http.target=/inbox otel.kind="server" request_id=050271b4-2081-4264-936a-329d70374239
lemmy-1 | at src/root_span_builder.rs:16
Tailing the Docker logs is like watching a torrent of these warnings, and I’m worried about the stress it’s adding to my fairly meager VM. Any tips on how to debug this? I’m proficient with Rust, so I’m willing and able to jump into the code if it’ll help.
Might just be 'cos LW is currently sending every activity out twice. Video from earlier today: https://imgur.com/a/lX0LPZk
You could probably tail your nginx log in another terminal to see if the journalctl errors match up with 400 errors for duplicates there…
Might not be this, of course (could just be that I saw this earlier, and it’s influencing my thinking)
It’s definitely happening when I’m getting updates from lemmy.world, and while I don’t know how to get at the HTTP details you’re showing in your video, I do see a lot of 400s in the nginx log from Docker:
proxy-1 | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
proxy-1 | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
proxy-1 | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
proxy-1 | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
proxy-1 | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
proxy-1 | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
proxy-1 | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
proxy-1 | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
I’ve been digging into the source code, and if it’s just a duplicate record in the database, the fix could be as simple as adding a couple of lines of code here to silently ignore the duplication error: https://github.com/LemmyNet/lemmy/blob/main/crates/apub/src/lib.rs#L211
Edit: on second thought, it might be better to deal with it higher up the call stack here: https://github.com/LemmyNet/lemmy/blob/main/crates/apub/src/activities/community/announce.rs#L163
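For reference, here’s the rough shape I’m picturing, written against diesel’s error types. This is a sketch of my own, not code from the Lemmy tree; the helper name is made up, and the real change would live inside insert_received_activity (or the announce receiver, per the edit above):

use diesel::result::{DatabaseErrorKind, Error as DieselError};

/// Hypothetical helper: treat a unique-constraint violation as success,
/// since a duplicate ap_id just means the activity was already recorded.
fn ignore_duplicate(result: Result<usize, DieselError>) -> Result<(), DieselError> {
    match result {
        // The INSERT succeeded; the row count doesn't matter here.
        Ok(_) => Ok(()),
        // The UNIQUE constraint on ap_id fired: this is a harmless re-delivery.
        Err(DieselError::DatabaseError(DatabaseErrorKind::UniqueViolation, _)) => Ok(()),
        // Anything else is a real database problem and should still propagate.
        Err(e) => Err(e),
    }
}

Alternatively, diesel’s .on_conflict_do_nothing() on the insert statement would sidestep the error entirely, at the cost of no longer being able to distinguish a duplicate from a fresh insert.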
Records can’t be duplicated in the database; the activity id is a unique key:
lemmy=# \d sent_activity
                                      Table "public.sent_activity"
           Column            |           Type           | Collation | Nullable |                  Default
-----------------------------+--------------------------+-----------+----------+-------------------------------------------
 id                          | bigint                   |           | not null | nextval('sent_activity_id_seq'::regclass)
 ap_id                       | text                     |           | not null |
 data                        | json                     |           | not null |
 sensitive                   | boolean                  |           | not null |
 published                   | timestamp with time zone |           | not null | now()
 send_inboxes                | text[]                   |           | not null |
 send_community_followers_of | integer                  |           |          |
 send_all_instances          | boolean                  |           | not null |
 actor_type                  | actor_type_enum          |           | not null |
 actor_apub_id               | text                     |           |          |
Indexes:
    "sent_activity_pkey" PRIMARY KEY, btree (id)
    "sent_activity_ap_id_key" UNIQUE CONSTRAINT, btree (ap_id)
Yes, and if a duplicate does arrive (as appears to be happening), the current code doesn’t do anything about the corresponding database error, resulting in a scary multi-line warning for something that could be safely ignored. A new Lemmy administrator (like me) has no way of knowing that this is at best an info-level event, or even just a debug-level one, since it has no real effect on anything.
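If swallowing the error entirely feels too aggressive, a middle ground would be to log the duplicate explicitly at debug level with tracing, something like this (the function name and message are made up for illustration):

use tracing::debug;

// Hypothetical handler for the duplicate case: keep a record at debug
// level so it's still visible when needed, without the WARN-level noise.
fn log_duplicate_activity(ap_id: &str) {
    debug!(%ap_id, "ignoring already-received activity");
}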
Ah, I was misreading your comment; I thought you were talking about the sending side. For the receiving side I agree, but the reason for the duplicate activities is yet to be found: https://github.com/LemmyNet/lemmy/issues/4609
The video was a recording of a VM that had been tunnelled to with ngrok, so the HTTP details were from that software’s ‘Inspect’ window.
(I don’t know how best to configure your server to ignore the errors, because Rust is too scary, ha ha).
Cool, I’ve been meaning to check out ngrok sometime. Looks really useful.
I don’t think there’s a way to filter these out, since the warning comes from an unhandled error rather than a logging setting. I have some ideas for a code fix now, and may submit a PR for it in the near future.