It’s probably due to a number of people who stopped using their alts after some instance hopping.
Also a few people who came to see how it was, and weren’t attracted enough to become regular visitors.
Curious to see at which number we’ll stabilize.
The next peak will probably happen after either a major feature release (e.g. exhaustive mod tools allowing reluctant communities to move from Reddit) or the next Reddit fuck-up (e.g. removing old.reddit).
Stats on each server: https://lemmy.fediverse.observer/list
I already did an insane amount of work to populate a Lemmy database with over 10 million posts. It is so incredibly slow out of the box that doing this through the normal API would take days. I had to rewrite the SQL TRIGGER logic to allow bulk inserts.
Here is my work on that:
DROP TRIGGER site_aggregates_post_insert ON public.post;
/* TRIGGER will be replaced with per-statement INSERT only */
CREATE TRIGGER site_aggregates_post_insert
    AFTER INSERT ON public.post
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION site_aggregates_post_insert();

DROP TRIGGER community_aggregates_post_count ON public.post;
/* TRIGGER will be replaced with per-statement INSERT only */
CREATE TRIGGER community_aggregates_post_count
    AFTER INSERT ON public.post
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION community_aggregates_post_count();

DROP TRIGGER person_aggregates_post_count ON public.post;
/* TRIGGER will be replaced with per-statement INSERT only */
CREATE TRIGGER person_aggregates_post_count
    AFTER INSERT ON public.post
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION person_aggregates_post_count();

/* TRIGGER will be replaced with per-statement INSERT only
   no Lemmy-delete or SQL DELETE to be performed during this period. */
CREATE OR REPLACE FUNCTION public.site_aggregates_post_insert() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE site_aggregates
    SET posts = posts + (SELECT count(*) FROM new_rows WHERE local = true)
    WHERE site_id = 1;
    RETURN NULL;
END
$$;

CREATE OR REPLACE FUNCTION public.community_aggregates_post_count() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE community_aggregates ca
    SET posts = posts + p.new_post_count
    FROM (
        SELECT count(*) AS new_post_count, community_id
        FROM new_rows
        GROUP BY community_id
    ) AS p
    WHERE ca.community_id = p.community_id;
    RETURN NULL;
END
$$;

/* TRIGGER will be replaced with per-statement INSERT only
   no Lemmy-delete or SQL DELETE to be performed during this period. */
CREATE OR REPLACE FUNCTION public.person_aggregates_post_count() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE person_aggregates personagg
    SET post_count = post_count + p.new_post_count
    FROM (
        SELECT count(*) AS new_post_count, creator_id
        FROM new_rows
        GROUP BY creator_id
    ) AS p
    WHERE personagg.person_id = p.creator_id;
    RETURN NULL;
END
$$;

/* ***********************************************************************************************
** comment table */
DROP TRIGGER post_aggregates_comment_count ON public.comment;
/* TRIGGER will be replaced with per-statement INSERT only */
CREATE TRIGGER post_aggregates_comment_count
    AFTER INSERT ON public.comment
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION post_aggregates_comment_count();

-- IMPORTANT NOTE: this logic for INSERT TRIGGER always assumes that the published datestamp is now(),
-- which was a logical assumption with general use of Lemmy prior to federation being added.

CREATE OR REPLACE FUNCTION public.post_aggregates_comment_count() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE -- per statement update 1
        post_aggregates postagg
    SET comments = comments + c.new_comment_count
    FROM (
        SELECT count(*) AS new_comment_count, post_id
        FROM new_rows
        GROUP BY post_id
    ) AS c
    WHERE postagg.post_id = c.post_id;

    UPDATE -- per statement update 2
        post_aggregates postagg
    SET newest_comment_time = max_published
    FROM (
        SELECT MAX(published) AS max_published, post_id
        FROM new_rows
        GROUP BY post_id
    ) AS c
    WHERE postagg.post_id = c.post_id;

    UPDATE -- per statement update 3
        post_aggregates postagg
    SET newest_comment_time_necro = max_published
    FROM (
        SELECT MAX(published) AS max_published, post_id, creator_id
        FROM new_rows
        WHERE published > ('now'::timestamp - '2 days'::interval)
        GROUP BY post_id, creator_id
    ) AS c
    WHERE postagg.post_id = c.post_id
        AND c.creator_id != postagg.creator_id;
    RETURN NULL;
END
$$;

DROP TRIGGER community_aggregates_comment_count ON public.comment;
CREATE TRIGGER community_aggregates_comment_count
    AFTER INSERT ON public.comment
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION public.community_aggregates_comment_count();

CREATE OR REPLACE FUNCTION public.community_aggregates_comment_count() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE community_aggregates ca
    SET comments = comments + p.new_comment_count
    FROM (
        SELECT count(*) AS new_comment_count, community_id
        FROM new_rows AS nr
        JOIN post AS pp ON nr.post_id = pp.id
        GROUP BY pp.community_id
    ) AS p
    WHERE ca.community_id = p.community_id;
    RETURN NULL;
END
$$;

DROP TRIGGER person_aggregates_comment_count ON public.comment;
CREATE TRIGGER person_aggregates_comment_count
    AFTER INSERT ON public.comment
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION public.person_aggregates_comment_count();

CREATE OR REPLACE FUNCTION public.person_aggregates_comment_count() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE person_aggregates personagg
    SET comment_count = comment_count + p.new_comment_count
    FROM (
        SELECT count(*) AS new_comment_count, creator_id
        FROM new_rows
        GROUP BY creator_id
    ) AS p
    WHERE personagg.person_id = p.creator_id;
    RETURN NULL;
END
$$;

DROP TRIGGER site_aggregates_comment_insert ON public.comment;
CREATE TRIGGER site_aggregates_comment_insert
    AFTER INSERT ON public.comment
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION public.site_aggregates_comment_insert();

CREATE OR REPLACE FUNCTION public.site_aggregates_comment_insert() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE site_aggregates
    SET comments = comments + (SELECT count(*) FROM new_rows WHERE local = true)
    WHERE site_id = 1;
    RETURN NULL;
END
$$;
With this in place, 300,000 posts a minute can be generated, and reaching 5 or 10 million posts doesn’t take too long.
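For reference, the kind of bulk loading I’m talking about looks roughly like this. It is only a sketch: the column list is abbreviated and the creator/community IDs are placeholders, so adapt it to the actual post schema and existing rows before running it.

-- One multi-row INSERT fires each AFTER INSERT ... FOR EACH STATEMENT trigger exactly once,
-- instead of once per row as the stock FOR EACH ROW triggers would.
INSERT INTO post (name, body, creator_id, community_id, local, published)
SELECT
    'benchmark post ' || i,   -- generated title
    'synthetic body text',    -- body
    3,                        -- placeholder: an existing person id
    19,                       -- placeholder: an existing community id
    true,                     -- local
    now()
FROM generate_series(1, 100000) AS i;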
That’s really cool work! It’s a bit beyond my pay grade, so I can’t really comment too much about it.
I had a look at the PR you mentioned, and again, while I can’t comment on the contents because I am a little out of my depth, may I voice my opinion on the exchange? This is coming from a place of trying to help, since I really do appreciate all the work you’ve put in and are putting in, and the fediverse can really use your talents, so I hope I don’t offend you.
From my reading, it didn’t appear that you were being ignored/hazed, and it seemed like the devs would have been open to your improvements. From working on and leading big teams, I’ve noticed that communication and managing emotions are often much harder than writing code. In the thread, it appeared that communication had broken down on both sides (and that seemed to have been the case in prior interactions too). Since you mentioned your struggles with autism in the thread, I wonder if that played a part in the tone of the devs perhaps being misinterpreted? This is, of course, only my interpretation, and I could be completely wrong.
Ultimately Lemmy itself is an example of trying to build a community and consensus amongst a broad and diverse group of people, who will often not see eye to eye.
In any case I would like to say I personally appreciate your hard work and really do hope you’re able to help make Lemmy better. Thank you!
Ok, so let’s look at recent changes that they have deployed… https://github.com/LemmyNet/lemmy/issues/3886
One of which makes an entire tree of comments disappear. Do you see developers fretting over this and fixing it? Or do you see them ignoring the May 27 PostgreSQL JOIN problem?
How did such a bug go out? Do you see Lemmy developers actually using Lemmy to test things and notice these crashes and problems? Do you look at their posting and comment history? Do they actually log in over at Beehaw and Lemmy.world and see just how terrible the code performance is?
If it isn’t hazing, what is it?
It’s as if they build a product only for other people to use… and they don’t notice any of the constant crashes, the incredibly slow performance, etc., and they act like nobody in the computer industry has ever heard of Memcache or Redis for solving performance problems. If it isn’t extreme hazing going on, then what is it?
I’ve been reading a lot of your exchanges on the Lemmy GitHub and I can tell you with a high degree of confidence that you are not the subject of a hazing ritual. What’s going on is a miscommunication issue; there’s no ill will directed towards you.
The Lemmy devs are under a great deal of stress these days due to the recent influx of activity, both on the big Lemmy instances as well as the Lemmy GitHub. You’ve clearly gone to great lengths to investigate various SQL bottlenecks in Lemmy, and this work does not go unnoticed or unappreciated.
The problem you’re likely running into is that the Lemmy devs are trying to address a wide array of issues, whereas you are zoomed in on some very specific performance problems. Whether or not the core devs are wrong when they say your findings are irrelevant is beside the point. What they are really saying is that they do not have the attention bandwidth to try to see what you are seeing right now.
If you find yourself unable to work with the Lemmy project, there are other fedi projects in Rust, like Mitra or Kitsune, which might be more receptive to your contributions. I’m personally very interested in seeing rudimentary Lemmy (Groups et al.) compatibility in Kitsune.
Can you explain to me why it isn’t social hazing?
Do you know how to read a SQL statement? I just can’t grasp how it isn’t social hazing. I’ve been reading SQL statements for decades; this is obviously a problematic one.
Can you offer alternate explanations of how 3 people could think that SQL statement isn’t… poorly performing and going to cause problems? And how an SQL statement without a WHERE clause took them months to discover and fix?
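To illustrate the class of bug I mean, here is a made-up sketch (not the actual statement from the Lemmy codebase): an UPDATE meant to touch one row, but missing its WHERE clause, silently rewrites the whole table.

-- Hypothetical illustration only, not the real Lemmy statement.
-- Intended: bump the comment count for a single post.
UPDATE post_aggregates SET comments = comments + 1 WHERE post_id = 42;
-- Broken: with the WHERE clause missing, every row in post_aggregates is rewritten,
-- turning a one-row update into a full-table write each time it runs.
UPDATE post_aggregates SET comments = comments + 1;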
Extreme hazing is my best answer. I just can’t accept that the SQL statements don’t speak for themselves, along with the server crashes. 57K users across 1,300 servers, and it’s taking several seconds to load 10 posts…
Look at the date… May… this has been going on since May. If it isn’t social hazing … what is it? I keep asking myself that.
Like I said, this was my interpretation based on reading that exchange. It’s difficult to convey tone or intention with text, but I didn’t detect hostility from the devs; I did sense that they were frustrated that process wasn’t being followed. Perhaps they should not have gotten hung up on that, but it didn’t appear to be malicious.
I do, and your arguments about the joins being problematic seemed solid. From having worked on systems at huge scale, I also agree that Lemmy doesn’t seem to be big enough to be brought to its knees by the volume of posts it’s processing. However, I’m far from an expert, so I don’t want to suggest any certainty about the root causes, especially as I don’t have the energy or inclination to dig as deep into it as I would need to in order to form that opinion.
I don’t know why they weren’t receptive, but perhaps they themselves felt attacked. I know that wasn’t your intention, but misunderstandings happen, especially over text.
Funny, because I’m a published author and expert on messaging systems… like Lemmy. I’ve been building them professionally since 1986.
There was a massive thread I posted dozens of comments on that came before today’s pull request… I suggest you read that too.
Did you notice them even acknowledge that server crashes are happening? Do you think the developers ever suggest Memcache or Redis? Or discuss how Reddit solved their scaling in 2010 with PostgreSQL?
I don’t have any trouble understanding a bad SQL statement that has 14 JOINs, or being told “JOIN is a distraction” after posting tons of examples.
Do we really need to spoon-feed the stuff I did post?
Have you never seen social hazing in action? Is it possible that I might be onto something going on psychologically besides my autism?
I can’t believe anyone thinks a server should be crashing with 1 user on it.
Okay, I can’t speak to whether social hazing happened or not, but I can tell you that you’re making me extremely uncomfortable.
I started a dialogue, but at this point you’re now sending multiple messages for each of my replies, and asking a lot from me in terms of attention. I do not wish to continue this conversation, but I wish you all the best.
Welcome to discussions with RoundSparow!
It can be a bit tiring interaction-wise, but you can usually learn a lot.
Haha, indeed. Any time I see an open-source discussion (especially a heated one), I’m reminded about just how much effort it takes to contribute. I’m happy to just stick to browsing memes :P
Who would have predicted that Elon Musk would do all the wild things he did with Twitter? Reddit pissing everyone off in June… it’s pretty odd how audiences are behaving towards all this in 2023. Oh yeah, Threads coming onto the scene too. 2023 has really been odd for audiences.
The SQL speaks for itself, but I don’t know why people are treating social media platforms like Lemmy, Twitter, Threads, and Reddit so unusually this year. This SQL statement kind of thing has been covered in so many books, conferences, etc. It’s like forgotten history now, in the era of Elon Musk’s X and the Reddit Apollo times.
I don’t know what to say other than that I can try to hire a translator or teacher to explain how this SQL problem is obvious and was well understood 13 years ago. I mean, there was a whole “NoSQL movement” because of this kind of thing. But I clearly can’t get people to hear past all the Elon Musk, Threads, Lemmy from Reddit… and I’m left describing it as ‘social hazing’ or whatever is going on with social media.
Lemmy has something like 5 different Rust programming communities, but nobody fixing Lemmy. It’s surreal in 2023, the Elon Musk X days. I think the social movement underway is making all of us uncomfortable.
Here, you can dig into what I posted days before the pull request you read:
https://github.com/LemmyNet/lemmy/issues/2877#issuecomment-1685314733
June 4:
More than 8 JOIN statements is something PostgreSQL specifically concerns itself with (join_collapse_limit). I hand-edited the query to use a single IN clause and the performance problem disappears: 8 full seconds becomes less than 200ms against 5,431,043 posts. And that 200ms is still high, as I was extremely over-reaching with “LIMIT 1000” in case the end user went wild with blocking lists or some other filtering before reaching the final “LIMIT 10”. When I change it to “LIMIT 20” in the subquery, it drops almost in half, to 115ms, still meeting the needs of the outer “LIMIT 10” by double. More of the core query filtering can be put into the IN subquery, as we aren’t dealing with page lengths of more than 500 (currently limited to 50).
SELECT "post"."id" AS post_id, "post"."name" AS post_title, -- "post"."url", "post"."body", "post"."creator_id", "post"."community_id", "post"."removed", "post"."locked", "post"."published", "post"."updated", "post"."deleted", "post"."nsfw", "post"."embed_title", "post"."embed_description", "post"."thumbnail_url", -- "post"."ap_id", "post"."local", "post"."embed_video_url", "post"."language_id", "post"."featured_community", "post"."featured_local", "person"."id" AS p_id, "person"."name", -- "person"."display_name", "person"."avatar", "person"."banned", "person"."published", "person"."updated", -- "person"."actor_id", "person"."bio", "person"."local", "person"."private_key", "person"."public_key", "person"."last_refreshed_at", "person"."banner", "person"."deleted", "person"."inbox_url", "person"."shared_inbox_url", "person"."matrix_user_id", "person"."admin", -- "person"."bot_account", "person"."ban_expires", "person"."instance_id" AS p_inst, "community"."id" AS c_id, "community"."name" AS community_name, -- "community"."title", "community"."description", "community"."removed", "community"."published", "community"."updated", "community"."deleted", -- "community"."nsfw", "community"."actor_id", "community"."local", "community"."private_key", "community"."public_key", "community"."last_refreshed_at", "community"."icon", "community"."banner", -- "community"."followers_url", "community"."inbox_url", "community"."shared_inbox_url", "community"."hidden", "community"."posting_restricted_to_mods", "community"."instance_id" AS c_inst, -- "community"."moderators_url", "community"."featured_url", ("community_person_ban"."id" IS NOT NULL) AS ban, -- "post_aggregates"."id", "post_aggregates"."post_id", "post_aggregates"."comments", "post_aggregates"."score", "post_aggregates"."upvotes", "post_aggregates"."downvotes", "post_aggregates"."published", -- "post_aggregates"."newest_comment_time_necro", "post_aggregates"."newest_comment_time", "post_aggregates"."featured_community", "post_aggregates"."featured_local", --"post_aggregates"."hot_rank", "post_aggregates"."hot_rank_active", "post_aggregates"."community_id", "post_aggregates"."creator_id", "post_aggregates"."controversy_rank", -- "community_follower"."pending", ("post_saved"."id" IS NOT NULL) AS save, ("post_read"."id" IS NOT NULL) AS read, ("person_block"."id" IS NOT NULL) as block, "post_like"."score", coalesce(("post_aggregates"."comments" - "person_post_aggregates"."read_comments"), "post_aggregates"."comments") AS unread FROM ( (((((((((( ( ( "post_aggregates" INNER JOIN "person" ON ("post_aggregates"."creator_id" = "person"."id") ) INNER JOIN "community" ON ("post_aggregates"."community_id" = "community"."id") ) LEFT OUTER JOIN "community_person_ban" ON (("post_aggregates"."community_id" = "community_person_ban"."community_id") AND ("community_person_ban"."person_id" = "post_aggregates"."creator_id")) ) INNER JOIN "post" ON ("post_aggregates"."post_id" = "post"."id") ) LEFT OUTER JOIN "community_follower" ON (("post_aggregates"."community_id" = "community_follower"."community_id") AND ("community_follower"."person_id" = 3)) ) LEFT OUTER JOIN "community_moderator" ON (("post"."community_id" = "community_moderator"."community_id") AND ("community_moderator"."person_id" = 3)) ) LEFT OUTER JOIN "post_saved" ON (("post_aggregates"."post_id" = "post_saved"."post_id") AND ("post_saved"."person_id" = 3)) ) LEFT OUTER JOIN "post_read" ON (("post_aggregates"."post_id" = "post_read"."post_id") AND ("post_read"."person_id" = 3)) ) LEFT OUTER JOIN 
"person_block" ON (("post_aggregates"."creator_id" = "person_block"."target_id") AND ("person_block"."person_id" = 3)) ) LEFT OUTER JOIN "post_like" ON (("post_aggregates"."post_id" = "post_like"."post_id") AND ("post_like"."person_id" = 3)) ) LEFT OUTER JOIN "person_post_aggregates" ON (("post_aggregates"."post_id" = "person_post_aggregates"."post_id") AND ("person_post_aggregates"."person_id" = 3)) ) LEFT OUTER JOIN "community_block" ON (("post_aggregates"."community_id" = "community_block"."community_id") AND ("community_block"."person_id" = 3))) LEFT OUTER JOIN "local_user_language" ON (("post"."language_id" = "local_user_language"."language_id") AND ("local_user_language"."local_user_id" = 3)) ) WHERE post_aggregates.id IN ( SELECT id FROM post_aggregates WHERE "post_aggregates"."creator_id" = 3 ORDER BY "post_aggregates"."featured_local" DESC , "post_aggregates"."published" DESC LIMIT 1000 ) AND ((((((( ( (("community"."deleted" = false) AND ("post"."deleted" = false)) AND ("community"."removed" = false)) AND ("post"."removed" = false) ) AND ("post_aggregates"."creator_id" = 3) ) AND ("post"."nsfw" = false)) AND ("community"."nsfw" = false) ) AND ("local_user_language"."language_id" IS NOT NULL) ) AND ("community_block"."person_id" IS NULL) ) AND ("person_block"."person_id" IS NULL) ) ORDER BY "post_aggregates"."featured_local" DESC , "post_aggregates"."published" DESC LIMIT 10 OFFSET 0 ;
If it isn’t social hazing, then what is going on here? Why has this issue gone on since May while servers crash every day?