Performance issues (mostly) within flowpack/media-ui

Hello everyone,

I'm running Neos in the following environment:

System: Kubernetes 1.30
Webserver: ingress-nginx 1.12.2
Application: Neos CMS 8.3.23 (Flow 8.3.15, UI 8.3.13)
Image Processor: gmagick-2.0.6RC1 (from 2021)
Storage: Local (PVC) gp3/3000 IOPS
DB: PostgreSQL 17
Cache: Redis 8.0.1
Application context: FLOW_CONTEXT Production

k8s pod:

/app/neos $ nginx -v
nginx version: nginx/1.26.3
/app/neos $ php -v
PHP 8.3.21 (cli) (built: May  9 2025 17:25:07) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.3.21, Copyright (c) Zend Technologies
    with Zend OPcache v8.3.21, Copyright (c), by Zend Technologies

Neos CMS is quite slow in the backend (mostly when working with assets in the mentioned Media UI). The frontend seems to be fine (more or less; sometimes it also takes up to 3 seconds to load a page).

I've tested it with Chrome in incognito mode and with caching disabled. I would say that almost 99% of the time is spent on "Waiting for server response".

Example when navigating to and logging in at https://host.net/neos:

Path Status Type Source Size Duration
login 303 document / Redirect Other 3.4 kB 3.92 s
neos 303 document / Redirect login 3.4 kB 2.10 s
content 200 document /neos 10.9 kB 2.68 s
Host.css?bust=ffec557e 200 stylesheet content:14 18.3 kB 24 ms
HostOnlyStyles.css?bust=d25e73d4 200 stylesheet content:14 2.7 kB 27 ms
Plugin.css?bust=1633d74a 200 stylesheet content:14 10.3 kB 28 ms
main.bundle.css?bust=21709d05 200 stylesheet content:14 11.2 kB 28 ms
Host.js?bust=28d91bbe 200 script content:15 1,220 kB 406 ms
Plugin.js?bust=bd9633f1 200 script content:15 134 kB 81 ms
Plugin.js?bust=45fcb999 200 script content:15 104 kB 80 ms
node-type?version=b39639e6ba1c70815001308e30760e55_1747198253 200 fetch index.ts:108 13.8 kB 3.56 s
xliff.json?locale=de&version=b39639e6ba1c70815001308e30760e55_1747198253 200 fetch index.ts:108 28.5 kB 3.44 s
status 200 fetch index.ts:108 3.4 kB 1.68 s
preview?node%5B__contextNodePath%5D=%2Fsites%2F_PROJECT_%40user-USERNAME%3Blanguage%3Dde 200 document frame.tsx:101 12.2 kB 7.75 s
flow-query 200 fetch index.ts:108 4.4 kB 8.40 s
data:image/gif;base… 200 gif getEmptyImage.js:5 (memory) 0 ms
flow-query 200 fetch index.ts:108 4.1 kB 7.70 s
get-additional-node-metadata 200 fetch index.ts:108 3.6 kB 7.47 s
flow-query 200 fetch index.ts:108 6.5 kB 7.55 s
get-additional-node-metadata 200 fetch index.ts:108 3.6 kB 7.08 s

A call to the Media UI (flowpack/media-ui 1.4) takes between 18 and 20 seconds in every case, with only 9 assets (2 JPGs at 340 kB/136 kB, 7 SVGs at ~4 kB each):

Example when navigating to https://host.net/neos/management/mediaui:

Path Status Type Source Size Duration
mediaui 200 document Other 9.7 kB 1.97 s
status 200 fetch ApiService.js:37 3.4 kB 10.22 s
media-assets 200 fetch useAssetCollectionsQuery.ts:10 3.4 kB 16.82 s
media-assets 200 fetch useAssetSourcesQuery.ts:14 3.6 kB 17.92 s
media-assets 200 fetch useTagsQuery.ts:10 3.4 kB 19.61 s
media-assets 200 fetch useAssetCountQuery.ts:33 3.4 kB 19.71 s
media-assets 200 fetch useAssetCountQuery.ts:33 3.4 kB 19.49 s
media-assets 200 fetch useConfigQuery.ts:30 3.6 kB 20.08 s
media-assets 200 fetch useAssetsQuery.ts:62 5.0 kB 20.26 s
media-assets 200 fetch useChangedAssetsQuery.ts:23 3.5 kB 2.30 s
[…]

I'm using flowpack/neos-debug 1.0.3; however, this only gives results in the frontend. The render time is usually about 200 ms with no slow DB queries.

If there are more elements on a page, it is around 150 ms-1 s,
e.g. 105 queries with 230.82 ms execution time.

I don’t see any connection issues with ingress-nginx / postgresql / redis:

PostgreSQL:

kubectl exec -it <neos-pod> -- /bin/sh
/app/neos $ time psql -h postgresql -p 64000 -U DATABASEUSER -d DATABASENAME -c 'SELECT * FROM "public"."neos_flow_security_account";'
    persistence_object_identifier     | accountidentifier | authenticationprovidername |                          credentialssource                           |    creationdate     | expirationdate |                                              roleidentifiers                                               | lastsuccessfulauthenticationdate | failedauthenticationcount
--------------------------------------+-------------------+----------------------------+----------------------------------------------------------------------+---------------------+----------------+------------------------------------------------------------------------------------------------------------+----------------------------------+---------------------------
 55984f70-83e3-43a0-9b8b-8ef66909585f | <censored>           | Neos.Neos:Backend          | bcrypt=><censored> | 2025-04-08 09:39:53 |                | <censored> | 2025-05-14 07:47:07              |                         0
(1 row)

real    0m 0.02s
user    0m 0.01s
sys     0m 0.00s

Redis:

/app/neos $ time nc -vz redis-service 6379
redis-service (192.168.118.58:6379) open
real    0m 0.00s
user    0m 0.00s
sys     0m 0.00s

ingress-nginx:

curl -w "\nDNS Lookup: %{time_namelookup}s\nConnect: %{time_connect}s\nTLS Handshake: %{time_appconnect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\nRedirect: %{time_redirect}s\n" -o /dev/null -s https://host.net

DNS Lookup: 0.020354s
Connect: 0.041255s
TLS Handshake: 0.069687s
TTFB: 0.092064s
Total: 0.092138s
Redirect: 0.000000s

I thought about using flowpack/neos-asset-usage, but since it depends on flowpack/entity-usage, it doesn't work in my setup because I'm using PostgreSQL.

As I'm already covering most of the hints mentioned in Sebastian's article, I wonder where else I could debug to find the bottleneck.

Thanks a lot! :slight_smile:

Hi!

So, to better understand your problem: your only performance issue is the Media.Ui backend module?

I usually test and performance-tune the Media.Ui in a customer project with nearly 1,000,000 assets stored in Postgres/S3, and it loads in about 1-2 seconds until it's fully finished and ready to use.
It's a bit weird that all async requests are slow. Usually when there is a bug, only one or a few are affected. And you have so few assets that even a missing index shouldn't cause such issues.

You could enable the SQL query logger (but disable Neos.Debug while doing that), see which queries are executed, and run them yourself in the DB.

Does your nginx accept more than one request at a time? If not, they would block each other.
You could also verify this by using a tool like JMeter or k6 to create a bit of traffic.
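
As a rough sketch (URL and concurrency are placeholders), even a few parallel curl calls from a shell would show whether requests queue up behind each other:

# fire 10 requests in parallel; if only one is served at a time,
# the later TTFB values will grow roughly linearly
seq 1 10 | xargs -P 10 -I{} curl -o /dev/null -s \
  -w "request {}: TTFB %{time_starttransfer}s  total %{time_total}s\n" \
  https://host.net/neos/login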

Hey @sebobo,

thanks for your reply! :slight_smile:

Actually it seems like it is only the (new) Media.UI module, even though the time until the backend (/neos/content) is fully loaded could also be a bit better: it takes around 2.32 seconds to be ready after a successful login.

I also ran the tests from another device, using Firefox.

Just the GraphQL query assetCount:

{"operationName":"ASSET_COUNT","variables":{"assetCollectionId":null,"assetSourceId":"neos","mediaType":"","assetType":"","tagId":null,"searchTerm":""},"query":"query ASSET_COUNT($searchTerm: String, $assetSourceId: AssetSourceId, $assetCollectionId: AssetCollectionId, $mediaType: MediaType, $assetType: AssetType, $tagId: TagId) {\n  assetCount(\n    searchTerm: $searchTerm\n    assetSourceId: $assetSourceId\n    assetCollectionId: $assetCollectionId\n    mediaType: $mediaType\n    assetType: $assetType\n    tagId: $tagId\n  )\n}\n"}

With the result of:

{"data":{"assetCount":9}}

took 17.65s.

Log:

Queued: 5.37 s
Started: 5.37 s
Downloaded: 23.12 s
Request timings
Blocked: -1 ms
DNS resolution: 0 ms
Connecting: 0 ms
TLS setup: 0 ms
Sending: 2 ms
Waiting: 17.65 s
Receiving: 91 ms
Server timings
Process request: 12.72 s
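
To take the browser out of the equation, the same ASSET_COUNT request can also be replayed with curl (a sketch; the endpoint path and the session cookie are placeholders copied from the network tab, and the JSON body from above is assumed to be saved as asset-count.json):

curl -s -o /dev/null \
  -w "TTFB: %{time_starttransfer}s  total: %{time_total}s\n" \
  -H "Content-Type: application/json" \
  -b "<session-cookie>" \
  --data @asset-count.json \
  "https://host.net/<media-assets-endpoint>"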

I see the issue somewhere between PHP and the Neos backend. It might also be a resource (CPU/RAM) issue at some point, or maybe one of my middlewares is a blocking factor here.

I'm running nginx with a PHP-FPM pool that uses several workers in parallel, and I don't see any blocking events in the log.
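
One thing I could still turn on (a sketch; directives go into the FPM pool config, paths assumed for my container) is the PHP-FPM slow log, so that requests above a threshold dump their PHP backtrace:

; e.g. in the pool config (www.conf)
request_slowlog_timeout = 5s
slowlog = /proc/self/fd/2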

The confusing part: I'm using the same Docker image and the same architecture for two other site packages. Only this package seems to have the performance issues.

In general I totally believe that it can be fast, especially because I don't have these issues in my local development environment running on Docker or minikube.

Thanks for the tool suggestions! I will try to find out more, and maybe also deactivate all the middlewares that might be blocking.

I'll post updates here.

The Media.Ui is probably the module that sends out the most requests after load compared to all core modules.
So it would be interesting to see what those requests are waiting for: nginx, PHP, the database…

Good Morning @sebobo,

I've spotted some differences between my instances (the slow one and the fast one), especially in two places:

nginx.conf

fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;

vs. now using:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

I think I changed that once for testing because I read here that it improved the communication between nginx and PHP-FPM. Tbh I never had an issue with it, so I don't know whether I should keep it or not.

opcache

I suppose most of the improvements came from here. I changed the old config:

opcache.memory_consumption = 512
opcache.max_accelerated_files = 40000
opcache.validate_timestamps = 1
opcache.jit_buffer_size = 128M
opcache.jit = 1255
opcache.enable = 1
opcache.enable_cli = 1
opcache.interned_strings_buffer = 32
opcache.revalidate_freq = 60
opcache.log_verbosity_level = 1
opcache.max_file_size = 0
opcache.protect_memory = 1
opcache.fast_shutdown = 1
opcache.save_comments = 1
opcache.enable_file_override = 1
error_reporting = E_ALL & ~E_DEPRECATED

to the new one:

opcache.memory_consumption = 512
opcache.max_accelerated_files = 40000
opcache.validate_timestamps = 0
opcache.jit_buffer_size = 128M
opcache.jit = 1255
opcache.enable = 1
opcache.enable_cli = 1
opcache.interned_strings_buffer = 32
opcache.revalidate_freq = 0
opcache.log_verbosity_level = 1
opcache.max_file_size = 0
opcache.protect_memory = 1
opcache.fast_shutdown = 1
opcache.save_comments = 1
opcache.enable_file_override = 1
error_reporting = E_ALL & ~E_DEPRECATED

and added:

realpath_cache_size = 16M
realpath_cache_ttl = 600
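
To double-check that the FPM workers actually run with these values (a quick sketch; note that the CLI uses its own opcache instance, so a check through a web request is more reliable than the CLI output):

/app/neos $ php -i | grep -E 'opcache\.(validate_timestamps|revalidate_freq|jit)'
# or, in a small script served through PHP-FPM:
# <?php var_dump(opcache_get_status(false)['opcache_statistics']);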

This improved the timing:

Path Status Type Source Size Duration Improvement
mediaui 200 document Other 8.8 kB 1.35 s 31.5%
status 200 fetch ApiService.js:37 3.4 kB 1.13 s 88.9%
media-assets 200 fetch useAssetCollectionsQuery.ts:10 3.4 kB 9.74 s 42.1%
media-assets 200 fetch useAssetSourcesQuery.ts:14 3.6 kB 9.83 s 45.2%
media-assets 200 fetch useTagsQuery.ts:10 3.4 kB 9.52 s 51.5%
media-assets 200 fetch useAssetCountQuery.ts:33 3.4 kB 10.62 s 46.1%
media-assets 200 fetch useAssetCountQuery.ts:33 3.4 kB 10.71 s 45.1%
media-assets 200 fetch useConfigQuery.ts:30 3.6 kB 9.71 s 51.7%
media-assets 200 fetch useAssetsQuery.ts:62 5.0 kB 11.39 s 43.8%
media-assets 200 fetch useChangedAssetsQuery.ts:23 3.5 kB 2.11 s 8.3%

That was actually already a remarkable improvement.

What do you think? Is there still a way to improve the speed?

You still have some weird issue in your stack.

This is what I would expect (this screenshot is from an instance with ~50,000 assets):

Your other changes are good of course, but they don't explain the issue itself IMO.

Hey @sebobo,

those timings would be a dream :slight_smile:

After some testing (this time with a local minikube and only two assets) I think I've found the bottleneck.

I’ve enabled the query log this way:

Neos:
  Flow:
    persistence:
      doctrine:
        sqlLogger: 'Neos\Flow\Persistence\Doctrine\Logging\SqlLogger'
    log:
      psr3:
        loggerFactory: Neos\Flow\Log\PsrLoggerFactory
        'Neos\Flow\Log\PsrLoggerFactory':
          sqlLogger:
            default:
              options:
                severityThreshold: '%LOG_DEBUG%'

I focused only on the media part, which generated queries such as the asset count:

SELECT count(a.persistence_object_identifier) c FROM neos_media_domain_model_asset a WHERE a.dtype NOT IN('neos_media_imagevariant')

I'm using some Policies to restrict which media types can be uploaded. At first I blamed them, but removing the Policy didn't bring any improvement.

Query 2:

SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0, n0_.lastmodified AS lastmodified_1, n0_.title AS title_2, n0_.caption AS caption_3, n0_.copyrightnotice AS copyrightnotice_4, n0_.assetsourceidentifier AS assetsourceidentifier_5, n3_.width AS width_6, n3_.height AS height_7, n4_.name AS name_8, n4_.presetidentifier AS presetidentifier_9, n4_.presetvariantname AS presetvariantname_10, n4_.width AS width_11, n4_.height AS height_12, n5_.width AS width_13, n5_.height AS height_14, n0_.dtype AS dtype_15, n0_.resource AS resource_16, n4_.originalasset AS originalasset_17 FROM neos_media_domain_model_asset n0_ LEFT JOIN neos_media_domain_model_audio n1_ ON n0_.persistence_object_identifier = n1_.persistence_object_identifier LEFT JOIN neos_media_domain_model_document n2_ ON n0_.persistence_object_identifier = n2_.persistence_object_identifier LEFT JOIN neos_media_domain_model_image n3_ ON n0_.persistence_object_identifier = n3_.persistence_object_identifier LEFT JOIN neos_media_domain_model_imagevariant n4_ ON n0_.persistence_object_identifier = n4_.persistence_object_identifier LEFT JOIN neos_media_domain_model_video n5_ ON n0_.persistence_object_identifier = n5_.persistence_object_identifier WHERE (n0_.dtype NOT IN ('neos_media_imagevariant') AND n0_.assetsourceidentifier = ? AND n0_.dtype NOT IN ('neos_media_imagevariant')) AND ( ( NOT (( (n0_.resource IN (SELECT subselectd271cfb764051c57942d2dcbef891f2d.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/pdf') AS subselectd271cfb764051c57942d2dcbef891f2d ) )  AND n0_.persistence_object_identifier IN (
SELECT n0__a.persistence_object_identifier
FROM neos_media_domain_model_asset AS n0__a
LEFT JOIN neos_media_domain_model_asset_tags_join n0__atj ON n0__a.persistence_object_identifier = n0__atj.media_asset
LEFT JOIN neos_media_domain_model_tag n0__t ON n0__t.persistence_object_identifier = n0__atj.media_tag
WHERE n0__t.label = 'confidential')))) AND ( NOT ( (n0_.resource IN (SELECT subselecta985f45676418ea46586e05848596203.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/javascript') AS subselecta985f45676418ea46586e05848596203 ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselectf3363e2cbd1f1f08c9dc971df9f9cce3.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/json') AS subselectf3363e2cbd1f1f08c9dc971df9f9cce3 ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselect7469e20917d8b189ec50f416d9781119.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/msword') AS subselect7469e20917d8b189ec50f416d9781119 ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselecte1ee8345e9bb75fab9e6bad0014b65cb.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/vnd.openxmlformats-officedocument.wordprocessingml.document') AS subselecte1ee8345e9bb75fab9e6bad0014b65cb ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselect95b734ed401e58aee21e586ea6533596.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/msexcel') AS subselect95b734ed401e58aee21e586ea6533596 ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselecte2e1f495f80ba93abaae5c7783b3cb24.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet') AS subselecte2e1f495f80ba93abaae5c7783b3cb24 ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselect0a9a572de6d2e40b9d2d7c21ffcd6eed.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/mspowerpoint') AS subselect0a9a572de6d2e40b9d2d7c21ffcd6eed ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselect1eab7e8daa0e843a61c897c460567b1d.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/vnd.openxmlformats-officedocument.presentationml.presentation') AS subselect1eab7e8daa0e843a61c897c460567b1d ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselect43783aede78ca4341ce63422f427cb1d.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/gzip') AS subselect43783aede78ca4341ce63422f427cb1d ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselect342218692571af7ab050ddd758415de8.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM 
neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/x-macbinary') AS subselect342218692571af7ab050ddd758415de8 ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselect0b533bb5ba2fae8ec762833b919587e0.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/xhtml+xml') AS subselect0b533bb5ba2fae8ec762833b919587e0 ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselecte6dff56a4040631c30817f827dff71fb.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/x-httpd-php') AS subselecte6dff56a4040631c30817f827dff71fb ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselectcd2a9205cad5002477f2023b55083531.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'application/force-download') AS subselectcd2a9205cad5002477f2023b55083531 ) ) )) AND ( NOT ( (n0_.resource IN (SELECT subselect9ae6a0c504c6b617d6a55e7fb8b416b6.persistence_object_identifier_0 FROM (SELECT n0_.persistence_object_identifier AS persistence_object_identifier_0 FROM neos_flow_resourcemanagement_persistentresource n0_ WHERE n0_.mediatype = 'image/gif') AS subselect9ae6a0c504c6b617d6a55e7fb8b416b6 ) ) )) ) ORDER BY n0_.lastmodified DESC LIMIT 20

In the end, I ran the same queries from my Neos pod against the PostgreSQL pod (both are running in the same namespace):

Query 1:

time psql -h postgresql -p 64000 -U postgresqluser -d databasename -c "EXPLAIN ANALYZE SELECT count(a.persistence_object_identifier) c FROM neos_media_domain_model_asset a WHERE a.dtype NOT IN('neos_media_imagevariant')"
Password for user postgresqluser:
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=10.60..10.61 rows=1 width=8) (actual time=0.014..0.014 rows=1 loops=1)
   ->  Seq Scan on neos_media_domain_model_asset a  (cost=0.00..10.50 rows=39 width=98) (actual time=0.008..0.009 rows=2 loops=1)
         Filter: ((dtype)::text <> 'neos_media_imagevariant'::text)
         Rows Removed by Filter: 2
 Planning Time: 0.207 ms
 Execution Time: 0.072 ms
(6 rows)

real    0m 5.11s
user    0m 0.01s
sys     0m 0.00s

Query 2:

/var/Neos $ time psql -h postgresql -p 64000 -U postgresqluser -d databasename -c "EXPLAIN ANALYZE SELECT n0_.persistence_object_identifier, n0_.lastmodified, n0_.title, n0_.caption, n0_.copyrightnotice, n0_.assetsourceidentifier, n3_.width, n3_.height, n4_.name, n4_.presetidentifier, n4_.presetvariantname, n4_.width, n4_.height, n5_.width, n5_.height, n0_.dtype, n0_.resource, n4_.originalasset
> FROM neos_media_domain_model_asset n0_
> LEFT JOIN neos_media_domain_model_audio n1_ ON n0_.persistence_object_identifier = n1_.persistence_object_identifier
> LEFT JOIN neos_media_domain_model_document n2_ ON n0_.persistence_object_identifier = n2_.persistence_object_identifier
> LEFT JOIN neos_media_domain_model_image n3_ ON n0_.persistence_object_identifier = n3_.persistence_object_identifier
> LEFT JOIN neos_media_domain_model_imagevariant n4_ ON n0_.persistence_object_identifier = n4_.persistence_object_identifier
> LEFT JOIN neos_media_domain_model_video n5_ ON n0_.persistence_object_identifier = n5_.persistence_object_identifier
> WHERE n0_.dtype NOT IN ('neos_media_imagevariant')
> AND n0_.assetsourceidentifier = 'neos'
> ORDER BY n0_.lastmodified DESC LIMIT 20"
Password for user postgresqluser:
                                                                                                      QUERY PLAN                                                                                       
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=49.49..49.54 rows=20 width=3486) (actual time=0.064..0.066 rows=2 loops=1)
   ->  Sort  (cost=49.49..50.05 rows=225 width=3486) (actual time=0.064..0.065 rows=2 loops=1)
         Sort Key: n0_.lastmodified DESC
         Sort Method: quicksort  Memory: 25kB
         ->  Nested Loop Left Join  (cost=0.45..43.50 rows=225 width=3486) (actual time=0.041..0.046 rows=2 loops=1)
               ->  Nested Loop Left Join  (cost=0.29..27.34 rows=15 width=3478) (actual time=0.031..0.035 rows=2 loops=1)
                     ->  Nested Loop Left Join  (cost=0.14..19.15 rows=1 width=3470) (actual time=0.026..0.029 rows=2 loops=1)
                           ->  Seq Scan on neos_media_domain_model_asset n0_  (cost=0.00..10.60 rows=1 width=1816) (actual time=0.008..0.010 rows=2 loops=1)
                                 Filter: (((dtype)::text <> 'neos_media_imagevariant'::text) AND ((assetsourceidentifier)::text = 'neos'::text))
                                 Rows Removed by Filter: 2
                           ->  Index Scan using typo3_media_domain_model_imagevariant_pkey on neos_media_domain_model_imagevariant n4_  (cost=0.14..8.16 rows=1 width=1752) (actual time=0.008..0.008 rows=0 loops=2)
                                 Index Cond: ((persistence_object_identifier)::text = (n0_.persistence_object_identifier)::text)
                     ->  Index Scan using typo3_media_domain_model_image_pkey on neos_media_domain_model_image n3_  (cost=0.15..8.17 rows=1 width=106) (actual time=0.003..0.003 rows=1 loops=2)
                           Index Cond: ((persistence_object_identifier)::text = (n0_.persistence_object_identifier)::text)
               ->  Memoize  (cost=0.16..8.18 rows=1 width=106) (actual time=0.005..0.005 rows=0 loops=2)
                     Cache Key: n0_.persistence_object_identifier
                     Cache Mode: logical
                     Hits: 0  Misses: 2  Evictions: 0  Overflows: 0  Memory Usage: 1kB
                     ->  Index Scan using typo3_media_domain_model_video_pkey on neos_media_domain_model_video n5_  (cost=0.15..8.17 rows=1 width=106) (actual time=0.002..0.002 rows=0 loops=2)
                           Index Cond: ((persistence_object_identifier)::text = (n0_.persistence_object_identifier)::text)
 Planning Time: 0.748 ms
 Execution Time: 0.123 ms
(22 rows)

real    0m 7.97s
user    0m 0.00s
sys     0m 0.01s

It looks like the connection between Neos and PostgreSQL is very slow. Not sure how to fix this yet.
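
One caveat with the numbers above: the real time reported by time psql also includes the wait at the interactive password prompt. A non-interactive run (a sketch, password placeholder) isolates the pure connect-and-query latency:

/app/neos $ export PGPASSWORD='<password>'
/app/neos $ time psql -h postgresql -p 64000 -U postgresqluser -d databasename -c 'SELECT 1' > /dev/null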

The second query looks weird.
Please make really sure you disabled all custom privileges regarding assets and retest.

And you can also prevent uploads with AOP, which doesn't cost performance.

@sebobo

thanks for your input; I still haven't found the actual bottleneck. I've built my image and deployed it several times with no policies at all, removed my own packages (middlewares), tried to dig deep into the connections, used strace to find slow calls, played around with opcache settings, changed the PHP-FPM connection from TCP to a Unix socket, and tried to tune Redis, but in the end I couldn't find the real issue. So far, the GraphQL requests remain slow.

Maybe the issue is on a higher level, somewhere along
ingress-nginx ↔ Neos pod (nginx/PHP-FPM) ↔ PostgreSQL/Redis.
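
To narrow that down, each hop could be timed separately, for example (a sketch; pod name, port and Host header are assumptions) by calling the pod's own nginx from inside the cluster and comparing the TTFB with the external curl numbers from earlier:

kubectl exec -it <neos-pod> -- curl -o /dev/null -s \
  -w "TTFB: %{time_starttransfer}s  total: %{time_total}s\n" \
  -H "Host: host.net" http://127.0.0.1/neos/login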

Sorry, no idea how to help you further. If you can somehow reproduce the issue with a demo distribution without any special setup, I could look into it, but as this issue has never been reported before, I have no clue right now.
I released the v2 beta yesterday with a new GraphQL implementation, but I don't expect it to make any difference for you.

@sebobo I've just done that, with only the Neos Demo and the media-ui package. I've posted my whole stack and the configuration here: Installation · GitHub

With this I could reproduce it; it is just as slow. I really wonder where the issue might be, since I don't have it in my docker-compose setup. Then again, there I don't use ingress-nginx in front :thinking:

Great, thx!
Will try to find some time to run this and test.

Can you also add a screenshot of the loading times of the Content module requests?

Thanks! :slight_smile:

I also added a screenshot of the network tab when opening the backend for the first time.

All those backend requests after login are also quite slow at 1.8 s and more, when they should be around 150-200 ms.

Could it be that our PostgreSQL indexes are not optimal for media? I know we tweaked a lot over time, but I am not sure whether that is also true for the Postgres migrations.
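
A quick way to compare (just a sketch, run from the Neos pod) would be listing the media-related indexes in both databases:

psql -h postgresql -p 64000 -U DATABASEUSER -d DATABASENAME \
  -c "SELECT tablename, indexname, indexdef FROM pg_indexes WHERE tablename LIKE 'neos_media_%';"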

I fixed some of them some time ago, but he only has 10 assets or so anyway.


I have a similar (local) stack using docker-compose, with many more images (890), the same site package, and even more, but also using PostgreSQL 17. Here it looks like this (quite fast in comparison):

The main difference here: I'm not using nginx inside the image, no ingress-nginx, and also no Redis.

Dockerfile:

ARG PHP_VERSION=8.4.7
ARG ALPINE_VERSION=3.21

# Base image (https://hub.docker.com/_/php) 
FROM php:${PHP_VERSION}-fpm-alpine${ALPINE_VERSION} AS neos_build

# Install GraphicsMagick and other dependencies
RUN apk update \
    && apk upgrade \
    && apk add --no-cache \
        postgresql-client \
        postgresql-dev \
        graphicsmagick \
        graphicsmagick-dev \
        libtool \
        git \
        ${PHPIZE_DEPS} \
    && pecl install redis \
    && pecl install channel://pecl.php.net/gmagick-2.0.6RC1 \
    && docker-php-ext-enable redis gmagick \
    && docker-php-ext-install \
        pdo_pgsql \
        pgsql \
    && apk del --no-cache \
        pcre-dev ${PHPIZE_DEPS} \
    && rm -rf /tmp/*

# Install the driver for the database
RUN docker-php-ext-install mysqli && \
    docker-php-ext-install pdo_mysql && \
    docker-php-ext-install pdo_pgsql && \
    docker-php-ext-install pgsql

# PHP Settings
COPY neos-files/php/docker-php-opcache.ini /usr/local/etc/php/conf.d/
COPY neos-files/php/docker-php-memlimit.ini /usr/local/etc/php/conf.d/

# Set the working directory to /app
WORKDIR /app

# Copy everything in the project into the container
#COPY . /app

COPY bin /app/bin
COPY Build /app/Build
COPY Configuration /app/Configuration
COPY Data /app/Data
COPY Packages /app/Packages
COPY Web /app/Web
COPY .editorconfig /app/.editorconfig
COPY .env /app/.env
COPY .gitignore /app/.gitignore
COPY composer.json /app/composer.json
COPY composer.lock /app/composer.lock
COPY flow /app/flow

# Stage 2: Final Stage
FROM neos_build AS final

# Expose port
EXPOSE 8081

# Start the dev server
CMD [ "./flow", "server:run", "--host", "0.0.0.0" ]

docker-compose.yaml:

# NEOS DEVELOPMENT ENVIRONMENT
#
# For instructions how to use docker-compose, see
# https://docs.neos.io/cms/installation-development-setup/docker-and-docker-compose-setup#docker-compose-cheat-sheet
services:
  # Neos CMS
  neos:
    build:
      context: .
      dockerfile: Dockerfile.dev
    environment:
      FLOW_CONTEXT: 'Development/Docker'
      FLOWNATIVE_PROMETHEUS_ENABLE: true
      TZ: 'Europe/Berlin'
    volumes:
      - ./composer.json:/app/composer.json
      - ./composer.lock:/app/composer.lock
      - ./Packages/Libraries/:/app/Packages/Libraries/
      - ./Configuration/:/app/Configuration/
      - DistributionPackages:/app/DistributionPackages
    ports:
      - 8081:8081

  db:
    image: postgres:17.5-alpine3.21
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      TZ: 'Europe/Berlin'
    volumes:
      - db:/var/lib/postgresql/data
    ports:
      - 5432:5432
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

volumes:
  db:
  DistributionPackages:
    driver: local
    driver_opts:
      type: none
      device: /home/neos/neos-cms/DistributionPackages
      o: bind

:confused:

Nothing really new here, besides that I've updated neos & neos-ui to the latest 8.3.* and re-activated the old /management/media module, as I couldn't remember whether it was as slow as the mediaui. I've also added a screenshot of the responses from the Chrome network tab to the gist, but I can already tell: it is very fast in comparison.

To me it looks like more or less all requests using GraphQL are slow, especially when I'm in the mediaui :detective: :magnifying_glass_tilted_left: