Optimize OJS and fix slow loading

7 ways to optimize a slow and unresponsive OJS

Optimizing a slow OJS installation for better performance

As your journal grows, optimizing your OJS (Open Journal Systems) installation becomes a must: left unoptimized, it can suffer from slow website performance, sluggish and long page load times, and a poor user experience. These hidden inefficiencies often elude the casual observer, leading to mounting frustration for journal managers and system administrators.

Optimize OJS for better speed

For years, a pervasive myth has persisted: that OJS/OMP, much like any typical content management system (CMS), can be installed, configured with default settings, and left to run smoothly. This belief has resulted in countless institutions unknowingly running slow, sluggish Open Journal Systems instances, plagued by poor responsiveness, delayed page rendering, and even indexing failures in platforms like Google Scholar. In the academic world—where fast access and system reliability are critical—such slowdowns directly translate into reduced visibility and diminished scholarly impact. This is why OJS needs a configuration that is optimized specifically for it.

Our extensive experience, forged over years of managing performance-tuned hosting infrastructure dedicated to academic journals, has repeatedly revealed the limitations of this assumption. OJS and OMP do not behave like standard CMS platforms such as WordPress or Joomla. They are not built for simple blog posts or static business pages—they are purpose-built for the complex, high-volume needs of scholarly publishing: managing rigorous editorial workflows, handling thousands of metadata records, and supporting real-time interaction with indexing services and search engines.

Unlike standard websites, a single OJS installation often serves dozens—or even hundreds—of journals, all generating their own traffic, search engine crawls, and editorial activity. Without proactive performance optimization, even powerful servers can suffer from degraded speed, slow backend processes, and overloaded PHP workers. It becomes clear that without technical expertise for optimization and the right tuning strategies, OJS can become not just slow, but fragile under load.

Supercharge Your OJS Experience with our OJS Optimized Hosting & OJS Expert Support

Running OJS smoothly requires high-performance hosting and top-tier security—our specialized OJS hosting delivers both. Say goodbye to slow load times, crashes, and security risks with a server environment fine-tuned for OJS optimization.

🔹 Blazing Fast Performance – Exclusive OJT Blazing Cache, PHP optimization, and caching for seamless operation.
🔹 Ironclad Security – Protected with our in-house Guardian AI and OJT Advanced Security.
🔹 OJS-Specific Expertise – Our team knows OJS inside out, ensuring peak performance.

Prefer hassle-free management? Our OJS Expert Service handles setup, maintenance, and troubleshooting—so you can focus on publishing, not technical headaches.

Why general WordPress hosting is not suitable for OJS

Many publishers simply choose general-purpose hosting for their OJS (Open Journal Systems). This is understandable, since it lets them get a journal online quickly and easily. However, it can hurt OJS system performance and, in the long run, negatively affect the journal's indexing. This is because OJS and WordPress are fundamentally different systems, and hosting built around WordPress is not optimized for OJS.

The comparison to platforms like WordPress, while superficially appealing, quickly unravels under scrutiny. While WordPress is fast at managing single-site instances—or perhaps a modest multisite setup with limited editorial complexity—OJS is built to operate as a sophisticated multi-journal environment.

This means a single OJS installation can, and frequently does, house dozens—or even hundreds—of entirely independent editorial processes, each with its own unique set of users, submission flows, published content, and metadata schemas. The implications for performance are substantial: each journal effectively behaves like its own website, and all are accessed concurrently, particularly during indexing, harvesting, or citation checking operations. As a result, even moderate levels of organic traffic combined with bot activity can quickly expose weaknesses in server configuration, database tuning, OJS optimization, or caching strategy. It is this architecture that can make OJS slow.

This fundamental architectural difference is often misunderstood, and that misunderstanding leads to a far more serious consequence: poor indexing, visibility, and performance. Many OJS-based journals that aspire to appear in Google Scholar, Scopus, or other reputable scholarly indexes end up being ignored or even penalized, not because of the quality of their content, but because of avoidable technical issues in how their OJS instance is hosted and optimized.

Indexing bots—such as those deployed by Google Scholar, Crossref, and DOAJ—operate with specific behavioral expectations: they require fast and responsive access, meta tags built for indexing, properly formatted metadata, and minimal occurrences of HTTP errors. These crawlers are designed to prioritize optimized platforms that consistently deliver content without delays or disruptions. For an unoptimized or default OJS installation, meeting these expectations becomes a burden.

However, when an OJS installation suffers from slow page load times, high latency, or is hosted on a shared server with poor optimization, the system often fails to meet these expectations. Frequent 500 or 503 server errors, delayed database responses, or sluggish rendering of article and issue pages signal instability to these indexing systems. Over time, this degrades the journal’s crawl reputation. As a result, search engine bots may reduce their crawling frequency, lower the domain’s indexing priority, or—more critically—silently exclude the journal from their databases altogether. This is one of the reasons why OJS needs to be optimized.

These indexing issues are rarely accompanied by direct notifications, making them difficult to detect until it’s too late. Journals may continue publishing high-quality articles, unaware that their content is not being discovered or cited simply due to under-performing server infrastructure or a lack of proper performance optimization. In academic publishing, where visibility equals impact, even minor site-speed issues can have major consequences for reach and credibility.

It is not uncommon for journal managers or editorial boards to invest heavily in the peer-review process, editorial workflows, and metadata curation—only to find that their articles are not discoverable online due to technical underperformance or an unoptimized OJS. What’s even more concerning is that these indexing systems rarely provide feedback or notifications; the exclusion happens silently. By the time the issue is noticed, months of content may have been skipped or partially indexed, harming the visibility and reputation of the journal. This is the main reason an OJS optimization strategy is needed in the first place.

Why OJS performance optimization matters for indexing and visibility

In short, hosting OJS without an in-depth understanding of how it behaves under load is not just a performance issue—it’s a visibility issue. And in the competitive landscape of scholarly publishing, visibility is everything. A journal that cannot be found is a journal that cannot be cited. And a journal that cannot be cited, no matter how strong its content, will struggle to achieve the recognition and impact it deserves.

This is why our infrastructure is purpose-built for OJS. We don’t treat it like WordPress, because it isn’t. We implement optimization and tuning strategies, cache layers, query optimization, and intelligent load handling—all designed specifically to meet the expectations of academic crawlers, editorial users, and international researchers accessing your journal.

The implications for an OJS instance that is not properly optimized for performance are staggering:

  • Concurrent Crawls and Metadata-Driven Queries: Unlike a typical website that might experience sequential visits, a single OJS instance is routinely subjected to simultaneous crawls across an astonishing array of journal paths, issues, volumes, and individual articles. Each of these interactions triggers separate, often resource-intensive, metadata-driven queries against the underlying database. This is not merely about serving static content; it’s about dynamically assembling and delivering highly structured information on demand.
  • Deeply Nested Content Trees: Where WordPress typically loads a single post per page with relatively flat relationships, OJS navigates and renders deeply nested content trees. This involves intricate relationships between galley formats, various submission stages, a multitude of plugin triggers, and granular permission structures. Every page load, every user interaction, every editorial decision, can potentially traverse this complex web of interconnected data, demanding significant processing power and efficient database interaction.
  • The Converging Storm of Concurrent Access: The server’s PHP/MySQL stack, the very heart of the OJS operation, becomes a focal point for a relentless barrage of concurrent requests. Crawlers from search engines and academic indexing services, citation indexers meticulously cataloging scholarly output, API calls from integrated systems, and the constant activity of editorial staff—all converge on this single point. This simultaneous demand creates a perfect storm, pushing the system to its limits and often beyond.

Consider a real-world scenario we encountered: an OJS instance hosted on a formidable 32-core server, seemingly over-provisioned for its task. Yet, despite this impressive hardware, the default configurations proved woefully inadequate. The system struggled to maintain reliable uptime, buckling under the pressure of routine indexer crawls and sudden surges in submission activity. It was this critical juncture, and countless others like it, that compelled us to develop—and continuously refine—a comprehensive suite of highly specialized, fine-tuned performance interventions. These are not generic fixes, but bespoke solutions born from a deep understanding of OJS’s unique architectural nuances and the demanding environment of scholarly publishing.


1. MyISAM vs. InnoDB: which should you choose for OJS optimization?

The adage “InnoDB is better than MyISAM” has become a mantra in database circles, often recited without a full appreciation of its underlying complexities, especially within the performance-critical environment of Open Journal Systems (OJS). While it is fundamentally true that InnoDB offers superior transactional integrity, row-level locking, and faster crash recovery compared to the table-level locking of MyISAM, merely switching storage engines is akin to patching a leak with a band-aid when the entire plumbing system requires an overhaul.

For OJS, the real issue impacting database speed, query performance, and overall responsiveness lies far deeper than storage engine choice; it resides within the very fabric of its database interactions, particularly its intricate subquery structures and the relentless load stress placed upon them under high-traffic conditions.

OJS’s operational core relies heavily on a complex web of SQL queries, frequently employing multiple JOIN operations across a myriad of interconnected entities. These include, but are not limited to, submissions, users, user_settings, issues, sections, galleys, and genres. While the Object-Relational Mapping (ORM in OJS > 3.3 or DAO in OJS < 3.2) logic embedded within OJS strives to abstract these complex database interactions, the reality is that such abstractions often result in resource-intensive database load. This burden becomes especially evident under concurrent pressure, such as during simultaneous web crawling, active editorial workflows, or critical metadata exports like Crossref submissions.

In non-optimized server environments or under congested hosting setups, these operations can lead to lag-prone performance and degraded responsiveness. To maintain a fast and reliable experience, efficient SQL tuning and infrastructure-level optimization become essential. OJS is also known for issuing heavy, inefficient queries for simple operations such as viewing a submission or listing the articles in an issue. Merely opening a submission on the backend side can trigger more than 15,000 queries, even though the page is not that complex, which leads to slow, unresponsive pages. This is why we created a better approach that optimizes OJS for speed improvements of more than 300%.

Our extensive forensic analysis of countless OJS installations has illuminated several critical areas where default database configurations fall woefully short, leading to insidious performance degradation:

  • The InnoDB Buffer Pool: The InnoDB buffer pool is arguably the single most critical component for MySQL performance, acting as a cache for both data and indexes. A common oversight is to size this buffer pool based on arbitrary percentages or, even worse, simply accepting default values. The optimal size, however, must be meticulously calculated based on the live dataset size, not merely the number of tables. An undersized buffer pool leads to constant disk I/O, transforming even the fastest SSDs into bottlenecks. Conversely, an oversized buffer pool can lead to excessive memory swapping, crippling overall system responsiveness. Our approach involves dynamic monitoring of dataset growth and adjusting the buffer pool size to ensure the most frequently accessed data remains resident in memory, minimizing disk access and maximizing query throughput.
  • Collation Mismatch: A seemingly innocuous detail, collation settings can introduce significant, yet often silent, query degradation. We frequently uncover instances where collation mismatches exist between table-level, column-level, and the client connection layer. For example, finding utf8_general_ci alongside utf8mb4_unicode_ci within the same database schema is an immediate red flag. Such inconsistencies force MySQL to perform implicit character set conversions during query execution, consuming valuable CPU cycles and negating the benefits of indexing. Our rigorous audits involve a comprehensive review of collation settings, ensuring uniformity and optimal performance across the entire database schema, thereby eliminating these hidden performance penalties (a quick audit query is sketched just after this list).
  • Optimizing the MySQL configuration: The default my.cnf configuration file, shipped with most MySQL installations, is, to put it mildly, laughably under-tuned for the demanding workload of even a modest 3-journal OJS deployment. These defaults are designed for generic, low-demand scenarios, not for the high-concurrency, data-intensive operations characteristic of OJS. While tools like MySQLTuner-perl offer a valuable first-pass audit, our proprietary configurations extend far beyond these basic recommendations. We engage in granular, per-journal workload sampling over extended periods, analyzing query patterns, peak loads, and resource utilization to craft bespoke my.cnf optimizations. This includes fine-tuning parameters such as query_cache_size, tmp_table_size, max_heap_table_size, and various thread-related settings, ensuring that MySQL is operating at its peak efficiency for the specific demands of each OJS instance.
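
As a quick first pass on two of the checks above, the collation audit and the buffer pool sizing, the following minimal sketch can be run against the database. It assumes the OJS database is simply named ojs, so adjust the schema name to match your own installation:

-- List table-level collations; mixed values (e.g. utf8_general_ci next to utf8mb4_unicode_ci) are a red flag
SELECT TABLE_NAME, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'ojs';

-- Estimate the live dataset size (data + indexes) as a starting point for innodb_buffer_pool_size
SELECT ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS dataset_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'ojs';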

Example my.cnf OJS optimization for a server with 8–16 GB of RAM:

[mysqld]
# Basic Performance
max_connections         = 200
connect_timeout         = 10
wait_timeout            = 180
interactive_timeout     = 180
thread_cache_size       = 64

# Query Cache (only if using MySQL < 5.7 or MariaDB)
query_cache_limit       = 1M
query_cache_size        = 64M
query_cache_type        = 1

# Temporary Tables
tmp_table_size          = 128M
max_heap_table_size     = 128M

# Sort and Join Buffers
sort_buffer_size        = 4M
join_buffer_size        = 4M

# InnoDB Tuning
innodb_buffer_pool_size = 4G  # Set this to ~60-70% of available RAM
innodb_log_file_size    = 512M
innodb_log_buffer_size  = 64M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table   = 1
innodb_flush_method     = O_DIRECT

# Table Caching
table_open_cache        = 1024
open_files_limit        = 65535

# Thread Handling
thread_stack            = 256K
max_allowed_packet      = 64M

[mysqldump]
quick
quote-names
max_allowed_packet      = 64M

Note that this configuration is only an example of improving a MySQL or MariaDB setup; the right values will vary based on the hardware specification, and further optimization should be based on analytical data.

Furthermore, the sheer volume of data generated by OJS, particularly within metrics and usage statistics tables, presents another formidable challenge. It is not uncommon for a metrics table to swell to several million rows within a relatively short period. Without active optimization, proactive cleanup logic, or intelligent table partitioning strategies, even seemingly “basic” OJS installations inevitably succumb to performance degradation. Our solutions incorporate automated routines for data archival, purging of stale or irrelevant data, and, where appropriate, implementing table partitioning to manage large datasets more efficiently, ensuring that the database remains lean, responsive, and performant even as the volume of scholarly activity grows exponentially. Database optimization is one of the key steps in optimizing OJS; slow loading across virtually every process in OJS is one indication that it is needed.

Although PKP staff always recommend running the database on the same server as OJS, ssssst… here is a secret we can share with you, based on our expert experience handling and optimizing high-traffic OJS installations that hold hundreds of journals: it is actually better to separate the web server (where the OJS code lives) from the database server. However, this should be handled with care, as both OJS and the database must be properly configured for it. In addition, the database server should be connected to the web server over a private LAN.
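
If you do split them, OJS itself only needs to be pointed at the database server's private address. Here is a minimal sketch of the relevant block in config.inc.php; the host IP and credentials are placeholders, and the parameter names follow the stock config.inc.php template, so verify them against your own version:

[database]
driver = mysqli
host = 10.0.0.2       ; private LAN IP of the dedicated database server (placeholder)
username = ojs_user   ; placeholder
password = secret     ; placeholder
name = ojs            ; placeholder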

2. Protecting your OJS from malicious bot traffic is a must to optimize OJS

In the intricate ecosystem of web servers, the access log serves as a digital diary, meticulously recording every interaction. A cursory glance at these logs, particularly for a publicly accessible OJS/OMP instance, often reveals a startling and frankly horrifying truth: the vast majority of incoming requests are not from legitimate human users or even well-behaved academic crawlers. Instead, they are a relentless torrent of automated probes, malicious bots, and opportunistic scripts, ceaselessly scanning for vulnerabilities and attempting to exploit common weaknesses.

These digital parasites are not interested in scholarly articles; they are systematically searching for /wp-login.php, /xmlrpc.php, .env files, .git/config directories, and a litany of other signatures associated with unrelated, often vulnerable, systems. This is why optimizing OJS also means shutting out this useless bot traffic.

For an OJS/OMP environment, this incessant, wasteful traffic is not merely an annoyance; it is a deadly drain on critical resources. Every single one of these illegitimate requests, even those resulting in a seemingly harmless 404 “Not Found” error, consumes valuable server resources. This includes:

  • PHP-FPM Workers: Each request, regardless of its legitimacy, ties up a PHP-FPM worker process. In a high-traffic scenario, a flood of junk requests can quickly exhaust the available worker pool, leading to legitimate user requests being queued, delayed, or outright rejected. This translates directly into slow page loads and an unresponsive user experience for actual journal visitors and editorial staff.
  • Session Handlers: Many of these probes attempt to establish sessions, even if malformed. This places an unnecessary burden on session management, consuming memory and disk I/O, further contributing to system slowdowns.
  • Apache or NGINX I/O: The web server itself (Apache or NGINX) expends significant input/output resources processing these requests, logging them, and generating error responses. This overhead, while seemingly minor for a single request, scales dramatically under a sustained attack, impacting the server’s ability to serve legitimate content efficiently.
  • Potential Plugin Execution: In some cases, poorly configured systems or certain types of probes can even trigger the execution of OJS plugins, adding another layer of unnecessary processing and resource consumption, even for requests that are ultimately rejected.

Our solution to this pervasive problem is not a blunt instrument, but a sophisticated, multi-layered request filtering system using our exclusive Guardian AI and OJT Advanced Security that are designed to eliminate wasteful traffic with surgical precision. This proactive defense mechanism operates at various levels, ensuring that only legitimate and relevant traffic reaches the core OJS application for optimization purposes:

  • Automated Banning via Fail2Ban: For clients exhibiting repeated patterns of malformed URL attempts, brute-force attacks, or other suspicious behavior, we implement dynamic, automated banning using Fail2Ban. This robust intrusion prevention framework monitors log files for predefined patterns and automatically updates firewall rules to temporarily or permanently block the offending IP addresses. This proactive measure prevents sustained attacks from a single source, safeguarding server resources and maintaining system integrity (a minimal Fail2Ban sketch is shown after the robots.txt example below).
  • Blocking Direct Access to Sensitive Files: A surprisingly common vulnerability in many OJS installations is the inadvertent exposure of sensitive files. We enforce strict rules to block direct web access to files with extensions like .bak, .zip, .env, .swp, .DS_Store, and other common leakage paths. These files, often remnants of development, backups, or system configurations, can contain critical information that, if exposed, could compromise the entire system. You would be astonished by the number of OJS servers we’ve audited that were unknowingly leaking backup archives or old migration folders, essentially leaving a digital breadcrumb trail for malicious actors.
  • Hardening MIME Types and Disabling Directory Listing: As a fundamental security and performance measure, we meticulously harden MIME types, ensuring that the server only serves content with explicitly defined and secure media types. Simultaneously, we disable directory listing by default. This prevents attackers from browsing directory contents and discovering potentially sensitive files or gaining insights into the server’s file structure. These seemingly minor configurations are crucial in preventing information leakage and reducing the attack surface.
  • Block excessive AI crawls of your journal: AI crawlers consume a lot of resources when they harvest your journal, which drains the server and slows your journal down. To prevent this, analyze the access_log on your server and identify which user agents account for the bulk of the load affecting your OJS's performance. Once you have found the culprit, you can block that agent by adding a rule to the robots.txt file so such requests are refused in the future.

Here is an example of the robots.txt content (in this case, blocking Ai2Bot-Dolma):

User-agent: Ai2Bot-Dolma
Disallow: /
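
For the automated banning mentioned earlier, the sketch below shows one way Fail2Ban can be wired up. The jail and filter names are examples, the log path assumes NGINX, and the regex only illustrates a few of the probe signatures discussed above, so adapt all of them to your own environment:

# /etc/fail2ban/filter.d/ojs-probes.conf (example filter)
[Definition]
failregex = ^<HOST> .* "(GET|POST) /+(wp-login\.php|xmlrpc\.php|\.env|\.git/config)

# /etc/fail2ban/jail.local (example jail)
[ojs-probes]
enabled  = true
port     = http,https
filter   = ojs-probes
logpath  = /var/log/nginx/access.log
maxretry = 5
findtime = 60
bantime  = 3600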

By implementing these rigorous, multi-layered filtering mechanisms, we dramatically reduce the load on the OJS application and its underlying database. This not only enhances security by thwarting malicious probes but also significantly improves performance by dedicating server resources exclusively to legitimate scholarly publishing activities. It’s a testament to the principle that sometimes, the best way to optimize OJS is to simply eliminate what doesn’t belong.

3. Static asset handling optimization can help you better optimize OJS

The concept of caching static assets is a cornerstone of web performance optimization, yet within the nuanced environment of Open Journal Systems (OJS), it presents a unique set of challenges and often leads to a profound misunderstanding. The common refrain, echoed across countless basic web optimization tutorials, is to simply “enable mod_expires” or its NGINX equivalent. While this directive is a necessary component of effective caching, relying solely on it for an OJS installation is akin to believing that merely owning a car makes you a race car driver. The reality is far more intricate, and the pitfalls of improper static asset handling can create an illusion of caching while your server continues to bear an unnecessary and crippling load.

The fundamental trap lies in the dynamic nature of OJS. While the end-user experience might appear static—a published article, a journal cover image, a CSS stylesheet—the underlying system is profoundly dynamic. This means:

  • Access Controls on Every Asset: Unlike a truly static website where assets are served directly, every static asset within an OJS environment, unless explicitly and carefully excluded, still potentially runs through the application’s access control mechanisms. This adds a layer of processing overhead to each request, even for seemingly innocuous files like images or JavaScript. Without proper configuration, the server expends valuable CPU cycles and memory verifying permissions for files that should ideally be served directly from cache.
  • The Absence of Proper Caching Headers: Even with mod_expires enabled, if your server fails to send the correct caching headers (e.g., Cache-Control, ETag, Last-Modified), every .jpg, .js, and .css file will be needlessly re-negotiated with the client on each subsequent visit. This results in redundant data transfers, increased server load, and a sluggish user experience. The browser, unaware of the asset’s freshness, will constantly ask the server if it needs a new copy, leading to a cascade of unnecessary requests.
  • CDN Integration: While Content Delivery Networks (CDNs) like Cloudflare are invaluable for distributing static assets globally and reducing latency, their effectiveness is entirely contingent upon proper cache-control at the origin server. If the OJS server isn’t sending appropriate caching headers, the CDN will frequently re-fetch assets from the origin, negating much of its benefit. The CDN becomes a mere proxy rather than a true caching layer, and your origin server continues to be hammered by requests that should have been served from the CDN’s edge.

Our approach to static asset handling in OJS/OMP environments transcends these simplistic recommendations, focusing on a robust, multi-faceted strategy that ensures optimal caching and minimal server load. We implement meticulous configurations for both Apache and NGINX, the two most prevalent web servers in the OJS ecosystem:

For Apache Environments:

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType image/gif "access plus 1 month"
    ExpiresByType image/webp "access plus 1 month"
    ExpiresByType image/svg+xml "access plus 1 month"
    ExpiresByType application/javascript "access plus 1 week"
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType font/woff2 "access plus 1 month"
    ExpiresByType font/woff "access plus 1 month"
    ExpiresByType application/vnd.ms-fontobject "access plus 1 month"
    ExpiresByType application/x-font-ttf "access plus 1 month"
</IfModule>

<IfModule mod_headers.c>
    <FilesMatch "\.(ico|jpg|jpeg|png|gif|css|js|woff2?|svg|ttf|eot)$">
        Header set Cache-Control "public, max-age=2592000, immutable"
    </FilesMatch>
</IfModule>

This configuration goes beyond basic image and JavaScript files, encompassing a wider array of common web assets, including various image formats, fonts, and stylesheets. The ExpiresByType directive sets a future expiration date for these resources, instructing the client browser and intermediate caches to store them for a specified duration. Crucially, the Header set Cache-Control directive, particularly with immutable, signals to compliant caches that the resource will not change over its lifetime, allowing for aggressive caching without revalidation. The max-age value of 2592000 seconds corresponds to one month, a sensible duration for static assets that rarely change.

For NGINX Environments:

location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2?|svg|ttf|eot)$ {
    expires max;
    add_header Cache-Control "public, max-age=31536000, immutable";
    add_header Pragma "public";
    add_header Vary "Accept-Encoding";
}

NGINX, known for its high performance and efficient static file serving, is configured with similar precision. The expires max; directive sets an extremely long expiration time (effectively one year), ensuring that browsers and caches store these assets for as long as possible. The Cache-Control header is again set to public, max-age=31536000, immutable, reinforcing the aggressive caching strategy. We also add Pragma "public" for backward compatibility with older HTTP/1.0 caches and Vary "Accept-Encoding" to ensure that caches serve the correct compressed or uncompressed version of the asset based on the client’s capabilities. This comprehensive approach ensures that static assets are served with maximum efficiency, offloading significant burden from the OJS application and the backend database.

Without these meticulous configurations, merely enabling mod_expires creates a dangerous illusion of caching. Bots and legitimate users alike will continue to hammer your origin server, requesting assets that should have been served from their local cache or a CDN. This not only degrades performance but also consumes valuable bandwidth and processing power, diverting resources from the core mission of scholarly publishing. True optimization of static assets in OJS is not a simple toggle; it is a finely tuned symphony of server directives, header management, and a deep understanding of web caching principles.


4. Removing and stopping spam is a key element of OJS optimization

In the digital realm, spam is not merely an irritating nuisance; it is an insidious form of digital pollution that, if left unchecked, can cripple the performance and integrity of even the most robust systems. For Open Journal Systems (OJS) and Open Monograph Press (OMP) installations, the threat of spam extends far beyond the visible annoyance of unwanted user registrations or submission attempts. It represents an unseen war, a relentless assault on the underlying database and server resources that, if not decisively won, can transform a vibrant scholarly platform into a sluggish, unresponsive quagmire. While many administrators might believe that implementing a reCAPTCHA solution is the panacea for all spam woes, the reality is far more complex: spambot behavior is a constantly evolving threat, often outpacing the update cycles of standard anti-spam plugins.

The more users OJS holds in your database, the more sluggish it becomes, especially when loading the submission lists for journal managers and editors, or when working through the submission workflow, including review and publishing. Did you know that simply listing submissions can trigger more than 14,000 queries in OJS? Yes, this is ridiculous, but it is exactly why you need to optimize OJS even for normal operation. Here is a post from an OJS user who ran into this volume of queries:

Too many queries natively make OJS slow.
Read more here: Slow loading speed of the submission page on the PKP Forum

The first way to optimize your OJS is to deal with the existing records in your database by removing the large amount of OJS user spam. Read our detailed and technical guide on How to remove lot of OJS user spam.

The consequences of failing to adequately harden an OJS instance against spam are multifaceted and severe, impacting performance at every layer of the application stack:

  • Massive users Table Inflation: Uncontrolled spam registrations lead to an exponential and often grotesque inflation of the users table in the OJS database. Each fake user, though seemingly harmless, adds a row of data that must be stored, indexed, and potentially queried. Over time, this bloat can transform what should be a lean, efficient table into a cumbersome behemoth, significantly slowing down any operation that interacts with user data.
  • Increased Query Load on User Dropdowns and Interfaces: As the users table swells, any OJS interface that involves selecting or displaying user information—such as author dropdowns during submission, reviewer assignment interfaces, or the “Users & Roles” management page—experiences a dramatic increase in query load. Simple operations that once took milliseconds can now take seconds, or even minutes, leading to frustrating delays for editors, authors, and reviewers. This directly impacts the efficiency of the editorial workflow.
  • API Throttling Failures and Database Bloat: Critical OJS functionalities, such as Crossref XML exports or other API-driven data synchronizations, often involve querying large subsets of user data. When the database is bloated with spam accounts, these operations can become agonizingly slow, leading to API throttling failures, timeouts, and incomplete data exports. The integrity of scholarly metadata dissemination is directly compromised by the presence of digital detritus.

Our comprehensive strategy for combating spam in OJS environments is proactive and multi-pronged, designed to stop malicious activity at the earliest possible point, thereby preserving system resources and maintaining optimal performance:

  • Behavioral Spam Heuristics in Registration Forms: We move beyond simple CAPTCHA challenges by implementing sophisticated behavioral spam heuristics directly within the registration forms. This involves analyzing user interaction patterns, such as typing speed, mouse movements, and form field completion order, to identify and flag suspicious, non-human behavior. Bots, by their very nature, often exhibit predictable and non-human-like patterns, which these heuristics are designed to detect and block before a new user account is even created.
  • Automated Pruning Based on Known Email Domain Patterns: Leveraging community intelligence and our own continuously updated threat intelligence, we implement automated routines to prune or block registrations based on known spam email domain patterns. The PKP community, for instance, maintains a valuable collaborative list of spam user patterns, which we integrate into our defense mechanisms. This allows for the proactive identification and rejection of accounts originating from known spam sources, preventing them from ever polluting the database.
  • Regular Database Optimization with mysqlcheck: Beyond initial setup, ongoing maintenance is crucial. We schedule regular mysqlcheck operations, specifically targeting the user_settings table and other join-heavy areas of the database. This utility helps to optimize, repair, and analyze tables, ensuring that indexes remain efficient and data fragmentation is minimized. This proactive optimization is vital for maintaining query performance, especially in tables that experience frequent insertions and updates.
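
A typical scheduled run, assuming the stock OJS database and table names, looks something like this (adjust the credentials, database, and table names to your own setup):

# Defragment and refresh index statistics on the join-heavy tables
mysqlcheck -u root -p --optimize ojs users user_settings sessions
mysqlcheck -u root -p --analyze  ojs users user_settings sessions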

In our hosting service, we have built a special tool to protect your journal from spam users, ensuring that your OJS runs smoothly without technical issues. This is an exclusive approach available only through our service, as it cannot be built without deep research into how OJS works and its architecture.
For more details, read our article: How to protect your OJS from spam user

The litmus test for an OJS instance’s spam resilience is often its “Users & Roles” management page. If this page loads slowly, if filtering and searching for legitimate users takes an inordinate amount of time, then you are already late to the battle. The spam has already infiltrated your system, consuming resources and degrading the user experience. Our approach ensures that the fight against spam is won at the gates, not within the inner sanctum of your database, thereby safeguarding the performance and integrity of your scholarly publishing platform.


5. Microcaching

Microcaching, particularly when implemented with NGINX, is often hailed as a magical elixir for web performance, capable of dramatically reducing server load and accelerating content delivery. For Open Journal Systems (OJS) and Open Monograph Press (OMP) environments, where many published articles are accessed repeatedly, the allure of microcaching is undeniable. By serving cached copies of frequently requested pages directly from the web server’s memory or fast disk, it bypasses the need to hit the PHP application and the backend database for every request, leading to near-instantaneous load times for static content. We have rigorously tested and successfully deployed NGINX’s microcaching mechanism using configurations similar to the following:

proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=microcache:10m max_size=100m inactive=60s use_temp_path=off;
proxy_cache microcache;
proxy_cache_valid 200 302 1m; # Cache successful responses for 1 minute
proxy_cache_valid 404 1s;    # Cache 404s for 1 second
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_lock on;
proxy_cache_background_update on;
proxy_cache_revalidate on;
proxy_cache_min_uses 1;
proxy_ignore_headers Cache-Control Expires Set-Cookie;

And indeed, for the specific use case of serving static article views, the results are nothing short of miraculous. Pages that once took hundreds of milliseconds to render now appear in tens of milliseconds, dramatically improving the reader experience and significantly offloading the backend application. However, the true nature of microcaching, particularly in a dynamic environment like OJS, is that it is a double-edged sword. Its power comes with significant caveats, and without intelligent implementation, it can quickly transform from a performance boon into a source of frustration and data inconsistency.

For more detail about how we utilize this approach, and some issues with it, read: How we manage to improve OJS to 300%

The primary “gotchas” that often trip up even experienced administrators when implementing microcaching in OJS are:

  • Stale Content for Editorial Operations: The most critical challenge arises during editorial operations. If a journal manager edits metadata for an article, approves a galley, or makes any change that impacts the content of a cached page, the microcache will continue to serve the old, stale content until its expiration time. This can lead to significant confusion and errors for editorial staff who expect to see their changes reflected immediately. Without a robust invalidation mechanism, microcaching can actively hinder the editorial workflow (a common bypass sketch follows this list).
  • Lack of Automatic Cache Invalidation in OJS: Unlike some CMS platforms that have built-in hooks for cache invalidation upon content updates, OJS, by default, does not possess an automatic mechanism to signal to an external microcache that a page has changed and needs to be purged. This means that even if an editor publishes a new article or updates an existing one, the microcache will remain blissfully unaware, continuing to serve the outdated version until its TTL (Time To Live) expires.
  • Disk Flooding from Frequent Editorial Activity: In highly active OJS instances with frequent editorial activity, a poorly configured microcache can lead to a rapid accumulation of temporary cache keys on disk. Each time a page is updated, a new cached version might be generated, while the old one remains until expiration or manual purging. This can quickly consume disk space and degrade disk I/O performance, ironically creating a new bottleneck in the pursuit of speed.
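
Before reaching for a purpose-built engine, a common partial mitigation is to bypass the microcache for authenticated sessions and editorial paths. Here is a minimal NGINX sketch; OJSSID is the stock OJS session cookie name, and the path patterns are assumptions that you should verify against your own URL routing:

# Skip the microcache for logged-in users and backend/editorial paths (patterns are assumptions)
set $skip_cache 0;
if ($http_cookie ~* "OJSSID") { set $skip_cache 1; }
if ($request_uri ~* "/(login|admin|management|workflow|submission|dashboard)") { set $skip_cache 1; }

proxy_cache_bypass $skip_cache;
proxy_no_cache     $skip_cache;

This keeps editors out of the cache, but it does nothing for anonymous readers who still receive stale pages, which is exactly the gap described below.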

It was precisely these inherent limitations and the critical need for real-time data consistency in a scholarly publishing environment that led us to develop OJT Blazing Cache. This is not merely a generic caching solution; it is our proprietary microcaching engine, meticulously engineered to integrate seamlessly with OJS’s internal hooks and events. OJT Blazing Cache addresses the fundamental shortcomings of generic microcaching by:

  • Intelligent, Selective Invalidation: Instead of relying on broad cache expiration times, OJT Blazing Cache actively listens for specific OJS events—such as article publication, metadata updates, submission status changes, or user role modifications. Upon detecting such an event, it triggers a selective invalidation of only the affected cached pages, ensuring that editorial operations immediately reflect the latest data without purging the entire cache. This precision minimizes cache misses and maximizes the efficiency of the caching layer.
  • Real-time Synchronization: Our engine maintains a real-time understanding of OJS content changes, ensuring that the cached layer is always synchronized with the live database. This eliminates the risk of serving stale content to editors and readers, a critical requirement for maintaining the integrity and trustworthiness of scholarly publications.
  • Optimized Disk Management: OJT Blazing Cache employs sophisticated algorithms for cache storage and purging, preventing disk flooding and ensuring efficient utilization of resources, even under conditions of high editorial activity.

In essence, while NGINX microcaching provides the raw power, OJT Blazing Cache provides the intelligence and precision required to wield that power effectively within the dynamic and sensitive context of OJS/OMP. It transforms a potentially problematic performance enhancement into a reliable, high-performance solution that respects the integrity of the editorial workflow and the immediacy of scholarly dissemination.

This article explains how we use OJT Blazing Cache to optimize OJS speed, with improvements of almost 3000%.

6. Optimizing PHP-FPM Configuration

In the intricate architecture of Open Journal Systems (OJS) and Open Monograph Press (OMP), PHP-FPM (FastCGI Process Manager) stands as an unsung hero, silently orchestrating the execution of PHP scripts that power every dynamic interaction on the platform. Yet, despite its pivotal role, PHP-FPM configuration remains, by far, the most ignored and consequently, the most insidious bottleneck in countless OJS deployments. The vast majority of shared hosting providers, driven by a need to maximize server density, offer little to no control over PHP-FPM settings, leaving OJS instances to languish under generic, sub-optimal configurations. Even on dedicated servers, where administrators theoretically have full control, the default FPM pools are almost universally not tuned for the unique demands of simultaneous journal loading, leading to chronic performance issues that manifest as slow page loads, timeouts, and an overall unresponsive user experience.

The tell-tale sign of an under-tuned PHP-FPM configuration is often found buried within server logs, a cryptic warning that screams for attention:

WARNING: [pool www] server reached pm.max_children setting (5), consider raising it.

This seemingly innocuous message is a critical indicator that your PHP-FPM processes are being exhausted. It means that all available PHP workers are busy processing requests, and any new incoming requests are being queued, leading to delays and potential timeouts. In a multi-journal OJS environment, where numerous concurrent requests for different journals, articles, and editorial actions are commonplace, hitting this limit is a recipe for disaster.

Our deep expertise in OJS performance optimization has led us to meticulously fine-tune PHP-FPM configurations, moving beyond the simplistic defaults to create highly responsive and resilient environments. Our strategies involve a nuanced understanding of traffic patterns and resource allocation:

  • pm = static for Predictable Traffic: For OJS instances with highly predictable and consistent traffic patterns, we often opt for pm = static. This mode pre-forks a fixed number of PHP-FPM worker processes at startup. While it consumes more memory upfront, it eliminates the overhead of dynamically spawning new processes, ensuring that workers are always immediately available to handle incoming requests. This is ideal for environments where the baseline load is high and consistent, providing maximum responsiveness.
  • pm = ondemand for Variable Load: Conversely, for OJS instances experiencing highly variable or bursty traffic, pm = ondemand is often the superior choice. In this mode, PHP-FPM spawns worker processes only when a request arrives and terminates them after a period of inactivity. This conserves memory during low-traffic periods, making it suitable for environments with fluctuating loads or those where memory resources are more constrained. The slight overhead of spawning new processes is outweighed by the memory savings and adaptability to unpredictable demand.
  • Socket-Based Pool Separation per Journal Cluster: For large-scale OJS deployments hosting multiple distinct journal clusters, we implement advanced configurations involving socket-based PHP-FPM pool separation. Instead of a single, monolithic PHP-FPM pool serving all journals, we create separate, isolated pools for different journal clusters. Each pool listens on its own Unix socket, allowing for granular resource allocation and preventing a traffic surge on one journal from impacting the performance of others. This architectural segregation enhances stability, improves resource isolation, and allows for highly specific tuning based on the unique demands of each cluster.

Beyond these fundamental pm settings, our optimization extends to exposing and leveraging the PHP-FPM status page. By configuring OJS environments to expose this critical monitoring interface (typically at /php-status), we gain invaluable real-time insights into the health and performance of the PHP-FPM pools. A comprehensive guide on how to enable PHP-FPM status is readily available, and its implementation is a non-negotiable step in our optimization process.
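
Enabling the status page is a small change in the pool definition plus a restricted location in the web server. Here is a minimal sketch in which the socket path and the allowed IP are assumptions to adapt to your own stack:

; in the pool file, e.g. /etc/php/*/fpm/pool.d/www.conf
pm.status_path = /php-status

# in the NGINX vhost: restrict the page and hand it to PHP-FPM
location = /php-status {
    access_log off;
    allow 127.0.0.1;                          # or your monitoring host
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # socket path is an assumption
}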

Once the PHP-FPM status page is live, our engineers correlate spike patterns observed in the status metrics with external events, such as Googlebot crawls, peak submission periods, or sudden editorial rushes. This correlation allows for data-driven adjustments to pm.max_children, pm.start_servers, pm.min_spare_servers, and pm.max_spare_servers parameters. For truly dynamic and large-scale environments, we integrate these metrics with advanced monitoring and automation tools like Ansible and Prometheus. This enables us to implement dynamic scaling of PHP-FPM pool sizes, automatically adjusting the number of available workers in real-time based on current load, ensuring optimal performance and resource utilization without manual intervention. This level of proactive, data-driven PHP-FPM management is what truly differentiates a high-performing OJS instance from one that merely limps along.

Here’s a commonly optimized PHP-FPM configuration suitable for an OJS/OMP installation on a dedicated server or VPS with moderate traffic (e.g., 4–8 CPU cores, 8–16 GB RAM). Below the config, you’ll find a clear explanation that emphasizes it’s not a one-size-fits-all (“no silver bullet”) and requires ongoing monitoring. Example: www.conf – Optimized PHP-FPM Pool Configuration

This file is typically located in /etc/php/*/fpm/pool.d/www.conf or similar, depending on your PHP version (this won’t work if you are using cPanel-based hosting).
Details about setting up this optimized PHP-FPM configuration can be found in this article: How to configure PHP-FPM?

[www]

; Choose process management style: static, dynamic, or ondemand
pm = dynamic

; Max number of child processes allowed
pm.max_children = 30

; Min number of idle server processes
pm.min_spare_servers = 5

; Max number of idle server processes
pm.max_spare_servers = 10

; Number of child processes to start initially
pm.start_servers = 8

; Max number of requests each child process should execute before respawning
pm.max_requests = 500

; Optional: increase buffer limits for OJS-heavy forms
php_admin_value[memory_limit] = 512M
php_admin_value[upload_max_filesize] = 64M
php_admin_value[post_max_size] = 128M
php_admin_value[max_execution_time] = 300

While this optimized OJS configuration for PHP-FPM offers a solid baseline for many OJS/OMP servers, it’s crucial to understand that PHP-FPM performance tuning is highly dependent on your specific environment:

  • A journal with low traffic can run well with pm = ondemand, reducing idle resources.
  • A heavily crawled or multi-journal OJS instance may require pm = static and higher pm.max_children values.
  • Running too many children on a small server can crash your system due to memory exhaustion.
  • Too few children on a powerful server can leave your resources underutilized, leading to slow response times under load.

Thus, there is no universal or fixed PHP-FPM config that works for every server.
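
A common rule of thumb for picking a starting value, not a replacement for measurement, is to divide the memory you can dedicate to PHP by the average size of one PHP-FPM worker (which you can check with ps or top). For example, assuming an 8 GB server that reserves roughly 3 GB for MySQL and the operating system, with workers averaging about 120 MB each:

pm.max_children ≈ (8 GB - 3 GB) / 120 MB ≈ 5120 MB / 120 MB ≈ 42

From there, watch the pm.max_children warning in the logs and the PHP-FPM status page, and adjust up or down based on what the server actually does under load.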


7. Monitoring and removing useless code

In the dynamic and often demanding world of scholarly publishing, the performance of an Open Journal Systems (OJS) or Open Monograph Press (OMP) installation is not a static achievement; it is a continuous process, an unceasing vigil that demands constant attention, proactive monitoring, and diligent cleanup. To treat performance as a one-time configuration task is to fundamentally misunderstand the nature of these complex platforms. OJS, by its very design, is a living, breathing system that accumulates data, logs activity, and processes an ever-growing volume of scholarly content. Without a robust framework for ongoing monitoring and systematic cleanup, even the most meticulously optimized initial setup will inevitably succumb to degradation, slowing down quietly until it becomes a bottleneck for the entire publishing workflow.

Our commitment to sustained OJS performance extends far beyond initial deployment, encompassing a comprehensive suite of ongoing operational procedures integrated into all production environments. These are not merely best practices; they are existential necessities for maintaining the health, responsiveness, and long-term viability of any OJS instance:

  • Automated Cleanup of metrics and usageStats Tables: The metrics and usageStats tables within the OJS database are invaluable for tracking article views, downloads, and other usage data. However, if left unchecked, these tables can grow to astronomical sizes, becoming significant performance liabilities. We implement automated cron jobs to periodically clean up data older than a specified threshold, typically six months. This involves archiving historical data for long-term analysis while purging the active tables of stale entries, ensuring that queries against these tables remain fast and efficient. This proactive management prevents database bloat and maintains query responsiveness (an illustrative cleanup script is sketched after this list).
  • Regular Database Integrity Audits via Cron: The integrity of the OJS database is paramount. We schedule regular, automated database integrity audits via cron. These audits utilize MySQL’s built-in tools (e.g., CHECK TABLE, OPTIMIZE TABLE) to identify and repair any corruption, fragmentation, or inconsistencies within the database tables. Proactive integrity checks prevent minor issues from escalating into major data corruption, safeguarding the scholarly content and ensuring reliable operation.
  • Comprehensive System Resource Logging and Analysis: Our monitoring framework extends beyond the application layer to encompass the underlying server infrastructure. We implement continuous logging of critical system resources, including CPU utilization, memory consumption, disk I/O, and network activity. Tools like htop (for real-time process monitoring), mysqladmin (for MySQL server status), and iostat (for disk I/O statistics) are configured to log their output at regular intervals across all containers and virtual machines hosting OJS. This granular data is then fed into centralized logging and analysis platforms, allowing our engineers to identify performance trends, pinpoint bottlenecks, and correlate resource spikes with specific OJS activities or external events. This deep visibility is crucial for proactive problem-solving and capacity planning.
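
As an illustration of the kind of cleanup routine described in the first point above, here is a minimal shell sketch. The metrics table and its month column follow the legacy schema used by OJS 3.3 and earlier, so verify the table and column names against your own database, and test on a copy, before scheduling anything like this:

#!/bin/sh
# ojs_metrics_cleanup.sh -- illustrative sketch only, not a drop-in script
# assumes MySQL credentials are available via ~/.my.cnf
DB=ojs
CUTOFF=$(date -d "6 months ago" +%Y%m)   # assumes a YYYYMM month column

# 1. Archive the rows that are about to be purged
mysqldump "$DB" metrics --where="month < '$CUTOFF'" > /backup/metrics_archive_"$CUTOFF".sql

# 2. Purge them from the live table, then defragment it
mysql "$DB" -e "DELETE FROM metrics WHERE month < '$CUTOFF'; OPTIMIZE TABLE metrics;"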

Additional improvements:

1. Removing Version Check

In its native source code, OJS checks for a new OJS version every time you perform an operation in the backend. This triggers a useless background request that slows down your OJS. Here is how to remove it technically:

In the file lib/pkp/classes/site/VersionCheck.inc.php you need to remove this code (around line 40 or line 50 – it may vary with the OJS version):

return self::parseVersionXML(
    $application->getVersionDescriptorUrl() .
    ($includeId?'?id=' . urlencode($uniqueSiteId) .
        '&oai=' . urlencode($request->url('index', 'oai'))
    :'')
);

Remove or comment out that code so OJS no longer performs this version check so often.

2. Disable the remote check for the latest OJS version

To prevent repeated requests to fetch the latest version of OJS, you can disable the process by removing this code:

1. In lib/pkp/pages/admin/AdminHandler.inc.php, remove this block of code:

		import('lib.pkp.classes.site.VersionCheck');
		$latestVersion = VersionCheck::checkIfNewVersionExists();

		// Display a warning message if there is a new version of OJS available
		if (Config::getVar('general', 'show_upgrade_warning') && $latestVersion) {
			$currentVersion = VersionCheck::getCurrentDBVersion();
			$templateMgr->assign([
				'newVersionAvailable' => true,
				'currentVersion' => $currentVersion,
				'latestVersion' => $latestVersion,
			]);
		}

2. In lib/pkp/pages/management/ManagementHandler.inc.php

Find this line of code and change it to the new code:

$latestVersion = VersionCheck::checkIfNewVersionExists();

to

$latestVersion = false;

After deploying this code modification, you will no longer get the notification modal when a new version of OJS is released.

3. Disable plugins that suck up database resources

Although plugins are meant to improve the functionality of OJS as a publishing platform, and some plugins do help improve the visibility and quality of journal publishing, it is worth noting that not all plugins should be installed carelessly in your journal.

Here are some plugins that are worth removing from your journal:

  1. Recommend Articles by Author
  2. Recommend Similar Articles
  3. Web Feed Plugin
  4. AddThis Plugin

Rule of thumb: minimize the use or installation of unnecessary plugins that are not required by your journal or by indexing services. Even one plugin may slow down your journal site and can be the reason your journal becomes slow or unoptimized.

Additional Tips

Set up uptime monitoring

As journal traffic grows organically over time, the need for real-time external monitoring becomes not just beneficial, but essential. Even a 3 to 6-hour period of site slowness or downtime can have serious consequences. Indexing bots, such as those from Google Scholar, may interpret the site as slow, which in their terms means inactive or unreliable, leading to a loss of indexing status. In many cases, Google Scholar silently deprioritizes or even removes journals it deems technically unstable or inconsistent in availability, regardless of content quality.

To prevent such outcomes, our hosting infrastructure includes automated server and uptime monitoring, purpose-built for OJS/OMP environments. Our system continuously tracks server health and journal responsiveness. When any instance shows slow response times or downtime, our team is instantly notified—allowing us to take proactive action before search engine crawlers or end users are affected.

If you’re managing your own OJS installation, we strongly recommend setting up uptime monitoring using third-party services like UptimeRobot or BetterStack. For those who prefer a self-hosted option, tools like Uptime Kuma offer customizable monitoring dashboards to track site availability and performance metrics. Regardless of the method, continuous monitoring is a non-negotiable part of maintaining your journal’s discoverability and trust in today’s scholarly ecosystem.
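
For the self-hosted route, Uptime Kuma can be brought up with a single Docker command; the line below follows the project's published quick-start at the time of writing, so check its README for the current instructions:

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1

Point one monitor at your journal's landing page and another at a representative article URL, so that both the application layer and the database path are being watched.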

Conclusion:

If you have journeyed through the intricate technical landscape detailed within this document, you have undoubtedly arrived at a profound realization: the optimization of Open Journal Systems (OJS) and Open Monograph Press (OMP) performance is not a trivial undertaking. It is not, as many might naively assume, a matter of merely flipping a caching switch, installing a single plugin, or simply throwing more hardware at the problem. Such simplistic approaches are destined to fail, for they fundamentally misunderstand the deep-seated complexities inherent in these sophisticated scholarly publishing platforms.

True OJS/OMP optimization is an art and a science, a meticulous craft that demands a multi-faceted approach built upon:

  • A Granular Understanding of OJS Architecture: It requires an intimate, granular understanding of how OJS operates at its core—how its database schema is structured, how its PHP application interacts with the web server, how its various modules and plugins interweave, and how its editorial workflows impact system resources. This is not about surface-level knowledge; it’s about delving into the very source code and architectural decisions that define the platform.
  • Anticipating the Digital Ecosystem: It necessitates the ability to anticipate and model the behavior of the diverse digital ecosystem that interacts with OJS. This includes the relentless activity of search engine crawlers, the nuanced demands of human editors and reviewers, the specific resource consumption patterns of various plugins, and the impact of external API integrations. Performance is not just about serving content; it’s about gracefully handling the sum total of all these interactions.
  • Designing a Layered, Adaptive Performance Strategy: Ultimately, it demands the design and implementation of a layered performance strategy—a comprehensive, adaptive framework that integrates database optimization, intelligent traffic filtering, meticulous static asset caching, proactive spam prevention, finely tuned PHP-FPM configurations, and continuous monitoring. This strategy must be dynamic, capable of adapting and evolving as your journal grows, as traffic patterns shift, and as new versions of OJS are released.

Because these complexities of OJS directly influence its all-important indexing, we, with our specialized expertise and battle-tested methodologies, offer managed OJS (Open Journal Systems) hosting solutions that incorporate every single one of the aforementioned optimizations. Our environments are not merely configured; they are pre-configured, rigorously stress-tested under real-world conditions, and actively monitored 24/7 by a team of dedicated experts.

Because in the world of OJS, complexity is not the problem to be avoided; it is the very essence that makes performance optimization not just essential, but a strategic imperative. It is the challenge that, when mastered, unlocks the full potential of scholarly publishing, ensuring that your research reaches its audience with unparalleled speed, reliability, and impact.

Ready to transform your OJS instance from a bottleneck into a beacon of scholarly dissemination?

Get in touch with our team today. We offer a service through our exclusive OJS Hosting and Expert assistance for your OJS. We also offer an audit of your existing OJS setup, meticulously analyzing its current performance, identifying hidden bottlenecks, and providing a clear road map to optimizing your OJS. Discover where your real performance challenges are hiding, and let us show you how a truly optimized OJS environment can elevate your scholarly publishing endeavors.


About the Author

Project Manager

Hendra here. I love writing about OJS and sharing knowledge about it. My passion is the OJS and OMP platforms, and doing research to create innovative products for those platforms that help publishers improve their publications.
