Views Table and Views Cache Query vs. Custom Block Plugin and Direct Query
Moving from Views to a Custom Controller or Block Plugin is a classic "Performance vs. Convenience" trade-off in Drupal. While Views is incredibly powerful, it is essentially a massive SQL generator and template-processing engine wrapped in layers of abstraction.
The "Views Overhead": Why it's Slower
When you use a direct query in a controller, you are telling the database exactly what you want. When you use Views, Drupal has to:
* Load the View Entity: Fetch and unserialize the view's configuration entity.
* Build the Query: Run every "Handler" (fields, filters, sorts) to assemble the SQL programmatically.
* Execute & Collect Metadata: Run the query and attach cache tags/contexts to every row.
* The Rendering Tax: This is the biggest cost. For every row, Views renders a row template, runs "Preprocess" functions, and checks access. If you have 100 rows, that's 100 sets of function calls.
The Two Layers of Views Caching
Views has its own internal caching system that sits "inside" the Drupal Render Cache. You can find these settings under Advanced > Caching in the View UI.
1. Query Results Cache
* What it does: Stores the raw data returned by the SQL query.
* When to use: If your SQL query is complex (many joins or subqueries).
* The Catch: It does not cache the HTML. Drupal still has to run the expensive "Rendering Tax" mentioned above for every page load.
2. Rendered Output Cache
* What it does: Stores the final HTML of the View.
* When to use: Almost always. This is what makes a View feel fast to the end user.
* The Catch: If you have high-cardinality cache contexts (like a view that changes for every single user), the cache hit rate will be low, and the overhead remains.
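For reference, both layers are controlled by a single cache plugin in the view's exported configuration. A hedged sketch of the relevant fragment, using core's "time" cache plugin (the view name and lifespans are illustrative):

```yaml
# Fragment of a view's config export (e.g. views.view.my_view.yml).
display:
  default:
    display_options:
      cache:
        type: time
        options:
          results_lifespan: 3600   # Layer 1: query results cache, in seconds.
          output_lifespan: 3600    # Layer 2: rendered output cache, in seconds.
```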
Views vs. Custom Controller: A Comparison
| Feature | Views | Custom Controller (Direct Query) |
|---|---|---|
| Development Speed | Fast (UI-based) | Slower (Code-based) |
| Execution Speed | Moderate/Heavy | Blazing Fast |
| Memory Usage | High (Loads many objects) | Low |
| Maintenance | Easy for non-developers | Requires a developer to change a column |
| Caching | Complex, granular | Requires manual #cache metadata |
When to Stick with the Controller
You made the right call using a Controller if:
* You are displaying large datasets (e.g., a table with 500+ rows).
* You don't need the "extra" features of Views like UI-based filtering or bulk operations.
* You are performing complex calculations on the data after it leaves the database but before it hits the table.
How to make your Controller as "Smart" as a View
Since you are using a direct query, you must manually attach the Cache Tags so the table clears when the data changes. If your query pulls from nodes, add this to your render array:
$build['my_table'] = [
  '#type' => 'table',
  '#header' => $header,
  '#rows' => $rows,
  '#cache' => [
    'tags' => ['node_list'], // Invalidated when any node is added, updated, or deleted.
    'contexts' => ['user.permissions'],
  ],
];
Would you like me to show you how to add a "Pager" to your custom controller query so it handles large datasets without timing out?
The choice between a Controller and a Block Plugin depends entirely on the "real estate" of the page. In Drupal, these serve two distinct architectural purposes.
Controller vs. Block Plugin: When to Use Which?
| Feature | Controller (Route) | Block Plugin |
|---|---|---|
| URL | Has its own dedicated path (e.g., /reports/my-table). | No URL; it is "placed" on existing pages. |
| Context | The "Main Content" of the page. | Secondary content (Sidebar, Header, Footer). |
| Layout | You control the full page body via a routing file. | Managed via the "Block Layout" UI or Layout Builder. |
| Reusability | Harder to reuse on multiple pages. | Highly reusable; can be placed anywhere. |
The "Both" Approach: Often, developers write the logic in a Service, then call that service from a Controller for a dedicated "Full Screen" view, and from a Block for a "Dashboard Widget" view.
Building a High-Performance Table with a Pager
Since you are bypassing Views for performance, you need to use the PagerSelectExtender. This allows you to handle thousands of rows by only querying 50 at a time, keeping your memory footprint tiny.
1. The Controller Logic (MyReportController.php)
public function buildTable() {
  $database = \Drupal::database();

  // 1. Create a pager-enabled query.
  $query = $database->select('node_field_data', 'n')
    ->extend('\Drupal\Core\Database\Query\PagerSelectExtender');
  $query->fields('n', ['nid', 'title', 'created'])
    ->condition('n.type', 'article')
    ->limit(20) // Rows per page.
    ->orderBy('n.created', 'DESC');
  $results = $query->execute();

  $rows = [];
  foreach ($results as $row) {
    $rows[] = [
      $row->nid,
      $row->title,
      \Drupal::service('date.formatter')->format($row->created, 'short'),
    ];
  }

  return [
    'table' => [
      '#type' => 'table',
      '#header' => ['ID', 'Title', 'Date'],
      '#rows' => $rows,
      '#empty' => $this->t('No data found.'),
    ],
    // 2. Add the pager element to the render array.
    'pager' => [
      '#type' => 'pager',
    ],
    '#cache' => [
      'tags' => ['node_list:article'],
      'contexts' => ['url.query_args'], // Crucial for the pager (?page=0, ?page=1).
    ],
  ];
}
Why this is faster than Views
* Selective Fetching: You are only fetching 3 columns (nid, title, created). Views often fetches the entire entity object or dozens of columns "just in case."
* No Row Preprocessing: You are iterating through a raw object array. Views would be triggering template_preprocess_views_view_fields() and multiple Twig lookups for every single cell.
* Specific Cache Contexts: By adding url.query_args, you ensure that Page 1 and Page 2 are cached separately in the Dynamic Page Cache, preventing users from seeing the wrong results when they click "Next."
The Verdict
* Use a Controller if this table is a standalone "Report" page.
* Use a Block if you want to show this table as a summary on a sidebar or a user's profile.
* Use a Service if you need both.
Would you like me to show you how to move the query logic into a "Service" so you can easily call it from both a Block and a Controller?
Moving your logic into a Service is the "Drupal Way." It keeps your code DRY (Don't Repeat Yourself), makes it easier to unit test, and allows you to swap out the logic in one place while updating both your Controller and your Block simultaneously.
Here is how to set up a service and inject it into your components.
1. Define the Service (my_module.services.yml)
First, tell Drupal about your service. This allows Drupal to handle the database connection for you.
services:
  my_module.report_generator:
    class: Drupal\my_module\ReportGeneratorService
    arguments: ['@database', '@date.formatter']
2. Create the Service Class (src/ReportGeneratorService.php)
This class holds the "heavy lifting" logic you previously had in the controller.
namespace Drupal\my_module;

use Drupal\Core\Database\Connection;
use Drupal\Core\Datetime\DateFormatterInterface;

class ReportGeneratorService {

  protected $database;
  protected $dateFormatter;

  public function __construct(Connection $database, DateFormatterInterface $date_formatter) {
    $this->database = $database;
    $this->dateFormatter = $date_formatter;
  }

  public function getReportData($limit = 20) {
    $query = $this->database->select('node_field_data', 'n')
      ->extend('\Drupal\Core\Database\Query\PagerSelectExtender');
    $query->fields('n', ['nid', 'title', 'created'])
      ->condition('n.type', 'article')
      ->limit($limit)
      ->orderBy('n.created', 'DESC');
    $results = $query->execute();

    $rows = [];
    foreach ($results as $row) {
      $rows[] = [
        $row->nid,
        $row->title,
        $this->dateFormatter->format($row->created, 'short'),
      ];
    }
    return $rows;
  }

}
3. Use Dependency Injection (DI)
Now, instead of calling the database directly, your Controller and Block will "ask" for your new service.
In the Controller:
// Inside your Controller class.
protected $reportGenerator;

public function __construct(ReportGeneratorService $report_generator) {
  $this->reportGenerator = $report_generator;
}

public static function create(ContainerInterface $container) {
  return new static(
    $container->get('my_module.report_generator')
  );
}

public function content() {
  return [
    'table' => [
      '#type' => 'table',
      '#rows' => $this->reportGenerator->getReportData(20),
      // ... same header/cache/pager setup as before.
    ],
  ];
}
In the Block Plugin:
// Inside your Block class. The constructor stores the injected service,
// mirroring the controller.
public function __construct(array $configuration, $plugin_id, $plugin_definition, ReportGeneratorService $report_generator) {
  parent::__construct($configuration, $plugin_id, $plugin_definition);
  $this->reportGenerator = $report_generator;
}

public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
  return new static(
    $configuration,
    $plugin_id,
    $plugin_definition,
    $container->get('my_module.report_generator')
  );
}
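Not shown above is the block's `build()` method. A minimal sketch, assuming the same service and render-array conventions as the controller (the smaller row limit is an assumption, chosen to suit a widget):

```php
// Hedged sketch: a minimal build() for the block, delegating to the service.
public function build() {
  return [
    'table' => [
      '#type' => 'table',
      '#header' => ['ID', 'Title', 'Date'],
      '#rows' => $this->reportGenerator->getReportData(10), // Smaller limit for a widget.
      '#empty' => $this->t('No data found.'),
    ],
    '#cache' => [
      'tags' => ['node_list:article'],
      'contexts' => ['url.query_args'],
    ],
  ];
}
```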
Summary of Benefits
* Cleaner Controller: Your controller is now just a "traffic cop" that hands data to the template.
* Reusable Logic: If you need to export this table as a CSV later, you just create a new route and call the same service.
* Performance: Because the service uses the PagerSelectExtender, both the Block and the Controller will remain memory-efficient even as your node count grows.
Would you like me to explain how to add "Sortable Headers" to this service so users can click the table columns to reorder the data?
Adding sortable headers to a manual query is a huge performance win. In Views, this often triggers a massive re-calculation of the entire query structure, but in a custom service, we can use the TableSort extender to let the database handle the heavy lifting with minimal overhead.
To do this, we need to modify the service to accept the table header and then update the query to respond to it.
1. Update the Service (ReportGeneratorService.php)
We add the TableSortExtender to the query chain. This extender looks at the active URL (e.g., ?sort=desc&order=Title) and automatically injects the correct ORDER BY clause into your SQL.
public function getReportData($header, $limit = 20) {
  $query = $this->database->select('node_field_data', 'n')
    ->extend('\Drupal\Core\Database\Query\PagerSelectExtender')
    ->extend('\Drupal\Core\Database\Query\TableSortExtender'); // Add this!
  $query->fields('n', ['nid', 'title', 'created'])
    ->condition('n.type', 'article')
    ->limit($limit)
    ->orderByHeader($header); // Maps the header keys onto the ORDER BY clause.
  $results = $query->execute();

  $rows = [];
  foreach ($results as $row) {
    $rows[] = [
      $row->nid,
      $row->title,
      $this->dateFormatter->format($row->created, 'short'),
    ];
  }
  return $rows;
}
2. Define the Header in your Controller
The $header array is special. It tells Drupal which columns are sortable and which database fields they map to.
public function content() {
  // Define headers with 'field' keys matching your DB columns.
  $header = [
    ['data' => $this->t('ID'), 'field' => 'nid', 'sort' => 'asc'],
    ['data' => $this->t('Title'), 'field' => 'title'],
    ['data' => $this->t('Date'), 'field' => 'created'],
  ];
  $rows = $this->reportGenerator->getReportData($header, 20);

  return [
    'table' => [
      '#type' => 'table',
      '#header' => $header, // The same header array is passed here.
      '#rows' => $rows,
      '#empty' => $this->t('No data found.'),
    ],
    'pager' => [
      '#type' => 'pager',
    ],
    '#cache' => [
      'tags' => ['node_list:article'],
      'contexts' => ['url.query_args'], // MUST include this so sort results are cached separately.
    ],
  ];
}
Why this is better than "Default" Views
* SQL Efficiency: The TableSortExtender only appends the necessary ORDER BY logic. It doesn't attempt to "guess" relationships or join tables you don't need.
* Cache Predictability: By using the url.query_args cache context, Drupal generates a unique cache entry for every combination of sort and page. For example:
* /my-report?page=0&sort=asc&order=ID → Cache Entry A
* /my-report?page=0&sort=desc&order=ID → Cache Entry B
* User Experience: The table render element automatically turns your header text into clickable links with "up/down" arrows, handling all the logic for you.
A Critical Cache Warning
Because you are now using url.query_args, your cache bin could grow quite large if you have thousands of unique visitors clicking different sort combinations. If you notice your database getting heavy, you might want to set a Max-Age (e.g., 6 hours) on this specific report to ensure old sort combinations are eventually purged.
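A minimal sketch of that approach, reusing the cache metadata from the report's render array (21,600 seconds = 6 hours):

```php
'#cache' => [
  'tags' => ['node_list:article'],
  'contexts' => ['url.query_args'],
  'max-age' => 21600, // 6 hours; stale sort/page combinations expire on their own.
],
```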
Would you like to see how to add a simple "Search" filter to this service as well?
Adding a search filter to your custom service is the final step in matching the core functionality of Views while maintaining your performance lead.
To do this, we’ll pass a search string from the Controller (retrieved from the URL or a form) into the Service, and use the database condition() method with a wildcard.
1. Update the Service (ReportGeneratorService.php)
We add a $search parameter. If a search term is provided, we use a LIKE condition. In Drupal’s database API, % is the wildcard.
public function getReportData($header, $limit = 20, $search = NULL) {
  $query = $this->database->select('node_field_data', 'n')
    ->extend('\Drupal\Core\Database\Query\PagerSelectExtender')
    ->extend('\Drupal\Core\Database\Query\TableSortExtender');
  $query->fields('n', ['nid', 'title', 'created'])
    ->condition('n.type', 'article')
    ->limit($limit)
    ->orderByHeader($header);

  // Add the search filter logic.
  if (!empty($search)) {
    // Matches the string anywhere in the title.
    $query->condition('n.title', '%' . $this->database->escapeLike($search) . '%', 'LIKE');
  }
  $results = $query->execute();

  $rows = [];
  // ... (rest of the row processing logic)
  return $rows;
}
2. Update the Controller to Handle the Search Input
In a real-world scenario, you might use a formal Drupal Form, but for a high-performance "quick filter," you can simply grab a query parameter from the URL (e.g., ?search=pizza).
use Drupal\Component\Utility\Html;
use Symfony\Component\HttpFoundation\Request;

public function content(Request $request) {
  // Get the search term from the URL (default to an empty string).
  $search_term = $request->query->get('search', '');

  $header = [
    ['data' => $this->t('ID'), 'field' => 'nid', 'sort' => 'asc'],
    ['data' => $this->t('Title'), 'field' => 'title'],
    ['data' => $this->t('Date'), 'field' => 'created'],
  ];

  // Pass the search term to the service.
  $rows = $this->reportGenerator->getReportData($header, 20, $search_term);

  $build = [];
  // A simple search form (plain HTML for speed; use the Form API for anything bigger).
  $build['search_form'] = [
    '#markup' => '<form method="get"><input type="text" name="search" value="' . Html::escape($search_term) . '"><button type="submit">Search</button></form>',
  ];
  $build['table'] = [
    '#type' => 'table',
    '#header' => $header,
    '#rows' => $rows,
    '#empty' => $this->t('No results matching "@term".', ['@term' => $search_term]),
  ];
  $build['pager'] = ['#type' => 'pager'];
  $build['#cache'] = [
    'tags' => ['node_list:article'],
    // 'sort' and 'order' are both set by TableSort links; 'page' by the pager.
    'contexts' => ['url.query_args:search', 'url.query_args:sort', 'url.query_args:order', 'url.query_args:page'],
  ];
  return $build;
}
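If you later outgrow the raw-markup form, here is a minimal Form API sketch. The class name ReportFilterForm and its placement are assumptions, not from the original; using method GET keeps it compatible with the ?search= query parameter:

```php
namespace Drupal\my_module\Form;

use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;

// Hedged sketch: a GET form that submits back to the same page as ?search=.
class ReportFilterForm extends FormBase {

  public function getFormId() {
    return 'my_module_report_filter';
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    $form['#method'] = 'get';
    // Suppress the CSRF token so it is not appended to the URL.
    $form['#token'] = FALSE;
    $form['search'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Search'),
      '#default_value' => $this->getRequest()->query->get('search', ''),
    ];
    $form['actions']['submit'] = [
      '#type' => 'submit',
      '#value' => $this->t('Filter'),
    ];
    return $form;
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    // GET forms submit via the URL; nothing to do here.
  }

}
```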
Why this is "Leaner" than Views
* Security Built-in: The condition() method uses query placeholders, which protects you against SQL injection — one of the main things Views handles for you — while $this->database->escapeLike() neutralizes user-supplied wildcard characters (% and _) so they are matched literally.
* Granular Caching: Notice the cache contexts switched from the broad url.query_args to specific ones like url.query_args:search. This tells Drupal to vary the cache only on the parameters that actually affect the output, rather than on every possible URL argument. This keeps your cache bin cleaner.
* No "Join" Bloat: Views often joins the node table to the node_field_data table unnecessarily. Here, you are querying exactly one table.
Final Tip: Performance Indexing
Since you are now filtering by title with LIKE, this query can slow down once the table reaches tens of thousands of rows.
* The Fix: An index on the title column helps, but note that a standard B-tree index is only used for prefix patterns ('term%'); a leading-wildcard '%term%' still forces a full scan. For true keyword search at scale, consider a FULLTEXT index or the Search API module.
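If you do add an index from a custom module, a hedged hook_update_N() sketch using the Schema API (the table name my_module_report and index name are hypothetical; altering core's node_field_data schema directly is not recommended, and a B-tree index only helps prefix-anchored searches):

```php
/**
 * Adds an index on the title column of a hypothetical custom report table.
 */
function my_module_update_10001() {
  $schema = \Drupal::database()->schema();
  if (!$schema->indexExists('my_module_report', 'title_idx')) {
    $schema->addIndex('my_module_report', 'title_idx', ['title'], [
      'fields' => [
        'title' => ['type' => 'varchar', 'length' => 255, 'not null' => FALSE],
      ],
    ]);
  }
}
```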
Would you like me to show you how to wrap this entire thing in a "Download CSV" button so users can export the filtered results?
To add a CSV export functionality, we want to reuse your service's logic but bypass the Pager (since a CSV should usually contain all filtered results, not just the first 20) and the HTML rendering.
Here is how to implement a high-performance export in Drupal 11.
1. Update the Service to be "Pager-Optional"
We need to modify the service so it can return either a paged result (for the UI) or the full result (for the CSV).
public function getReportData($header, $limit = 20, $search = NULL, $apply_pager = TRUE) {
  $query = $this->database->select('node_field_data', 'n')
    ->extend('\Drupal\Core\Database\Query\TableSortExtender');

  // Only extend with the pager if we actually want it.
  if ($apply_pager) {
    $query = $query->extend('\Drupal\Core\Database\Query\PagerSelectExtender')->limit($limit);
  }

  $query->fields('n', ['nid', 'title', 'created'])
    ->condition('n.type', 'article')
    ->orderByHeader($header);

  if (!empty($search)) {
    $query->condition('n.title', '%' . $this->database->escapeLike($search) . '%', 'LIKE');
  }

  // Note: this version returns the raw result statement so callers can
  // stream it; the row-building loop moves to the caller.
  return $query->execute();
}
2. Add the Export Method to your Controller
We will create a second method in your controller that returns a Symfony\Component\HttpFoundation\StreamedResponse. This is crucial for performance because it "streams" the data row-by-row to the browser instead of loading thousands of rows into PHP's memory first.
use Symfony\Component\HttpFoundation\StreamedResponse;

public function exportCsv(Request $request) {
  $search_term = $request->query->get('search');

  $header = [
    ['data' => 'ID', 'field' => 'nid'],
    ['data' => 'Title', 'field' => 'title'],
    ['data' => 'Date', 'field' => 'created'],
  ];

  // Fetch all results without the pager.
  $results = $this->reportGenerator->getReportData($header, NULL, $search_term, FALSE);

  $response = new StreamedResponse(function () use ($results) {
    $handle = fopen('php://output', 'w+');
    // Add the CSV header row.
    fputcsv($handle, ['ID', 'Title', 'Date']);
    // Stream the data rows.
    foreach ($results as $row) {
      fputcsv($handle, [
        $row->nid,
        $row->title,
        \Drupal::service('date.formatter')->format($row->created, 'short'),
      ]);
    }
    fclose($handle);
  });

  $response->headers->set('Content-Type', 'text/csv');
  $response->headers->set('Content-Disposition', 'attachment; filename="report.csv"');
  return $response;
}
3. Update the UI with the Export Link
In your main content() method, add a link that carries over the current search and sort parameters so the user exports exactly what they are looking at.
use Drupal\Core\Url;

$export_url = Url::fromRoute('my_module.export_route', [], [
  'query' => \Drupal::request()->query->all(),
])->toString();

$build['export_link'] = [
  '#markup' => '<a href="' . $export_url . '" class="button">Download CSV</a>',
];
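For this link to work, the my_module.export_route must be defined in your module's routing file. A hedged sketch, where the path and permission are assumptions (the controller class matches the MyReportController named earlier):

```yaml
# my_module.routing.yml
my_module.export_route:
  path: '/reports/my-table/export'
  defaults:
    _controller: '\Drupal\my_module\Controller\MyReportController::exportCsv'
    _title: 'Export report'
  requirements:
    _permission: 'access content'
```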
Why this wins over Views Data Export
* Memory Efficiency: Using StreamedResponse and fputcsv means your server can export a 50,000-row table while using almost zero RAM. Views Data Export often attempts to build the entire file in a temporary directory first, which can crash on smaller servers.
* Speed: There is no "View" to initialize. It's a straight line from the Database to the CSV stream.
* Consistency: Because you shared the logic in the Service, the data in the CSV will perfectly match the data in the table, including the search filters.
Final Architectural Note
By following this path, you’ve essentially built a "Mini-Views" system that is specifically tuned for your data. You have:
* A Service for logic.
* A Controller for the page.
* Cache Metadata for speed.
* Query Extenders for UI features.
Would you like me to show you how to add a "Clear Cache" button to this page that only invalidates the node_list:article tag so admins can refresh the data manually?