Laravel Pipelines: Sending Multiple Parameters


Published: October 17, 2023

Updated: November 2, 2023


In this exploration of Laravel Pipelines, we'll zoom in on a critical aspect of Laravel's pipeline functionality: handling multiple parameters.

As a Laravel developer, you're undoubtedly aware that pipelines offer a convenient way to modify an object sequentially as it passes through several stages. But what if you need to pass more than one parameter? Though it may appear challenging at first, becoming familiar with this technique enhances both the flexibility and the efficiency of your code.

Decoding the Laravel Pipeline Helper

The Pipeline helper is a great place to start. The Pipeline class has a method named send, which takes a single passable object. Looking at this code, it's clear that only one object at a time can pass through the pipeline.

<?php

/**
 * Set the object being sent through the pipeline.
 *
 * @param  mixed  $passable
 * @return $this
 */
public function send($passable)
{
    $this->passable = $passable;

    return $this;
}
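
Before adding a second parameter, it helps to see the helper in action with a single one. Here's a minimal, illustrative sketch that sends one value through two closure stages:

<?php

use Illuminate\Support\Facades\Pipeline;

// Each stage receives the passable plus a $next closure and must
// pass the (possibly modified) value on to the next stage.
$result = Pipeline::send('hello')
    ->through([
        fn (string $text, Closure $next) => $next(strtoupper($text)),
        fn (string $text, Closure $next) => $next($text.'!'),
    ])
    ->thenReturn();

// $result === 'HELLO!'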

Designing our Data Transfer Object

Let's put this into perspective with an example: suppose we're tasked with crawling websites, and we need access to both the crawler instance and a Site model that we update based on the retrieved crawling data.

The solution is to create a DTO, or Data Transfer Object. We'll create a folder named Transporters under the app folder; this folder will house the classes responsible for managing the data transferred through our pipeline.

We'll name our class SiteCrawlerTransporter, which accepts exactly two constructor parameters: the Site model and the Crawler instance. This object can then be passed through the pipeline for the sequenced actions.

<?php

namespace App\Transporters;

use App\Models\Site;
use Symfony\Component\DomCrawler\Crawler;

class SiteCrawlerTransporter
{
    public function __construct(
        public Site $site,
        public Crawler $crawler,
    ) {
    }
}

We've used a Job class in this scenario to orchestrate the various actions within the crawling process.

In this Job class, we send the DTO through a Pipeline, stage by stage:

<?php

namespace App\Jobs;

use App\Models\Site;
use App\Transporters\SiteCrawlerTransporter;
use Closure;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Pipeline;
use Symfony\Component\DomCrawler\Crawler;

class SiteCrawlerJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /**
     * Create a new job instance.
     */
    public function __construct(
        public Site $site,
    ) {
        //
    }

    /**
     * Execute the job.
     */
    public function handle(): void
    {
        $response = Http::get($this->site->domain);

        if ($response->failed()) {
            Log::info("Failed to crawl site: {$this->site->domain}");

            return;
        }

        Pipeline::send(
            new SiteCrawlerTransporter(
                site: $this->site,
                crawler: new Crawler($response->body()),
            )
        )
            ->through([
                function (SiteCrawlerTransporter $payload, Closure $next) {
                    // Here we can access both the Crawler and the Site:
                    // $payload->crawler - the Crawler instance
                    // $payload->site - the Site model

                    return $next($payload);
                },
            ])
            ->thenReturn();

		Log::info("Crawling completed for site: $this->site->domain");
    }
}

Within the through method, we list all the stages the object is expected to pass through. Each stage is a closure that accepts the payload (our DTO) and must return a call to $next with that payload, which hands the data on to the next stage. This way, the data proceeds sequentially through the process.

For simpler cases with only a few steps, this approach works well. In our case, however, the pipeline contained more than ten stages, and managing that many closures becomes clumsy and hurts both the reusability and the maintainability of the code.

Therefore, we prefer to create a separate Crawler folder within the app folder, where we keep all the class files for the pipeline stages.

As an example, here is how our CrawlPageTitle class looks:

<?php

namespace App\Crawler;

use App\Transporters\SiteCrawlerTransporter;
use Closure;

class CrawlPageTitle
{
    public function handle(SiteCrawlerTransporter $payload, Closure $next)
    {
        $payload->site->update([
            'title' => $payload->crawler->filter('title')->text(),
        ]);

        return $next($payload);
    }
}
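
The remaining stages follow the same pattern. As an illustrative sketch, a CrawlPageImages stage (which we'll reference in the job below) might look something like this; note that the images column on the Site model is a hypothetical assumption:

<?php

namespace App\Crawler;

use App\Transporters\SiteCrawlerTransporter;
use Closure;
use Symfony\Component\DomCrawler\Crawler;

class CrawlPageImages
{
    public function handle(SiteCrawlerTransporter $payload, Closure $next)
    {
        // Collect the src attribute of every <img> element on the page.
        $images = $payload->crawler->filter('img')->each(
            fn (Crawler $node) => $node->attr('src'),
        );

        // Hypothetical: assumes Site has an `images` column cast to an array.
        $payload->site->update(['images' => $images]);

        return $next($payload);
    }
}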

Now, if we go back to our SiteCrawlerJob, we can replace the closures with these classes. The handle method of our Job should now look like this:

<?php

namespace App\Jobs;

use App\Models\Site;
use App\Transporters\SiteCrawlerTransporter;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Pipeline;
use Symfony\Component\DomCrawler\Crawler;

class SiteCrawlerJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /**
     * Create a new job instance.
     */
    public function __construct(
        public Site $site,
    ) {
        //
    }

    /**
     * Execute the job.
     */
    public function handle(): void
    {
        $response = Http::get($this->site->domain);

        if ($response->failed()) {
            Log::info("Failed to crawl site: {$this->site->domain}");

            return;
        }

        Pipeline::send(
            new SiteCrawlerTransporter(
                site: $this->site,
                crawler: new Crawler($response->body()),
            )
        )
            ->through([
                \App\Crawler\CrawlPageTitle::class,
                \App\Crawler\CrawlPageImages::class,
                \App\Crawler\CrawlPageCategories::class,
                // ... other stages.
            ])
            ->thenReturn();

        Log::info("Crawling completed for site: $this->site->domain");
    }
}
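
With the stages wired up, the job can be dispatched per site from wherever crawling is triggered. Here's a hypothetical usage sketch:

<?php

use App\Jobs\SiteCrawlerJob;
use App\Models\Site;

// Hypothetical trigger: queue a crawl for every site in the database.
Site::query()->each(
    fn (Site $site) => SiteCrawlerJob::dispatch($site),
);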


Conclusion

Understanding and utilizing multiple parameters in Laravel Pipelines can truly change the way you design and manage your Pipelines. It not only increases the flexibility of your applications, but also allows for a cleaner, more maintainable codebase.


