Laravel Pipelines: Sending Multiple Parameters

Lokman Musliu

October 17, 2023 · 3 min read

In this exploration of Laravel Pipelines, we'll zoom in on a critical aspect of Laravel's pipeline functionality: handling multiple parameters.

As a seasoned Laravel developer, you're likely familiar with the elegance of pipelines for processing an object through a series of tasks or stages. But when your application complexity grows, and you're faced with juggling multiple parameters, the challenge intensifies. Yet, mastering multi-parameter handling can significantly amplify the flexibility and power of your code, allowing for more sophisticated and efficient operations.

Decoding the Laravel Pipeline Helper

To tackle multi-parameter scenarios, let's delve into the core of Laravel's Pipeline helper. The send method within the Pipeline class is your starting point, designed to accept a single passable object. At first glance, this seems to imply a limitation to handling just one piece of data at a time through the pipeline. However, with a deeper understanding and some creative structuring, you can navigate beyond this apparent constraint.

<?php

/**
 * Set the object being sent through the pipeline.
 *
 * @param  mixed  $passable
 * @return $this
 */
public function send($passable)
{
    $this->passable = $passable;

    return $this;
}
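
Since send() simply stores whatever value you hand it, a quick workaround is to send an associative array through the pipeline. A minimal sketch (the keys and variables here are purely illustrative):

<?php

use Closure;
use Illuminate\Support\Facades\Pipeline;

// A quick-and-dirty way to move multiple values through a pipeline:
// wrap them in an array. It works, but you lose type safety and IDE
// autocompletion, which is why we'll reach for a DTO below.
$result = Pipeline::send([
    'site' => $site,
    'crawler' => $crawler,
])
    ->through([
        function (array $payload, Closure $next) {
            // Access the values by key: $payload['site'], $payload['crawler'].

            return $next($payload);
        },
    ])
    ->thenReturn();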

Designing our Data Transfer Object

Let's put this into perspective with an example: suppose we're tasked with crawling a website, and at each stage we need access to both the crawler instance and a Site model, which we'll update with the data we retrieve.

The solution to this problem comes in the form of creating a DTO, or Data Transfer Object. We'll create a folder named Transporters under the app folder. This folder will house classes responsible for managing the data transfer through our pipeline.

We'll name our class SiteCrawlerTransporter, and it accepts just two constructor parameters: the Site model and the Crawler instance. This object can now be passed through the pipeline for sequenced actions.

<?php

namespace App\Transporters;

use App\Models\Site;
use Symfony\Component\DomCrawler\Crawler;

class SiteCrawlerTransporter
{
    public function __construct(
        public Site $site,
        public Crawler $crawler,
    ) {
    }
}

We'll use a Job class in this scenario to orchestrate the various actions within the crawling process.

In this Job class, we send the DTO through a Pipeline and collect the resulting output:

<?php

namespace App\Jobs;

use App\Models\Site;
use App\Transporters\SiteCrawlerTransporter;
use Closure;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Pipeline;
use Symfony\Component\DomCrawler\Crawler;

class SiteCrawlerJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /**
     * Create a new job instance.
     */
    public function __construct(
        public Site $site,
    ) {
        //
    }

    /**
     * Execute the job.
     */
    public function handle(): void
    {
        $response = Http::get($this->site->domain);

        if ($response->failed()) {
            Log::info("Failed to crawl site: $this->site->domain");

            return;
        }

        Pipeline::send(
            new SiteCrawlerTransporter(
                site: $this->site,
                crawler: new Crawler($response->body()),
            )
        )
            ->through([
                function (SiteCrawlerTransporter $payload, Closure $next) {
                    // Here we can access both the Crawler and the Site:
                    // $payload->crawler - Crawler instance
                    // $payload->site - Site model

                    return $next($payload);
                },
            ])
            ->thenReturn();

		Log::info("Crawling completed for site: $this->site->domain");
    }
}

Within the through method, we list all the stages the object passes through. Each stage is a closure that receives the payload (our DTO) and must return the result of calling $next with that payload, which hands the data off to the next stage in the sequence.
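
A stage can also short-circuit the pipeline by returning early instead of calling $next. A minimal sketch (the empty-page check is hypothetical, purely to illustrate the idea):

function (SiteCrawlerTransporter $payload, Closure $next) {
    // Bail out early: if the page has no <body>, skip the remaining stages.
    if ($payload->crawler->filter('body')->count() === 0) {
        return $payload;
    }

    return $next($payload);
},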

For simpler cases with only a few steps, closures work well. In our case, however, the pipeline contained more than 10 stages, and managing that many closures becomes clumsy and hurts the reusability and maintainability of the code.

Therefore, we prefer to create a separate Crawler folder within the app folder, where we keep all the class-based stages for the pipeline.

As an example, here is how our CrawlPageTitle class looks:

<?php

namespace App\Crawler;

use App\Transporters\SiteCrawlerTransporter;
use Closure;

class CrawlPageTitle
{
    public function handle(SiteCrawlerTransporter $payload, Closure $next)
    {
        $payload->site->update([
            'title' => $payload->crawler->filter('title')->text(),
        ]);

        return $next($payload);
    }
}
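
The other stages follow the same shape. For instance, a CrawlPageImages stage, which the pipeline below references by name, might look like this (a sketch only: the images column and the src extraction are assumptions, not from the original code):

<?php

namespace App\Crawler;

use App\Transporters\SiteCrawlerTransporter;
use Closure;

class CrawlPageImages
{
    public function handle(SiteCrawlerTransporter $payload, Closure $next)
    {
        // Collect every image URL on the page and store it on the Site model.
        // The `images` column is an assumption for this sketch (e.g. a JSON cast).
        $payload->site->update([
            'images' => $payload->crawler->filter('img')->extract(['src']),
        ]);

        return $next($payload);
    }
}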

Now, if we go back to our SiteCrawlerJob, we can replace the closures with these classes, and the Job's handle method ends up looking like this:

<?php

namespace App\Jobs;

use App\Crawler\CrawlPageCategories;
use App\Crawler\CrawlPageImages;
use App\Crawler\CrawlPageTitle;
use App\Models\Site;
use App\Transporters\SiteCrawlerTransporter;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Pipeline;
use Symfony\Component\DomCrawler\Crawler;

class SiteCrawlerJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /**
     * Create a new job instance.
     */
    public function __construct(
        public Site $site,
    ) {
        //
    }

    /**
     * Execute the job.
     */
    public function handle(): void
    {
        $response = Http::get($this->site->domain);

        if ($response->failed()) {
            Log::info("Failed to crawl site: $this->site->domain");

            return;
        }

        Pipeline::send(
            new SiteCrawlerTransporter(
                site: $this->site,
                crawler: new Crawler($response->body()),
            )
        )
            ->through([
                CrawlPageTitle::class,
                CrawlPageImages::class,
                CrawlPageCategories::class,
                // ... other stages.
            ])
            ->thenReturn();

        Log::info("Crawling completed for site: $this->site->domain");
    }
}
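
With everything wired up, all that's left is dispatching the job. As an example (this call site is illustrative, not from the original setup):

<?php

use App\Jobs\SiteCrawlerJob;
use App\Models\Site;

// Queue a crawl for every site we track.
Site::query()->each(fn (Site $site) => SiteCrawlerJob::dispatch($site));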


Conclusion

Understanding how to pass multiple parameters through Laravel Pipelines can truly change the way you design and manage your pipelines. It not only increases the flexibility of your applications but also allows for a cleaner, more maintainable codebase.

