Hey everyone. A couple of questions have come up as I’ve started to test out ZenML. Would love any feedback.
1. We have some pipelines that need to run at least partially on macOS, because we use CoreML models. I know there isn’t a containerized way to do that, but I was wondering if you have any thoughts on best practices with ZenML here. Would it be best to just use the local orchestrator and have it write to a deployed ZenML server? (There's a rough sketch of what I'm imagining after question 2.) We currently use GitLab pipelines to kick things off, so we may just have to keep using that and use ZenML to organize everything and pull the step logic out of it.
2. Is there any way to dynamically add steps to a pipeline? For example, say you want to iterate over a list of strings and run some logic on each one, where parallelization would be useful. Is there a way to do that inside a single pipeline, so that each string gets its own step and can potentially run in its own environment (depending on the orchestrator)? Something like the second sketch below. I saw a comment that suggested kicking off a separate pipeline for each string, but it would be nice to contain it all within one pipeline.
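Here's a rough sketch of what I'm imagining for question 1, just to make it concrete. The step/pipeline names and the model path are placeholders, and I'm assuming the local client is already connected to our deployed ZenML server with the local orchestrator in the active stack:

```python
from zenml import pipeline, step


@step
def coreml_inference(model_path: str) -> float:
    # Placeholder: load the CoreML model (e.g. with coremltools) and run it
    # natively on macOS, then return some metric.
    ...
    return 0.0


@pipeline
def mac_coreml_pipeline(model_path: str = "models/example.mlmodel"):
    coreml_inference(model_path=model_path)


if __name__ == "__main__":
    # Run on the Mac (e.g. triggered by a GitLab job). With the local
    # orchestrator active, the steps execute in-process on the machine,
    # while run metadata and artifacts get tracked by the ZenML server
    # the client is connected to.
    mac_coreml_pipeline()
```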
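And for question 2, this is roughly the fan-out pattern I'm hoping for: calling the same step once per input inside the pipeline function so each string becomes its own step invocation. The names here are made up and I'm not sure this is the intended way to do it:

```python
from zenml import pipeline, step


@step
def handle_string(value: str) -> str:
    # Placeholder for the per-string logic.
    return value.upper()


@pipeline
def fan_out_pipeline(values: list[str]):
    # The hope is that each call here becomes its own step invocation,
    # which the orchestrator could then run in parallel (and, on a
    # remote orchestrator, potentially in its own environment).
    for value in values:
        handle_string(value=value)


if __name__ == "__main__":
    fan_out_pipeline(values=["alpha", "beta", "gamma"])
```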
Thanks!