strickvl committed
Commit c54a34b
1 Parent(s): 7faa0e4

Add new SentenceTransformer model.

Files changed (2):
  1. README.md +290 -747
  2. model.safetensors +1 -1
README.md CHANGED
@@ -29,7 +29,7 @@ tags:
  - generated_from_trainer
  - dataset_size:1490
  - loss:MatryoshkaLoss
- - loss:MultipleNegativesRankingLoss
+ - loss:TripletLoss
  widget:
  - source_sentence: Where is the global configuration directory located in ZenML's
    default setup?
@@ -113,6 +113,7 @@ widget:

  As shown above, the global config directory stores the following information:'
+ - How do you configure the network settings on a Linux server?
  - 'Reranking for better retrieval

@@ -145,132 +146,9 @@ widget:

  Last updated 1 month ago'
- - '─────────────────────────────────────────────────┨┃ RESOURCE TYPES │ 🔵 gcp-generic,
- 📦 gcs-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ RESOURCE NAME │ <multiple> ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ SECRET ID │ 4694de65-997b-4929-8831-b49d5e067b97 ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ SESSION DURATION │ N/A ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ EXPIRES IN │ 59m46s ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ OWNER │ default ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ WORKSPACE │ default ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ SHARED │ ➖ ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ CREATED_AT │ 2023-05-19 09:04:33.557126 ┃
-
- ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
-
- ┃ UPDATED_AT │ 2023-05-19 09:04:33.557127 ┃
-
- ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
-
- Configuration
-
- ┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓'
  - source_sentence: Where can I find the instructions to enable CUDA for GPU-backed
    hardware in ZenML SDK Docs?
    sentences:
- - 'Configure a code repository
-
- Connect a Git repository to ZenML to track code changes and collaborate on MLOps
- projects.
-
- Throughout the lifecycle of a MLOps pipeline, it can get quite tiresome to always
- wait for a Docker build every time after running a pipeline (even if the local
- Docker cache is used). However, there is a way to just have one pipeline build
- and keep reusing it until a change to the pipeline environment is made: by connecting
- a code repository.
-
- With ZenML, connecting to a Git repository optimizes the Docker build processes.
- It also has the added bonus of being a better way of managing repository changes
- and enabling better code collaboration. Here is how the flow changes when running
- a pipeline:
-
- You trigger a pipeline run on your local machine. ZenML parses the @pipeline function
- to determine the necessary steps.
-
- The local client requests stack information from the ZenML server, which responds
- with the cloud stack configuration.
-
- The local client detects that we''re using a code repository and requests the
- information from the git repo.
-
- Instead of building a new Docker image, the client checks if an existing image
- can be reused based on the current Git commit hash and other environment metadata.
-
- The client initiates a run in the orchestrator, which sets up the execution environment
- in the cloud, such as a VM.
-
- The orchestrator downloads the code directly from the Git repository and uses
- the existing Docker image to run the pipeline steps.
-
- Pipeline steps execute, storing artifacts in the cloud-based artifact store.
-
- Throughout the execution, the pipeline run status and metadata are reported back
- to the ZenML server.
-
- By connecting a Git repository, you avoid redundant builds and make your MLOps
- processes more efficient. Your team can work on the codebase simultaneously, with
- ZenML handling the version tracking and ensuring that the correct code version
- is always used for each run.
-
- Creating a GitHub Repository'
  - 'Migration guide 0.39.1 → 0.41.0

@@ -397,499 +275,248 @@ widget:
  param_1: int, param_2: Optional[float] = None

- ) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]:
-
- result = int(param_1 * (param_2 or 1))
-
- result_uri = get_step_context().get_output_artifact_uri()
-
- return result, result_uri
-
- # Run the Step separately
-
- my_step()
-
- # Define a Pipeline
-
- @pipeline'
- - ' SDK Docs .
-
- Enabling CUDA for GPU-backed hardwareNote that if you wish to use this step operator
- to run steps on a GPU, you will need to follow the instructions on this page to
- ensure that it works. It requires adding some extra settings customization and
- is essential to enable CUDA for the GPU to give its full acceleration.
-
- PreviousStep Operators
-
- NextGoogle Cloud VertexAI
-
- Last updated 19 days ago'
- - source_sentence: What are the special metadata types supported by ZenML and how
- are they used?
- sentences:
- - 'Special Metadata Types
-
- Tracking your metadata.
-
- ZenML supports several special metadata types to capture specific kinds of information.
- Here are examples of how to use the special types Uri, Path, DType, and StorageSize:
-
- from zenml.metadata.metadata_types import StorageSize, DType
-
- from zenml import log_artifact_metadata
-
- log_artifact_metadata(
-
- metadata={
-
- "dataset_source": Uri("gs://my-bucket/datasets/source.csv"),
-
- "preprocessing_script": Path("/scripts/preprocess.py"),
-
- "column_types": {
-
- "age": DType("int"),
-
- "income": DType("float"),
-
- "score": DType("int")
-
- },
-
- "processed_data_size": StorageSize(2500000)
-
- In this example:
-
- Uri is used to indicate a dataset source URI.
-
- Path is used to specify the filesystem path to a preprocessing script.
-
- DType is used to describe the data types of specific columns.
-
- StorageSize is used to indicate the size of the processed data in bytes.
-
- These special types help standardize the format of metadata and ensure that it
- is logged in a consistent and interpretable manner.
-
- PreviousGroup metadata
-
- NextFetch metadata within steps
-
- Last updated 19 days ago'
- - 's is achieved using the log_model_metadata method:from zenml import get_step_context,
- step, log_model_metadata
-
- @step
-
- def svc_trainer(
-
- X_train: pd.DataFrame,
-
- y_train: pd.Series,
-
- gamma: float = 0.001,
-
- ) -> Annotated[ClassifierMixin, "sklearn_classifier"],:
-
- # Train and score model
-
- ...
-
- model.fit(dataset[0], dataset[1])
-
- accuracy = model.score(dataset[0], dataset[1])
-
- model = get_step_context().model
-
- log_model_metadata(
-
- # Model name can be omitted if specified in the step or pipeline context
-
- model_name="iris_classifier",
-
- # Passing None or omitting this will use the `latest` version
-
- version=None,
-
- # Metadata should be a dictionary of JSON-serializable values
-
- metadata={"accuracy": float(accuracy)}
-
- # A dictionary of dictionaries can also be passed to group metadata
-
- # in the dashboard
-
- # metadata = {"metrics": {"accuracy": accuracy}}
-
- from zenml.client import Client
-
- # Get an artifact version (in this the latest `iris_classifier`)
-
- model_version = Client().get_model_version(''iris_classifier'')
-
- # Fetch it''s metadata
-
- model_version.run_metadata["accuracy"].value
-
- The ZenML Pro dashboard offers advanced visualization features for artifact exploration,
- including a dedicated artifacts tab with metadata visualization:
-
- Choosing log metadata with artifacts or model versions depends on the scope and
- purpose of the information you wish to capture. Artifact metadata is best for
- details specific to individual outputs, while model version metadata is suitable
- for broader information relevant to the overall model. By utilizing ZenML''s metadata
- logging capabilities and special types, you can enhance the traceability, reproducibility,
- and analysis of your ML workflows.
-
- Once metadata has been logged to a model, we can retrieve it easily with the client:
-
- from zenml.client import Client
-
- client = Client()
-
- model = client.get_model_version("my_model", "my_version")
-
- print(model.run_metadata["metadata_key"].value)'
- - 'Hugging Face
-
- Deploying models to Huggingface Inference Endpoints with Hugging Face :hugging_face:.
-
- Hugging Face Inference Endpoints provides a secure production solution to easily
- deploy any transformers, sentence-transformers, and diffusers models on a dedicated
- and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint
- is built from a model from the Hub.
-
- This service provides dedicated and autoscaling infrastructure managed by Hugging
- Face, allowing you to deploy models without dealing with containers and GPUs.
-
- When to use it?
-
- You should use Hugging Face Model Deployer:
-
- if you want to deploy Transformers, Sentence-Transformers, or Diffusion models
- on dedicated and secure infrastructure.
-
- if you prefer a fully-managed production solution for inference without the need
- to handle containers and GPUs.
-
- if your goal is to turn your models into production-ready APIs with minimal infrastructure
- or MLOps involvement
-
- Cost-effectiveness is crucial, and you want to pay only for the raw compute resources
- you use.
-
- Enterprise security is a priority, and you need to deploy models into secure offline
- endpoints accessible only via a direct connection to your Virtual Private Cloud
- (VPCs).
-
- If you are looking for a more easy way to deploy your models locally, you can
- use the MLflow Model Deployer flavor.
-
- How to deploy it?
-
- The Hugging Face Model Deployer flavor is provided by the Hugging Face ZenML integration,
- so you need to install it on your local machine to be able to deploy your models.
- You can do this by running the following command:
-
- zenml integration install huggingface -y
-
- To register the Hugging Face model deployer with ZenML you need to run the following
- command:
-
- zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=huggingface --token=<YOUR_HF_TOKEN>
- --namespace=<YOUR_HF_NAMESPACE>
-
- Here,
-
- token parameter is the Hugging Face authentication token. It can be managed through
- Hugging Face settings.'
- - source_sentence: What are the benefits of deploying stack components directly from
- the ZenML CLI?
- sentences:
- - 'cess this particular Kubernetes cluster in AWS ?":zenml service-connector verify
- aws-multi-type --resource-type s3-bucket
-
- Example Command Output
-
- Service connector ''aws-multi-type'' is correctly configured with valid credentials
- and has access to the following resources:
-
- ┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
-
- ┃ RESOURCE TYPE │ RESOURCE NAMES ┃
-
- ┠───────────────┼───────────────────────────────────────┨
-
- ┃ 📦 s3-bucket │ s3://aws-ia-mwaa-715803424590 ┃
-
- ┃ │ s3://zenfiles ┃
-
- ┃ │ s3://zenml-demos ┃
-
- ┃ │ s3://zenml-generative-chat ┃
-
- ┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
-
- zenml service-connector verify aws-multi-type --resource-type kubernetes-cluster
- --resource-id zenhacks-cluster
-
- Example Command Output
-
- Service connector ''aws-multi-type'' is correctly configured with valid credentials
- and has access to the following resources:
-
- ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓
-
- ┃ RESOURCE TYPE │ RESOURCE NAMES ┃
-
- ┠───────────────────────┼──────────────────┨
-
- ┃ 🌀 kubernetes-cluster │ zenhacks-cluster ┃
-
- ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛
-
- Verifying the multi-instance Service Connector displays all the resources that
- it can access. We can also scope the verification to a single resource:
-
- zenml service-connector verify aws-s3-multi-instance
-
- Example Command Output
-
- Service connector ''aws-s3-multi-instance'' is correctly configured with valid
- credentials and has access to the following resources:
-
- ┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
-
- ┃ RESOURCE TYPE │ RESOURCE NAMES ┃
-
- ┠───────────────┼───────────────────────────────────────┨
-
- ┃ 📦 s3-bucket │ s3://aws-ia-mwaa-715803424590 ┃
-
- ┃ │ s3://zenfiles ┃
-
- ┃ │ s3://zenml-demos ┃
-
- ┃ │ s3://zenml-generative-chat ┃'
- - ' for GCS, Docker and Kubernetes Python clients. Italso allows for the configuration
- of local Docker and Kubernetes CLIs.
-
- The GCP Service Connector is part of the GCP ZenML integration. You can either
- install the entire integration or use a pypi extra to install it independently
- of the integration:
-
- pip install "zenml[connectors-gcp]" installs only prerequisites for the GCP
- Service Connector Type
-
- zenml integration install gcp installs the entire GCP ZenML integration
-
- It is not required to install and set up the GCP CLI on your local machine to
- use the GCP Service Connector to link Stack Components to GCP resources and
- services. However, it is recommended to do so if you are looking for a quick
- setup that includes using the auto-configuration Service Connector features.
-
- ──────────────────────────────────────────────────────────────────────────────────
-
- Fetching details about the GCP kubernetes-cluster resource type (i.e. the GKE
- cluster):
-
- zenml service-connector describe-type gcp --resource-type kubernetes-cluster
-
- Example Command Output
-
- ╔══════════════════════════════════════════════════════════════════════════════╗
-
- ║ 🌀 GCP GKE Kubernetes cluster (resource type: kubernetes-cluster) ║
-
- ╚══════════════════════════════════════════════════════════════════════════════╝
-
- Authentication methods: implicit, user-account, service-account, oauth2-token,
- impersonation
-
- Supports resource instances: True
-
- Authentication methods:
-
- 🔒 implicit
-
- 🔒 user-account
-
- 🔒 service-account
-
- 🔒 oauth2-token
-
- 🔒 impersonation
-
- Allows Stack Components to access a GKE registry as a standard Kubernetes
- cluster resource. When used by Stack Components, they are provided a
- pre-authenticated Python Kubernetes client instance.
-
- The configured credentials must have at least the following GCP permissions
- associated with the GKE clusters that it can access:
-
- container.clusters.list
-
- container.clusters.get
-
- In addition to the above permissions, the credentials should include permissions'
+ ) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]:
+
+ result = int(param_1 * (param_2 or 1))
+
+ result_uri = get_step_context().get_output_artifact_uri()
+
+ return result, result_uri
+
+ # Run the Step separately
+
+ my_step()
+
+ # Define a Pipeline
+
+ @pipeline'
+ - How do I integrate Google Cloud VertexAI into my existing Kubernetes cluster?
+ - ' SDK Docs .
+
+ Enabling CUDA for GPU-backed hardwareNote that if you wish to use this step operator
+ to run steps on a GPU, you will need to follow the instructions on this page to
+ ensure that it works. It requires adding some extra settings customization and
+ is essential to enable CUDA for the GPU to give its full acceleration.
+
+ PreviousStep Operators
+
+ NextGoogle Cloud VertexAI
+
+ Last updated 19 days ago'
+ - source_sentence: What are the special metadata types supported by ZenML and how
+ are they used?
+ sentences:
+ - 'Special Metadata Types
+
+ Tracking your metadata.
+
+ ZenML supports several special metadata types to capture specific kinds of information.
+ Here are examples of how to use the special types Uri, Path, DType, and StorageSize:
+
+ from zenml.metadata.metadata_types import StorageSize, DType
+
+ from zenml import log_artifact_metadata
+
+ log_artifact_metadata(
+
+ metadata={
+
+ "dataset_source": Uri("gs://my-bucket/datasets/source.csv"),
+
+ "preprocessing_script": Path("/scripts/preprocess.py"),
+
+ "column_types": {
+
+ "age": DType("int"),
+
+ "income": DType("float"),
+
+ "score": DType("int")
+
+ },
+
+ "processed_data_size": StorageSize(2500000)
+
+ In this example:
+
+ Uri is used to indicate a dataset source URI.
+
+ Path is used to specify the filesystem path to a preprocessing script.
+
+ DType is used to describe the data types of specific columns.
+
+ StorageSize is used to indicate the size of the processed data in bytes.
+
+ These special types help standardize the format of metadata and ensure that it
+ is logged in a consistent and interpretable manner.
+
+ PreviousGroup metadata
+
+ NextFetch metadata within steps
+
+ Last updated 19 days ago'
+ - 'Configure a code repository
+
+ Connect a Git repository to ZenML to track code changes and collaborate on MLOps
+ projects.
+
+ Throughout the lifecycle of a MLOps pipeline, it can get quite tiresome to always
+ wait for a Docker build every time after running a pipeline (even if the local
+ Docker cache is used). However, there is a way to just have one pipeline build
+ and keep reusing it until a change to the pipeline environment is made: by connecting
+ a code repository.
+
+ With ZenML, connecting to a Git repository optimizes the Docker build processes.
+ It also has the added bonus of being a better way of managing repository changes
+ and enabling better code collaboration. Here is how the flow changes when running
+ a pipeline:
+
+ You trigger a pipeline run on your local machine. ZenML parses the @pipeline function
+ to determine the necessary steps.
+
+ The local client requests stack information from the ZenML server, which responds
+ with the cloud stack configuration.
+
+ The local client detects that we''re using a code repository and requests the
+ information from the git repo.
+
+ Instead of building a new Docker image, the client checks if an existing image
+ can be reused based on the current Git commit hash and other environment metadata.
+
+ The client initiates a run in the orchestrator, which sets up the execution environment
+ in the cloud, such as a VM.
+
+ The orchestrator downloads the code directly from the Git repository and uses
+ the existing Docker image to run the pipeline steps.
+
+ Pipeline steps execute, storing artifacts in the cloud-based artifact store.
+
+ Throughout the execution, the pipeline run status and metadata are reported back
+ to the ZenML server.
+
+ By connecting a Git repository, you avoid redundant builds and make your MLOps
+ processes more efficient. Your team can work on the codebase simultaneously, with
+ ZenML handling the version tracking and ensuring that the correct code version
+ is always used for each run.
+
+ Creating a GitHub Repository'
+ - Can you explain the process of setting up a virtual environment in Python?
+ - source_sentence: What are the benefits of deploying stack components directly from
+ the ZenML CLI?
+ sentences:
+ - '─────────────────────────────────────────────────┨┃ RESOURCE TYPES │ 🔵 gcp-generic,
+ 📦 gcs-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ RESOURCE NAME │ <multiple> ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ SECRET ID │ 4694de65-997b-4929-8831-b49d5e067b97 ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ SESSION DURATION │ N/A ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ EXPIRES IN │ 59m46s ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ OWNER │ default ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ WORKSPACE │ default ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ SHARED │ ➖ ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ CREATED_AT │ 2023-05-19 09:04:33.557126 ┃
+
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
+
+ ┃ UPDATED_AT │ 2023-05-19 09:04:33.557127 ┃
+
+ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
+
+ Configuration
+
+ ┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓'
+ - How do you set up a custom service account for Vertex AI?
  - '⚒️Manage stacks

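The `log_artifact_metadata` snippet quoted in the hunk above is cut off before its closing braces. A minimal runnable completion, a sketch that assumes `Uri` and `Path` are importable from `zenml.metadata.metadata_types` alongside the documented `DType` and `StorageSize`, and that wraps the call in a hypothetical step so an artifact context exists:

```python
from zenml import log_artifact_metadata, step
from zenml.metadata.metadata_types import DType, Path, StorageSize, Uri


@step
def process_data() -> int:  # hypothetical step, added for illustration
    # Attach typed metadata to this step's output artifact
    log_artifact_metadata(
        metadata={
            "dataset_source": Uri("gs://my-bucket/datasets/source.csv"),
            "preprocessing_script": Path("/scripts/preprocess.py"),
            "column_types": {
                "age": DType("int"),
                "income": DType("float"),
                "score": DType("int"),
            },
            "processed_data_size": StorageSize(2500000),
        }
    )
    return 0
```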
@@ -1009,162 +636,78 @@ widget:

  The GCP Service Connector allows auto-discovering and fetching credentials and
  configuration set up by the GCP CLI on your local host.'
- - 'strator │ eks_seldon │ aws_secret_manager ┃┠────────┼──────────────────────┼──────────────────────────────────────┼────────┼─────────┼────────────────────┼──────────────────────┼───────────────────────┼────────────────┼────────────────────┨
-
- ┃ 👉 │ default │ fe913bb5-e631-4d4e-8c1b-936518190ebb │ │
- default │ │ default │ default │ │ ┃
-
- ┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┛
-
- Example of migrating a profile into the default project using a name prefix:
-
- $ zenml profile migrate /home/stefan/.config/zenml/profiles/zenbytes --prefix
- zenbytes_
-
- No component flavors to migrate from /home/stefan/.config/zenml/profiles/zenbytes/stacks.yaml...
-
- Migrating stack components from /home/stefan/.config/zenml/profiles/zenbytes/stacks.yaml...
-
- Created artifact_store ''zenbytes_s3_store'' with flavor ''s3''.
-
- Created container_registry ''zenbytes_ecr_registry'' with flavor ''default''.
-
- Created experiment_tracker ''zenbytes_mlflow_tracker'' with flavor ''mlflow''.
-
- Created experiment_tracker ''zenbytes_mlflow_tracker_local'' with flavor ''mlflow''.
-
- Created model_deployer ''zenbytes_eks_seldon'' with flavor ''seldon''.
-
- Created model_deployer ''zenbytes_mlflow'' with flavor ''mlflow''.
-
- Created orchestrator ''zenbytes_eks_orchestrator'' with flavor ''kubeflow''.
-
- Created secrets_manager ''zenbytes_aws_secret_manager'' with flavor ''aws''.
-
- Migrating stacks from /home/stefan/.config/zenml/profiles/zenbytes/stacks.yaml...
-
- Created stack ''zenbytes_aws_kubeflow_stack''.
-
- Created stack ''zenbytes_local_with_mlflow''.
-
- $ zenml stack list
-
- Using the default local database.
-
- Running with active project: ''default'' (global)'
- - 'Evaluation in 65 lines of code
-
- Learn how to implement evaluation for RAG in just 65 lines of code.
-
- Our RAG guide included a short example for how to implement a basic RAG pipeline
- in just 85 lines of code. In this section, we''ll build on that example to show
- how you can evaluate the performance of your RAG pipeline in just 65 lines. For
- the full code, please visit the project repository here. The code that follows
- requires the functions from the earlier RAG pipeline code to work.
-
- # ...previous RAG pipeline code here...
-
- # see https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_rag_pipeline.py
-
- eval_data = [
-
- "question": "What creatures inhabit the luminescent forests of ZenML World?",
-
- "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing
- Zenbots.",
-
- },
-
- "question": "What do Fractal Fungi do in the melodic caverns of ZenML World?",
-
- "expected_answer": "Fractal Fungi emit pulsating tones that resonate through the
- crystalline structures, creating a symphony of otherworldly sounds in the melodic
- caverns of ZenML World.",
-
- },
-
- "question": "Where do Gravitational Geckos live in ZenML World?",
-
- "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML
- World.",
-
- },
-
- def evaluate_retrieval(question, expected_answer, corpus, top_n=2):
-
- relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n)
-
- score = any(
-
- any(word in chunk for word in tokenize(expected_answer))
-
- for chunk in relevant_chunks
-
- return score
-
- def evaluate_generation(question, expected_answer, generated_answer):
-
- client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
-
- chat_completion = client.chat.completions.create(
-
- messages=[
-
- "role": "system",
-
- "content": "You are an evaluation judge. Given a question, an expected answer,
- and a generated answer, your task is to determine if the generated answer is relevant
- and accurate. Respond with ''YES'' if the generated answer is satisfactory, or
- ''NO'' if it is not.",
-
- },'
+ - 'Hugging Face
+
+ Deploying models to Huggingface Inference Endpoints with Hugging Face :hugging_face:.
+
+ Hugging Face Inference Endpoints provides a secure production solution to easily
+ deploy any transformers, sentence-transformers, and diffusers models on a dedicated
+ and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint
+ is built from a model from the Hub.
+
+ This service provides dedicated and autoscaling infrastructure managed by Hugging
+ Face, allowing you to deploy models without dealing with containers and GPUs.
+
+ When to use it?
+
+ You should use Hugging Face Model Deployer:
+
+ if you want to deploy Transformers, Sentence-Transformers, or Diffusion models
+ on dedicated and secure infrastructure.
+
+ if you prefer a fully-managed production solution for inference without the need
+ to handle containers and GPUs.
+
+ if your goal is to turn your models into production-ready APIs with minimal infrastructure
+ or MLOps involvement
+
+ Cost-effectiveness is crucial, and you want to pay only for the raw compute resources
+ you use.
+
+ Enterprise security is a priority, and you need to deploy models into secure offline
+ endpoints accessible only via a direct connection to your Virtual Private Cloud
+ (VPCs).
+
+ If you are looking for a more easy way to deploy your models locally, you can
+ use the MLflow Model Deployer flavor.
+
+ How to deploy it?
+
+ The Hugging Face Model Deployer flavor is provided by the Hugging Face ZenML integration,
+ so you need to install it on your local machine to be able to deploy your models.
+ You can do this by running the following command:
+
+ zenml integration install huggingface -y
+
+ To register the Hugging Face model deployer with ZenML you need to run the following
+ command:
+
+ zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=huggingface --token=<YOUR_HF_TOKEN>
+ --namespace=<YOUR_HF_NAMESPACE>
+
+ Here,
+
+ token parameter is the Hugging Face authentication token. It can be managed through
+ Hugging Face settings.'
+ - Can you list the steps to set up a Docker registry on a Kubernetes cluster?
  model-index:
  - name: zenml/finetuned-snowflake-arctic-embed-m
    results:
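The RAG evaluation snippet removed in the hunk above drops its dictionary braces and truncates mid-call. A hedged completion for reference: the questions, answers, function names, and judge prompt come from the excerpt, while the tokenizer, the stand-in retriever, the OpenAI model name, and the user message are illustrative assumptions.

```python
import os
import string

from openai import OpenAI

eval_data = [
    {
        "question": "What creatures inhabit the luminescent forests of ZenML World?",
        "expected_answer": "The luminescent forests of ZenML World are inhabited "
        "by glowing Zenbots.",
    },
    {
        "question": "What do Fractal Fungi do in the melodic caverns of ZenML World?",
        "expected_answer": "Fractal Fungi emit pulsating tones that resonate through "
        "the crystalline structures, creating a symphony of otherworldly sounds in "
        "the melodic caverns of ZenML World.",
    },
    {
        "question": "Where do Gravitational Geckos live in ZenML World?",
        "expected_answer": "Gravitational Geckos traverse the inverted cliffs of "
        "ZenML World.",
    },
]


def tokenize(text):
    # Stand-in for the tokenizer defined in the earlier RAG pipeline code
    return text.translate(str.maketrans("", "", string.punctuation)).lower().split()


def retrieve_relevant_chunks(question, corpus, top_n=2):
    # Stand-in retriever: rank corpus chunks by word overlap with the question
    q = set(tokenize(question))
    return sorted(corpus, key=lambda c: len(q & set(tokenize(c))), reverse=True)[:top_n]


def evaluate_retrieval(question, expected_answer, corpus, top_n=2):
    relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n)
    score = any(
        any(word in chunk for word in tokenize(expected_answer))
        for chunk in relevant_chunks
    )
    return score


def evaluate_generation(question, expected_answer, generated_answer):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    chat_completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name assumed; not shown in the excerpt
        messages=[
            {
                "role": "system",
                "content": "You are an evaluation judge. Given a question, an "
                "expected answer, and a generated answer, your task is to determine "
                "if the generated answer is relevant and accurate. Respond with "
                "'YES' if the generated answer is satisfactory, or 'NO' if it is not.",
            },
            {
                "role": "user",  # user message assumed; the excerpt ends earlier
                "content": f"Question: {question}\nExpected answer: "
                f"{expected_answer}\nGenerated answer: {generated_answer}",
            },
        ],
    )
    return chat_completion.choices[0].message.content.strip() == "YES"
```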
@@ -1176,49 +719,49 @@ model-index:
  type: dim_384
  metrics:
  - type: cosine_accuracy@1
- value: 0.3493975903614458
+ value: 0.29518072289156627
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
- value: 0.572289156626506
+ value: 0.5240963855421686
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
- value: 0.6445783132530121
+ value: 0.5843373493975904
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
- value: 0.7530120481927711
+ value: 0.6867469879518072
  name: Cosine Accuracy@10
  - type: cosine_precision@1
- value: 0.3493975903614458
+ value: 0.29518072289156627
  name: Cosine Precision@1
  - type: cosine_precision@3
- value: 0.1907630522088353
+ value: 0.17469879518072293
  name: Cosine Precision@3
  - type: cosine_precision@5
- value: 0.1289156626506024
+ value: 0.11686746987951804
  name: Cosine Precision@5
  - type: cosine_precision@10
- value: 0.07530120481927709
+ value: 0.0686746987951807
  name: Cosine Precision@10
  - type: cosine_recall@1
- value: 0.3493975903614458
+ value: 0.29518072289156627
  name: Cosine Recall@1
  - type: cosine_recall@3
- value: 0.572289156626506
+ value: 0.5240963855421686
  name: Cosine Recall@3
  - type: cosine_recall@5
- value: 0.6445783132530121
+ value: 0.5843373493975904
  name: Cosine Recall@5
  - type: cosine_recall@10
- value: 0.7530120481927711
+ value: 0.6867469879518072
  name: Cosine Recall@10
  - type: cosine_ndcg@10
- value: 0.5491448856982105
+ value: 0.4908042072911187
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
- value: 0.48445926563396463
+ value: 0.42844234079173843
  name: Cosine Mrr@10
  - type: cosine_map@100
- value: 0.4935434037345097
+ value: 0.43576329240226386
  name: Cosine Map@100
  - task:
  type: information-retrieval
@@ -1228,49 +771,49 @@ model-index:
  type: dim_256
  metrics:
  - type: cosine_accuracy@1
- value: 0.3253012048192771
+ value: 0.25903614457831325
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
- value: 0.5783132530120482
+ value: 0.5060240963855421
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
- value: 0.6445783132530121
+ value: 0.5783132530120482
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
- value: 0.7469879518072289
+ value: 0.6445783132530121
  name: Cosine Accuracy@10
  - type: cosine_precision@1
- value: 0.3253012048192771
+ value: 0.25903614457831325
  name: Cosine Precision@1
  - type: cosine_precision@3
- value: 0.19277108433734935
+ value: 0.1686746987951807
  name: Cosine Precision@3
  - type: cosine_precision@5
- value: 0.1289156626506024
+ value: 0.11566265060240961
  name: Cosine Precision@5
  - type: cosine_precision@10
- value: 0.07469879518072287
+ value: 0.0644578313253012
  name: Cosine Precision@10
  - type: cosine_recall@1
- value: 0.3253012048192771
+ value: 0.25903614457831325
  name: Cosine Recall@1
  - type: cosine_recall@3
- value: 0.5783132530120482
+ value: 0.5060240963855421
  name: Cosine Recall@3
  - type: cosine_recall@5
- value: 0.6445783132530121
+ value: 0.5783132530120482
  name: Cosine Recall@5
  - type: cosine_recall@10
- value: 0.7469879518072289
  name: Cosine Recall@10
  - type: cosine_ndcg@10
- value: 0.5389489067913165
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
- value: 0.4727792120864409
  name: Cosine Mrr@10
  - type: cosine_map@100
- value: 0.4829032647176661
  name: Cosine Map@100
  - task:
  type: information-retrieval
1280
  type: dim_128
1281
  metrics:
1282
  - type: cosine_accuracy@1
1283
- value: 0.29518072289156627
1284
  name: Cosine Accuracy@1
1285
  - type: cosine_accuracy@3
1286
- value: 0.5421686746987951
1287
  name: Cosine Accuracy@3
1288
  - type: cosine_accuracy@5
1289
- value: 0.6325301204819277
1290
  name: Cosine Accuracy@5
1291
  - type: cosine_accuracy@10
1292
- value: 0.7289156626506024
1293
  name: Cosine Accuracy@10
1294
  - type: cosine_precision@1
1295
- value: 0.29518072289156627
1296
  name: Cosine Precision@1
1297
  - type: cosine_precision@3
1298
- value: 0.180722891566265
1299
  name: Cosine Precision@3
1300
  - type: cosine_precision@5
1301
- value: 0.12650602409638553
1302
  name: Cosine Precision@5
1303
  - type: cosine_precision@10
1304
- value: 0.07289156626506023
1305
  name: Cosine Precision@10
1306
  - type: cosine_recall@1
1307
- value: 0.29518072289156627
1308
  name: Cosine Recall@1
1309
  - type: cosine_recall@3
1310
- value: 0.5421686746987951
1311
  name: Cosine Recall@3
1312
  - type: cosine_recall@5
1313
- value: 0.6325301204819277
1314
  name: Cosine Recall@5
1315
  - type: cosine_recall@10
1316
- value: 0.7289156626506024
1317
  name: Cosine Recall@10
1318
  - type: cosine_ndcg@10
1319
- value: 0.5118201113899791
1320
  name: Cosine Ndcg@10
1321
  - type: cosine_mrr@10
1322
- value: 0.4423407917383822
1323
  name: Cosine Mrr@10
1324
  - type: cosine_map@100
1325
- value: 0.45200042096222626
1326
  name: Cosine Map@100
1327
  - task:
1328
  type: information-retrieval
@@ -1332,49 +875,49 @@ model-index:
  type: dim_64
  metrics:
  - type: cosine_accuracy@1
- value: 0.26506024096385544
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
- value: 0.5240963855421686
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
- value: 0.5843373493975904
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
- value: 0.7048192771084337
  name: Cosine Accuracy@10
  - type: cosine_precision@1
- value: 0.26506024096385544
  name: Cosine Precision@1
  - type: cosine_precision@3
- value: 0.17469879518072287
  name: Cosine Precision@3
  - type: cosine_precision@5
- value: 0.11686746987951806
  name: Cosine Precision@5
  - type: cosine_precision@10
- value: 0.07048192771084334
  name: Cosine Precision@10
  - type: cosine_recall@1
- value: 0.26506024096385544
  name: Cosine Recall@1
  - type: cosine_recall@3
- value: 0.5240963855421686
  name: Cosine Recall@3
  - type: cosine_recall@5
- value: 0.5843373493975904
  name: Cosine Recall@5
  - type: cosine_recall@10
- value: 0.7048192771084337
  name: Cosine Recall@10
  - type: cosine_ndcg@10
- value: 0.4807789244303294
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
- value: 0.409741346337732
  name: Cosine Mrr@10
  - type: cosine_map@100
- value: 0.41754560205770985
  name: Cosine Map@100
  ---

@@ -1430,7 +973,7 @@ model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m")
  sentences = [
      'What is the expiration time for the GCP OAuth2 token in the ZenML configuration?',
      '━━━━━┛\n\nConfiguration\n\n┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓┃ PROPERTY │ VALUE ┃\n\n┠────────────┼────────────┨\n\n┃ project_id │ zenml-core ┃\n\n┠────────────┼────────────┨\n\n┃ token │ [HIDDEN] ┃\n\n┗━━━━━━━━━━━━┷━━━━━━━━━━━━┛\n\nNote the temporary nature of the Service Connector. It will expire and become unusable in 1 hour:\n\nzenml service-connector list --name gcp-oauth2-token\n\nExample Command Output\n\n┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓\n\n┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃\n\n┠────────┼──────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨\n\n┃ │ gcp-oauth2-token │ ec4d7d85-c71c-476b-aa76-95bf772c90da │ 🔵 gcp │ 🔵 gcp-generic │ <multiple> │ ➖ │ default │ 59m35s │ ┃\n\n┃ │ │ │ │ 📦 gcs-bucket │ │ │ │ │ ┃\n\n┃ │ │ │ │ 🌀 kubernetes-cluster │ │ │ │ │ ┃\n\n┃ │ │ │ │ 🐳 docker-registry │ │ │ │ │ ┃\n\n┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛\n\nAuto-configuration\n\nThe GCP Service Connector allows auto-discovering and fetching credentials and configuration set up by the GCP CLI on your local host.',
- 'Evaluation in 65 lines of code\n\nLearn how to implement evaluation for RAG in just 65 lines of code.\n\nOur RAG guide included a short example for how to implement a basic RAG pipeline in just 85 lines of code. In this section, we\'ll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository here. The code that follows requires the functions from the earlier RAG pipeline code to work.\n\n# ...previous RAG pipeline code here...\n\n# see https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_rag_pipeline.py\n\neval_data = [\n\n"question": "What creatures inhabit the luminescent forests of ZenML World?",\n\n"expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots.",\n\n},\n\n"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?",\n\n"expected_answer": "Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds in the melodic caverns of ZenML World.",\n\n},\n\n"question": "Where do Gravitational Geckos live in ZenML World?",\n\n"expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World.",\n\n},\n\ndef evaluate_retrieval(question, expected_answer, corpus, top_n=2):\n\nrelevant_chunks = retrieve_relevant_chunks(question, corpus, top_n)\n\nscore = any(\n\nany(word in chunk for word in tokenize(expected_answer))\n\nfor chunk in relevant_chunks\n\nreturn score\n\ndef evaluate_generation(question, expected_answer, generated_answer):\n\nclient = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))\n\nchat_completion = client.chat.completions.create(\n\nmessages=[\n\n"role": "system",\n\n"content": "You are an evaluation judge. Given a question, an expected answer, and a generated answer, your task is to determine if the generated answer is relevant and accurate. Respond with \'YES\' if the generated answer is satisfactory, or \'NO\' if it is not.",\n\n},',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
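Because the model is trained with MatryoshkaLoss over dimensions 384/256/128/64 (see the loss configuration further down this card), embeddings can be truncated to a smaller dimension at load time. A sketch using the `truncate_dim` option of sentence-transformers (available from v2.7); the query and documents here are illustrative:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model so that encode() returns 256-dimensional embeddings
model = SentenceTransformer(
    "zenml/finetuned-snowflake-arctic-embed-m",
    truncate_dim=256,  # any of the Matryoshka dims listed in this card
)

query = "Where is the global configuration directory located in ZenML's default setup?"
docs = [
    "As shown above, the global config directory stores the following information:",
    "Reranking for better retrieval",
]

query_emb = model.encode(query)
doc_embs = model.encode(docs)
print(cos_sim(query_emb, doc_embs))  # 1 x 2 matrix of cosine similarities
```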
@@ -1476,21 +1019,21 @@ You can finetune this model on your own dataset.

  | Metric              | Value      |
  |:--------------------|:-----------|
- | cosine_accuracy@1   | 0.3494     |
- | cosine_accuracy@3   | 0.5723     |
- | cosine_accuracy@5   | 0.6446     |
- | cosine_accuracy@10  | 0.753      |
- | cosine_precision@1  | 0.3494     |
- | cosine_precision@3  | 0.1908     |
- | cosine_precision@5  | 0.1289     |
- | cosine_precision@10 | 0.0753     |
- | cosine_recall@1     | 0.3494     |
- | cosine_recall@3     | 0.5723     |
- | cosine_recall@5     | 0.6446     |
- | cosine_recall@10    | 0.753      |
- | cosine_ndcg@10      | 0.5491     |
- | cosine_mrr@10       | 0.4845     |
- | **cosine_map@100**  | **0.4935** |

  #### Information Retrieval
  * Dataset: `dim_256`
@@ -1498,43 +1041,43 @@ You can finetune this model on your own dataset.

  | Metric              | Value      |
  |:--------------------|:-----------|
- | cosine_accuracy@1   | 0.3253     |
- | cosine_accuracy@3   | 0.5783     |
- | cosine_accuracy@5   | 0.6446     |
- | cosine_accuracy@10  | 0.747      |
- | cosine_precision@1  | 0.3253     |
- | cosine_precision@3  | 0.1928     |
- | cosine_precision@5  | 0.1289     |
- | cosine_precision@10 | 0.0747     |
- | cosine_recall@1     | 0.3253     |
- | cosine_recall@3     | 0.5783     |
- | cosine_recall@5     | 0.6446     |
- | cosine_recall@10    | 0.747      |
- | cosine_ndcg@10      | 0.5389     |
- | cosine_mrr@10       | 0.4728     |
- | **cosine_map@100**  | **0.4829** |

  #### Information Retrieval
  * Dataset: `dim_128`
  * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

- | Metric              | Value     |
- |:--------------------|:----------|
- | cosine_accuracy@1   | 0.2952    |
- | cosine_accuracy@3   | 0.5422    |
- | cosine_accuracy@5   | 0.6325    |
- | cosine_accuracy@10  | 0.7289    |
- | cosine_precision@1  | 0.2952    |
- | cosine_precision@3  | 0.1807    |
- | cosine_precision@5  | 0.1265    |
- | cosine_precision@10 | 0.0729    |
- | cosine_recall@1     | 0.2952    |
- | cosine_recall@3     | 0.5422    |
- | cosine_recall@5     | 0.6325    |
- | cosine_recall@10    | 0.7289    |
- | cosine_ndcg@10      | 0.5118    |
- | cosine_mrr@10       | 0.4423    |
- | **cosine_map@100**  | **0.452** |

  #### Information Retrieval
  * Dataset: `dim_64`
1542
 
1543
  | Metric | Value |
1544
  |:--------------------|:-----------|
1545
- | cosine_accuracy@1 | 0.2651 |
1546
- | cosine_accuracy@3 | 0.5241 |
1547
- | cosine_accuracy@5 | 0.5843 |
1548
- | cosine_accuracy@10 | 0.7048 |
1549
- | cosine_precision@1 | 0.2651 |
1550
- | cosine_precision@3 | 0.1747 |
1551
- | cosine_precision@5 | 0.1169 |
1552
- | cosine_precision@10 | 0.0705 |
1553
- | cosine_recall@1 | 0.2651 |
1554
- | cosine_recall@3 | 0.5241 |
1555
- | cosine_recall@5 | 0.5843 |
1556
- | cosine_recall@10 | 0.7048 |
1557
- | cosine_ndcg@10 | 0.4808 |
1558
- | cosine_mrr@10 | 0.4097 |
1559
- | **cosine_map@100** | **0.4175** |
1560
 
1561
  <!--
1562
  ## Bias, Risks and Limitations
@@ -1578,22 +1121,22 @@ You can finetune this model on your own dataset.

  * Size: 1,490 training samples
- * Columns: <code>positive</code> and <code>anchor</code>
  * Approximate statistics based on the first 1000 samples:
- | | positive | anchor |
- |:--------|:------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
- | type | string | string |
- | details | <ul><li>min: 9 tokens</li><li>mean: 21.02 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 375.16 tokens</li><li>max: 512 tokens</li></ul> |
  * Samples:
- | positive | anchor |
- |:---------|:-------|
- | <code>What details can you provide about the mlflow_training_pipeline runs listed in the ZenML documentation?</code> | <code>mlflow_training_pipeline', ┃┃ │ │ │ 'zenml_pipeline_run_uuid': 'a5d4faae-ef70-48f2-9893-6e65d5e51e98', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.005'} ┃<br><br>┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨<br><br>┃ tensorflow-mnist-model │ 2 │ Run #2 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_09_08_467212', 'zenml_pipeline_name': 'mlflow_training_pipeline', ┃<br><br>┃ │ │ │ 'zenml_pipeline_run_uuid': '11858dcf-3e47-4b1a-82c5-6fa25ba4e037', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.003'} ┃<br><br>┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨<br><br>┃ tensorflow-mnist-model │ 1 │ Run #1 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_08_52_398499', 'zenml_pipeline_name': 'mlflow_training_pipeline', ┃<br><br>┃ │ │ │ 'zenml_pipeline_run_uuid': '29fb22c1-6e0b-4431-9e04-226226506d16', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.001'} ┃</code> |
- | <code>How do you register a GCP Service Connector that uses account impersonation to access the zenml-bucket-sl GCS bucket?</code> | <code>esource-id zenml-bucket-sl<br><br>Example Command OutputError: Service connector 'gcp-empty-sa' verification failed: connector authorization failure: failed to fetch GCS bucket<br><br>zenml-bucket-sl: 403 GET https://storage.googleapis.com/storage/v1/b/zenml-bucket-sl?projection=noAcl&prettyPrint=false:<br><br>empty-connectors@zenml-core.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket.<br><br>Permission 'storage.buckets.get' denied on resource (or it may not exist).<br><br>Next, we'll register a GCP Service Connector that actually uses account impersonation to access the zenml-bucket-sl GCS bucket and verify that it can actually access the bucket:<br><br>zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl<br><br>Example Command Output<br><br>Expanding argument value service_account_json to contents of file /home/stefan/aspyre/src/zenml/empty-connectors@zenml-core.json.<br><br>Successfully registered service connector `gcp-impersonate-sa` with access to the following resources:<br><br>┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓<br><br>┃ RESOURCE TYPE │ RESOURCE NAMES ┃<br><br>┠───────────────┼──────────────────────┨<br><br>┃ 📦 gcs-bucket │ gs://zenml-bucket-sl ┃<br><br>┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┛<br><br>External Account (GCP Workload Identity)<br><br>Use GCP workload identity federation to authenticate to GCP services using AWS IAM credentials, Azure Active Directory credentials or generic OIDC tokens.</code> |
- | <code>Can you explain how data validation helps in detecting data drift and model drift in ZenML pipelines?</code> | <code>of your models at different stages of development.if you have pipelines that regularly ingest new data, you should use data validation to run regular data integrity checks to signal problems before they are propagated downstream.<br><br>in continuous training pipelines, you should use data validation techniques to compare new training data against a data reference and to compare the performance of newly trained models against previous ones.<br><br>when you have pipelines that automate batch inference or if you regularly collect data used as input in online inference, you should use data validation to run data drift analyses and detect training-serving skew, data drift and model drift.<br><br>Data Validator Flavors<br><br>Data Validator are optional stack components provided by integrations. The following table lists the currently available Data Validators and summarizes their features and the data types and model types that they can be used with in ZenML pipelines:<br><br>Data Validator Validation Features Data Types Model Types Notes Flavor/Integration Deepchecks data quality<br>data drift<br>model drift<br>model performance tabular: pandas.DataFrame CV: torch.utils.data.dataloader.DataLoader tabular: sklearn.base.ClassifierMixin CV: torch.nn.Module Add Deepchecks data and model validation tests to your pipelines deepchecks Evidently data quality<br>data drift<br>model drift<br>model performance tabular: pandas.DataFrame N/A Use Evidently to generate a variety of data quality and data/model drift reports and visualizations evidently Great Expectations data profiling<br>data quality tabular: pandas.DataFrame N/A Perform data testing, documentation and profiling with Great Expectations great_expectations Whylogs/WhyLabs data drift tabular: pandas.DataFrame N/A Generate data profiles with whylogs and upload them to WhyLabs whylogs<br><br>If you would like to see the available flavors of Data Validator, you can use the command:<br><br>zenml data-validator flavor list<br><br>How to use it</code> |
  * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
- "loss": "MultipleNegativesRankingLoss",
  "matryoshka_dims": [
  384,
  256,
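For reference, a MatryoshkaLoss with the parameters shown above is typically constructed like this in sentence-transformers. The inner loss here is `TripletLoss`, matching the updated `loss:` tag at the top of the card, and the base model name is an assumption inferred from the fine-tuned model's name:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, TripletLoss

# Base model assumed from the fine-tuned model's name
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

inner_loss = TripletLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[384, 256, 128, 64],  # dims listed in the config above
)
```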
@@ -1742,11 +1285,11 @@ You can finetune this model on your own dataset.
  </details>

  ### Training Logs
- | Epoch      | Step  | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
- |:----------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
- | 0.6667     | 1     | 0.3884                 | 0.4332                 | 0.4464                 | 0.3140                |
- | 2.0        | 3     | 0.4430                 | 0.4689                 | 0.4962                 | 0.4019                |
- | **2.6667** | **4** | **0.452**              | **0.4829**             | **0.4935**             | **0.4175**            |

  * The bold row denotes the saved checkpoint.
@@ -1788,15 +1331,15 @@ You can finetune this model on your own dataset.
  }
  ```

- #### MultipleNegativesRankingLoss
  ```bibtex
- @misc{henderson2017efficient,
- title={Efficient Natural Language Response Suggestion for Smart Reply},
- author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
  year={2017},
- eprint={1705.00652},
  archivePrefix={arXiv},
- primaryClass={cs.CL}
  }
  ```
 
29
  - generated_from_trainer
30
  - dataset_size:1490
31
  - loss:MatryoshkaLoss
32
+ - loss:TripletLoss
33
  widget:
34
  - source_sentence: Where is the global configuration directory located in ZenML's
35
  default setup?
 
113
 
114
 
115
  As shown above, the global config directory stores the following information:'
116
+ - How do you configure the network settings on a Linux server?
117
  - 'Reranking for better retrieval
118
 
119
 
 
146
 
147
 
148
  Last updated 1 month ago'
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
149
  - source_sentence: Where can I find the instructions to enable CUDA for GPU-backed
150
  hardware in ZenML SDK Docs?
151
  sentences:
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
152
  - 'Migration guide 0.39.1 → 0.41.0
153
 
154
 
 
275
  param_1: int, param_2: Optional[float] = None
276
 
277
 
278
+ ) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]:
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
279
 
 
280
 
281
+ result = int(param_1 * (param_2 or 1))
282
 
 
 
283
 
284
+ result_uri = get_step_context().get_output_artifact_uri()
285
 
 
 
286
 
287
+ return result, result_uri
288
 
 
289
 
290
+ # Run the Step separately
291
 
 
 
 
 
 
 
 
292
 
293
+ my_step()
294
 
295
+
296
+ # Define a Pipeline
297
 
298
 
299
+ @pipeline'
300
+ - How do I integrate Google Cloud VertexAI into my existing Kubernetes cluster?
301
+ - ' SDK Docs .
302
 
303
 
304
+ Enabling CUDA for GPU-backed hardwareNote that if you wish to use this step operator
305
+ to run steps on a GPU, you will need to follow the instructions on this page to
306
+ ensure that it works. It requires adding some extra settings customization and
307
+ is essential to enable CUDA for the GPU to give its full acceleration.
308
 
309
 
310
+ PreviousStep Operators
311
 
312
 
313
+ NextGoogle Cloud VertexAI
314
 
315
 
316
+ Last updated 19 days ago'
317
+ - source_sentence: What are the special metadata types supported by ZenML and how
318
+ are they used?
319
+ sentences:
320
+ - 'Special Metadata Types
321
 
322
 
323
+ Tracking your metadata.
324
 
325
 
326
+ ZenML supports several special metadata types to capture specific kinds of information.
327
+ Here are examples of how to use the special types Uri, Path, DType, and StorageSize:
328
 
329
 
330
+ from zenml.metadata.metadata_types import StorageSize, DType
331
 
332
 
333
+ from zenml import log_artifact_metadata
334
 
335
 
336
+ log_artifact_metadata(
 
337
 
338
 
339
+ metadata={
340
 
341
 
342
+ "dataset_source": Uri("gs://my-bucket/datasets/source.csv"),
 
343
 
344
 
345
+ "preprocessing_script": Path("/scripts/preprocess.py"),
346
 
347
 
348
+ "column_types": {
349
 
350
 
351
+ "age": DType("int"),
352
 
353
 
354
+ "income": DType("float"),
355
 
356
 
357
+ "score": DType("int")
358
 
359
 
360
+ },
 
361
 
362
 
363
+ "processed_data_size": StorageSize(2500000)
364
 
365
 
366
+ In this example:
367
 
368
 
369
+ Uri is used to indicate a dataset source URI.
 
370
 
371
 
372
+ Path is used to specify the filesystem path to a preprocessing script.
373
 
374
 
375
+ DType is used to describe the data types of specific columns.
376
 
377
 
378
+ StorageSize is used to indicate the size of the processed data in bytes.
379
 
380
 
381
+ These special types help standardize the format of metadata and ensure that it
382
+ is logged in a consistent and interpretable manner.
383
 
384
 
385
+ PreviousGroup metadata
386
 
387
 
388
+ NextFetch metadata within steps
389
 
390
 
391
+ Last updated 19 days ago'
392
+ - 'Configure a code repository
 
393
 
394
 
395
+ Connect a Git repository to ZenML to track code changes and collaborate on MLOps
396
+ projects.
397
 
398
 
399
+ Throughout the lifecycle of a MLOps pipeline, it can get quite tiresome to always
400
+ wait for a Docker build every time after running a pipeline (even if the local
401
+ Docker cache is used). However, there is a way to just have one pipeline build
402
+ and keep reusing it until a change to the pipeline environment is made: by connecting
403
+ a code repository.
404
 
405
 
406
+ With ZenML, connecting to a Git repository optimizes the Docker build processes.
407
+ It also has the added bonus of being a better way of managing repository changes
408
+ and enabling better code collaboration. Here is how the flow changes when running
409
+ a pipeline:
410
 
411
 
412
+ You trigger a pipeline run on your local machine. ZenML parses the @pipeline function
413
+ to determine the necessary steps.
414
 
415
 
416
+ The local client requests stack information from the ZenML server, which responds
417
+ with the cloud stack configuration.
418
 
419
 
420
+ The local client detects that we''re using a code repository and requests the
421
+ information from the git repo.
422
 
423
 
424
+ Instead of building a new Docker image, the client checks if an existing image
425
+ can be reused based on the current Git commit hash and other environment metadata.
426
 
427
 
428
+ The client initiates a run in the orchestrator, which sets up the execution environment
429
+ in the cloud, such as a VM.
430
 
431
 
432
+ The orchestrator downloads the code directly from the Git repository and uses
433
+ the existing Docker image to run the pipeline steps.
434
 
435
 
436
+ Pipeline steps execute, storing artifacts in the cloud-based artifact store.
437
 
438
 
439
+ Throughout the execution, the pipeline run status and metadata are reported back
440
+ to the ZenML server.
441
 
442
 
443
+ By connecting a Git repository, you avoid redundant builds and make your MLOps
444
+ processes more efficient. Your team can work on the codebase simultaneously, with
445
+ ZenML handling the version tracking and ensuring that the correct code version
446
+ is always used for each run.
447
 
448
 
449
+ Creating a GitHub Repository'
450
+ - Can you explain the process of setting up a virtual environment in Python?
451
+ - source_sentence: What are the benefits of deploying stack components directly from
452
+ the ZenML CLI?
453
+ sentences:
454
+ - '─────────────────────────────────────────────────┨┃ RESOURCE TYPES │ 🔵 gcp-generic,
455
+ 📦 gcs-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry ┃
456
 
457
 
458
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
459
 
460
 
461
+ ┃ RESOURCE NAME │ <multiple> ┃
462
 
463
 
464
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
465
 
466
 
467
+ ┃ SECRET ID │ 4694de65-997b-4929-8831-b49d5e067b97 ┃
468
 
469
 
470
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
471
 
472
 
473
+ ┃ SESSION DURATION │ N/A ┃
474
 
475
 
476
+ ┠──────────────────┼────────────────────────────────────────────────────────────────���─────────┨
477
 
478
 
479
+ EXPIRES IN │ 59m46s ┃
480
 
481
 
482
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
483
 
484
 
485
+ OWNER │ default ┃
486
 
487
 
488
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
489
 
490
 
491
+ WORKSPACE │ default ┃
492
 
493
 
494
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
495
 
496
 
497
+ SHARED │ ➖ ┃
498
 
499
 
500
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
501
 
502
 
503
+ CREATED_AT │ 2023-05-19 09:04:33.557126 ┃
504
 
505
 
506
+ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
507
 
508
 
509
+ UPDATED_AT │ 2023-05-19 09:04:33.557127 ┃
510
 
511
 
512
+ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
513
 
514
 
515
+ Configuration
516
 
517
 
518
+ ┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓'
519
+ - How do you set up a custom service account for Vertex AI?
520
  - '⚒️Manage stacks
521
 
522
 
 
636
 
637
  The GCP Service Connector allows auto-discovering and fetching credentials and
638
  configuration set up by the GCP CLI on your local host.'
639
+ - 'Hugging Face
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
640
 
641
 
642
+ Deploying models to Huggingface Inference Endpoints with Hugging Face :hugging_face:.

+ Hugging Face Inference Endpoints provides a secure production solution to easily
+ deploy any transformers, sentence-transformers, and diffusers models on a dedicated
+ and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint
+ is built from a model from the Hub.

+ This service provides dedicated and autoscaling infrastructure managed by Hugging
+ Face, allowing you to deploy models without dealing with containers and GPUs.

+ When to use it?

+ You should use Hugging Face Model Deployer:

+ if you want to deploy Transformers, Sentence-Transformers, or Diffusion models
+ on dedicated and secure infrastructure.

+ if you prefer a fully-managed production solution for inference without the need
+ to handle containers and GPUs.

+ if your goal is to turn your models into production-ready APIs with minimal infrastructure
+ or MLOps involvement

+ Cost-effectiveness is crucial, and you want to pay only for the raw compute resources
+ you use.

+ Enterprise security is a priority, and you need to deploy models into secure offline
+ endpoints accessible only via a direct connection to your Virtual Private Cloud
+ (VPCs).

+ If you are looking for a more easy way to deploy your models locally, you can
+ use the MLflow Model Deployer flavor.

+ How to deploy it?

+ The Hugging Face Model Deployer flavor is provided by the Hugging Face ZenML integration,
+ so you need to install it on your local machine to be able to deploy your models.
+ You can do this by running the following command:

+ zenml integration install huggingface -y

+ To register the Hugging Face model deployer with ZenML you need to run the following
+ command:

+ zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=huggingface --token=<YOUR_HF_TOKEN>
+ --namespace=<YOUR_HF_NAMESPACE>

+ Here,

+ token parameter is the Hugging Face authentication token. It can be managed through
+ Hugging Face settings.'
+ - Can you list the steps to set up a Docker registry on a Kubernetes cluster?
  model-index:
  - name: zenml/finetuned-snowflake-arctic-embed-m
  results:

  type: dim_384
  metrics:
  - type: cosine_accuracy@1
+ value: 0.29518072289156627
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
+ value: 0.5240963855421686
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
+ value: 0.5843373493975904
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
+ value: 0.6867469879518072
  name: Cosine Accuracy@10
  - type: cosine_precision@1
+ value: 0.29518072289156627
  name: Cosine Precision@1
  - type: cosine_precision@3
+ value: 0.17469879518072293
  name: Cosine Precision@3
  - type: cosine_precision@5
+ value: 0.11686746987951804
  name: Cosine Precision@5
  - type: cosine_precision@10
+ value: 0.0686746987951807
  name: Cosine Precision@10
  - type: cosine_recall@1
+ value: 0.29518072289156627
  name: Cosine Recall@1
  - type: cosine_recall@3
+ value: 0.5240963855421686
  name: Cosine Recall@3
  - type: cosine_recall@5
+ value: 0.5843373493975904
  name: Cosine Recall@5
  - type: cosine_recall@10
+ value: 0.6867469879518072
  name: Cosine Recall@10
  - type: cosine_ndcg@10
+ value: 0.4908042072911187
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
+ value: 0.42844234079173843
  name: Cosine Mrr@10
  - type: cosine_map@100
+ value: 0.43576329240226386
  name: Cosine Map@100
  - task:
  type: information-retrieval

  type: dim_256
  metrics:
  - type: cosine_accuracy@1
+ value: 0.25903614457831325
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
+ value: 0.5060240963855421
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
+ value: 0.5783132530120482
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
+ value: 0.6445783132530121
  name: Cosine Accuracy@10
  - type: cosine_precision@1
+ value: 0.25903614457831325
  name: Cosine Precision@1
  - type: cosine_precision@3
+ value: 0.1686746987951807
  name: Cosine Precision@3
  - type: cosine_precision@5
+ value: 0.11566265060240961
  name: Cosine Precision@5
  - type: cosine_precision@10
+ value: 0.0644578313253012
  name: Cosine Precision@10
  - type: cosine_recall@1
+ value: 0.25903614457831325
  name: Cosine Recall@1
  - type: cosine_recall@3
+ value: 0.5060240963855421
  name: Cosine Recall@3
  - type: cosine_recall@5
+ value: 0.5783132530120482
  name: Cosine Recall@5
  - type: cosine_recall@10
+ value: 0.6445783132530121
  name: Cosine Recall@10
  - type: cosine_ndcg@10
+ value: 0.4548319777111225
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
+ value: 0.39346194301013593
  name: Cosine Mrr@10
  - type: cosine_map@100
+ value: 0.40343211538391555
  name: Cosine Map@100
  - task:
  type: information-retrieval

  type: dim_128
  metrics:
  - type: cosine_accuracy@1
+ value: 0.2710843373493976
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
+ value: 0.46987951807228917
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
+ value: 0.5662650602409639
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
+ value: 0.6144578313253012
  name: Cosine Accuracy@10
  - type: cosine_precision@1
+ value: 0.2710843373493976
  name: Cosine Precision@1
  - type: cosine_precision@3
+ value: 0.1566265060240964
  name: Cosine Precision@3
  - type: cosine_precision@5
+ value: 0.11325301204819276
  name: Cosine Precision@5
  - type: cosine_precision@10
+ value: 0.061445783132530116
  name: Cosine Precision@10
  - type: cosine_recall@1
+ value: 0.2710843373493976
  name: Cosine Recall@1
  - type: cosine_recall@3
+ value: 0.46987951807228917
  name: Cosine Recall@3
  - type: cosine_recall@5
+ value: 0.5662650602409639
  name: Cosine Recall@5
  - type: cosine_recall@10
+ value: 0.6144578313253012
  name: Cosine Recall@10
  - type: cosine_ndcg@10
+ value: 0.44433019669319024
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
+ value: 0.3893574297188756
  name: Cosine Mrr@10
  - type: cosine_map@100
+ value: 0.3989315479842741
  name: Cosine Map@100
  - task:
  type: information-retrieval

  type: dim_64
  metrics:
  - type: cosine_accuracy@1
+ value: 0.21686746987951808
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
+ value: 0.42168674698795183
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
+ value: 0.5180722891566265
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
+ value: 0.5843373493975904
  name: Cosine Accuracy@10
  - type: cosine_precision@1
+ value: 0.21686746987951808
  name: Cosine Precision@1
  - type: cosine_precision@3
+ value: 0.14056224899598396
  name: Cosine Precision@3
  - type: cosine_precision@5
+ value: 0.10361445783132528
  name: Cosine Precision@5
  - type: cosine_precision@10
+ value: 0.05843373493975902
  name: Cosine Precision@10
  - type: cosine_recall@1
+ value: 0.21686746987951808
  name: Cosine Recall@1
  - type: cosine_recall@3
+ value: 0.42168674698795183
  name: Cosine Recall@3
  - type: cosine_recall@5
+ value: 0.5180722891566265
  name: Cosine Recall@5
  - type: cosine_recall@10
+ value: 0.5843373493975904
  name: Cosine Recall@10
  - type: cosine_ndcg@10
+ value: 0.39639025659520544
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
+ value: 0.3364529546758464
  name: Cosine Mrr@10
  - type: cosine_map@100
+ value: 0.34658882510541217
  name: Cosine Map@100
  ---

  sentences = [
  'What is the expiration time for the GCP OAuth2 token in the ZenML configuration?',
  '━━━━━┛\n\nConfiguration\n\n┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓┃ PROPERTY │ VALUE ┃\n\n┠────────────┼────────────┨\n\n┃ project_id │ zenml-core ┃\n\n┠────────────┼────────────┨\n\n┃ token │ [HIDDEN] ┃\n\n┗━━━━━━━━━━━━┷━━━━━━━━━━━━┛\n\nNote the temporary nature of the Service Connector. It will expire and become unusable in 1 hour:\n\nzenml service-connector list --name gcp-oauth2-token\n\nExample Command Output\n\n┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓\n\n┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃\n\n┠────────┼──────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨\n\n┃ │ gcp-oauth2-token │ ec4d7d85-c71c-476b-aa76-95bf772c90da │ 🔵 gcp │ 🔵 gcp-generic │ <multiple> │ ➖ │ default │ 59m35s │ ┃\n\n┃ │ │ │ │ 📦 gcs-bucket │ │ │ │ │ ┃\n\n┃ │ │ │ │ 🌀 kubernetes-cluster │ │ │ │ │ ┃\n\n┃ │ │ │ │ 🐳 docker-registry │ │ │ │ │ ┃\n\n┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛\n\nAuto-configuration\n\nThe GCP Service Connector allows auto-discovering and fetching credentials and configuration set up by the GCP CLI on your local host.',
+ 'Can you list the steps to set up a Docker registry on a Kubernetes cluster?',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
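A natural follow-up (from the standard sentence-transformers usage pattern, not shown in this excerpt) is to score the encoded sentences against each other. A minimal sketch, assuming `model` is the SentenceTransformer loaded earlier in the card:

```python
# Sketch: pairwise cosine-similarity scores for the three sentences above.
# model.similarity() is the built-in helper in recent sentence-transformers releases.
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # (3, 3): one row of scores per input sentence
```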
 

  | Metric | Value |
  |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.2952 |
+ | cosine_accuracy@3 | 0.5241 |
+ | cosine_accuracy@5 | 0.5843 |
+ | cosine_accuracy@10 | 0.6867 |
+ | cosine_precision@1 | 0.2952 |
+ | cosine_precision@3 | 0.1747 |
+ | cosine_precision@5 | 0.1169 |
+ | cosine_precision@10 | 0.0687 |
+ | cosine_recall@1 | 0.2952 |
+ | cosine_recall@3 | 0.5241 |
+ | cosine_recall@5 | 0.5843 |
+ | cosine_recall@10 | 0.6867 |
+ | cosine_ndcg@10 | 0.4908 |
+ | cosine_mrr@10 | 0.4284 |
+ | **cosine_map@100** | **0.4358** |

  #### Information Retrieval
  * Dataset: `dim_256`

  | Metric | Value |
  |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.259 |
+ | cosine_accuracy@3 | 0.506 |
+ | cosine_accuracy@5 | 0.5783 |
+ | cosine_accuracy@10 | 0.6446 |
+ | cosine_precision@1 | 0.259 |
+ | cosine_precision@3 | 0.1687 |
+ | cosine_precision@5 | 0.1157 |
+ | cosine_precision@10 | 0.0645 |
+ | cosine_recall@1 | 0.259 |
+ | cosine_recall@3 | 0.506 |
+ | cosine_recall@5 | 0.5783 |
+ | cosine_recall@10 | 0.6446 |
+ | cosine_ndcg@10 | 0.4548 |
+ | cosine_mrr@10 | 0.3935 |
+ | **cosine_map@100** | **0.4034** |

  #### Information Retrieval
  * Dataset: `dim_128`
  * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.2711 |
+ | cosine_accuracy@3 | 0.4699 |
+ | cosine_accuracy@5 | 0.5663 |
+ | cosine_accuracy@10 | 0.6145 |
+ | cosine_precision@1 | 0.2711 |
+ | cosine_precision@3 | 0.1566 |
+ | cosine_precision@5 | 0.1133 |
+ | cosine_precision@10 | 0.0614 |
+ | cosine_recall@1 | 0.2711 |
+ | cosine_recall@3 | 0.4699 |
+ | cosine_recall@5 | 0.5663 |
+ | cosine_recall@10 | 0.6145 |
+ | cosine_ndcg@10 | 0.4443 |
+ | cosine_mrr@10 | 0.3894 |
+ | **cosine_map@100** | **0.3989** |

  #### Information Retrieval
  * Dataset: `dim_64`

  | Metric | Value |
  |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.2169 |
+ | cosine_accuracy@3 | 0.4217 |
+ | cosine_accuracy@5 | 0.5181 |
+ | cosine_accuracy@10 | 0.5843 |
+ | cosine_precision@1 | 0.2169 |
+ | cosine_precision@3 | 0.1406 |
+ | cosine_precision@5 | 0.1036 |
+ | cosine_precision@10 | 0.0584 |
+ | cosine_recall@1 | 0.2169 |
+ | cosine_recall@3 | 0.4217 |
+ | cosine_recall@5 | 0.5181 |
+ | cosine_recall@10 | 0.5843 |
+ | cosine_ndcg@10 | 0.3964 |
+ | cosine_mrr@10 | 0.3365 |
+ | **cosine_map@100** | **0.3466** |
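The tables above are produced by the `InformationRetrievalEvaluator` linked in each section. A minimal, hypothetical sketch of wiring up such an evaluation (the tiny `queries`/`corpus`/`relevant_docs` dicts are placeholders, not the model's actual eval split):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# truncate_dim selects which Matryoshka prefix is scored (64 here, matching dim_64).
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m", truncate_dim=64)

# Placeholder id -> text mappings plus the relevant corpus ids for each query.
queries = {"q1": "Where is the global configuration directory located in ZenML's default setup?"}
corpus = {"d1": "Placeholder documentation chunk describing ZenML's global config directory ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_64")
print(evaluator(model))  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
```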
 
  <!--
  ## Bias, Risks and Limitations
 
  * Size: 1,490 training samples
+ * Columns: <code>positive</code>, <code>anchor</code>, and <code>negative</code>
  * Approximate statistics based on the first 1000 samples:
+ | | positive | anchor | negative |
+ |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
+ | type | string | string | string |
+ | details | <ul><li>min: 9 tokens</li><li>mean: 21.02 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 375.16 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 17.51 tokens</li><li>max: 31 tokens</li></ul> |
  * Samples:
+ | positive | anchor | negative |
+ |:-----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------|
+ | <code>What details can you provide about the mlflow_training_pipeline runs listed in the ZenML documentation?</code> | <code>mlflow_training_pipeline', ┃┃ │ │ │ 'zenml_pipeline_run_uuid': 'a5d4faae-ef70-48f2-9893-6e65d5e51e98', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.005'} ┃<br><br>┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨<br><br>┃ tensorflow-mnist-model │ 2 │ Run #2 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_09_08_467212', 'zenml_pipeline_name': 'mlflow_training_pipeline', ┃<br><br>┃ │ │ │ 'zenml_pipeline_run_uuid': '11858dcf-3e47-4b1a-82c5-6fa25ba4e037', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.003'} ┃<br><br>┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨<br><br>┃ tensorflow-mnist-model │ 1 │ Run #1 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_08_52_398499', 'zenml_pipeline_name': 'mlflow_training_pipeline', ┃<br><br>┃ │ │ │ 'zenml_pipeline_run_uuid': '29fb22c1-6e0b-4431-9e04-226226506d16', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.001'} ┃</code> | <code>Can you explain how to configure the TensorFlow settings for a different project?</code> |
+ | <code>How do you register a GCP Service Connector that uses account impersonation to access the zenml-bucket-sl GCS bucket?</code> | <code>esource-id zenml-bucket-sl<br><br>Example Command OutputError: Service connector 'gcp-empty-sa' verification failed: connector authorization failure: failed to fetch GCS bucket<br><br>zenml-bucket-sl: 403 GET https://storage.googleapis.com/storage/v1/b/zenml-bucket-sl?projection=noAcl&prettyPrint=false:<br><br>empty-connectors@zenml-core.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket.<br><br>Permission 'storage.buckets.get' denied on resource (or it may not exist).<br><br>Next, we'll register a GCP Service Connector that actually uses account impersonation to access the zenml-bucket-sl GCS bucket and verify that it can actually access the bucket:<br><br>zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl<br><br>Example Command Output<br><br>Expanding argument value service_account_json to contents of file /home/stefan/aspyre/src/zenml/empty-connectors@zenml-core.json.<br><br>Successfully registered service connector `gcp-impersonate-sa` with access to the following resources:<br><br>┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓<br><br>┃ RESOURCE TYPE │ RESOURCE NAMES ┃<br><br>┠───────────────┼──────────────────────┨<br><br>┃ 📦 gcs-bucket │ gs://zenml-bucket-sl ┃<br><br>┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┛<br><br>External Account (GCP Workload Identity)<br><br>Use GCP workload identity federation to authenticate to GCP services using AWS IAM credentials, Azure Active Directory credentials or generic OIDC tokens.</code> | <code>What is the process for setting up a ZenML pipeline using AWS IAM credentials?</code> |
+ | <code>Can you explain how data validation helps in detecting data drift and model drift in ZenML pipelines?</code> | <code>of your models at different stages of development.if you have pipelines that regularly ingest new data, you should use data validation to run regular data integrity checks to signal problems before they are propagated downstream.<br><br>in continuous training pipelines, you should use data validation techniques to compare new training data against a data reference and to compare the performance of newly trained models against previous ones.<br><br>when you have pipelines that automate batch inference or if you regularly collect data used as input in online inference, you should use data validation to run data drift analyses and detect training-serving skew, data drift and model drift.<br><br>Data Validator Flavors<br><br>Data Validator are optional stack components provided by integrations. The following table lists the currently available Data Validators and summarizes their features and the data types and model types that they can be used with in ZenML pipelines:<br><br>Data Validator Validation Features Data Types Model Types Notes Flavor/Integration Deepchecks data quality<br>data drift<br>model drift<br>model performance tabular: pandas.DataFrame CV: torch.utils.data.dataloader.DataLoader tabular: sklearn.base.ClassifierMixin CV: torch.nn.Module Add Deepchecks data and model validation tests to your pipelines deepchecks Evidently data quality<br>data drift<br>model drift<br>model performance tabular: pandas.DataFrame N/A Use Evidently to generate a variety of data quality and data/model drift reports and visualizations evidently Great Expectations data profiling<br>data quality tabular: pandas.DataFrame N/A Perform data testing, documentation and profiling with Great Expectations great_expectations Whylogs/WhyLabs data drift tabular: pandas.DataFrame N/A Generate data profiles with whylogs and upload them to WhyLabs whylogs<br><br>If you would like to see the available flavors of Data Validator, you can use the command:<br><br>zenml data-validator flavor list<br><br>How to use it</code> | <code>What are the best practices for deploying web applications using Docker and Kubernetes?</code> |
  * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
+ "loss": "TripletLoss",
  "matryoshka_dims": [
  384,
  256,
 
  </details>
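The loss configuration above records that `MatryoshkaLoss` wrapped a `TripletLoss` during training. As an illustrative sketch only (the actual training script is not part of this card, and the base model name is an assumption inferred from the card's title), the combination would be constructed in sentence-transformers roughly like this:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, TripletLoss

# Assumption: the Snowflake arctic-embed-m base that this model was finetuned from.
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# TripletLoss pulls (anchor, positive) pairs together and pushes (anchor, negative)
# pairs apart; MatryoshkaLoss re-applies it at each truncated embedding size so that
# the 384-, 256-, 128- and 64-dimensional prefixes all remain usable for retrieval.
inner_loss = TripletLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[384, 256, 128, 64])
```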

  ### Training Logs
+ | Epoch | Step | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
+ |:-------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
+ | 0.6667 | 1 | 0.3884 | 0.4332 | 0.4464 | 0.3140 |
+ | **2.0** | **3** | **0.4064** | **0.4195** | **0.4431** | **0.3553** |
+ | 2.6667 | 4 | 0.3989 | 0.4034 | 0.4358 | 0.3466 |

  * The bold row denotes the saved checkpoint.
 
 
  }
  ```

+ #### TripletLoss
  ```bibtex
+ @misc{hermans2017defense,
+ title={In Defense of the Triplet Loss for Person Re-Identification},
+ author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
  year={2017},
+ eprint={1703.07737},
  archivePrefix={arXiv},
+ primaryClass={cs.CV}
  }
  ```
 
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d4668d669952a0c5efe3f1ceaa1f9ad2864dff665f2ce38ae067ebf5fff22496
+ oid sha256:5f6c582bbe82af64b23557e72963c469a2155751ff7c68eeedf3125475224864
  size 435588776