sourceName | url | action | body | format | metadata | title | updated |
---|---|---|---|---|---|---|---|
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-cene-1 | created | # The Atlas Search 'cene: Season 1
Welcome to the first season of a video series dedicated to Atlas Search! This series of videos is designed to guide you through the journey from getting started and understanding the concepts, to advanced techniques.
## What is Atlas Search?
[Atlas Search][1] is an embedded full-text search in MongoDB Atlas that gives you a seamless, scalable experience for building relevance-based app features. Built on Apache Lucene, Atlas Search eliminates the need to run a separate search system alongside your database.
By integrating the database, search engine, and sync mechanism into a single, unified, and fully managed platform, Atlas Search is the fastest and easiest way to build relevance-based search capabilities directly into applications.
> Hip to the *'cene*
>
> The name of this video series comes from a contraction of "Lucene",
> the search engine library leveraged by Atlas. Or it's a short form of "scene".
## Episode Guide
### **[Episode 1: What is Atlas Search & Quick Start][2]**
In this first episode of the Atlas Search 'cene, learn what Atlas Search is and get a quick-start introduction to setting it up on your data. Within a few clicks, you can set up a powerful, full-text search index on your Atlas collection data and deliver fast, relevant results for your users' queries.
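For a flavor of where that quick start leads, here is a minimal sketch of an Atlas Search query run in mongosh. The collection, field names, and index name below are illustrative assumptions rather than anything from the episode:

```javascript
// Assumes an Atlas Search index named "default" exists on a "movies" collection.
db.movies.aggregate([
  {
    $search: {
      index: "default",              // the Atlas Search index to use
      text: {
        query: "space adventure",    // what the user typed
        path: ["title", "plot"]      // fields to search across
      }
    }
  },
  { $limit: 5 },
  { $project: { _id: 0, title: 1, score: { $meta: "searchScore" } } }
])
```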
### **[Episode 2: Configuration / Development Environment][3]**
To get the most out of Atlas Search, you need to configure it for your querying needs. In this episode, learn how Atlas Search maps your documents to its index, and discover the configuration control you have.
### **[Episode 3: Indexing][4]**
While Atlas Search automatically indexes your collection's content, it does demand attention to the indexing configuration details in order to match users' queries appropriately. This episode covers how Atlas Search builds an inverted index and the options one must consider.
### **[Episode 4: Searching][5]**
Atlas Search provides a rich set of query operators and relevancy controls. This episode covers the common query operators, their relevancy controls, and ends with coverage of the must-have Query Analytics feature.
### **[Episode 5: Faceting][6]**
Facets produce additional context for search results, providing a list of subsets and counts within. This episode details the faceting options available in Atlas Search.
### **[Episode 6: Advanced Search Topics][7]**
In this episode, we go through some more advanced search topics including embedded documents, fuzzy search, autocomplete, highlighting, and geospatial.
### **[Episode 7: Query Analytics][8]**
Are your users finding what they are looking for? Are your top queries returning the best results? This episode covers the important topic of query analytics. If you're using search, you need this!
### **[Episode 8: Tips & Tricks][9]**
This final episode of The Atlas Search 'cene Season 1 covers useful techniques to introspect query details and see how relevancy scores are computed. Also shown is how to get facets and search results back in one API call.
[1]: https://www.mongodb.com/atlas/search
[2]: https://www.mongodb.com/developer/videos/what-is-atlas-search-quick-start/
[3]: https://www.mongodb.com/developer/videos/atlas-search-configuration-development-environment/
[4]: https://www.mongodb.com/developer/videos/mastering-indexing-for-perfect-query-matches/
[5]: https://www.mongodb.com/developer/videos/query-operators-relevancy-controls-for-precision-searches/
[6]: https://www.mongodb.com/developer/videos/faceting-mastery-unlock-the-full-potential-of-atlas-search-s-contextual-insights/
[7]: https://www.mongodb.com/developer/videos/atlas-search-mastery-elevate-your-search-with-fuzzy-geospatial-highlighting-hacks/
[8]: https://www.mongodb.com/developer/videos/atlas-search-query-analytics/
[9]: https://www.mongodb.com/developer/videos/tips-and-tricks-the-atlas-search-cene-season-1-episode-8/ | md | {
"tags": [
"Atlas"
],
"pageDescription": "The Atlas Search 'cene: Season 1",
"contentType": "Video"
} | The Atlas Search 'cene: Season 1 | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/atlas-open-ai-review-summary | created | # Using MongoDB Atlas Triggers to Summarize Airbnb Reviews with OpenAI
In the realm of property rentals, reviews play a pivotal role. MongoDB Atlas triggers, combined with the power of OpenAI's models, can help summarize and analyze these reviews in real-time. In this article, we'll explore how to utilize MongoDB Atlas triggers to process Airbnb reviews, yielding concise summaries and relevant tags.
This article is an additional feature added to the hotels and apartment sentiment search application developed in Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality.
## Introduction
MongoDB Atlas triggers allow users to define functions that execute in real-time in response to database operations. These triggers can be harnessed to enhance data processing and analysis capabilities. In this example, we aim to generate summarized reviews and tags for a sample Airbnb dataset.
Our original data model has each review embedded in the listing document as an array:
```javascript
"reviews": { "_id": "2663437",
"date": { "$date": "2012-10-20T04:00:00.000Z" }, \
"listing_id": "664017",
"reviewer_id": "633940",
"reviewer_name": "Patricia",
"comments": "I booked the room at Marinete's apartment for my husband. He was staying in Rio for a week because he was studying Portuguese. He loved the place. Marinete was very helpfull, the room was nice and clean. \r\nThe location is perfect. He loved the time there. \r\n\r\n" },
{ "_id": "2741592",
"date": { "$date": "2012-10-28T04:00:00.000Z" },
"listing_id": "664017",
"reviewer_id": "3932440",
"reviewer_name": "Carolina",
"comments": "Es una muy buena anfitriona, preocupada de que te encuentres cómoda y te sugiere que actividades puedes realizar. Disfruté mucho la estancia durante esos días, el sector es central y seguro." }, ... ]
```
## Prerequisites
- App Services application (e.g., application-0). Ensure linkage to the cluster with the Airbnb data.
- OpenAI account with API access.
### Secrets and Values
1. Navigate to your App Services application.
2. Under "Values," create a secret that holds your OpenAI API key.
3. Create a linked value named `openAIKey` that points to the secret; this is the name the trigger function reads via `context.values.get("openAIKey")`.
## The trigger code
The provided trigger listens for changes in the sample_airbnb.listingsAndReviews collection. Upon detecting a new review, it samples up to 50 reviews, sends them to OpenAI's API for summarization, and updates the original document with the summarized content and tags.
Please note that the trigger reacts to updates that were marked with a `"process" : false` flag. This field indicates that no summary has been created for this batch of reviews yet.
Example of a review update operation that will fire this trigger:
```javascript
listingsAndReviews.updateOne({"_id" : "1129303"}, { $push : { "reviews" : new_review } , $set : { "process" : false }});
```
### Sample reviews function
To prevent overloading the API with a large number of reviews, a function sampleReviews is defined to randomly sample up to 50 reviews:
```javascript
function sampleReviews(reviews) {
if (reviews.length <= 50) {
return reviews;
}
const sampledReviews = [];
const seenIndices = new Set();
while (sampledReviews.length < 50) {
const randomIndex = Math.floor(Math.random() * reviews.length);
if (!seenIndices.has(randomIndex)) {
seenIndices.add(randomIndex);
sampledReviews.push(reviews[randomIndex]);
}
}
return sampledReviews;
}
```
### Main trigger logic
The main trigger logic is invoked when an update change event is detected with a `"process" : false` field.
```javascript
exports = async function(changeEvent) {
// A Database Trigger will always call a function with a changeEvent.
// Documentation on ChangeEvents: https://www.mongodb.com/docs/manual/reference/change-events
// This function samples the document's reviews, asks OpenAI for a summary, and writes the result back to the document
function sampleReviews(reviews) {
// Logic above...
if (reviews.length <= 50) {
return reviews;
}
const sampledReviews = [];
const seenIndices = new Set();
while (sampledReviews.length < 50) {
const randomIndex = Math.floor(Math.random() * reviews.length);
if (!seenIndices.has(randomIndex)) {
seenIndices.add(randomIndex);
sampledReviews.push(reviews[randomIndex]);
}
}
return sampledReviews;
}
// Access the _id of the changed document:
const docId = changeEvent.documentKey._id;
const doc= changeEvent.fullDocument;
// Get the MongoDB service you want to use (see "Linked Data Sources" tab)
const serviceName = "mongodb-atlas";
const databaseName = "sample_airbnb";
const collection = context.services.get(serviceName).db(databaseName).collection(changeEvent.ns.coll);
// Prepare the request to the OpenAI API.
// URL to make the request to the OpenAI API.
const url = 'https://api.openai.com/v1/chat/completions';
// Fetch the OpenAI key stored in the context values.
const openai_key = context.values.get("openAIKey");
const reviews = doc.reviews.map((review) => {return {"comments" : review.comments}});
const sampledReviews= sampleReviews(reviews);
// Prepare the request string for the OpenAI API.
const reqString = `Summarize the reviews provided here: ${JSON.stringify(sampledReviews)} | instructions example:\n\n [{"comment" : "Very Good bed"} ,{"comment" : "Very bad smell"} ] \nOutput: {"overall_review": "Overall good beds and bad smell" , "neg_tags" : ["bad smell"], pos_tags : ["good bed"]}. No explanation. No 'Output:' string in response. Valid JSON. `;
console.log(`reqString: ${reqString}`);
// Call OpenAI API to get the response.
let resp = await context.http.post({
url: url,
headers: {
'Authorization': [`Bearer ${openai_key}`],
'Content-Type': ['application/json']
},
body: JSON.stringify({
model: "gpt-4",
temperature: 0,
messages: [
{
"role": "system",
"content": "Output json generator follow only provided example on the current reviews"
},
{
"role": "user",
"content": reqString
}
]
})
});
// Parse the JSON response
let responseData = JSON.parse(resp.body.text());
// Check the response status.
if(resp.statusCode === 200) {
console.log("Successfully received code.");
console.log(JSON.stringify(responseData));
const code = responseData.choices[0].message.content;
// Get the required data to be added into the document
const updateDoc = JSON.parse(code)
// Set a flag that this document does not need further re-processing
updateDoc.process = true
await collection.updateOne({_id : docId}, {$set : updateDoc});
} else {
console.error("Failed to generate filter JSON.");
console.log(JSON.stringify(responseData));
return {};
}
};
```
Key steps include:
- API request preparation: Reviews from the changed document are sampled and prepared into a request string for the OpenAI API. The format and instructions are tailored to ensure the API returns a valid JSON with summarized content and tags.
- API interaction: Using the context.http.post method, the trigger sends the prepared data to the OpenAI API.
- Updating the original document: Upon a successful response from the API, the trigger updates the original document with the summarized content, negative tags (neg_tags), positive tags (pos_tags), and a process flag set to true.
Here is a sample result that is added to the processed listing document:
```
"process": true,
"overall_review": "Overall, guests had a positive experience at Marinete's apartment. They praised the location, cleanliness, and hospitality. However, some guests mentioned issues with the dog and language barrier.",
"neg_tags": [ "language barrier", "dog issues" ],
"pos_tags": [ "great location", "cleanliness", "hospitality" ]
```
Once the data is added to our documents, surfacing this information in our Vue application is as simple as adding an HTML template along these lines (the markup below is illustrative; adapt it to your own components):
```html
<div>
  <p>Overall Review (AI based): {{ listing.overall_review }}</p>
  <span v-for="tag in listing.pos_tags" :key="tag" class="tag positive">{{ tag }}</span>
  <span v-for="tag in listing.neg_tags" :key="tag" class="tag negative">{{ tag }}</span>
</div>
```
## Conclusion
By integrating MongoDB Atlas triggers with OpenAI's powerful models, we can efficiently process and analyze large volumes of reviews in real-time. This setup not only provides concise summaries of reviews but also categorizes them into positive and negative tags, offering valuable insights to property hosts and potential renters.
Questions? Comments? Let’s continue the conversation over in our [community forums](https://www.mongodb.com/community/forums/). | md | {
"tags": [
"MongoDB",
"JavaScript",
"AI",
"Node.js"
],
"pageDescription": "Uncover the synergy of MongoDB Atlas triggers and OpenAI models in real-time analysis and summarization of Airbnb reviews. ",
"contentType": "Tutorial"
} | Using MongoDB Atlas Triggers to Summarize Airbnb Reviews with OpenAI | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/getting-started-with-mongodb-and-codewhisperer | created | # Getting Started with MongoDB and AWS CodeWhisperer
**Introduction**
----------------
Amazon CodeWhisperer is trained on billions of lines of code and can generate code suggestions — ranging from snippets to full functions — in real-time, based on your comments and existing code. AI code assistants have revolutionized developers’ coding experience, but what sets Amazon CodeWhisperer apart is that MongoDB has collaborated with the AWS Data Science team, enhancing its capabilities!
At MongoDB, we are always looking to enhance the developer experience, and we've fine-tuned the CodeWhisperer Foundational Models to deliver top-notch code suggestions — trained on, and tailored for, MongoDB. This gives developers of all levels the best possible experience when using CodeWhisperer for MongoDB functions.
This tutorial will help you get CodeWhisperer up and running in VS Code, but CodeWhisperer also works with a number of other IDEs, including IntelliJ IDEA, AWS Cloud9, AWS Lambda console, JupyterLab, and Amazon SageMaker Studio. On the [Amazon CodeWhisperer site][1], you can find tutorials that demonstrate how to set up CodeWhisperer on different IDEs, as well as other documentation.
*Note:* You can start using CodeWhisperer without an AWS account (creating an AWS account usually requires a credit card), and CodeWhisperer is currently free for individual users, so it’s easy to get up and running.
**Installing CodeWhisperer for VS Code**
CodeWhisperer doesn’t have its own VS Code extension. It is part of a larger extension for AWS services called AWS Toolkit. AWS Toolkit is available in the VS Code extensions store.
1. Open VS Code and navigate to the extensions store (bottom icon on the left panel).
2. Search for CodeWhisperer and it will show up as part of the AWS Toolkit.
![Searching for the AWS ToolKit Extension][2]
3. Once found, hit Install. Next, you’ll see the full AWS Toolkit listing.
![The AWS Toolkit full listing][3]
4. Once installed, you’ll need to authorize CodeWhisperer via a Builder ID to connect to your AWS developer account (or set up a new account if you don’t already have one).
![Authorise CodeWhisperer][4]
**Using CodeWhisperer**
-----------------------
**Navigating code suggestions**
![CodeWhisperer Running][5]
With CodeWhisperer installed and running, as you enter your prompt or code, CodeWhisperer will offer inline code suggestions. If you want to keep the suggestion, use **TAB** to accept it. CodeWhisperer may provide multiple suggestions to choose from depending on your use case. To navigate between suggestions, use the left and right arrow keys to view them, and **TAB** to accept.
If you don’t like the suggestions you see, keep typing (or hit **ESC**). The suggestions will disappear, and CodeWhisperer will generate new ones at a later point based on the additional context.
**Requesting suggestions manually**
You can request suggestions at any time. Use **Option-C** on Mac or **ALT-C** on Windows. After you receive suggestions, use **TAB** to accept and arrow keys to navigate.
**Getting the best recommendations**
For best results, follow these practices. A short illustrative example follows the list.
- Give CodeWhisperer something to work with. The more code your file contains, the more context CodeWhisperer has for generating recommendations.
- Write descriptive comments in natural language — for example
```
// Take a JSON document as a String and store it in MongoDB returning the _id
```
Or
```
//Insert a document in a collection with a given _id and a discountLevel
```
- Specify the libraries you prefer at the start of your file by using import statements.
```
// This Java class works with MongoDB sync driver.
// This class implements Connection to MongoDB and CRUD methods.
```
- Use descriptive names for variables and functions
- Break down complex tasks into simpler tasks
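To make the comment-driven workflow concrete, here is a hand-written sketch (not actual CodeWhisperer output) of the kind of completion a descriptive comment enables, using the MongoDB Node.js driver. The database, collection, and function names are assumptions for illustration:

```javascript
// Prompt comment written by the developer:
// Insert a document in a collection with a given _id and a discountLevel

const { MongoClient } = require("mongodb");

async function insertWithDiscount(uri, id, discountLevel) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // Assumed database and collection names for this illustration
    const collection = client.db("store").collection("customers");
    // Insert the document with the provided _id and discountLevel
    return await collection.insertOne({ _id: id, discountLevel });
  } finally {
    await client.close();
  }
}
```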
**Provide feedback**
----------------
As with all generative AI tools, they are forever learning and forever expanding their foundational knowledge base, and MongoDB is looking for feedback. If you are using Amazon CodeWhisperer in your MongoDB development, we’d love to hear from you.
We’ve created a special “codewhisperer” tag on our [Developer Forums][6], and if you tag any post with this, it will be visible to our CodeWhisperer project team and we will get right on it to help and provide feedback. If you want to see what others are doing with CodeWhisperer on our forums, the [tag search link][7] will jump you straight into all the action.
We can’t wait to see your thoughts and impressions of MongoDB and Amazon CodeWhisperer together.
[1]: https://aws.amazon.com/codewhisperer/resources/#Getting_started
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1bfd28a846063ae9/65481ef6e965d6040a3dcc37/CW_1.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltde40d5ae1b9dd8dd/65481ef615630d040a4b2588/CW_2.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt636bb8d307bebcee/65481ef6a6e009040a740b86/CW_3.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf1e0ebeea2089e6a/65481ef6077aca040a5349da/CW_4.png
[6]: https://www.mongodb.com/community/forums/
[7]: https://www.mongodb.com/community/forums/tag/codewhisperer | md | {
"tags": [
"MongoDB",
"JavaScript",
"Java",
"Python",
"AWS",
"AI"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Getting Started with MongoDB and AWS Codewhisperer | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/code-examples/java/rest-apis-java-spring-boot | created | # REST APIs with Java, Spring Boot, and MongoDB
## GitHub repository
If you want to write REST APIs in Java at the speed of light, I have what you need. I wrote this template to get you started. I have tried to solve as many problems as possible in it.
So if you want to start writing REST APIs in Java, clone this project, and you will be up to speed in no time.
```shell
git clone https://github.com/mongodb-developer/java-spring-boot-mongodb-starter
```
That’s all folks! All you need is in this repository. Below I will explain a few of the features and details about this template, but feel free to skip what is not necessary for your understanding.
## README
All the extra information and commands you need to get this project going are in the `README.md` file which you can read in GitHub.
## Spring and MongoDB configuration
The configuration can be found in the MongoDBConfiguration.java class.
```java
package com.mongodb.starter;
// ... other imports omitted for brevity ...
import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;
@Configuration
public class MongoDBConfiguration {
@Value("${spring.data.mongodb.uri}")
private String connectionString;
@Bean
public MongoClient mongoClient() {
CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);
return MongoClients.create(MongoClientSettings.builder()
.applyConnectionString(new ConnectionString(connectionString))
.codecRegistry(codecRegistry)
.build());
}
}
```
The important section here is the MongoDB configuration, of course. Firstly, you will notice the connection string is automatically retrieved from the `application.properties` file, and secondly, you will notice the configuration of the `MongoClient` bean.
A `Codec` is the interface that abstracts the processes of decoding a BSON value into a Java object and encoding a Java object into a BSON value.
A `CodecRegistry` contains a set of `Codec` instances that are accessed according to the Java classes that they encode from and decode to.
The MongoDB driver is capable of encoding and decoding BSON for us, so we do not have to take care of this anymore. All the configuration we need for this project to run is here and nowhere else.
You can read the driver documentation if you want to know more about this topic.
## Multi-document ACID transactions
Just for the sake of it, I also used multi-document ACID transactions in a few methods where it could potentially make sense to use ACID transactions. You can check all the code in the `MongoDBPersonRepository` class.
Here is an example:
```java
private static final TransactionOptions txnOptions = TransactionOptions.builder()
.readPreference(ReadPreference.primary())
.readConcern(ReadConcern.MAJORITY)
.writeConcern(WriteConcern.MAJORITY)
.build();
@Override
public List<PersonEntity> saveAll(List<PersonEntity> personEntities) {
try (ClientSession clientSession = client.startSession()) {
return clientSession.withTransaction(() -> {
personEntities.forEach(p -> p.setId(new ObjectId()));
personCollection.insertMany(clientSession, personEntities);
return personEntities;
}, txnOptions);
}
}
```
As you can see, I’m using an auto-closeable try-with-resources which will automatically close the client session at the end. This helps me to keep the code clean and simple.
Some of you may argue that it is actually too simple because transactions (and write operations, in general) can throw exceptions, and I’m not handling any of them here… You are absolutely right and this is an excellent transition to the next part of this article.
## Exception management
Transactions in MongoDB can raise exceptions for various reasons, and I don’t want to go into the details too much here, but since MongoDB 3.6, any write operation that fails can be automatically retried once. And the transactions are no different. See the documentation for retryWrites.
If retryable writes are disabled or if a write operation fails twice, then MongoDB will send a MongoException (extends RuntimeException) which should be handled properly.
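Retryable writes are controlled through the connection string and are enabled by default in recent driver versions. As an illustration only (this is not necessarily the project's exact configuration), the URI in `application.properties` could make the setting explicit:

```properties
# Illustrative placeholder URI; replace the host and credentials with your own
spring.data.mongodb.uri=mongodb+srv://user:password@cluster0.example.mongodb.net/test?retryWrites=true&w=majority
```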
Luckily, Spring provides the annotation `ExceptionHandler` to help us do that. See the code in my controller `PersonController`. Of course, you will need to adapt and enhance this in your real project, but you have the main idea here.
```java
@ExceptionHandler(RuntimeException.class)
public final ResponseEntity<Exception> handleAllExceptions(RuntimeException e) {
logger.error("Internal server error.", e);
return new ResponseEntity<>(e, HttpStatus.INTERNAL_SERVER_ERROR);
}
```
## Aggregation pipeline
MongoDB's aggregation pipeline is a very powerful and efficient way to run your complex queries as close as possible to your data for maximum efficiency. Using it can ease the computational load on your application.
Just to give you a small example, I implemented the `/api/persons/averageAge` route to show you how I can retrieve the average age of the persons in my collection.
```java
@Override
public double getAverageAge() {
List<Bson> pipeline = List.of(group(new BsonNull(), avg("averageAge", "$age")), project(excludeId()));
return personCollection.aggregate(pipeline, AverageAgeDTO.class).first().averageAge();
}
```
Also, you can note here that I’m using the `personCollection` which was initially instantiated like this:
```java
private MongoCollection<PersonEntity> personCollection;
@PostConstruct
void init() {
personCollection = client.getDatabase("test").getCollection("persons", PersonEntity.class);
}
```
Normally, my personCollection should encode and decode `PersonEntity` object only, but you can overwrite the type of object your collection is manipulating to return something different — in my case, `AverageAgeDTO.class` as I’m not expecting a `PersonEntity` class here but a POJO that contains only the average age of my "persons".
## Swagger
Swagger is the tool you need to document your REST APIs. You have nothing to do — the configuration is completely automated. Just run the server and navigate to http://localhost:8080/swagger-ui.html. The interface will be waiting for you. See the Swagger documentation for more information.
## Nyan Cat
Yes, there is a Nyan Cat section in this post. Nyan Cat is love, and you need some Nyan Cat in your projects. :-)
Did you know that you can replace the Spring Boot logo in the logs with pretty much anything you want?
and the "Epic" font for each project name. It's easier to identify which log file I am currently reading.
## Conclusion
I hope you like my template, and I hope I will help you be more productive with MongoDB and the Java stack.
If you see something which can be improved, please feel free to open a GitHub issue or directly submit a pull request. They are very welcome. :-)
If you are new to MongoDB Atlas, give our Quick Start post a try to get up to speed with MongoDB Atlas in no time.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt876f3404c57aa244/65388189377588ba166497b0/swaggerui.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf2f06ba5af19464d/65388188d31953242b0dbc6f/nyancat.png | md | {
"tags": [
"Java",
"Spring"
],
"pageDescription": "Take a shortcut to REST APIs with this Java/Spring Boot and MongoDB example application that embeds all you'll need to get going.",
"contentType": "Code Example"
} | REST APIs with Java, Spring Boot, and MongoDB | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/swift/halting-development-on-swift-driver | created | # Halting Development on MongoDB Swift Driver
MongoDB is halting development on our server-side Swift driver. We remain excited about Swift and will continue our development of our mobile Swift SDK.
We released our server-side Swift driver in 2020 as an open source project and are incredibly proud of the work that our engineering team has contributed to the Swift community over the last four years. Unfortunately, today we are announcing our decision to stop development of the MongoDB server-side Swift driver. We understand that this news may come as a disappointment to the community of current users.
There are still ways to use MongoDB with Swift:
- Use the MongoDB driver with server-side Swift applications as is
- Use the MongoDB C Driver directly in your server-side Swift projects
- Use another community Swift driver, such as MongoKitten
Community members and developers are welcome to fork our existing driver and add features as they see fit - the Swift driver is under the Apache 2.0 license and source code is available on GitHub. For those developing client/mobile applications, MongoDB offers the Realm Swift SDK with real-time sync to MongoDB Atlas.
We would like to take this opportunity to express our heartfelt appreciation for the enthusiastic support that the Swift community has shown for MongoDB. Your loyalty and feedback have been invaluable to us throughout our journey, and we hope to resume development on the server-side Swift driver in the future. | md | {
"tags": [
"Swift",
"MongoDB"
],
"pageDescription": "The latest news regarding the MongoDB driver for Swift.",
"contentType": "News & Announcements"
} | Halting Development on MongoDB Swift Driver | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/online-archive-query-performance | created | # Optimizing your Online Archive for Query Performance
## Contributed By
This article was contributed by Prem Krishna, a Senior Product Manager for Analytics at MongoDB.
## Introduction
With Atlas Online Archive, you can tier off cold data or infrequently accessed data from your MongoDB cluster to a MongoDB-managed cloud object storage - Amazon S3 or Microsoft Azure Blob Storage. This can lower the cost via archival cloud storage for old data, while active data that is more often accessed and queried remains in the primary database.
> FYI: If using Online Archive and also using MongoDB's Atlas Data Federation, users can also see a unified view of production data, and *archived data* side by side through a read-only, federated database instance.
In this blog, we are going to be discussing how to improve the performance of your online archive by choosing the correct partitioning fields.
## Why is partitioning so critical when configuring Online Archive?
Once you have started archiving data, you cannot edit any partition fields as the structure of how the data will be stored in the object storage becomes fixed after the archival job begins. Therefore, you'll want to think critically about your partitioning strategy beforehand.
Also, archival query performance is determined by how the data is structured in object storage, so it is important to not only choose the correct partitions but also choose the correct order of partitions.
## Do this...
**Choose the most frequently queried fields.** You can choose up to two partition fields for a custom query-based archive, or up to three fields for a date-based online archive. Ensure that the most frequently queried fields for the archive are chosen. Note that we are talking about how you are going to query the archive, not the custom query criteria provided at the time of archiving!
**Check the order of partitioned fields.** While selecting the partitions is important, it is equally critical to choose the correct *order* of partitions. The most frequently queried field should be the first chosen partition field, followed by the second and third. That's simple enough.
## Not this
**Don't add irrelevant fields as partitions.** If you are not querying a specific field from the archive, then that field should not be added as a partition field. Remember that you can add a maximum of 2 or 3 partition fields, so it is important to choose these fields carefully based on how you query your archive.
**Don't ignore the “Move down” option.** The “Move down” option is applicable to an archive with a date-based rule. For example, if you want to query on Field_A the most, then Field_B, and then on exampleDate, ensure you are selecting the “Move Down” option next to the “Archive date field” on top.
**Don't choose high cardinality partition(s).** Choosing a high cardinality field such as `_id` will create a large number of partitions in the object storage, and any aggregate-based queries against the archive will then suffer from increased latency. The same applies if the selected partition fields, taken together, form a high cardinality combination. For example, if you select Field_A, Field_B, and Field_C as your partitions and the combination of these fields produces unique values, it will result in high cardinality partitions.
> Please note that this is **not applicable** for new Online Archives.
## Additional guidance
In addition to the partitioning guidelines, there are a couple of additional considerations that are relevant for the optimal configuration of your data archival strategy.
**Add data expiration rules and scheduled windows**
These fields are optional but are relevant for your use cases and can improve your archival speeds and for how long your data needs to be present in the archive.
**Index required fields**
Before archiving the data, ensure that your data is indexed for optimal performance. You can run an explain plan on the archival query to verify whether the archival rule will use an index.
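For example, a quick check in mongosh could look like the sketch below; the collection name, date field, and archival criteria are assumptions for illustration:

```javascript
// Suppose the archival rule moves documents older than a given date.
// First, index the date field the rule filters on...
db.transactions.createIndex({ createdAt: 1 })

// ...then run an explain plan on an equivalent query and confirm the winning
// plan uses an index scan (IXSCAN) rather than a collection scan (COLLSCAN).
db.transactions.find({ createdAt: { $lt: new Date("2020-01-01") } })
  .explain("executionStats")
```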
## Conclusion
It is important to follow these do’s and don’ts before hitting “Begin Archiving” to archive your data so that the partitions are correctly configured thereby optimizing the performance of your online archives.
For more information on configuration or Online Archive, please see the documentation for setting up an Online Archive and our blog post on how to create an Online Archive.
Dig deeper into this topic with this tutorial.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
| md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "Get all the do's and don'ts around optimization of your data archival strategy.",
"contentType": "Article"
} | Optimizing your Online Archive for Query Performance | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/using-confluent-cloud-atlas-stream-processing | created | # Using the Confluent Cloud with Atlas Stream Processing
> Atlas Stream Processing is now available. Learn more about it here.
Apache Kafka is a massively popular streaming platform today. It is available in the open-source community and also as software (e.g., Confluent Platform) for self-managing. Plus, you can get a hosted Kafka (or Kafka-compatible) service from a number of providers, including AWS Managed Streaming for Apache Kafka (MSK), RedPanda Cloud, and Confluent Cloud, to name a few.
In this tutorial, we will configure network connectivity between MongoDB Atlas Stream Processing instances and a topic within the Confluent Cloud. By the end of this tutorial, you will be able to process stream events from Confluent Cloud topics and emit the results back into a Confluent Cloud topic.
Confluent Cloud dedicated clusters support connectivity through secure public internet endpoints with their Basic and Standard clusters. Private network connectivity options such as Private Link connections, VPC/VNet peering, and AWS Transit Gateway are available in the Enterprise and Dedicated cluster tiers.
**Note:** At the time of this writing, Atlas Stream Processing only supports internet-facing Basic and Standard Confluent Cloud clusters. This post will be updated to accommodate Enterprise and Dedicated clusters when support is provided for private networks.
The easiest way to get started with connectivity between Confluent Cloud and MongoDB Atlas is by using public internet endpoints. Public internet connectivity is the only option for Basic and Standard Confluent clusters. Rest assured that Confluent Cloud clusters with internet endpoints are protected by a proxy layer that prevents various types of DoS, DDoS, SYN flooding, and other network-level attacks. We will also use authentication API keys with the SASL_SSL authentication method for secure credential exchange.
In this tutorial, we will set up and configure Confluent Cloud and MongoDB Atlas for network connectivity and then work through a simple example that uses a sample data generator to stream data between MongoDB Atlas and Confluent Cloud.
## Tutorial prerequisites
This is what you’ll need to follow along:
- An Atlas project (free or paid tier)
- An Atlas database user with atlasAdmin permission
- For the purposes of this tutorial, we’ll have the user “tutorialuser.”
- MongoDB shell (Mongosh) version 2.0+
- Confluent Cloud cluster (any configuration)
## Configure Confluent Cloud
For this tutorial, you need a Confluent Cloud cluster with a topic named “solardata” and an API access key created. If you already have this, you may skip to Step 2.
To create a Confluent Cloud cluster, log into the Confluent Cloud portal, select or create an environment for your cluster, and then click the “Add Cluster” button.
In this tutorial, we can use a **Basic** cluster type.
Once your Confluent Cloud cluster and “solardata” topic are ready, switch over to MongoDB Atlas and click on “Stream Processing” from the Services menu. Next, click on the “Create Instance” button. Provide a name, cloud provider, and region. Note: For a lower network cost, choose the cloud provider and region that matches your Confluent Cloud cluster. In this tutorial, we will use AWS us-east-1 for both Confluent Cloud and MongoDB Atlas.
Make sure the Stream Processing Instance (SPI) has been created before continuing this tutorial.
Connection information can be found by clicking on the “Connect” button on your SPI. The connect dialog is similar to the connect dialog when connecting to an Atlas cluster. To connect to the SPI, you will need to use the **mongosh** command line tool.
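Once connected, you can verify connectivity to your Confluent Cloud topic with a short interactive stream processor. The following is a minimal sketch; the connection name is an assumption and must match the Kafka connection you registered in the SPI's connection registry:

```javascript
// Run in mongosh while connected to the Stream Processing Instance.
// "confluentCloud" is an assumed name for the registered Kafka connection;
// "solardata" is the Confluent Cloud topic created earlier.
sp.process([
  {
    $source: {
      connectionName: "confluentCloud",
      topic: "solardata"
    }
  }
])
// Incoming messages are printed to the shell until you stop the processor.
```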
> Log in today to get started. Atlas Stream Processing is now available to all developers in Atlas. Give it a try today!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcfb9c8a1f971ace1/652994177aecdf27ae595bf9/image24.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt63a22c62ae627895/652994381e33730b6478f0d1/image5.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte3f1138a6294748f/65299459382be57ed901d434/image21.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3ccf2827c99f1c83/6529951a56a56b7388898ede/image19.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltaea830d5730e5f51/652995402e91e47b2b547e12/image20.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9c425a65bb77f282/652995c0451768c2b6719c5f/image13.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2748832416fdcf8e/652996cd24aaaa5cb2e56799/image15.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9010c25a76edb010/652996f401c1899afe4a465b/image7.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt27b3762b12b6b871/652997508adde5d1c8f78a54/image3.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to configure network connectivity between Confluent Cloud and MongoDB Atlas Stream Processing.",
"contentType": "Tutorial"
} | Using the Confluent Cloud with Atlas Stream Processing | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/charts-javascript-sdk | created |
| md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Learn how to visualize your data with MongoDB Charts.",
"contentType": "Tutorial"
} | Working with MongoDB Charts and the New JavaScript SDK | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/how-send-mongodb-document-changes-slack-channel | created | # How to Send MongoDB Document Changes to a Slack Channel
In this tutorial, we will explore a seamless integration of your database with Slack using Atlas Triggers and the Slack API. Discover how to effortlessly send notifications to your desired Slack channels, effectively connecting the operations happening within your collections and relaying them in real-time updates.
The overall flow will be: an Atlas database trigger detects a change in the collection, a first function processes the change event, and a second function posts a message to a Slack channel through the Slack API.
Once the application has been set up, we are ready to start creating our first database trigger that will react every time there is an operation in a certain collection.
Once this has been completed, we are ready to start creating our first database trigger that will react every time there is an operation in a certain collection.
## Atlas trigger
For this tutorial, we will create a trigger that monitors all changes in a `test` collection for `insert`, `update`, and `delete` operations.
To create a new database trigger, you will need to:
1. Click the **Data Services** tab in the top navigation of your screen if you haven't already navigated to Atlas.
2. Click **Triggers** in the left-hand navigation.
3. On the **Overview** tab of the **Triggers** page, click **Add Trigger** to open the trigger configuration page.
4. Enter the configuration values for the trigger and click **Save** at the bottom of the page.
Please note that this trigger will make use of the *event ordering* as we want the operations to be processed according to when they were performed.
The trigger configuration values will look like this:
To create a new function using the UI, we need to:
1. Click the **Data Services** tab in the top navigation of your screen if you haven't already navigated to Atlas.
2. Click **Functions** in the left navigation menu.
3. Click **New Function** in the top right of the **Functions** page.
4. Enter a unique, identifying name for the function in the **Name** field.
5. Configure **User Authentication**. Functions in App Services always execute in the context of a specific application user or as a system user that bypasses rules. For this tutorial, we are going to use **System user**.
### "processEvent" function
The processEvent function will process the change events every time an operation we are monitoring in the given collection is performed. With it, we are going to create an object that we will then pass to the function in charge of sending the message to Slack.
The code of the function is the following:
```javascript
exports = function(changeEvent) {
const docId = changeEvent.documentKey._id;
const { updateDescription, operationType } = changeEvent;
var object = {
operationType,
docId,
};
if (updateDescription) {
const updatedFields = updateDescription.updatedFields; // A document containing updated fields
const removedFields = updateDescription.removedFields; // An array of removed fields
object = {
...object,
updatedFields,
removedFields
};
}
const result = context.functions.execute("sendToSlack", object);
return true;
};
```
In this function, we will create an object that we will then send as a parameter to another function that will be in charge of sending to our Slack channel.
Here we will use change event and its properties to capture the:
1. `_id` of the object that has been modified/inserted.
2. Operation that has been performed.
3. Fields of the object that have been modified or deleted when the operation has been an `update`.
With all this, we create an object and make use of the internal function calls to execute our `sendToSlack` function.
### "sendToSlack" function
This function will make use of the "chat.postMessage" method of the Slack API to send a message to a specific channel.
To use the Slack library, you must add it as a dependency in your Atlas function. Therefore, in the **Functions** section, we must go to the **Dependencies** tab and install `@slack/web-api`.
You will need to have a Slack token that will be used for creating the `WebClient` object as well as a Slack application. Therefore:
1. Create or use an existing Slack app: This is necessary as the subsequent token we will need will be linked to a Slack App. For this step, you can navigate to the Slack application and use your credentials to authenticate and create or use an existing app you are a member of.
2. Within this app, we will need to create a bot token that will hold the authentication API key to send messages to the corresponding channel in the Slack app created. Please note that you will need to add as many authorization scopes on your token as you need, but the bare minimum is to add the `chat:write` scope to allow your app to post messages.
A full guide on how to get these two can be found in the Slack official documentation.
First, we will perform the logic with the received object to create a message adapted to the event that occurred.
```javascript
var message = "";
if (arg.operationType == 'insert') {
message += `A new document with id \`${arg.docId}\` has been inserted`;
} else if (arg.operationType == 'update') {
message += `The document \`${arg.docId}\` has been updated.`;
if (arg.updatedFields && Object.keys(arg.updatedFields).length > 0) {
message += ` The fields ${JSON.stringify(arg.updatedFields)} have been modified.`;
}
if (arg.removedFields && arg.removedFields.length > 0) {
message += ` The fields ${JSON.stringify(arg.removedFields)} have been removed.`;
}
} else {
message += `An unexpected operation affecting document \`${arg.docId}\` occurred`;
}
```
Once we have the library, we must use it to create a `WebClient` client that we will use later to make use of the methods we need.
```javascript
const { WebClient } = require('@slack/web-api');
// Read a token from the environment variables
const token = context.values.get('SLACK_TOKEN');
// Initialize
const app = new WebClient(token);
```
Finally, we can send our message with:
```javascript
try {
// Call the chat.postMessage method using the WebClient
const result = await app.chat.postMessage({
channel: channelId,
text: `New Event: ${message}`
});
console.log(result);
}
catch (error) {
console.error(error);
}
```
The full function code is as follows:
```javascript
exports = async function(arg){
const { WebClient } = require('@slack/web-api');
// Read a token from the environment variables
const token = context.values.get('SLACK_TOKEN');
const channelId = context.values.get('CHANNEL_ID');
// Initialize
const app = new WebClient(token);
var message = "";
if (arg.operationType == 'insert') {
message += `A new document with id \`${arg.docId}\` has been inserted`;
} else if (arg.operationType == 'update') {
message += `The document \`${arg.docId}\` has been updated.`;
if (arg.updatedFields && Object.keys(arg.updatedFields).length > 0) {
message += ` The fields ${JSON.stringify(arg.updatedFields)} have been modified.`;
}
if (arg.removedFields && arg.removedFields.length > 0) {
message += ` The fields ${JSON.stringify(arg.removedFields)} have been removed.`;
}
} else {
message += `An unexpected operation affecting document \`${arg.docId}\` occurred`;
}
try {
// Call the chat.postMessage method using the WebClient
const result = await app.chat.postMessage({
channel: channelId,
text: `New Event: ${message}`
});
console.log(result);
}
catch (error) {
console.error(error);
}
};
```
Note: The bot token we use must have the minimum permissions to send messages to a certain channel. We must also have the application created in Slack added to the channel where we want to receive the messages.
If everything is properly configured, every change in the collection and monitored operations will be received in the Slack channel:
You can also configure the trigger to only detect certain changes and then adapt the change event to only receive certain fields with a "$project".
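For example, a project expression like the one below (entered in the trigger's advanced settings) would deliver only the parts of the change event that the `processEvent` function actually reads, while a match expression such as `{ "updateDescription.updatedFields.status": { "$exists": true } }` (where `status` is a hypothetical field) would restrict the trigger to specific updates:

```json
{
  "documentKey": 1,
  "operationType": 1,
  "updateDescription": 1
}
```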
## Conclusion
In this tutorial, we've learned how to seamlessly integrate your database with Slack using Atlas Triggers and the Slack API. This integration allows you to send real-time notifications to your Slack channels, keeping your team informed about important operations within your database collections.
We started by creating a new application in Atlas and then set up a database trigger that reacts to specific collection operations. We explored the `processEvent` function, which processes change events and prepares the data for Slack notifications. Through a step-by-step process, we demonstrated how to create a message and use the Slack API to post it to a specific channel.
Now that you've grasped the basics, it's time to take your integration skills to the next level. Here are some steps you can follow:
- **Explore advanced use cases**: Consider how you can adapt the principles you've learned to more complex scenarios within your organization. Whether it's custom notifications or handling specific database events, there are countless possibilities.
- **Dive into the Slack API documentation**: For a deeper understanding of what's possible with Slack's API, explore their official documentation. This will help you harness the full potential of Slack's features.
By taking these steps, you'll be well on your way to creating powerful, customized integrations that can streamline your workflow and keep your team in the loop with real-time updates. Good luck with your integration journey!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8fcfb82094f04d75/653816cde299fbd2960a4695/image2.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc7874f54dc0cd8be/653816e70d850608a2f05bb9/image3.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt99aaf337d37c41ae/653816fd2c35813636b3a54d/image1.png | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Learn how to use triggers in MongoDB Atlas to send information about changes to a document to Slack.",
"contentType": "Tutorial"
} | How to Send MongoDB Document Changes to a Slack Channel | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/doc-modeling-vector-search | created | # How to Model Your Documents for Vector Search
Atlas Vector Search was recently released, so let’s dive into a tutorial on how to properly model your documents when utilizing vector search to revolutionize your querying capabilities!
## Data modeling normally in MongoDB
Vector search is new, so let's first go over the basic ways of modeling your data in a MongoDB document before moving on to how to incorporate vector embeddings.
Data modeling in MongoDB revolves around organizing your data into documents within various collections. Different projects and organizations will require different ways of structuring their data models, because successful data modeling depends on the specific requirements of each application; for the most part, no single document design can be applied to every situation. There are some commonalities, though, that can guide the user. These are:
1. Choosing whether to embed or reference your related data.
2. Using arrays in a document.
3. Indexing your documents (finding fields that are frequently used and applying the appropriate indexing, etc.).
For a more in-depth explanation and a comprehensive guide of data modeling with MongoDB, please check out our data modeling article.
## Setting up an example data model
We are going to be building our vector embedding example using a MongoDB document for our MongoDB TV series. Here, we have a single MongoDB document representing our MongoDB TV show, without any embeddings in place. We have a nested array featuring our array of seasons, and within that, our array of different episodes. This way, in our document, we are capable of seeing exactly which season each episode is a part of, along with the episode number, the title, the description, and the date:
```
{
"_id": ObjectId("238478293"),
"title": "MongoDB TV",
"description": "All your MongoDB updates, news, videos, and podcast episodes, straight to you!",
"genre": "Programming", "Database", "MongoDB"],
"seasons": [
{
"seasonNumber": 1,
"episodes": [
{
"episodeNumber": 1,
"title": "EASY: Build Generative AI Applications",
"description": "Join Jesse Hall….",
"date": ISODate("Oct52023")
},
{
"episodeNumber": 2,
"title": "RAG Architecture & MongoDB: The Future of Generative AI Apps",
"description": "Join Prakul Agarwal…",
"date": ISODate("Oct42023")
}
]
},
{
"seasonNumber": 2,
"episodes": [
{
"episodeNumber": 1,
"title": "Cloud Connect - Harness the Power of AI/ML and Generative AI on AWS with MongoDB Atlas",
"description": "Join Igor Alekseev….",
"date": ISODate("Oct32023")
},
{
"episodeNumber": 2,
"title": "The Index: Here’s what you missed last week…",
"description": "Join Megan Grant…",
"date": ISODate("Oct22023")
}
]
}
]
}
```
Now that we have our example set up, let’s incorporate vector embeddings and discuss the proper techniques to set you up for success.
## Integrating vector embeddings for vector search in our data model
Let’s first understand exactly what vector search is: Vector search is the way to search based on *meaning* rather than specific words. This comes in handy when querying using similarities rather than searching based on keywords. When using vector search, you can query using a question or a phrase rather than just a word. In a nutshell, vector search is great for when you can’t think of *exactly* that book or movie, but you remember the plot or the climax.
This process happens when text, video, or audio is transformed via an encoder into vectors. With MongoDB, we can do this using OpenAI, Hugging Face, or other natural language processing models. Once we have our vectors, we can store them at the base of our document and conduct vector search using them. Please keep in mind the current limitations of vector search and how to properly embed your vectors.
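As a minimal sketch of that step, the snippet below generates an embedding with OpenAI's Node.js SDK and stores it at the base of a document. The model, database, collection, and field names are assumptions for illustration:

```javascript
import OpenAI from "openai";
import { MongoClient } from "mongodb";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const client = new MongoClient(process.env.MONGODB_URI);

async function embedDescription(docId, text) {
  // Ask the embedding model to encode the text into a vector
  const response = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: text
  });
  const embedding = response.data[0].embedding; // an array of numbers

  // Store the vector at the base of the document, not nested in an array
  await client.db("media").collection("shows").updateOne(
    { _id: docId },
    { $set: { vectorEmbeddings: embedding } }
  );
}
```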
You can store your vector embeddings alongside other data in your document, or you can store them in a new collection. It is really up to the user and the project goals. Let’s go over what a document with vector embeddings can look like when you incorporate them into your data model, using the same example from above:
```
{
"_id": ObjectId("238478293"),
"title": "MongoDB TV",
"description": "All your MongoDB updates, news, videos, and podcast episodes, straight to you!",
"genre": "Programming", "Database", "MongoDB"],
"vectorEmbeddings": [ 0.25, 0.5, 0.75, 0.1, 0.1, 0.8, 0.2, 0.6, 0.6, 0.4, 0.9, 0.3, 0.2, 0.7, 0.5, 0.8, 0.1, 0.8, 0.2, 0.6 ],
"seasons": [
{
"seasonNumber": 1,
"episodes": [
{
"episodeNumber": 1,
"title": "EASY: Build Generative AI Applications",
"description": "Join Jesse Hall….",
"date": ISODate("Oct 5, 2023")
},
{
"episodeNumber": 2,
"title": "RAG Architecture & MongoDB: The Future of Generative AI Apps",
"description": "Join Prakul Agarwal…",
"date": ISODate("Oct 4, 2023")
}
]
},
{
"seasonNumber": 2,
"episodes": [
{
"episodeNumber": 1,
"title": "Cloud Connect - Harness the Power of AI/ML and Generative AI on AWS with MongoDB Atlas",
"description": "Join Igor Alekseev….",
"date": ISODate("Oct 3, 2023")
},
{
"episodeNumber": 2,
"title": "The Index: Here’s what you missed last week…",
"description": "Join Megan Grant…",
"date": ISODate("Oct 2, 2023")
}
]
}
]
}
```
Here, you have your vector embeddings classified at the base in your document. Currently, there is a limitation where vector embeddings cannot be nested in an array in your document. Please ensure your document has your embeddings at the base. There are various tutorials on our Developer Center, alongside our YouTube account and our documentation, that can help you figure out how to embed these vectors into your document and how to acquire the necessary vectors in the first place.
## Extras: Indexing with vector search
When you’re using vector search, it is necessary to create a search index so you’re able to be successful with your semantic search. To do this, please view our Vector Search documentation. Here is the skeleton code provided by our documentation:
```
{
"fields":
{
"type": "vector",
"path": "",
"numDimensions": ,
"similarity": "euclidean | cosine | dotProduct"
},
{
"type": "filter",
"path": ""
},
...
]
}
```
When setting up your search index, you want to change the “” to be your vector path. In our case, it would be “vectorEmbeddings”. “type” can stay the way it is. For “numDimensions”, please match the dimensions of the model you’ve chosen. This is just the number of vector dimensions, and the value cannot be greater than 4096. This limitation comes from the base embedding model that is being used, so please ensure you’re using a supported LLM (large language model) such as OpenAI or Hugging Face. When using one of these, there won’t be any issues running into vector dimensions. For “similarity”, please pick which vector function you want to use to search for the top K-nearest neighbors.
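Applied to the example document above, a filled-in definition could look like the following, assuming OpenAI's text-embedding-ada-002 model (which produces 1,536-dimensional vectors) and cosine similarity:

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "vectorEmbeddings",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```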
## Extras: Querying with vector search
When you’re ready to query and find results from your embedded documents, it’s time to create an aggregation pipeline on your embedded vector data. To do this, you can use the“$vectorSearch” operator, which is a new aggregation stage in Atlas. It helps execute an Approximate Nearest Neighbor query.
For more information on this step, please check out the tutorial on Developer Center about building generative AI applications, and our YouTube video on vector search.
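As a rough sketch of such a pipeline against the example document, assuming an index named "vector_index" on the `vectorEmbeddings` field and a query vector produced by the same embedding model:

```javascript
// queryVector would be the embedding of the user's search phrase.
db.shows.aggregate([
  {
    $vectorSearch: {
      index: "vector_index",
      path: "vectorEmbeddings",
      queryVector: [0.24, 0.55 /* ...the rest of the query embedding... */],
      numCandidates: 100,   // number of nearest neighbors to consider
      limit: 5              // number of results to return
    }
  },
  { $project: { _id: 0, title: 1, score: { $meta: "vectorSearchScore" } } }
])
```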
| md | {
"tags": [
"MongoDB",
"AI"
],
"pageDescription": "Follow along with this comprehensive tutorial on how to properly model your documents for MongoDB Vector Search.",
"contentType": "Tutorial"
} | How to Model Your Documents for Vector Search | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/code-examples/python/dog-care-example-app | created | # Example Application for Dog Care Providers (DCP)
## Creator
Radvile Razmute contributed this project.
## About the project
My project explores how to use MongoDB Shell, MongoDB Atlas, and MongoDB Compass. This project aimed to develop a database for dog care providers and demonstrate how this data can be manipulated in MongoDB. The Dog Welfare Federation (DWF) is concerned that some providers who provide short/medium term care for dogs when the owner is unable to (e.g., when away on holidays) may not be delivering the service they promise. Up to now, the DWF has managed the data using a SQL database. As the scale of its operations expanded, the organization needed to invest in a cloud database application. As an alternative to the relational SQL database, the Dog Welfare Federation decided to look at database development using MongoDB services.
The Dog database uses fictitious data that I have created myself. The different practical stages of the project have been documented in my project report and may guide the beginners taking their first steps into MongoDB.
## Inspiration
The assignment was given to me by my lecturer. When he was deciding on the topics for the project, he knew that I love dogs, and that's why my project was all about dogs. Even though the lecturer gave me the assignment, it was my idea to prepare this project in a way that doesn't only benefit me.
When I followed courses via MongoDB University, I noticed that these courses gave me a flavor of MongoDB, but not the basic concepts. I wanted to turn a database development project into a kind of guide for somebody who has never used MongoDB and who can take the project and say: "Okay, these are the basic concepts, this is what happens when you run the query, this is the result you get, and this is how you can validate that your result and your query are correct." That's how the whole MongoDB project for beginners was born.
My guide tells you how to use MongoDB, what steps you need to follow to create an application, upload data, use the data, and so on. It's one thing to know what those operators are doing, but it's an entirely different thing to understand how they connect and what impact they make.
## Why MongoDB?
My lecturer Noel Tierney, a lecturer in Computer Applications at Athlone Institute of Technology, Ireland, gave me the assignment to use MongoDB. He gave me instructions on the project and what kind of outcome he would like to see. I was asked to use MongoDB, and I decided to dive deeper into everything the platform offers. Besides that, as I mentioned briefly in the introduction, the organization DWF was planning on scaling and expanding its business, and it wanted to look into database development with MongoDB. This was a good chance for me to learn everything about NoSQL.
## How it works
The project teaches you how to set up a MongoDB database for dog care providers. It includes three main sections: MongoDB Shell, MongoDB Atlas, and MongoDB Compass. The MongoDB Shell section demonstrates how the data can be manipulated using simple queries and the aggregation method. I discuss how to import data into a local cluster, create queries, and retrieve and update data. The other two areas include an overview of MongoDB Atlas and MongoDB Compass; I also discuss querying and the aggregation framework per topic. Each section provides step-by-step instructions on how to set up the application, along with some data manipulation examples. As mentioned above, I created all the sample data myself, which was a ton of work! I made a spreadsheet with 2,000 different lines of sample data. To do that, I had to Google dog breeds, dog names, and their temperaments. I wanted it to be close to reality.
## Challenges and learning
When I started working with MongoDB, the first big thing that I had to get over was the braces everywhere, so it was quite challenging for me to understand where a query finishes. But I've been reading a lot of documentation, and creating this guide gave me quite a good understanding of the basics of MongoDB. I learned a lot about the technical side of databases because I was never familiar with them; I had no idea how they work. Learning about and using MongoDB was a great experience. When I had everything set up (the MongoDB Shell, Compass, and Atlas), I could see how information moves between all these different environments, and that was awesome. I think it worked quite well. I hope that my guide will be valuable for new learners. It demonstrates that users like me, who had no prior skills in using MongoDB, can quickly become MongoDB developers.
Access the complete report, which includes the queries you need - here.
| md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": " Learn MongoDB by creating a database for dog care providers!",
"contentType": "Code Example"
} | Example Application for Dog Care Providers (DCP) | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/leafsteroidsresources | created | # Leafsteroid Resources
Leafsteroids is a MongoDB Demo showing the following services and integrations
------------------------------------------------------------------------
**Atlas App Services**
All in one backend. Atlas App Services offers a full-blown REST service using Atlas Functions and HTTPS endpoints.
**Atlas Search**
Used to find the player nickname in the Web UI.
**Atlas Charts**
Event & personalized player dashboards accessible over the web. Built-in visualization right with your data. No additional tools required.
**Document Model**
Every game run is a single document demonstrating rich documents and “data that works together lives together”, while other data entities are simple collections (configuration).
**AWS Beanstalk**
Hosts the Blazor Server Application (website).
**AWS EC2**
Used internally by AWS Beanstalk. Used to host our Python game server.
**AWS S3**
Used internally by AWS Beanstalk.
**AWS Private Cloud**
Private VPN connection between AWS and MongoDB.
**At a MongoDB .local Event and want to register to play Leafsteroids? Register Here**
You can build & play Leafsteroids yourself with the following links
## Development Resources
|Resource| Link|
|---|---|
|GitHub Repo |Here|
|MongoDB TV Livestream |Here|
|MongoDB & AWS |Here|
|MongoDB on the AWS Marketplace |Here|
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Leafsteroid Resources | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/create-first-stream-processor | created | # Get Started with Atlas Stream Processing: Creating Your First Stream Processor
>Atlas Stream Processing is now available. Learn more about it here.
If you're not already familiar, Atlas Stream Processing enables processing high-velocity streams of complex data using the same data model and Query API that's used in MongoDB Atlas databases. Streaming data is increasingly critical to building responsive, event-driven experiences for your customers. Stream processing is a fundamental building block powering these applications, by helping to tame the firehose of data coming from many sources, by finding important events in a stream, or by combining data in motion with data at rest.
In this tutorial, we will create a stream processor that uses sample data included in Atlas Stream Processing. By the end of the tutorial, you will have an operational Stream Processing Instance (SPI) configured with a stream processor. This environment can be used for further experimentation and Atlas Stream Processing tutorials in the future.
### Tutorial Prerequisites
This is what you'll need to follow along:
* An Atlas user with atlasAdmin permission. For the purposes of this tutorial, we'll have the user "tutorialuser".
* MongoDB shell (Mongosh) version 2.0+
## Create the Stream Processing Instance
Let's first create a Stream Processing Instance (SPI). Think of an SPI as a logical grouping of one or more stream processors. When created, the SPI has a connection string similar to a typical MongoDB Atlas cluster.
Under the Services tab in the Atlas Project, click "Stream Processing". Then click the "Create Instance" button.
This will launch the Create Instance dialog.
Enter your desired cloud provider and region, and then click "Create". You will receive a confirmation dialog upon successful creation.
## Configure the connection registry
The connection registry stores connection information to the external data sources you wish to use within a stream processor. In this example, we will use a sample data generator that is available without any extra configuration, but typically you would connect to either Kafka or an Atlas database as a source.
To manage the connection registry, click on "Configure" to navigate to the configuration screen.
Once on the configuration screen, click on the "Connection Registry" tab.
Next, click on the "Add Connection" button. This will launch the Add Connection dialog.
From here, you can add connections to Kafka, other Atlas clusters within the project, or a sample stream. In this tutorial, we will use the Sample Stream connection. Click on "Sample Stream" and select "sample_stream_solar" from the list of available sample streams. Then, click "Add Connection".
The new "sample_stream_solar" will show up in the list of connections.
## Connect to the Stream Processing Instance (SPI)
Now that we have both created the SPI and configured the connection in the connection registry, we can create a stream processor. First, we need to connect to the SPI that we created previously. This can be done using the MongoDB Shell (mongosh).
To obtain the connection string to the SPI, return to the main Stream Processing page by clicking on the "Stream Processing" menu under the Services tab.
Next, locate the "Tutorial" SPI we just created and click on the "Connect" button. This will present a connection dialog similar to what is found when connecting to MongoDB Atlas clusters.
For connecting, we'll need to add a connection IP address and create a database user, if we haven't already.
Then we'll choose our connection method. If you do not already have mongosh installed, install it using the instructions provided in the dialog.
Once mongosh is installed, copy the connection string from the "I have the MongoDB Shell installed" view and run it in your terminal.
```
Command Terminal > mongosh <> --tls --authenticationDatabase admin --username tutorialuser
Enter password: *******************
Current Mongosh Log ID: 64e9e3bf025581952de31587
Connecting to: mongodb://*****
Using MongoDB: 6.2.0
Using Mongosh: 2.0.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
AtlasStreamProcessing>
```
To confirm your sample_stream_solar is added as a connection, issue `sp.listConnections()`. Our connection to sample_stream_solar is shown as expected.
```
AtlasStreamProcessing> sp.listConnections()
{
ok: 1,
connections: [
{
name: 'sample_stream_solar',
type: 'inmemory',
createdAt: ISODate("2023-08-26T18:42:48.357Z")
}
]
}
```
## Create a stream processor
If you are reading through this post as a prerequisite to another tutorial, you can return to that tutorial now to continue.
In this section, we will wrap up by creating a simple stream processor to process the sample_stream_solar source that we have used throughout this tutorial. This sample_stream_solar source represents the observed energy production of different devices (unique solar panels). Stream processing could be helpful in measuring characteristics such as panel efficiency or when replacement is required for a device that is no longer producing energy at all.
First, let's define a $source stage to describe where Atlas Stream Processing will read the stream data from.
```
var solarstream={$source:{"connectionName": "sample_stream_solar"}}
```
Now we will issue .process to view the contents of the stream in the console.
`sp.process([solarstream])`
.process lets us sample our source data and quickly test the stages of a stream processor to ensure that it is set up as intended. A sample of this data is as follows:
```
{
device_id: 'device_2',
group_id: 3,
timestamp: '2023-08-27T13:51:53.375+00:00',
max_watts: 250,
event_type: 0,
obs: {
watts: 168,
temp: 15
},
_ts: ISODate("2023-08-27T13:51:53.375Z"),
_stream_meta: {
sourceType: 'sampleData',
timestamp: ISODate("2023-08-27T13:51:53.375Z")
}
}
```
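`.process` runs the pipeline in the foreground for testing only. If you later want a pipeline to run continuously, you can persist it as a named stream processor instead. Here is a minimal sketch: the processor name, database, and collection names are placeholders, and "mycluster" assumes you have added an Atlas cluster connection to your connection registry to act as a sink.
```
// Persist the pipeline as a named stream processor and run it in the background.
// A long-running processor normally ends with a sink stage such as $merge.
var sink = { $merge: { into: { connectionName: "mycluster", db: "solar", coll: "readings" } } }
sp.createStreamProcessor("solarDemo", [solarstream, sink])
sp.solarDemo.start()   // begin processing continuously
sp.solarDemo.stats()   // check on its progress
sp.solarDemo.stop()    // stop it when you are done
```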
## Wrapping up
In this tutorial, we started by introducing Atlas Stream Processing and why stream processing is a building block for powering modern applications. We then walked through the basics of creating a stream processor – we created a Stream Processing Instance, configured a source in our connection registry using sample solar data (included in Atlas Stream Processing), connected to a Stream Processing Instance, and finally tested our first stream processor using .process. You are now ready to explore Atlas Stream Processing and create your own stream processors, adding advanced functionality like windowing and validation.
If you enjoyed this tutorial and would like to learn more, check out the MongoDB Atlas Stream Processing announcement blog post. For more on stream processors in Atlas Stream Processing, visit our documentation.
### Learn more about MongoDB Atlas Stream Processing
For more on managing stream processors in Atlas Stream Processing, visit our documentation.
>Log in today to get started. Atlas Stream Processing is now available to all developers in Atlas. Give it a try today! | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to create a stream processor end-to-end using MongoDB Atlas Stream Processing.",
"contentType": "Tutorial"
} | Get Started with Atlas Stream Processing: Creating Your First Stream Processor | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/instant-graphql-apis-mongodb-grafbase | created | # Instant GraphQL APIs for MongoDB with Grafbase
# Instant GraphQL APIs for MongoDB with Grafbase
In the ever-evolving landscape of web development, efficient data management and retrieval are paramount for creating dynamic and responsive applications. MongoDB, a versatile NoSQL database, and GraphQL, a powerful query language for APIs, have emerged as a dynamic duo that empowers developers to build robust, flexible, and high-performance applications.
When combined, MongoDB and GraphQL offer a powerful solution for front-end developers, especially when used at the edge.
You may be curious about the synergy between an unstructured database and a structured query language. Fortunately, Grafbase offers a solution that seamlessly combines both by leveraging its distinctive connector schema transformations.
## Prerequisites
In this tutorial, you’ll see how easy it is to get set up with MongoDB and Grafbase, simplifying the introduction of GraphQL into your applications.
You will need the following to get started:
- An account with Grafbase
- An account with MongoDB Atlas
- A database with data API access enabled
## Enable data API access
You will need a database with MongoDB Atlas to follow along — create one now!
For the purposes of this tutorial, I’ve created a free shared cluster with a single database deployment. We’ll refer to this instance as your “Data Source” later.
Once your Grafbase project is set up, the MongoDB connector is registered with your schema through the `g.datasource(mongodb)` call.
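If you are wiring this up by hand, the connector configuration in `grafbase/grafbase.config.ts` looks roughly like the sketch below. The option names follow the Grafbase MongoDB connector documentation, and the environment variable names are placeholders for your Data API URL, API key, data source (cluster) name, and database name:
```ts
import { config, connector, g } from '@grafbase/sdk'

// Rough sketch: the env var names are placeholders you define yourself.
const mongodb = connector.MongoDB('MongoDB', {
  url: g.env('MONGODB_API_URL'),
  apiKey: g.env('MONGODB_API_KEY'),
  dataSource: g.env('MONGODB_DATASOURCE'),
  database: g.env('MONGODB_DATABASE')
})

g.datasource(mongodb)

export default config({ schema: g })
```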
## Create models for data
The MongoDB connector empowers developers to organize their MongoDB collections in a manner that allows Grafbase to autonomously generate the essential queries and mutations for document creation, retrieval, update, and deletion within these collections.
Within Grafbase, each configuration for a collection is referred to as a "model," and you have the flexibility to employ the supported GraphQL Scalars to represent data within the collection(s).
It's important to consider that in cases where you possess pre-existing documents in your collection, not all fields are applicable to every document.
Let’s work under the assumption that you have no existing documents and want to create a new collection for `users`. Using the Grafbase TypeScript SDK, we can write the schema for each user model. It looks something like this:
```ts
const address = g.type('Address', {
street: g.string().mapped('street_name')
})
mongodb
  .model('User', {
    name: g.string(),
    email: g.string().optional(),
    age: g.int().optional(),
    address: g.ref(address)
  })
  .collection('users')
```
This schema will generate a fully working GraphQL API with queries and mutations as well as all input types for pagination, ordering, and filtering:
- `userCreate` – Create a new user
- `userCreateMany` – Batch create new users
- `userUpdate` – Update an existing user
- `userUpdateMany` – Batch update users
- `userDelete` – Delete a user
- `userDeleteMany` – Batch delete users
- `user` – Fetch a single user record
- `userCollection` – Fetch multiple users from a collection
MongoDB automatically generates collections when you first store data, so there’s no need to manually create a collection for users at this step.
We’re now ready to start the Grafbase development server using the CLI:
```bash
npx grafbase dev
```
This command runs the entire Grafbase GraphQL API locally that you can use when developing your front end. The Grafbase API communicates directly with your Atlas Data API.
Once the command is running, you’ll be able to visit http://127.0.0.1:4000 and explore the GraphQL API.
## Insert users with GraphQL to MongoDB instance
Let’s test out creating users inside our MongoDB collection using the generated `userCreate` mutation that was provided to us by Grafbase.
Using Pathfinder at http://127.0.0.1:4000, execute the following mutation:
```
mutation {
mongo {
userCreate(input: {
name: "Jamie Barton",
email: "jamie@grafbase.com",
age: 40
}) {
insertedId
}
}
}
```
If everything is hooked up correctly, you should see a response that looks something like this:
```json
{
"data": {
"mongo": {
"userCreate": {
"insertedId": "65154a3d4ddec953105be188"
}
}
}
}
```
You should repeat this step a few times to create multiple users.
## Update user by ID
Now we’ve created some users in our MongoDB collection, let’s try updating a user by `insertedId`:
```
mutation {
mongo {
userUpdate(by: {
id: "65154a3d4ddec953105be188"
}, input: {
age: {
set: 35
}
}) {
modifiedCount
}
}
}
```
Using the `userUpdate` mutation above, we `set` a new `age` value for the user where the `id` matches that of the ObjectID we passed in.
If everything was successful, you should see something like this:
```json
{
"data": {
"mongo": {
"userUpdate": {
"modifiedCount": 1
}
}
}
}
```
## Delete user by ID
Deleting users is similar to the create and update mutations above, but we don’t need to provide any additional `input` data since we’re deleting only:
```
mutation {
mongo {
userDelete(by: {
id: "65154a3d4ddec953105be188"
}) {
deletedCount
}
}
}
```
If everything was successful, you should see something like this:
```json
{
"data": {
"mongo": {
"userDelete": {
"deletedCount": 1
}
}
}
}
```
## Fetch all users
Grafbase generates the query `userCollection` that you can use to fetch all users. Grafbase requires a `first` or `last` pagination value with a max value of `100`:
```
query {
mongo {
userCollection(first: 100) {
edges {
node {
id
name
email
age
}
}
}
}
}
```
Here we are fetching the `first` 100 users from the collection. You can also pass a filter and order argument to tune the results:
```
query {
mongo {
userCollection(first: 100, filter: {
age: {
gt: 30
}
}, orderBy: {
age: ASC
}) {
edges {
node {
id
name
email
age
}
}
}
}
}
```
## Fetch user by ID
Using the same GraphQL API, we can fetch a user by the object ID. Grafbase automatically generates the query `user` where we can pass the `id` to the `by` input type:
```
query {
mongo {
user(
by: {
id: "64ee1cfbb315482287acea78"
}
) {
id
name
email
age
}
}
}
```
## Enable faster responses with GraphQL Edge Caching
Every request we make so far to our GraphQL API makes a round trip to the MongoDB database. This is fine, but we can improve response times even further by enabling GraphQL Edge Caching for GraphQL queries.
To enable GraphQL Edge Caching, inside `grafbase/grafbase.config.ts`, add the following to the `config` export:
```ts
export default config({
schema: g,
cache: {
rules: [
{
types: 'Query',
maxAge: 60
}
]
}
})
```
This configuration will cache any query. If you only want to disable caching on some collections, you can do that too. Learn more about GraphQL Edge Caching.
## Deploy to the edge
So far, we’ve been working with Grafbase locally using the CLI, but now it’s time to deploy this around the world to the edge with GitHub.
If you already have an existing GitHub repository, go ahead and commit the changes we’ve made so far. If you don’t already have a GitHub repository, you will need to create one, commit this code, and push it to GitHub.
Now, create a new project with Grafbase and connect your GitHub account. You’ll need to permit Grafbase to read your repository contents, so make sure you select the correct repository and allow that.
Before you click **Deploy**, make sure to insert the environment variables obtained previously in the tutorial. Grafbase also supports environment variables for preview environments, so if you want to use a different MongoDB database for any Grafbase preview deployment, you can configure that later.
Once deployed, you can query your new GraphQL API from any GraphQL client, such as Apollo Client, URQL, and Houdini.
If you have questions or comments, continue the conversation over in the MongoDB Developer Community.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt86a1fb09aa5e51ae/65282bf00749064f73257e71/image6.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt67f4040e41799bbc/65282c10814c6c262bc93103/image1.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt75ca38cd9261e241/65282c30ff3bbd5d44ad0aa3/image4.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltaf2a2af39e731dbe/65282c54391807638d3b0e1d/image5.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0c9563b3fdbf34fd/65282c794824f57358f273cf/image3.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt731c99011d158491/65282ca631f9bbb92a9669ad/image2.png | md | {
"tags": [
"Atlas",
"TypeScript",
"GraphQL"
],
"pageDescription": "Learn how to quickly and easily create a GraphQL API from your MongoDB data with Grafbase.",
"contentType": "Tutorial"
} | Instant GraphQL APIs for MongoDB with Grafbase | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/exploring-window-operators-atlas-stream-processing | created | # Exploring Window Operators in Atlas Stream Processing
> Atlas Stream Processing is now available. Learn more about it here.
In our previous post on windowing, we introduced window operators available in Atlas Stream Processing. Window operators are one of the most commonly used operations to effectively process streaming data. Atlas Stream Processing provides two window operators: $tumblingWindow and $hoppingWindow. In this tutorial, we will explore both of these operators using the sample solar data generator provided within Atlas Stream Processing.
## Getting started
Before we begin creating stream processors, make sure you have a database user who has “atlasAdmin” access to the Atlas Project. Also, if you do not already have a Stream Processing Instance created with a connection to the sample_stream_solar data generator, please follow the instructions in Get Started with Atlas Stream Processing: Creating Your First Stream Processor and then continue on.
## View the solar stream sample data
For this tutorial, we will be using the MongoDB shell.
First, confirm sample_stream_solar is added as a connection by issuing `sp.listConnections()`.
```
AtlasStreamProcessing> sp.listConnections()
{
ok: 1,
connections: [
{
name: 'sample_stream_solar',
type: 'inmemory',
createdAt: ISODate("2023-08-26T18:42:48.357Z")
}
]
}
```
Next, let’s define a **$source** stage to describe where Atlas Stream Processing will read the stream data from.
```
var solarstream={ $source: { "connectionName": "sample_stream_solar" } }
```
Then, issue a **.process** command to view the contents of the stream on the console.
```
sp.process([solarstream])
```
You will see the stream of solar data printed on the console. A sample of this data is as follows:
```json
{
device_id: 'device_2',
group_id: 3,
timestamp: '2023-08-27T13:51:53.375+00:00',
max_watts: 250,
event_type: 0,
obs: {
watts: 168,
temp: 15
},
_ts: ISODate("2023-08-27T13:51:53.375Z"),
_stream_meta: {
sourceType: 'sampleData',
timestamp: ISODate("2023-08-27T13:51:53.375Z")
}
}
```
## Create a tumbling window query
A tumbling window is a fixed-size window that moves forward in time at regular intervals. In Atlas Stream Processing, you use the $tumblingWindow operator. In this example, let’s use the operator to compute the average watts over one-minute intervals.
Refer back to the schema from the sample stream solar data. To create a tumbling window, let’s create a variable and define our tumbling window stage.
```javascript
var Twindow= {
$tumblingWindow: {
interval: { size: NumberInt(1), unit: "minute" },
pipeline: [
{
$group: {
_id: "$device_id",
max: { $max: "$obs.watts" },
avg: { $avg: "$obs.watts" }
}
}
]
}
}
```
We are calculating the maximum value and average over the span of one-minute, non-overlapping intervals. Let’s use the `.process` command to run the streaming query in the foreground and view our results in the console.
```
sp.process([solarstream,Twindow])
```
Here is an example output of the statement:
```json
{
_id: 'device_4',
max: 236,
avg: 95,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T13:59:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T14:00:00.000Z")
}
}
{
_id: 'device_2',
max: 211,
avg: 117.25,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T13:59:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T14:00:00.000Z")
}
}
```
## Exploring the window operator pipeline
The pipeline that is used within a window function can include blocking stages and non-blocking stages.
Accumulator operators such as `$avg`, `$count`, `$sort`, and `$limit` can be used within blocking stages. Meaningful data returned from these operators is obtained when run over a series of data points versus a single data point. This is why they are considered blocking.
Non-blocking stages do not require multiple data points to be meaningful, and they include operators such as `$addFields`, `$match`, `$project`, `$set`, `$unset`, and `$unwind`, to name a few. You can use non-blocking before, after, or within the blocking stages. To illustrate this, let’s create a query that shows the average, maximum, and delta (the difference between the maximum and average). We will use a non-blocking **$match** to show only the results from device_1, calculate the tumblingWindow showing maximum and average, and then include another non-blocking `$addFields`.
```
var m= { '$match': { device_id: 'device_1' } }
```
```javascript
var Twindow= {
'$tumblingWindow': {
interval: { size: Int32(1), unit: 'minute' },
pipeline: [
{
'$group': {
_id: '$device_id',
max: { '$max': '$obs.watts' },
avg: { '$avg': '$obs.watts' }
}
}
]
}
}
var delta = { '$addFields': { delta: { '$subtract': ['$max', '$avg'] } } }
```
Now we can use the .process command to run the stream processor in the foreground and view our results in the console.
```
sp.process([solarstream,m,Twindow,delta])
```
The results of this query will be similar to the following:
```json
{
_id: 'device_1',
max: 238,
avg: 75.3,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:11:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:12:00.000Z")
},
delta: 162.7
}
{
_id: 'device_1',
max: 220,
avg: 125.08333333333333,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:12:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:13:00.000Z")
},
delta: 94.91666666666667
}
{
_id: 'device_1',
max: 238,
avg: 119.91666666666667,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:13:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:14:00.000Z")
},
delta: 118.08333333333333
}
```
Notice the time segments and how they align on the minute.
![Time segments aligned on the minute][1]
Additionally, notice that the output includes the difference between the calculated values of maximum and average for each window.
## Create a hopping window
A hopping window, sometimes referred to as a sliding window, is a fixed-size window that moves forward in time at overlapping intervals. In Atlas Stream Processing, you use the `$hoppingWindow` operator. In this example, let’s use the operator to compute the maximum and average watts over overlapping one-minute windows.
```javascript
var Hwindow = {
'$hoppingWindow': {
interval: { size: 1, unit: 'minute' },
hopSize: { size: 30, unit: 'second' },
pipeline: [
{
'$group': {
_id: '$device_id',
max: { '$max': '$obs.watts' },
avg: { '$avg': '$obs.watts' }
}
}
]
}
}
```
To help illustrate the start and end time segments, let's create a filter to only return device_1.
```
var m = { '$match': { device_id: 'device_1' } }
```
Now let’s issue the `.process` command to view the results in the console.
```
sp.process([solarstream,m,Hwindow])
```
An example result is as follows:
```json
{
_id: 'device_1',
max: 238,
avg: 76.625,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:37:30.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:38:30.000Z")
}
}
{
_id: 'device_1',
max: 238,
avg: 82.71428571428571,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:38:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:39:00.000Z")
}
}
{
_id: 'device_1',
max: 220,
avg: 105.54545454545455,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:38:30.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:39:30.000Z")
}
}
```
Notice the time segments.
![Overlapping time segments][2]
The time segments are overlapping by 30 seconds as was defined by the hopSize option. Hopping windows are useful to capture short-term patterns in data.
## Summary
By continuously processing data within time windows, you can generate real-time insights and metrics, which can be crucial for applications like monitoring, fraud detection, and operational analytics. Atlas Stream Processing provides both tumbling and hopping window operators. Together these operators enable you to perform various aggregation operations such as sum, average, min, and max over a specific window of data. In this tutorial, you learned how to use both of these operators with solar sample data.
### Learn more about MongoDB Atlas Stream Processing
Check out the MongoDB Atlas Stream Processing announcement blog post. For more on window operators in Atlas Stream Processing, learn more in our documentation.
>Log in today to get started. Atlas Stream Processing is available to all developers in Atlas. Give it a try today!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt73ff54f0367cad3b/650da3ef69060a5678fc1242/image1.jpg
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt833bc1a824472d14/650da41aa5f15dea3afc5b55/image3.jpg | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to use the various window operators such as tumbling window and hopping window with MongoDB Atlas Stream Processing.",
"contentType": "Tutorial"
} | Exploring Window Operators in Atlas Stream Processing | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-quickstart-fastapi | created | # Getting Started with MongoDB and FastAPI
FastAPI is a modern, high-performance, easy-to-learn, fast-to-code, production-ready, Python 3.6+ framework for building APIs based on standard Python type hints. While it might not be as established as some other Python frameworks such as Django, it is already in production at companies such as Uber, Netflix, and Microsoft.
FastAPI is async, and as its name implies, it is super fast; so, MongoDB is the perfect accompaniment. In this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your FastAPI projects.
## Prerequisites
- Python 3.9.0
- A MongoDB Atlas cluster. Follow the "Get Started with Atlas" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.
## Running the Example
To begin, you should clone the example code from GitHub.
``` shell
git clone git@github.com:mongodb-developer/mongodb-with-fastapi.git
```
You will need to install a few dependencies: FastAPI, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.
``` shell
cd mongodb-with-fastapi
pip install -r requirements.txt
```
It may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.
Once you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.
``` shell
export MONGODB_URL="mongodb+srv://<username>:<password>@<cluster-url>/<database>?retryWrites=true&w=majority"
```
Remember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.
The final step is to start your FastAPI server.
``` shell
uvicorn app:app --reload
```
Once the application has started, you can view it in your browser at http://localhost:8000/docs.
Once you have had a chance to try the example, come back and we will walk through the code.
## Creating the Application
All the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.
### Connecting to MongoDB
One of the very first things we do is connect to our MongoDB database.
``` python
client = motor.motor_asyncio.AsyncIOMotorClient(os.environ["MONGODB_URL"])
db = client.get_database("college")
student_collection = db.get_collection("students")
```
We're using the async motor driver to create our MongoDB client, and then we specify our database name `college`.
### The \_id Attribute and ObjectIds
``` python
# Represents an ObjectId field in the database.
# It will be represented as a `str` on the model so that it can be serialized to JSON.
PyObjectId = Annotated[str, BeforeValidator(str)]
```
MongoDB stores data as BSON. FastAPI encodes and decodes data as JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId` which can't be directly encoded as JSON. Because of this, we convert `ObjectId`s to strings before storing them as the `id` field.
### Database Models
Many people think of MongoDB as being schema-less, which is wrong. MongoDB has a flexible schema. That is to say that collections do not enforce document structure by default, so you have the flexibility to make whatever data-modelling choices best match your application and its performance requirements. So, it's not unusual to create models when working with a MongoDB database. Our application has three models, the `StudentModel`, the `UpdateStudentModel`, and the `StudentCollection`.
``` python
class StudentModel(BaseModel):
"""
Container for a single student record.
"""
# The primary key for the StudentModel, stored as a `str` on the instance.
# This will be aliased to `_id` when sent to MongoDB,
# but provided as `id` in the API requests and responses.
id: Optional[PyObjectId] = Field(alias="_id", default=None)
name: str = Field(...)
email: EmailStr = Field(...)
course: str = Field(...)
gpa: float = Field(..., le=4.0)
model_config = ConfigDict(
populate_by_name=True,
arbitrary_types_allowed=True,
json_schema_extra={
"example": {
"name": "Jane Doe",
"email": "jdoe@example.com",
"course": "Experiments, Science, and Fashion in Nanophotonics",
"gpa": 3.0,
}
},
)
```
This is the primary model we use as the response model for the majority of our endpoints.
I want to draw attention to the `id` field on this model. MongoDB uses `_id`, but in Python, underscores at the start of attributes have special meaning. If you have an attribute on your model that starts with an underscore, pydantic—the data validation framework used by FastAPI—will assume that it is a private variable, meaning you will not be able to assign it a value! To get around this, we name the field `id` but give it an alias of `_id`. You also need to set `populate_by_name` to `True` in the model's `model_config`.
We set this `id` value automatically to `None`, so you do not need to supply it when creating a new student.
``` python
class UpdateStudentModel(BaseModel):
"""
A set of optional updates to be made to a document in the database.
"""
name: Optional[str] = None
email: Optional[EmailStr] = None
course: Optional[str] = None
gpa: Optional[float] = None
model_config = ConfigDict(
arbitrary_types_allowed=True,
json_encoders={ObjectId: str},
json_schema_extra={
"example": {
"name": "Jane Doe",
"email": "jdoe@example.com",
"course": "Experiments, Science, and Fashion in Nanophotonics",
"gpa": 3.0,
}
},
)
```
The `UpdateStudentModel` has two key differences from the `StudentModel`:
- It does not have an `id` attribute as this cannot be modified.
- All fields are optional, so you only need to supply the fields you wish to update.
Finally, `StudentCollection` is defined to encapsulate a list of `StudentModel` instances. In theory, the endpoint could return a top-level list of StudentModels, but there are some vulnerabilities associated with returning JSON responses with top-level lists.
```python
class StudentCollection(BaseModel):
"""
A container holding a list of `StudentModel` instances.
This exists because providing a top-level array in a JSON response can be a vulnerability
"""
students: List[StudentModel]
```
### Application Routes
Our application has five routes:
- POST /students/ - creates a new student.
- GET /students/ - view a list of all students.
- GET /students/{id} - view a single student.
- PUT /students/{id} - update a student.
- DELETE /students/{id} - delete a student.
#### Create Student Route
``` python
@app.post(
"/students/",
response_description="Add new student",
response_model=StudentModel,
status_code=status.HTTP_201_CREATED,
response_model_by_alias=False,
)
async def create_student(student: StudentModel = Body(...)):
"""
Insert a new student record.
A unique `id` will be created and provided in the response.
"""
new_student = await student_collection.insert_one(
student.model_dump(by_alias=True, exclude=["id"])
)
created_student = await student_collection.find_one(
{"_id": new_student.inserted_id}
)
return created_student
```
The `create_student` route receives the new student data as a JSON string in a `POST` request. We have to decode this JSON request body into a Python dictionary before passing it to our MongoDB client.
The `insert_one` method response includes the `_id` of the newly created student (provided as `id` because this endpoint specifies `response_model_by_alias=False` in the `post` decorator call). After we insert the student into our collection, we use the `inserted_id` to find the correct document and return this in our `JSONResponse`.
FastAPI returns an HTTP `200` status code by default; but in this instance, a `201` created is more appropriate.
##### Read Routes
The application has two read routes: one for viewing all students and the other for viewing an individual student.
``` python
@app.get(
"/students/",
response_description="List all students",
response_model=StudentCollection,
response_model_by_alias=False,
)
async def list_students():
"""
List all of the student data in the database.
The response is unpaginated and limited to 1000 results.
"""
return StudentCollection(students=await student_collection.find().to_list(1000))
```
Motor's `to_list` method requires a max document count argument. For this example, I have hardcoded it to `1000`; but in a real application, you would use the skip and limit parameters in `find` to paginate your results.
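For instance, a paginated variant of the endpoint above might accept `page` and `page_size` query parameters (the parameter names and defaults here are illustrative) and combine `skip` and `limit` like this sketch:

``` python
@app.get(
    "/students/",
    response_description="List students by page",
    response_model=StudentCollection,
    response_model_by_alias=False,
)
async def list_students_paged(page: int = 0, page_size: int = 20):
    """
    Illustrative sketch: return a single page of students using skip/limit.
    """
    cursor = student_collection.find().skip(page * page_size).limit(page_size)
    return StudentCollection(students=await cursor.to_list(page_size))
```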
``` python
@app.get(
"/students/{id}",
response_description="Get a single student",
response_model=StudentModel,
response_model_by_alias=False,
)
async def show_student(id: str):
"""
Get the record for a specific student, looked up by `id`.
"""
if (
student := await student_collection.find_one({"_id": ObjectId(id)})
) is not None:
return student
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
The student detail route has a path parameter of `id`, which FastAPI passes as an argument to the `show_student` function. We use the `id` to attempt to find the corresponding student in the database. The conditional in this section is using an assignment expression, an addition to Python 3.8 and often referred to by the cute sobriquet "walrus operator."
If a document with the specified `_id` does not exist, we raise an `HTTPException` with a status of `404`.
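If the walrus operator is new to you, the conditional above is equivalent to this more verbose form:

``` python
# Equivalent to the walrus-operator version above, just written out explicitly.
student = await student_collection.find_one({"_id": ObjectId(id)})
if student is not None:
    return student
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```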
##### Update Route
``` python
@app.put(
"/students/{id}",
response_description="Update a student",
response_model=StudentModel,
response_model_by_alias=False,
)
async def update_student(id: str, student: UpdateStudentModel = Body(...)):
"""
Update individual fields of an existing student record.
Only the provided fields will be updated.
Any missing or `null` fields will be ignored.
"""
student = {
k: v for k, v in student.model_dump(by_alias=True).items() if v is not None
}
if len(student) >= 1:
update_result = await student_collection.find_one_and_update(
{"_id": ObjectId(id)},
{"$set": student},
return_document=ReturnDocument.AFTER,
)
if update_result is not None:
return update_result
else:
raise HTTPException(status_code=404, detail=f"Student {id} not found")
# The update is empty, but we should still return the matching document:
if (existing_student := await student_collection.find_one({"_id": ObjectId(id)})) is not None:
return existing_student
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
The `update_student` route is like a combination of the `create_student` and the `show_student` routes. It receives the `id` of the document to update as well as the new data in the JSON body. We don't want to update any fields with empty values; so, first of all, we iterate over all the items in the received dictionary and only add the items that have a value to our new document.
If, after we remove the empty values, there are no fields left to update, we instead look for an existing record that matches the `id` and return that unaltered. However, if there are values to update, we use find_one_and_update to $set the new values, and then return the updated document.
If we get to the end of the function and we have not been able to find a matching document to update or return, then we raise a `404` error again.
##### Delete Route
``` python
@app.delete("/students/{id}", response_description="Delete a student")
async def delete_student(id: str):
"""
Remove a single student record from the database.
"""
delete_result = await student_collection.delete_one({"_id": ObjectId(id)})
if delete_result.deleted_count == 1:
return Response(status_code=status.HTTP_204_NO_CONTENT)
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
Our final route is `delete_student`. Again, because this is acting upon a single document, we have to supply an `id` in the URL. If we find a matching document and successfully delete it, then we return an HTTP status of `204` or "No Content." In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified `id`, then instead we return a `404`.
## Our New FastAPI App Generator
If you're excited to build something more production-ready with FastAPI, React & MongoDB, head over to the Github repository for our new FastAPI app generator and start transforming your web development experience.
## Wrapping Up
I hope you have found this introduction to FastAPI with MongoDB useful. If you would like to learn more, check out my post introducing the FARM stack (FastAPI, React and MongoDB) as well as the FastAPI documentation and this awesome list.
>If you have questions, please head to our developer community website where MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python",
"MongoDB",
"Django",
"FastApi"
],
"pageDescription": "Getting started with MongoDB and FastAPI",
"contentType": "Quickstart"
} | Getting Started with MongoDB and FastAPI | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/deploy-mongodb-atlas-aws-cloudformation | created | # How to Deploy MongoDB Atlas with AWS CloudFormation
MongoDB Atlas is the multi-cloud developer data platform that provides an integrated suite of cloud database and data services. We help to accelerate and simplify how you build resilient and performant global applications on the cloud provider of your choice.
AWS CloudFormation lets you model, provision, and manage AWS and third-party resources like MongoDB Atlas by treating infrastructure as code (IaC). CloudFormation templates are written in either JSON or YAML.
While there are multiple ways to use CloudFormation to provision and manage your Atlas clusters, such as with Partner Solution Deployments or the AWS CDK, today we’re going to go over how to create your first YAML CloudFormation templates to deploy Atlas clusters with CloudFormation.
These pre-made templates directly leverage MongoDB Atlas resources from the CloudFormation Public Registry and execute via the AWS CLI/AWS Management Console. Using these is best for users who seek to be tightly integrated into AWS with fine-grained access controls.
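To give a feel for what such a template contains, here is a heavily abbreviated YAML sketch. The resource types come from the public registry, but the property names and values shown are illustrative only, so check the resource schemas (or the templates in our GitHub repo) for the full, required set:
```yaml
Parameters:
  AtlasOrgId:
    Type: String
    Description: Your MongoDB Atlas organization ID

Resources:
  AtlasProject:
    Type: MongoDB::Atlas::Project
    Properties:
      Name: my-atlas-project           # illustrative name
      OrgId: !Ref AtlasOrgId
      Profile: default                 # Atlas API key profile stored in AWS Secrets Manager

  AtlasCluster:
    Type: MongoDB::Atlas::Cluster
    Properties:
      Name: my-cluster                 # illustrative name
      ProjectId: !GetAtt AtlasProject.Id
      ClusterType: REPLICASET
      # Further required properties (replication specs, instance size, etc.)
      # are omitted here; see the resource schema for the full list.
```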
Let’s get started!
*Prerequisites:*
- Install and configure an AWS Account and the AWS CLI.
- Install and configure the MongoDB Atlas CLI (optional but recommended).
## Step 1: Create a MongoDB Atlas account
Sign up for a free MongoDB Atlas account, verify your email address, and log into your new account.
Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
If you are unsure which IP addresses to allow, you can contact AWS support directly, who can help confirm the CIDR range to be used in your Atlas PAK IP Whitelist.
Next, you'll need an IAM role ARN that CloudFormation can use when activating and executing the MongoDB Atlas extensions. You can set this up with AWS IAM (Identity and Access Management), which you can find in the navigation bar of your AWS console. You can find the ARN in the user information under the "Roles" button. Once there, find the role whose ARN you want to use and add it to the Extension Details in CloudFormation. Learn how to create user roles/permissions in the IAM.
Then, grab the CloudFormation template required from our GitHub repo. It's important that you use an ARN with sufficient permissions each time it's asked for.
## Step 7: Deploy the CloudFormation template
In the AWS management console, go to the CloudFormation tab. Then, in the left-hand navigation, click on “Stacks.” In the window that appears, hit the “Create Stack” drop-down. Select “Create new stack with existing resources.”
Next, select “template is ready” in the “Prerequisites” section and “Upload a template” in the “Specify templates” section. From here, you will choose the YAML (or JSON) file containing the MongoDB Atlas deployment that you created in the prior step.
The fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace.
Additionally, you can watch our demo to learn about the other ways to get started with MongoDB Atlas and CloudFormation.
Go build with MongoDB Atlas and AWS CloudFormation today!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6a7a0aace015cbb5/6504a623a8cf8bcfe63e171a/image4.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt471e37447cf8b1b1/6504a651ea4b5d10aa5135d6/image8.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3545f9cbf7c8f622/6504a67ceb5afe6d504a833b/image13.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3582d0a3071426e3/6504a69f0433c043b6255189/image12.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb4253f96c019874e/6504a6bace38f40f4df4cddf/image1.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2840c92b6d1ee85d/6504a6d7da83c92f49f9b77e/image7.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd4a32140ddf600fc/6504a700ea4b5d515f5135db/image5.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt49dabfed392fa063/6504a73dbb60f713d4482608/image9.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt592e3f129fe1304b/6504a766a8cf8b5ba23e1723/image11.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbff284987187ce16/6504a78bb8c6d6c2d90e6e22/image10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0ae450069b31dff9/6504a7b99bf261fdd46bddcf/image3.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7f24645eefdab69c/6504a7da9aba461d6e9a55f4/image2.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e1c20eba155233a/6504a8088606a80fe5c87f31/image6.png | md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "Learn how to quickly and easily deploy MongoDB Atlas instances with Amazon Web Services (AWS) CloudFormation.",
"contentType": "Tutorial"
} | How to Deploy MongoDB Atlas with AWS CloudFormation | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/nextjs-with-mongodb | created | # How to Integrate MongoDB Into Your Next.js App
> This tutorial uses the Next.js Pages Router instead of the App Router which was introduced in Next.js version 13. The Pages Router is still supported and recommended for production environments.
Are you building your next amazing application with Next.js? Do you wish you could integrate MongoDB into your Next.js app effortlessly? Do you need this done before your coffee has finished brewing? If you answered yes to these three questions, I have some good news for you. We have created a Next.js<>MongoDB integration that will have you up and running in minutes, and you can consider this tutorial your official guide on how to use it.
In this tutorial, we'll take a look at how we can use the **with-mongodb** example to create a new Next.js application that follows MongoDB best practices for connectivity, connection pool monitoring, and querying. We'll also take a look at how to use MongoDB in our Next.js app with things like serverSideProps and APIs. Finally, we'll take a look at how we can easily deploy and host our application on Vercel, the official hosting platform for Next.js applications. If you already have an existing Next.js app, not to worry. Simply drop the MongoDB utility file into your existing project and you are good to go. We have a lot of exciting stuff to cover, so let's dive right in!
## Next.js and MongoDB with one click
Our app is now deployed and running in production. If you weren't following along with the tutorial and just want to quickly start your Next.js application with MongoDB, you could always use the `with-mongodb` starter found on GitHub, but I’ve got an even better one for you.
Visit Vercel and you'll be off to the races in creating and deploying the official Next.js with the MongoDB integration, and all you'll need to provide is your connection string.
## Prerequisites
For this tutorial, you'll need:
- MongoDB Atlas (sign up for free).
- A Vercel account (sign up for free).
- NodeJS 18+.
- npm and npx.
To get the most out of this tutorial, you need to be familiar with React and Next.js. I will cover unique Next.js features with enough details to still be valuable to a newcomer.
## What is Next.js?
If you're not already familiar with it, Next.js is a React-based framework for building modern web applications. The framework adds a lot of powerful features — such as server-side rendering, automatic code splitting, and incremental static regeneration — that make it easy to build, scalable, and production-ready apps.
First, we'll need a MongoDB database. You can use a local MongoDB installation if you have one, but if you're just getting started, MongoDB Atlas is a great way to get up and running without having to install or manage your MongoDB instance. MongoDB Atlas has a forever-free tier that you can sign up for, as well as the sample data that we'll be using for the rest of this tutorial.
To get our MongoDB URI, in our MongoDB Atlas dashboard:
1. Hit the **Connect** button.
2. Then, click the **Connect to your application** button, and here you'll see a string that contains your **URI** that will look like this:
```
mongodb+srv://<username>:<password>@cluster0.<id>.mongodb.net/?retryWrites=true&w=majority
```
If you are new to MongoDB Atlas, you'll need to go to the **Database Access** section and create a username and password, as well as the **Network Access** tab to ensure your IP is allowed to connect to the database. However, if you already have a database user and network access enabled, you'll just need to replace the `<username>` and `<password>` fields with your information.
For the database, we'll load the MongoDB Atlas sample datasets and use one of those databases.
If you run into any issues connecting, head over to the MongoDB Community forums, and we'll help troubleshoot.
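With the with-mongodb starter, this connection string goes into a `.env.local` file at the project root; the starter's `lib/mongodb` utility reads it from `process.env.MONGODB_URI`:
```
# .env.local
MONGODB_URI="mongodb+srv://<username>:<password>@cluster0.<id>.mongodb.net/?retryWrites=true&w=majority"
```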
## Querying MongoDB with Next.js
Now that we are connected to MongoDB, let's discuss how we can query our MongoDB data and bring it into our Next.js application. Next.js supports multiple ways to get data. We can create API endpoints, get data by running server-side rendered functions for a particular page, and even generate static pages by getting our data at build time. We'll look at all three examples.
## Example 1: Next.js API endpoint with MongoDB
The first example we'll look at is building and exposing an API endpoint in our Next.js application. To create a new API endpoint route, we will first need to create an `api` directory in our `pages` directory, and then every file we create in this `api` directory will be treated as an individual API endpoint.
Let's go ahead and create the `api` directory and a new file in this directory called `movies.tsx`. This endpoint will return a list of 20 movies from our MongoDB database. The implementation for this route is as follows:
```
import clientPromise from "../../lib/mongodb";
import { NextApiRequest, NextApiResponse } from 'next';
export default async (req: NextApiRequest, res: NextApiResponse) => {
try {
const client = await clientPromise;
const db = client.db("sample_mflix");
const movies = await db
.collection("movies")
.find({})
.sort({ metacritic: -1 })
.limit(20)
.toArray();
res.json(movies);
} catch (e) {
console.error(e);
}
}
```
To explain what is going on here, we'll start with the import statement. We are importing our `clientPromise` method from the `lib/mongodb` file. This file contains all the instructions on how to connect to our MongoDB Atlas cluster. Additionally, within this file, we cache the instance of our connection so that subsequent requests do not have to reconnect to the cluster. They can use the existing connection. All of this is handled for you!
Next, our API route handler has the signature of `export default async (req, res)`. If you're familiar with Express.js, this should look very familiar. This is the function that gets run when the `localhost:3000/api/movies` route is called. We capture the request via `req` and return the response via the `res` object.
Our handler function implementation calls the `clientPromise` function to get the instance of our MongoDB database. Next, we run a MongoDB query using the MongoDB Node.js driver to get the top 20 movies out of our **movies** collection based on their **metacritic** rating sorted in descending order.
Finally, we call the `res.json` method and pass in our array of movies. This serves our movies in JSON format to our browser. If we navigate to `localhost:3000/api/movies`, we'll see a result that looks like this:
As a challenge, try implementing an API route that returns a single movie. You can use Next.js dynamic API routes to capture the `id`. So, if a user calls `http://localhost:3000/api/movies/573a1394f29313caabcdfa3e`, the movie that should be returned is Seven Samurai. **Another tip**: The `_id` property for the `sample_mflix` database in MongoDB is stored as an ObjectID, so you'll have to convert the string to an ObjectID. If you get stuck, create a thread on the MongoDB Community forums and we'll solve it together! Next, we'll take a look at how to access our MongoDB data within our Next.js pages.
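Before moving on, here is one possible sketch of that challenge endpoint. The file would live at `pages/api/movies/[id].ts` (a hypothetical addition, not part of the starter), and the dynamic segment is available on `req.query`:
```
import { ObjectId } from "mongodb";
import { NextApiRequest, NextApiResponse } from "next";
import clientPromise from "../../../lib/mongodb";

export default async (req: NextApiRequest, res: NextApiResponse) => {
  try {
    const { id } = req.query;
    const client = await clientPromise;
    const db = client.db("sample_mflix");
    // Convert the string captured from the URL back into an ObjectId before querying.
    const movie = await db
      .collection("movies")
      .findOne({ _id: new ObjectId(id as string) });
    if (!movie) {
      return res.status(404).json({ error: "Movie not found" });
    }
    res.json(movie);
  } catch (e) {
    console.error(e);
    res.status(500).json({ error: "Unable to fetch movie" });
  }
};
```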
## Example 2: Next.js pages with MongoDB
In the last section, we saw how we can create an API endpoint and connect to MongoDB with it. In this section, we'll get our data directly into our Next.js pages. We'll do this using the getServerSideProps() method that is available to Next.js pages.
The `getServerSideProps()` method forces a Next.js page to load with server-side rendering. What this means is that every time this page is loaded, the `getServerSideProps()` method runs on the back end, gets data, and sends it into the React component via props. The code within `getServerSideProps()` is never sent to the client. This makes it a great place to implement our MongoDB queries.
Let's see how this works in practice. Let's create a new file in the `pages` directory, and we'll call it `movies.tsx`. In this file, we'll add the following code:
```
import clientPromise from "../lib/mongodb";
import { GetServerSideProps } from 'next';
interface Movie {
_id: string;
title: string;
metacritic: number;
plot: string;
}
interface MoviesProps {
  movies: Movie[];
}
const Movies: React.FC<MoviesProps> = ({ movies }) => {
  return (
    <div>
      <h1>Top 20 Movies of All Time</h1>
      <h2>(According to Metacritic)</h2>
      <ul>
        {movies.map((movie) => (
          <li key={movie._id}>
            <h3>{movie.title}</h3>
            <h4>{movie.metacritic}</h4>
            <p>{movie.plot}</p>
          </li>
        ))}
      </ul>
    </div>
  );
};
export default Movies;
export const getServerSideProps: GetServerSideProps = async () => {
try {
const client = await clientPromise;
const db = client.db("sample_mflix");
const movies = await db
.collection("movies")
.find({})
.sort({ metacritic: -1 })
.limit(20)
.toArray();
return {
props: { movies: JSON.parse(JSON.stringify(movies)) },
};
} catch (e) {
console.error(e);
return { props: { movies: [] } };
}
};
```
As you can see from the example above, we are importing the same `clientPromise` utility, and our MongoDB query is exactly the same within the `getServerSideProps()` method. The only thing we really need to change is how we pass the result to the component: Next.js requires page props to be JSON-serializable, and our documents contain values like ObjectIDs and dates, so we stringify the result and parse it back into plain objects before returning it.
Our page component called `Movies` gets the props from our `getServerSideProps()` method, and we use that data to render the page showing the top movie title, metacritic rating, and plot. Your result should look something like this:
![Top 20 movies][6]
This is great. We can directly query our MongoDB database and get all the data we need for a particular page. The contents of the `getServerSideProps()` method are never sent to the client, but the one downside to this is that this method runs every time we call the page. Our data is pretty static and unlikely to change all that often. What if we pre-rendered this page and didn't have to call MongoDB on every refresh? We'll take a look at that next!
## Example 3: Next.js static generation with MongoDB
For our final example, we'll take a look at how static page generation can work with MongoDB. Let's create a new file in the `pages` directory and call it `top.tsx`. For this page, what we'll want to do is render the top 1,000 movies from our MongoDB database.
Top 1,000 movies? Are you out of your mind? That'll take a while, and the database round trip is not worth it. Well, what if we only called this method once when we built the application so that even if that call takes a few seconds, it'll only ever happen once and our users won't be affected? They'll get the top 1,000 movies delivered as quickly as, or even faster than, the 20 we fetched with `getServerSideProps()`. The magic lies in the `getStaticProps()` method, and our implementation looks like this:
```
import { ObjectId } from "mongodb";
import clientPromise from "../lib/mongodb";
import { GetStaticProps } from "next";
interface Movie {
_id: ObjectId;
title: string;
metacritic: number;
plot: string;
}
interface TopProps {
movies: Movie[];
}
export default function Top({ movies }: TopProps) {
  return (
    <div>
      <h1>Top 1000 Movies of All Time</h1>
      <h2>(According to Metacritic)</h2>
      <ul>
        {movies.map((movie) => (
          <li key={movie._id.toString()}>
            <h3>{movie.title}</h3>
            <h4>{movie.metacritic}</h4>
            <p>{movie.plot}</p>
          </li>
        ))}
      </ul>
    </div>
  );
}
export const getStaticProps: GetStaticProps = async () => {
try {
const client = await clientPromise;
const db = client.db("sample_mflix");
const movies = await db
.collection("movies")
.find({})
.sort({ metacritic: -1 })
.limit(1000)
.toArray();
return {
props: { movies: JSON.parse(JSON.stringify(movies)) },
};
} catch (e) {
console.error(e);
return {
props: { movies: [] },
};
}
};
```
At a glance, this looks very similar to the `movies.tsx` file we created earlier. The only significant changes we made were changing our `limit` from `20` to `1000` and our `getServerSideProps()` method to `getStaticProps()`. If we navigate to `localhost:3000/top` in our browser, we'll see a long list of movies.
![Top 1000 movies][7]
Look at how tiny that scrollbar is. Loading this page took about 3.79 seconds on my machine, as opposed to the 981-millisecond response time for the `/movies` page. The reason it takes this long is that in development mode, the `getStaticProps()` method is called every single time (just like the `getServerSideProps()` method). But if we switch from development mode to production mode, we'll see the opposite. The `/top` page will be pre-rendered and will load almost immediately, while the `/movies` and `/api/movies` routes will run the server-side code each time.
Let's switch to production mode. In your terminal window, stop the current app from running. To run our Next.js app in production mode, we'll first need to build it. Then, we can run the `start` command, which will serve our built application. In your terminal window, run the following commands:
```
npm run build
npm run start
```
When you run the `npm run start` command, your Next.js app is served in production mode. The `getStaticProps()` method will not be run every time you hit the `/top` route as this page will now be served statically. We can even see the pre-rendered static page by navigating to the `.next/server/pages/top.html` file and seeing the 1,000 movies listed in plain HTML.
Next.js can even update this static content without requiring a rebuild with a feature called Incremental Static Regeneration, but that's outside of the scope of this tutorial. Next, we'll take a look at deploying our application on Vercel.
## Deploying your Next.js app on Vercel
The final step in our tutorial today is deploying our application. We'll deploy our Next.js with MongoDB app to Vercel. I have created a GitHub repo that contains all of the code we have written today. Feel free to clone it, or create your own.
Navigate to Vercel and log in. Once you are on your dashboard, click the **Import Project** button, and then **Import Git Repository**.
Follow the prompts to import your repository and deploy it with the default settings. Once the deployment completes, you can confirm everything works by visiting the deployed pages, including the https://nextjs-with-mongodb-mauve.vercel.app/api/movies and https://nextjs-with-mongodb-mauve.vercel.app/top routes.
## Putting it all together
In this tutorial, we walked through the official Next.js with MongoDB example. I showed you how to connect your MongoDB database to your Next.js application and run queries in multiple ways. Then, we deployed our application using Vercel.
If you have any questions or feedback, reach out through the MongoDB Community forums and let me know what you build with Next.js and MongoDB.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt572f8888407a2777/65de06fac7f05b1b2f8674cc/vercel-homepage.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt833e93bc334716a5/65de07c677ae451d96b0ec98/server-error.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltad2329fe1bb44d8f/65de1b020f1d350dd5ca42a5/database-deployments.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt798b7c3fe361ccbd/65de1b917c85267d37234400/welcome-nextjs.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta204dc4bce246ac6/65de1ff8c7f05b0b4b86759a/json-format.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt955fc3246045aa82/65de2049330e0026817f6094/top-20-movies.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfb7866c7c87e81ef/65de2098ae62f777124be71d/top-1000-movie.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc89beb7757ffec1e/65de20e0ee3a13755fc8e7fc/importing-project-vercel.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0022681a81165d94/65de21086c65d7d78887b5ff/configuring-project.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b00b1cfe190a7d4/65de212ac5985207f8f6b232/congratulations.png | md | {
"tags": [
"JavaScript",
"Next.js"
],
"pageDescription": "Learn how to easily integrate MongoDB into your Next.js application with the official MongoDB package.",
"contentType": "Tutorial"
} | How to Integrate MongoDB Into Your Next.js App | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/build-go-web-application-gin-mongodb-help-ai | created | # How to Build a Go Web Application with Gin, MongoDB, and with the Help of AI
Building applications with Go provides many advantages. The language is fast, simple, and lightweight while supporting powerful features like concurrency, strong typing, and a robust standard library. In this tutorial, we’ll use the popular Gin web framework along with MongoDB to build a Go-based web application.
Gin is a minimalist web framework for Golang that provides an easy way to build web servers and APIs. It is fast, lightweight, and modular, making it ideal for building microservices and APIs, but can be easily extended to build full-blown applications.
We'll use Gin to build a web application with three endpoints that connect to a MongoDB database. MongoDB is a popular document-oriented NoSQL database that stores data in JSON-like documents. MongoDB is a great fit for building modern applications.
Rather than building the entire application by hand, we’ll leverage a coding AI assistant by Sourcegraph called Cody to help us build our Go application. Cody is the only AI assistant that knows your entire codebase and can help you write, debug, test, and document your code. We’ll use many of these features as we build our application today.
## Prerequisites
Before you begin, you’ll need:
- Go installed on your development machine. Download it on their website.
- A MongoDB Atlas account. Sign up for free.
- Basic familiarity with Go and MongoDB syntax.
- Sourcegraph Cody installed in your favorite IDE. (For this tutorial, we'll be using VS Code). Get it for free.
Once you meet the prerequisites, you’re ready to build. Let’s go.
## Getting started
We'll start by creating a new Go project for our application. For this example, we’ll name the project **mflix**, so let’s go ahead and create the project directory and navigate into it:
```bash
mkdir mflix
cd mflix
```
Next, initialize a new Go module, which will manage dependencies for our project:
```bash
go mod init mflix
```
Now that we have our Go module created, let’s install the dependencies for our project. We’ll keep it really simple and just install the `gin` and `mongodb` libraries.
```bash
go get github.com/gin-gonic/gin
go get go.mongodb.org/mongo-driver/mongo
```
With our dependencies fetched and installed, we’re ready to start building our application.
## Gin application setup with Cody
To start building our application, let’s go ahead and create our entry point into the app by creating a **main.go** file. Next, while we can set up our application manually, we’ll instead leverage Cody to build out our starting point. In the Cody chat window, we can ask Cody to create a basic Go Gin application.
For the data, we'll use the MongoDB Atlas sample dataset; if you haven't loaded it yet, you can do so by following the sample data guide. The database that we will work with is called `sample_mflix` and the collection in that database we'll use is called `movies`. This dataset contains a list of movies with various information like the plot, genre, year of release, and much more.
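The connection setup and the first two handlers that the router below references (`getMovies` and `getMovieByID`) aren't shown in this excerpt. Here is a minimal sketch of what they could look like; the package-level `mongoClient`, the `init` function, and the placeholder connection string are illustrative assumptions.
```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/gin-gonic/gin"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// Package-level client shared by all handlers.
var mongoClient *mongo.Client

func init() {
	// Replace the placeholder with your Atlas connection string.
	client, err := mongo.Connect(context.TODO(),
		options.Client().ApplyURI("your-atlas-connection-string"))
	if err != nil {
		log.Fatal(err)
	}
	mongoClient = client
}

// GET /movies - returns a handful of movies from the collection.
func getMovies(c *gin.Context) {
	cursor, err := mongoClient.Database("sample_mflix").Collection("movies").
		Find(context.TODO(), bson.D{}, options.Find().SetLimit(10))
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	var movies []bson.M
	if err = cursor.All(context.TODO(), &movies); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	c.JSON(http.StatusOK, movies)
}

// GET /movies/:id - first pass that matches the id as a plain string.
// As discussed later, this fails for ObjectID ids and gets fixed by
// converting the string to an ObjectID.
func getMovieByID(c *gin.Context) {
	id := c.Param("id")
	var movie bson.M
	err := mongoClient.Database("sample_mflix").Collection("movies").
		FindOne(context.TODO(), bson.D{{"_id", id}}).Decode(&movie)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	c.JSON(http.StatusOK, movie)
}
```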
Our final endpoint lets the end user run aggregation operations on the `movies` collection. Aggregation operations process multiple documents and return computed results, so with this endpoint, the end user can pass in any valid MongoDB aggregation pipeline to run various analyses on the `movies` collection.
Note that aggregations are very powerful and in a production environment, you probably wouldn’t want to enable this level of access through HTTP request payloads. But for the sake of the tutorial, we opted to keep it in. As a homework assignment for further learning, try using Cody to limit the number of stages or the types of operations that the end user can perform on this endpoint.
```go
// POST /movies/aggregations - Run aggregations on movies
func aggregateMovies(c *gin.Context) {
// Get aggregation pipeline from request body
var pipeline interface{}
if err := c.ShouldBindJSON(&pipeline); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
// Run aggregations
cursor, err := mongoClient.Database("sample_mflix").Collection("movies").Aggregate(context.TODO(), pipeline)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
// Map results
    var result []bson.M
if err = cursor.All(context.TODO(), &result); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
// Return result
c.JSON(http.StatusOK, result)
}
```
Now that we have our endpoints implemented, let’s add them to our router so that we can call them. Here again, we can use another feature of Cody, called autocomplete, to intelligently give us statement completions so that we don’t have to write all the code ourselves.
![Cody AI Autocomplete with Go][6]
Our `main` function should now look like:
```go
func main() {
r := gin.Default()
r.GET("/", func(c *gin.Context) {
c.JSON(200, gin.H{
"message": "Hello World",
})
})
r.GET("/movies", getMovies)
r.GET("/movies/:id", getMovieByID)
r.POST("/movies/aggregations", aggregateMovies)
r.Run()
}
```
Now that we have our routes set up, let’s test our application to make sure everything is working well. Restart the server and navigate to **localhost:8080/movies**. If all goes well, you should see a large list of movies returned in JSON format in your browser window. If you do not see this, check your IDE console to see what errors are shown.
![Sample Output for the Movies Endpoint][7]
Let’s test the second endpoint. Pick any `id` from the movies collection and navigate to **localhost:8080/movies/{id}** — so for example, **localhost:8080/movies/573a1390f29313caabcd42e8**. If everything goes well, you should see that single movie listed. But if you’ve been following this tutorial, you actually won’t see the movie.
![String to Object ID Results Error][8]
The issue is that in our `getMovie` function implementation, we are accepting the `id` value as a `string`, while the data type in our MongoDB database is an `ObjectID`. So when we run the `FindOne` method and try to match the string value of `id` to the `ObjectID` value, we don’t get a match.
Let’s ask Cody to help us fix this by converting the string input we get to an `ObjectID`.
![Cody AI MongoDB String to ObjectID][9]
Our updated `getMovieByID` function is as follows:
```go
func getMovieByID(c *gin.Context) {
// Get movie ID from URL
idStr := c.Param("id")
// Convert id string to ObjectId
id, err := primitive.ObjectIDFromHex(idStr)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
// Find movie by ObjectId
var movie bson.M
err = mongoClient.Database("sample_mflix").Collection("movies").FindOne(context.TODO(), bson.D{{"_id", id}}).Decode(&movie)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
// Return movie
c.JSON(http.StatusOK, movie)
}
```
Depending on your IDE, you may need to add the `primitive` dependency in your import statement. The final import statement looks like:
```go
import (
"context"
"log"
"net/http"
"github.com/gin-gonic/gin"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
```
If we examine the new code that Cody provided, we can see that we are now getting the value from our `id` parameter and storing it into a variable named `idStr`. We then use the primitive package to try and convert the string to an `ObjectID`. If the `idStr` is a valid string that can be converted to an `ObjectID`, then we are good to go and we use the new `id` variable when doing our `FindOne` operation. If not, then we get an error message back.
Restart your server and now try to get a single movie result by navigating to **localhost:8080/movies/{id}**.
![Single Movie Response Endpoint][10]
For our final endpoint, we are allowing the end user to provide an aggregation pipeline that we will execute on the `movies` collection. The user can provide any aggregation they want. To test this endpoint, we’ll make a POST request to **localhost:8080/movies/aggregations**. In the body of the request, we’ll include our aggregation pipeline.
![Postman Aggregation Endpoint in MongoDB][11]
Let’s run an aggregation to return a count of comedy movies, grouped by year, in descending order. Again, remember aggregations are very powerful and can be abused. You normally would not want to give direct access to the end user to write and run their own aggregations ad hoc within an HTTP request, unless it was for something like an internal tool. Our aggregation pipeline will look like the following:
```json
[
{"$match": {"genres": "Comedy"}},
{"$group": {
"_id": "$year",
"count": {"$sum": 1}
}},
{"$sort": {"count": -1}}
]
```
Running this aggregation, we’ll get a result set that looks like this:
```json
[
{
"_id": 2014,
"count": 287
},
{
"_id": 2013,
"count": 286
},
{
"_id": 2009,
"count": 268
},
{
"_id": 2011,
"count": 263
},
{
"_id": 2006,
"count": 260
},
...
]
```
It seems 2014 was a big year for comedy. If you are not familiar with how aggregations work, you can check out the following resources:
- Introduction to the MongoDB Aggregation Framework
- MongoDB Aggregation Pipeline Queries vs SQL Queries
- A Better MongoDB Aggregation Experience via Compass
Additionally, you can ask Cody for a specific explanation about how our `aggregateMovies` function works to help you further understand how the code is implemented using the Cody `/explain` command.
And if you have any questions or comments, let’s continue the conversation in our developer forums!
The entire code for our application is above, so there is no GitHub repo for this simple application. Happy coding.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt123181346af4c7e6/65148770b25810649e804636/eVB87PA.gif
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3df7c0149a4824ac/6514820f4f2fa85e60699bf8/image4.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6a72c368f716c7c2/65148238a5f15d7388fc754a/image2.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta325fcc27ed55546/651482786fefa7183fc43138/image7.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc8029e22c4381027/6514880ecf50bf3147fff13f/A7n71ej.gif
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt438f1d659d2f1043/6514887b27287d9b63bf9215/6O8d6cR.gif
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd6759b52be548308/651482b2d45f2927c800b583/image3.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfc8ea470eb6585bd/651482da69060a5af7fc2c40/image5.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte5d9fb517f22f08f/651488d82a06d70de3f4faf9/Y2HuNHe.gif
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc2467265b39e7d2b/651483038f0457d9df12aceb/image6.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt972b959f5918c282/651483244f2fa81286699c09/image1.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9c888329868b60b6/6514892c2a06d7d0a6f4fafd/g4xtxUg.gif | md | {
"tags": [
"MongoDB",
"Go"
],
"pageDescription": "Learn how to build a web application with the Gin framework for Go and MongoDB using the help of Cody AI from Sourcegraph.",
"contentType": "Tutorial"
} | How to Build a Go Web Application with Gin, MongoDB, and with the Help of AI | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/time-series-data-pymongoarrow | created | # Analyze Time-Series Data with Python and MongoDB Using PyMongoArrow and Pandas
In today’s data-centric world, time-series data has become indispensable for driving key organizational decisions, trend analyses, and forecasts. This kind of data is everywhere — from stock markets and IoT sensors to user behavior analytics. But as these datasets grow in volume and complexity, so does the challenge of efficiently storing and analyzing them. Whether you’re an IoT developer or a data analyst dealing with time-sensitive information, MongoDB offers a robust ecosystem tailored to meet both your storage and analytics needs for complex time-series data.
MongoDB has built-in support to store time-series data in a special type of collection called a time-series collection. Time-series collections are different from the normal collections. Time-series collections use an underlying columnar storage format and store data in time-order with an automatically created clustered index. The columnar storage format provides the following benefits:
* Reduced complexity: The columnar format is tailored for time-series data, making it easier to manage and query.
* Query efficiency: MongoDB automatically creates an internal clustered index on the time field which improves query performance.
* Disk usage: This storage approach uses disk space more efficiently compared to traditional collections.
* I/O optimization: The read operations require fewer input/output operations, improving the overall system performance.
* Cache usage: The design allows for better utilization of the WiredTiger cache, further enhancing query performance.
In this tutorial, we will create a time-series collection and then store some time-series data into it. We will see how you can query it in MongoDB as well as how you can read that data into pandas DataFrame, run some analytics on it, and write the modified data back to MongoDB. This tutorial is meant to be a complete deep dive into working with time-series data in MongoDB.
### Tutorial Prerequisites
We will be using the following tools/frameworks:
* MongoDB Atlas database, to store our time-series data. If you don’t already have an Atlas cluster created, go ahead and create one, set up a user, and add your connection IP address to your IP access list.
* PyMongo driver(to connect to your MongoDB Atlas database, see the installation instructions).
* Jupyter Notebook (to run the code, see the installation instructions).
>Note: Before running any code or installing any Python packages, we strongly recommend setting up a separate Python environment. This helps to isolate dependencies, manage packages, and avoid conflicts that may arise from different package versions. Creating an environment is an optional but highly recommended step.
At this point, we are assuming that you have an Atlas cluster created and ready to be used, and PyMongo and Jupyter Notebook installed. Let’s go ahead and launch Jupyter Notebook by running the following command in the terminal:
```
jupyter notebook
```
Once you have the Jupyter Notebook up and running, let’s go ahead and fetch the connection string of your MongoDB Atlas cluster and store that as an environment variable, which we will use later to connect to our database. After you have done that, let’s go ahead and connect to our Atlas cluster by running the following commands:
```
import pymongo
import os
from datetime import datetime
from pymongo import MongoClient
MONGO_CONN_STRING = os.environ.get("MONGODB_CONNECTION_STRING")
client = MongoClient(MONGO_CONN_STRING)
```
## Creating a time-series collection
Next, we are going to create a new database and a collection in our cluster to store the time-series data. We will call this database “stock_data” and the collection “stocks”.
```
# Let's create a new database called "stock_data"
db = client.stock_data
# Let's create a new time-series collection in the "stock_data" database called "stocks"
collection = db.create_collection('stocks', timeseries={
    "timeField": "timestamp",
    "metaField": "metadata",
    "granularity": "hours"
})
```
Here, we used the db.create_collection() method to create a time-series collection called “stocks”. In the example above, “timeField”, “metaField”, and “granularity” are reserved fields (for more information on what these are, visit our documentation). The “timeField” option specifies the name of the field in your collection that will contain the date in each time-series document.
The “metaField” option specifies the name of the field in your collection that will contain the metadata in each time-series document.
Finally, the “granularity” option specifies how frequently data will be ingested in your time-series collection.
Now, let’s insert some stock-related information into our collection. We are interested in storing and analyzing the stock of a specific company called “XYZ” which trades its stock on “NASDAQ”.
We are storing some price metrics of this stock at an hourly interval and for each time interval, we are storing the following information:
* **open:** the opening price at which the stock traded when the market opened
* **close:** the final price at which the stock traded when the trading period ended
* **high:** the highest price at which the stock traded during the trading period
* **low:** the lowest price at which the stock traded during the trading period
* **volume:** the total number of shares traded during the trading period
Now that we have become an expert on stock trading and terminology (sarcasm), we will now insert some documents into our time-series collection. Here we have four sample documents. The data points are captured at an interval of one hour.
```
# Create some sample data
data = [
{
"metadata": {
"stockSymbol": "ABC",
"exchange": "NASDAQ"
},
"timestamp": datetime(2023, 9, 12, 15, 19, 48),
"open": 54.80,
"high": 59.20,
"low": 52.60,
"close": 53.50,
"volume": 18000
},
{
"metadata": {
"stockSymbol": "ABC",
"exchange": "NASDAQ"
},
"timestamp": datetime(2023, 9, 12, 16, 19, 48),
"open": 51.00,
"high": 54.30,
"low": 50.50,
"close": 51.80,
"volume": 12000
},
{
"metadata": {
"stockSymbol": "ABC",
"exchange": "NASDAQ"
},
"timestamp":datetime(2023, 9, 12, 17, 19, 48),
"open": 52.00,
"high": 53.10,
"low": 50.50,
"close": 52.90,
"volume": 10000
},
{
"metadata": {
"stockSymbol": "ABC",
"exchange": "NASDAQ"
},
"timestamp":datetime(2023, 9, 12, 18, 19, 48),
"open": 52.80,
"high": 60.20,
"low": 52.60,
"close": 55.50,
"volume": 30000
}
]
# insert the data into our collection
collection.insert_many(data)
```
Now, let’s run a find query on our collection to retrieve data at a specific timestamp. Run this query in the Jupyter Notebook after the previous script.
```
collection.find_one({'timestamp': datetime(2023, 9, 12, 15, 19, 48)})
```
//OUTPUT
![Output of the find_one() command]
As you can see from the output, we were able to query our time-series collection and retrieve data points at a specific timestamp.
Similarly, you can run more powerful queries on your time-series collection by using the aggregation pipeline. For the scope of this tutorial, we won’t be covering that. But, if you want to learn more about it, here is where you can go:
1. MongoDB Aggregation Learning Byte
2. MongoDB Aggregation in Python Learning Byte
3. MongoDB Aggregation Documentation
4. Practical MongoDB Aggregation Book
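That said, here is a small illustrative taste of an aggregation pipeline on our `stocks` collection. It groups the documents we inserted above by day and computes the average closing price and total traded volume; the pipeline and field names below are just for illustration.
```
# Illustrative aggregation: daily average close and total volume for "ABC".
pipeline = [
    {"$match": {"metadata.stockSymbol": "ABC"}},
    {"$group": {
        "_id": {"$dateTrunc": {"date": "$timestamp", "unit": "day"}},
        "avgClose": {"$avg": "$close"},
        "totalVolume": {"$sum": "$volume"}
    }},
    {"$sort": {"_id": 1}}
]
for doc in collection.aggregate(pipeline):
    print(doc)
```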
## Analyzing the data with a pandas DataFrame
Now, let’s see how you can move your time-series data into pandas DataFrame to run some analytics operations.
MongoDB has built a tool just for this purpose called PyMongoArrow. PyMongoArrow is a Python library that lets you move data in and out of MongoDB into other data formats such as pandas DataFrame, Numpy array, and Arrow Table.
Let’s quickly install PyMongoArrow using the pip command in your terminal. We are assuming that you already have pandas installed on your system. If not, you can use the pip command to install it too.
```
pip install pymongoarrow
```
Now, let’s import all the necessary libraries. We are going to be using the same file or notebook (Jupyter Notebook) to run the codes below.
```
import pymongoarrow
import pandas as pd
# The pymongoarrow.monkey module provides an interface to patch PyMongo, in place, and add PyMongoArrow's functionality directly to collection instances.
from pymongoarrow.monkey import patch_all
patch_all()
# Let's use PyMongoArrow's find_pandas_all() function to read the MongoDB query result set into a pandas DataFrame.
df = collection.find_pandas_all({})
```
Now, we have read all of our stock data stored in the “stocks” collection into a pandas DataFrame ‘df’.
Let’s quickly print the value stored in the ‘df’ variable to verify it.
```
print(df)
print(type(df))
```
//OUTPUT
Hurray…congratulations! As you can see, we have successfully read our MongoDB data into pandas DataFrame.
Now, if you are a stock market trader, you would be interested in doing a lot of analysis on this data to get meaningful insights. But for this tutorial, we are just going to calculate the hourly percentage change in the closing prices of the stock. This will help us understand the daily price movements in terms of percentage gains or losses.
We will add a new column in our ‘df’ DataFrame called “daily_pct_change”.
```
df = df.sort_values('timestamp')
df['daily_pct_change'] = df['close'].pct_change() * 100
# print the dataframe to see the modified data
print(df)
```
//OUTPUT
![Output of the modified DataFrame]
As you can see, we have successfully added a new column to our DataFrame.
Now, we would like to persist the modified DataFrame data into a database so that we can run more analytics on it later. So, let’s write this data back to MongoDB using PyMongoArrow’s write function.
We will just create a new collection called “my_new_collection” in our database to write the modified DataFrame back into MongoDB, ensuring data persistence.
```
from pymongoarrow.api import write
coll = db.my_new_collection
# write data from pandas into MongoDB collection called 'coll'
write(coll, df)
# Now, let's verify that the modified data has been written into our collection
print(coll.find_one({}))
```
Congratulations on successfully completing this tutorial.
## Conclusion
In this tutorial, we covered how to work with time-series data using MongoDB and Python. We learned how to store stock market data in a MongoDB time-series collection, and then how to perform simple analytics using a pandas DataFrame. We also explored how PyMongoArrow makes it easy to move data between MongoDB and pandas. Finally, we saved our analyzed data back into MongoDB. This guide provides a straightforward way to manage, analyze, and store time-series data. Great job if you’ve followed along — you’re now ready to handle time-series data in your own projects.
If you want to learn more about PyMongoArrow, check out some of these additional resources:
1. Video tutorial on PyMongoArrow
2. PyMongoArrow article
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to create and query a time-series collection in MongoDB, and analyze the data using PyMongoArrow and pandas.",
"contentType": "Tutorial"
} | Analyze Time-Series Data with Python and MongoDB Using PyMongoArrow and Pandas | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/storing-binary-data-mongodb-cpp | created | # Storing Binary Data with MongoDB and C++
In modern applications, storing and retrieving binary files efficiently is a crucial requirement. MongoDB enables this with binary data type in the BSON which is a binary serialization format used to store documents in MongoDB. A BSON binary value is a byte array and has a subtype (like generic binary subtype, UUID, MD5, etc.) that indicates how to interpret the binary data. See BSON Types — MongoDB Manual for more information.
In this tutorial, we will write a console application in C++, using the MongoDB C++ driver to upload and download binary data.
**Note**:
- When using this method, remember that the BSON document size limit in MongoDB is 16 MB. If your binary files are larger than this limit, consider using GridFS for more efficient handling of large files. See GridFS example in C++ for reference.
- Developers often weigh the trade-offs and strategies when storing binary data in MongoDB. It's essential to ensure that you have also considered different strategies to optimize your data management approach.
## Prerequisites
1. MongoDB Atlas account with a cluster created.
2. IDE (like Microsoft Visual Studio or Microsoft Visual Studio Code) setup with the MongoDB C and C++ Driver installed. Follow the instructions in Getting Started with MongoDB and C++ to install MongoDB C/C++ drivers and set up the dev environment in Visual Studio. Installation instructions for other platforms are available.
3. Compiler with C++17 support (for using `std::filesystem` operations).
4. Your machine’s IP address whitelisted. Note: You can add *0.0.0.0/0* as the IP address, which should allow access from any machine. This setting is not recommended for production use.
## Building the application
> Source code available **here**.
As part of the different BSON types, the C++ driver provides the b_binary struct that can be used for storing binary data value in a BSON document. See the API reference.
We start with defining the structure of our BSON document. We have defined three keys: `name`, `path`, and `data`. These contain the name of the file being uploaded, its full path from the disk, and the actual file data, respectively.
In the `main()` function below, fetch your MongoDB Atlas connection string (URI), assign it to `mongoURIStr`, and set the different path and filenames to the ones on your disk.
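The `upload` and `download` helpers that `main()` calls, along with key-name constants like `FILE_NAME`, aren't included in this excerpt. Here is a minimal sketch of what they could look like, assuming the document keys `name`, `path`, and `data`; the constant names and error handling are illustrative assumptions.
```cpp
#include <filesystem>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/builder/basic/kvp.hpp>
#include <bsoncxx/types.hpp>
#include <mongocxx/collection.hpp>

using bsoncxx::builder::basic::kvp;
using bsoncxx::builder::basic::make_document;

// Assumed key names for the document fields.
const std::string FILE_NAME = "name";
const std::string FILE_PATH = "path";
const std::string FILE_DATA = "data";

// Reads the file at filePath and stores it as a BSON binary value.
bool upload(const std::string& filePath, mongocxx::collection& collection)
{
    std::ifstream file(filePath, std::ios::binary);
    if (!file.is_open())
        return false;

    std::vector<uint8_t> fileData((std::istreambuf_iterator<char>(file)),
                                  std::istreambuf_iterator<char>());

    bsoncxx::types::b_binary binaryData{bsoncxx::binary_sub_type::k_binary,
                                        static_cast<uint32_t>(fileData.size()),
                                        fileData.data()};

    auto doc = make_document(
        kvp(FILE_NAME, std::filesystem::path(filePath).filename().string()),
        kvp(FILE_PATH, filePath),
        kvp(FILE_DATA, binaryData));

    return static_cast<bool>(collection.insert_one(doc.view()));
}

// Finds a document by file name and writes its binary payload to disk.
bool download(const std::string& fileName, const std::string& downloadFolder,
              mongocxx::collection& collection)
{
    auto result = collection.find_one(make_document(kvp(FILE_NAME, fileName)));
    if (!result)
        return false;

    auto binary = result->view()[FILE_DATA].get_binary();

    std::ofstream out(downloadFolder + fileName, std::ios::binary);
    if (!out.is_open())
        return false;

    out.write(reinterpret_cast<const char*>(binary.bytes),
              static_cast<std::streamsize>(binary.size));
    return true;
}
```
With these helpers in place, the `main()` function below ties everything together.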
```cpp
int main()
{
try
{
auto mongoURIStr = "";
static const mongocxx::uri mongoURI = mongocxx::uri{ mongoURIStr };
// Create an instance.
mongocxx::instance inst{};
mongocxx::options::client client_options;
auto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };
client_options.server_api_opts(api);
mongocxx::client conn{ mongoURI, client_options};
const std::string dbName = "fileStorage";
const std::string collName = "files";
auto fileStorageDB = conn.database(dbName);
auto filesCollection = fileStorageDB.collection(collName);
// Drop previous data.
filesCollection.drop();
// Upload all files in the upload folder.
const std::string uploadFolder = "/Users/bishtr/repos/fileStorage/upload/";
for (const auto & filePath : std::filesystem::directory_iterator(uploadFolder))
{
if(std::filesystem::is_directory(filePath))
continue;
if(!upload(filePath.path().string(), filesCollection))
{
std::cout << "Upload failed for: " << filePath.path().string() << std::endl;
}
}
// Download files to the download folder.
const std::string downloadFolder = "/Users/bishtr/repos/fileStorage/download/";
// Search with specific filenames and download it.
const std::string fileName1 = "image-15.jpg", fileName2 = "Hi Seed Shaker 120bpm On Accents.wav";
for ( auto fileName : {fileName1, fileName2} )
{
if (!download(fileName, downloadFolder, filesCollection))
{
std::cout << "Download failed for: " << fileName << std::endl;
}
}
// Download all files in the collection.
auto cursor = filesCollection.find({});
for (auto&& doc : cursor)
{
        auto fileName = std::string(doc[FILE_NAME].get_string().value);
if (!download(fileName, downloadFolder, filesCollection))
{
std::cout << "Download failed for: " << fileName << std::endl;
}
}
}
catch(const std::exception& e)
{
std::cout << "Exception encountered: " << e.what() << std::endl;
}
return 0;
}
```
## Application in action
Before executing this application, add some files (like images or audios) under the `uploadFolder` directory.
![Files to be uploaded from local disk to MongoDB.][2]
Execute the application and you’ll observe output like this, signifying that the files are successfully uploaded and downloaded.
![Application output showing successful uploads and downloads.][3]
You can see the collection in [Atlas or MongoDB Compass reflecting the files uploaded via the application.
MongoDB's binary data type, used with the C++ driver, offers a powerful solution for handling file storage in C++ applications. We can't wait to see what you build next! Share your creation with the community and let us know how it turned out!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt24f4df95c9cee69a/6504c0fd9bcd1b134c1d0e4b/image1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7c530c1eb76f566c/6504c12df4133500cb89250f/image3.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt768d2c8c6308391e/6504c153b863d9672da79f4c/image5.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8c199ec2272f2c4f/6504c169a8cf8b4b4a3e1787/image2.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt78bb48b832d91de2/6504c17fec9337ab51ec845e/image4.png | md | {
"tags": [
"Atlas",
"C++"
],
"pageDescription": "Learn how to store binary data to MongoDB using the C++ driver.",
"contentType": "Tutorial"
} | Storing Binary Data with MongoDB and C++ | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/realm-web-sdk | created |
| md | {
"tags": [
"JavaScript",
"Realm"
],
"pageDescription": "Send MongoDB Atlas queries directly from the web browser with the Realm Web SDK.",
"contentType": "Quickstart"
} | Realm Web SDK Tutorial | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/bson-data-types-date | created | # Quick Start: BSON Data Types - Date
Dates and times in programming can be a challenge. Which Time Zone is the event happening in? What date format is being used? Is it `MM/DD/YYYY` or `DD/MM/YYYY`? Settling on a standard is important for data storage and then again when displaying the date and time. The recommended way to store dates in MongoDB is to use the BSON Date data type.
The BSON Specification refers to the `Date` type as the *UTC datetime*, a 64-bit integer that represents the number of milliseconds since the Unix epoch (00:00:00 UTC on 1 January 1970). This provides a lot of flexibility in representing past and future dates: with a 64-bit integer, we are able to represent dates *roughly* 290 million years before and after the epoch. Because it is signed, negative numbers represent dates *prior* to 1 Jan 1970 and positive numbers represent dates *after* it.
## Why & Where to Use
You'll want to use the `Date` data type whenever you need to store date and/or time values in MongoDB. You may have seen a `timestamp` data type as well and thought "Oh, that's what I need." However, the `timestamp` data type should be left for **internal** usage in MongoDB. The `Date` type is the data type we'll want to use for application development.
## How to Use
There are some benefits to using the `Date` data type in that it comes with some handy features and methods. Need to assign a `Date` type to a variable? We have you covered there:
``` javascript
var newDate = new Date();
```
What did that create exactly?
``` none
> newDate;
ISODate("2020-05-11T20:14:14.796Z")
```
Very nice, we have a date and time wrapped as an ISODate. If we need that printed in a `string` format, we can use the `toString()` method.
``` none
> newDate.toString();
Mon May 11 2020 13:14:14 GMT-0700 (Pacific Daylight Time)
```
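Beyond creating and printing dates, the real value comes when you store and query them. Here's a small illustrative example in the shell; the `events` collection is hypothetical.
``` javascript
// Insert a document with a BSON Date value.
db.events.insertOne({ title: "Launch", createdAt: new Date("2020-05-11T20:14:14.796Z") });

// Range queries work naturally on Date fields:
// find events created after 1 May 2020.
db.events.find({ createdAt: { $gt: new Date("2020-05-01") } });
```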
## Wrap Up
>Get started exploring BSON types, like Date, with MongoDB Atlas today!
The BSON `Date` data type is the recommended way to store date and time information in MongoDB. It keeps date and time values in a consistent format that can easily be stored and retrieved by your application. Give the BSON `Date` data type a try for your applications.
"tags": [
"MongoDB"
],
"pageDescription": "Working with dates and times can be a challenge. The Date BSON data type is an unsigned 64-bit integer with a UTC (Universal Time Coordinates) time zone.",
"contentType": "Quickstart"
} | Quick Start: BSON Data Types - Date | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-vector-search-openai-filtering | created | # Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality
Search functionality is a critical component of many modern web applications. Providing users with relevant results based on their search queries and additional filters dramatically improves their experience and satisfaction with your app.
In this article, we'll go over an implementation of search functionality using OpenAI's GPT-4 model and MongoDB's
Atlas Vector search. We've created a request handler function that not only retrieves relevant data based on a user's search query but also applies additional filters provided by the user.
Enriching the existing documents data with embeddings is covered in our main Vector Search Tutorial.
## Search in the Airbnb app context ##
Consider a real-world scenario where we have an Airbnb-like app. Users can perform a free text search for listings and also filter results based on certain criteria like the number of rooms, beds, or the capacity of people the property can accommodate.
To implement this functionality, we use MongoDB's full-text search capabilities for the primary search, and OpenAI's GPT-4 model to create embeddings that contain the semantics of the data and use Vector Search to find relevant results.
The code to the application can be found in the following GitHub repository.
## The request handler
For the back end, we have used Atlas app services with a simple HTTPS “GET” endpoint.
Our function is designed to act as a request handler for incoming search requests.
When a search request arrives, it first extracts the search terms and filters from the query parameters. If no search term is provided, it returns a random sample of 30 listings from the database.
If a search term is present, the function makes a POST request to OpenAI's API, sending the search term and asking for an embedded representation of it using a specific model. This request returns a list of “embeddings,” or vector representations of the search term, which is then used in the next step.
```javascript
// This function is the endpoint's request handler.
// It interacts with MongoDB Atlas and OpenAI API for embedding and search functionality.
exports = async function({ query }, response) {
// Query params, e.g. '?search=test&beds=2' => {search: "test", beds: "2"}
const { search, beds, rooms, people, maxPrice, freeTextFilter } = query;
// MongoDB Atlas configuration.
const mongodb = context.services.get('mongodb-atlas');
const db = mongodb.db('sample_airbnb'); // Replace with your database name.
const listingsAndReviews = db.collection('listingsAndReviews'); // Replace with your collection name.
// If there's no search query, return a sample of 30 random documents from the collection.
if (!search || search === "") {
    return await listingsAndReviews.aggregate([{$sample: {size: 30}}]).toArray();
}
// Fetch the OpenAI key stored in the context values.
const openai_key = context.values.get("openAIKey");
// URL to make the request to the OpenAI API.
const url = 'https://api.openai.com/v1/embeddings';
// Call OpenAI API to get the embeddings.
let resp = await context.http.post({
url: url,
headers: {
'Authorization': [`Bearer ${openai_key}`],
'Content-Type': ['application/json']
},
body: JSON.stringify({
input: search,
model: "text-embedding-ada-002"
})
});
// Parse the JSON response
let responseData = EJSON.parse(resp.body.text());
// Check the response status.
if(resp.statusCode === 200) {
console.log("Successfully received embedding.");
// Fetch a random sample document.
const embedding = responseData.data[0].embedding;
console.log(JSON.stringify(embedding))
let searchQ = {
"index": "default",
"queryVector": embedding,
"path": "doc_embedding",
"k": 100,
"numCandidates": 1000
}
// If there's any filter in the query parameters, add it to the search query.
if (freeTextFilter){
// Turn free text search using GPT-4 into filter
const sampleDocs = await listingsAndReviews.aggregate([
{ $sample: { size: 1 }},
{ $project: {
_id: 0,
bedrooms: 1,
beds: 1,
room_type: 1,
property_type: 1,
price: 1,
accommodates: 1,
bathrooms: 1,
review_scores: 1
}}
]).toArray();
const filter = await context.functions.execute("getSearchAIFilter",sampleDocs[0],freeTextFilter );
searchQ.filter = filter;
}
else if(beds || rooms) {
let filter = { "$and" : []}
if (beds) {
filter.$and.push({"beds" : {"$gte" : parseInt(beds) }})
}
if (rooms)
{
filter.$and.push({"bedrooms" : {"$gte" : parseInt(rooms) }})
}
searchQ.filter = filter;
}
// Perform the search with the defined query and limit the result to 50 documents.
let docs = await listingsAndReviews.aggregate([
{ "$vectorSearch": searchQ },
{ $limit : 50 }
]).toArray();
return docs;
} else {
console.error("Failed to get embeddings");
return [];
}
};
```
To cover the filtering part of the query, we are using embedding and building a filter query to cover the basic filters that a user might request — in the presented example, two rooms and two beds in each.
```js
else if(beds || rooms) {
let filter = { "$and" : []}
if (beds) {
filter.$and.push({"beds" : {"$gte" : parseInt(beds) }})
}
if (rooms)
{
filter.$and.push({"bedrooms" : {"$gte" : parseInt(rooms) }})
}
searchQ.filter = filter;
}
```
## Calling OpenAI API
![AI filter]
Let's consider a more advanced use case that can enhance our filtering experience. In this example, we are allowing a user to perform a free-form filtering that can provide sophisticated sentences, such as, “More than 1 bed and rating above 91.”
We call the OpenAI API to interpret the user's free text filter and translate it into something we can use in a MongoDB query. We send the API a description of what we need, based on the document structure we're working with and the user's free text input. This text is fed into the GPT-4 model, which returns a JSON object with 'range' or 'equals' operators that can be used in a MongoDB search query.
### getSearchAIFilter function
```javascript
// This function is the endpoint's request handler.
// It interacts with OpenAI API for generating filter JSON based on the input.
exports = async function(sampleDoc, search) {
// URL to make the request to the OpenAI API.
const url = 'https://api.openai.com/v1/chat/completions';
// Fetch the OpenAI key stored in the context values.
const openai_key = context.values.get("openAIKey");
// Convert the sample document to string format.
let syntDocs = JSON.stringify(sampleDoc);
console.log(syntDocs);
// Prepare the request string for the OpenAI API.
const reqString = `Convert programmatic command to Atlas $search filter only for range and equals JS:\n\nExample: Based on document structure {"siblings" : '...', "dob" : "..."} give me the filter of all people born 2015 and siblings are 3 \nOutput: {"filter":{ "compound" : { "must" : [ {"range": {"gte": 2015, "lte" : 2015,"path": "dob"} },{"equals" : {"value" : 3 , path :"siblings"}}]}}} \n\n provide the needed filter to accomodate ${search}, pick a path from structure ${syntDocs}. Need just the json object with a range or equal operators. No explanation. No 'Output:' string in response. Valid JSON.`;
console.log(`reqString: ${reqString}`);
// Call OpenAI API to get the response.
let resp = await context.http.post({
url: url,
headers: {
'Authorization': `Bearer ${openai_key}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: "gpt-4",
temperature: 0.1,
messages: [
{
"role": "system",
"content": "Output filter json generator follow only provided rules"
},
{
"role": "user",
"content": reqString
}
]
})
});
// Parse the JSON response
let responseData = JSON.parse(resp.body.text());
// Check the response status.
if(resp.statusCode === 200) {
console.log("Successfully received code.");
console.log(JSON.stringify(responseData));
const code = responseData.choices[0].message.content;
let parsedCommand = EJSON.parse(code);
console.log('parsed' + JSON.stringify(parsedCommand));
// If the filter exists and it's not an empty object, return it.
if (parsedCommand.filter && Object.keys(parsedCommand.filter).length !== 0) {
return parsedCommand.filter;
}
// If there's no valid filter, return an empty object.
return {};
} else {
console.error("Failed to generate filter JSON.");
console.log(JSON.stringify(responseData));
return {};
}
};
```
## MongoDB search and filters
The function then constructs a MongoDB search query using the embedded representation of the search term and any additional filters provided by the user. This query is sent to MongoDB, and the function returns the results as a response —something that looks like the following for a search of “New York high floor” and “More than 1 bed and rating above 91.”
```javascript
{$vectorSearch:{
"index": "default",
"queryVector": embedding,
"path": "doc_embedding",
"filter" : { "$and" : [{"beds": {"$gte" : 1}} , "score": {"$gte" : 91}}]},
"k": 100,
"numCandidates": 1000
}
}
```
## Conclusion
This approach allows us to leverage the power of OpenAI's GPT-4 model to interpret free text input and MongoDB's full-text search capability to return highly relevant search results. The use of natural language processing and AI brings a level of flexibility and intuitiveness to the search function that greatly enhances the user experience.
Remember, however, this is an advanced implementation. Ensure you have a good understanding of how MongoDB and OpenAI operate before attempting to implement a similar solution. Always take care to handle sensitive data appropriately and ensure your AI use aligns with OpenAI's use case policy. | md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js",
"AI"
],
"pageDescription": "This article delves into the integration of search functionality in web apps using OpenAI's GPT-4 model and MongoDB's Atlas Vector search. By harnessing the capabilities of AI and database management, we illustrate how to create a request handler that fetches data based on user queries and applies additional filters, enhancing user experience.",
"contentType": "Tutorial"
} | Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/document-enrichment-and-schema-updates | created | # Document Enrichment and Schema Updates
So your business needs have changed and there’s additional data that needs to be stored within an existing dataset. Fear not! With MongoDB, this is no sweat.
> In this article, I’ll show you how to quickly add and populate additional fields into an existing database collection.
## The Scenario
Let’s say you have a “Netflix” type application and you want to allow users to see which movies they have watched. We’ll use the sample\_mflix database from the sample datasets available in a MongoDB Atlas cluster.
Here is the existing schema for the user collection in the sample\_mflix database:
``` js
{
_id: ObjectId(),
  name: <string>,
  email: <string>,
  password: <string>
}
```
## The Solution
There are a few ways we could go about this. Since MongoDB has a flexible data model, we can just add our new data into existing documents.
In this example, we are going to assume that we know the user ID. We’ll use `updateOne` and the `$addToSet` operator to add our new data.
``` js
const { db } = await connectToDatabase();
const collection = await db.collection("users").updateOne(
  { _id: ObjectID("59b99db9cfa9a34dcd7885bf") },
  {
    $addToSet: {
      moviesWatched: {
        movieId: "<movie id>",
        title: "<movie title>",
        poster: "<poster url>"
}
}
}
);
```
The `$addToSet` operator adds a value to an array avoiding duplicates. If the field referenced is not present in the document, `$addToSet` will create the array field and enter the specified value. If the value is already present in the field, `$addToSet` will do nothing.
Using `$addToSet` will prevent us from duplicating movies when they are watched multiple times.
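For example, after the update above runs, the user document would look something like this (the values shown are placeholders):
``` js
{
  _id: ObjectId("59b99db9cfa9a34dcd7885bf"),
  name: "<name>",
  email: "<email>",
  password: "<hashed password>",
  moviesWatched: [
    {
      movieId: "<movie id>",
      title: "<movie title>",
      poster: "<poster url>"
    }
  ]
}
```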
## The Result
Now, when a user goes to their profile, they will see their watched movies.
But what if the user has not watched any movies? The user will simply not have that field in their document.
I’m using Next.js for this application. I simply need to check to see if a user has watched any movies and display the appropriate information accordingly.
``` js
{ moviesWatched
? "Movies I've Watched"
: "I have not watched any movies yet :("
}
```
## Conclusion
Because of MongoDB’s flexible data model, we can have multiple schemas in one collection. This allows you to easily update data and fields in existing schemas.
If you would like to learn more about schema validation, take a look at the Schema Validation documentation.
I’d love to hear your feedback or questions. Let’s chat in the MongoDB Community. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "So your business needs have changed and there’s additional data that needs to be stored within an existing dataset. Fear not! With MongoDB, this is no sweat. In this article, I’ll show you how to quickly add and populate additional fields into an existing database collection.",
"contentType": "Tutorial"
} | Document Enrichment and Schema Updates | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/serverless-instances-billing-optimize-bill-indexing | created | # How to Optimize Your Serverless Instance Bill with Indexing
Serverless solutions are quickly gaining traction among developers and organizations alike as a means to move fast, minimize overhead, and optimize costs. But shifting from a traditional pre-provisioned and predictable monthly bill to a consumption or usage-based model can sometimes result in confusion around how that bill is generated. In this article, we’ll take you through the basics of our serverless billing model and give you tips on how to best optimize your serverless database for cost efficiency.
## What are serverless instances?
MongoDB Atlas serverless instances, recently announced as generally available, provide an on-demand serverless endpoint for your application with no sizing required. You simply choose a cloud provider and region to get started, and as your app grows, your serverless database will seamlessly scale based on demand and only charge for the resources you use.
Unlike our traditional clusters, serverless instances offer a fundamentally different pricing model that is primarily metered on reads, writes, and storage with automatic tiered discounts on reads as your usage scales. So, you can start small without any upfront commitments and never worry about paying for unused resources if your workload is idle.
### Serverless Database Pricing
Pay only for the operations you run.
| Item | Description | Pricing |
| ---- | ----------- | ------- |
| Read Processing Unit (RPU) | Number of read operations and documents scanned per operation (documents are read in 4KB chunks and indexes in 256-byte chunks) | $0.10/million for the first 50 million per day; next 500 million: $0.05/million; reads thereafter: $0.01/million |
| Write Processing Unit (WPU) | Number of write operations to the database (documents and indexes are written in 1KB chunks) | $1.00/million |
| Storage | Data and indexes stored on the database | $0.25/GB-month |
| Standard Backup | Download and restore of backup snapshots (2 free daily snapshots included per serverless instance) | $2.50/hour to download or restore the data |
| Serverless Continuous Backup | 35-day backup retention for daily snapshots | $0.20/GB-month |
| Data Transfer | Inbound/outbound data to/from the database | $0.015 - $0.10/GB, depending on traffic source and destination |
At first glance, read processing units (RPU) and write processing units (WPU) might be new units to you, so let’s quickly dig into what they mean. We use RPUs and WPUs to quantify the amount of work the database has to do to service a query, or to perform a write. To put it simply, a read processing unit (RPU) refers to the read operations to the database and is calculated based on the number of operations run and documents scanned per operation. Similarly, a write processing unit (WPU) is a write operation to the database and is calculated based on the number of bytes written to each document or index. For further explanation of cost units, please refer to our documentation.
Now that you have a basic understanding of the pricing model, let’s go through an example to provide more context and tips on how to ensure your operations are best optimized to minimize costs.
For this example, we’ll be using the sample dataset in Atlas. To use sample data, simply go to your serverless instance deployment and select “Load Sample Dataset” from the dropdown as seen below.
This will load a few collections, such as weather data and Airbnb listing data. Note that loading the sample dataset will consume approximately one million WPUs (less than $1 in most supported regions), and you will be billed accordingly.
Now, let’s take a look at what happens when we interact with our data and do some search queries.
## Scenario 1: Query on unindexed fields
For this exercise, I chose the sample\_weatherdata collection. While looking at the data in the Atlas Collections view, it’s clear that the weather data collection has information from various places and that most locations have a call letter code as a convenient way to identify where this weather reading data was taken.
For this example, let’s simulate what would happen if a user comes to your weather app and does a lookup by a geographic location. In this weather data collection, geographic locations can be identified by callLetters, which are specific codes for various weather stations across the world. I arbitrarily picked station code “ESVJ,” which is a weather buoy in the Atlantic Ocean.
Here is what we see when we run this query in Atlas Data Explorer:
We can see this query returns three records. Now, let’s take a look at how many RPUs this query would cost me. We should remember that RPUs are calculated based on the number of read operations and the number of documents scanned per operation.
To execute the previous query, a full collection scan is required, which results in approximately 1,000 RPUs.
I took this query and ran this nearly 3,000 times through a shell script. This will simulate around 3,000 users coming to an app to check the weather in a day. Here is the code behind the script:
```bash
# weatherRPUTest.sh
for ((i=0; i<=3000; i++)); do
  echo "testing $i"
  mongosh "mongodb+srv://vishalserverless1.qdxrf.mongodb.net/sample_weatherdata" --apiVersion 1 --username vishal --password ******** < mongoTest.js
done
```

```javascript
// mongoTest.js
db.data.find({ callLetters: "ESVJ" })
```
As expected, 3,000 iterations will be 1,000 * 3,000 = 3,000,000 RPUs = 3MM RPUs = $0.30.
Based on this, the cost for this application works out to roughly $0.0001 per user (calculated as: 3,000,000 RPUs / 3,000 users = 1,000 RPUs per user, which at $0.10/million is about $0.0001).
Even a fraction of a cent per lookup adds up quickly. If this weather app were to scale to a level of activity similar to AccuWeather, which sees about 9.5B weather requests in a day, you'd be paying close to $1 million in database costs per day. By leaving your query this way, it's likely that you'd be faced with an unexpectedly high bill as your usage scales, falling into a common trap that many new serverless users face.
To avoid this problem, we recommend that you follow MongoDB best practices and index your data to optimize your queries for both performance and cost. Indexes are special data structures that store a small portion of the collection's data set in an easy-to-traverse form.
Without indexes, MongoDB must perform a collection scan—i.e., scan every document in a collection—to select those documents that match the query statement (something you just saw in the example above). By adding an index to appropriate queries, you can limit the number of documents it must inspect, significantly reducing the operations you are charged for.
Let’s look at how indexing can help you reduce your RPUs significantly.
## Scenario 2: Querying with indexed fields
First, let’s create a simple index on the field ‘callLetters’:
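In mongosh, that is a one-liner creating a single-field ascending index:

```javascript
db.data.createIndex({ callLetters: 1 })
```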
This operation will typically finish within 2-3 seconds. For reference, we can see the size of the index created on the index tab:
Due to the data structure of the index, the exact number of index reads is hard to compute. However, we can run the same script again for 3,000 iterations and compare the number of RPUs.
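As a quick sanity check before re-running the script, you can confirm that the planner actually uses the new index by inspecting the query plan; the winning plan should now report an `IXSCAN` stage rather than the collection scan (`COLLSCAN`) we saw earlier:

```javascript
db.data.find({ callLetters: "ESVJ" }).explain("executionStats")
```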
The 3,000 queries on the indexed field now result in approximately 6,500 RPUs in contrast to the 3 million RPUs from the un-indexed query, which is a **99.8% reduction in RPUs**.
We can see that by simply adding the above index, we were able to reduce the cost to roughly 2.2 RPUs per user (calculated as: 6,500 RPUs / 3,000 users), or about $0.00000022 per user, which is a huge cost saving compared to the previous cost of roughly $0.0001 per user.
Therefore, indexing not only helps with improving the performance and scale of your queries, but it can also reduce your consumed RPUs significantly, which reduces your costs. Note that there can be rare scenarios where this is not true (where the size of the index is much larger than the number of documents). However, in most cases, you should see a significant reduction in cost and an improvement in performance.
## Take action to optimize your costs today
As you can see, adopting a usage-based pricing model can sometimes require you to be extra diligent in ensuring your data structure and queries are optimized. But when done correctly, the time spent to do those optimizations often pays off in more ways than one.
If you're unsure of where to start, we have built-in monitoring tools available in the Atlas UI that can help you. The performance advisor automatically monitors your database for slow-running queries and will suggest new indexes to help improve query performance. Or, if you're looking to investigate slow-running queries further, you can use the query profiler to view a breakdown of all slow-running queries that occurred in the last 24 hours. If you prefer to work outside the Atlas UI, you can also analyze your query performance in the MongoDB Shell or in MongoDB Compass.
If you need further assistance, you can always contact our support team via chat or the MongoDB support portal. | md | {
"tags": [
"Atlas",
"Serverless"
],
"pageDescription": "Shifting from a pre-provisioned to a serverless database can be challenging. Learn how to optimize your database and save money with these best practices.",
"contentType": "Article"
} | How to Optimize Your Serverless Instance Bill with Indexing | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/unique-indexes-quirks-unique-documents-array-documents | created | # Unique Indexes Quirks and Unique Documents in an Array of Documents
We are developing an application to summarize a user's financial situation. The main page of this application shows us the user's identification and the balances on all banking accounts synced with our application.
As we've seen in blog posts and recommendations of how to get the most out of MongoDB, "Data that is accessed together should be stored together." We thought of the following document/structure to store the data used on the main page of the application:
```javascript
const user = {
_id: 1,
name: { first: "john", last: "smith" },
accounts: [
{ balance: 500, bank: "abc", number: "123" },
{ balance: 2500, bank: "universal bank", number: "9029481" },
],
};
```
Based on the functionality of our application, we determined the following rules:
- A user can register in the application and not sync a bank account.
- An account is identified by its `bank` and `number` fields.
- The same account shouldn't be registered for two different users.
- The same account shouldn't be registered multiple times for the same user.
To enforce what was presented above, we decided to create an index with the following characteristics:
- Given that the fields `bank` and `number` must not repeat, this index must be set as Unique.
- Since we are indexing more than one field, it'll be of type Compound.
- Since we are indexing documents inside of an array, it'll also be of type Multikey.
As a result of that, we have a `Compound Multikey Unique Index` with the following specification and options:
```javascript
const specification = { "accounts.bank": 1, "accounts.number": 1 };
const options = { name: "Unique Account", unique: true };
```
To validate that our index works as we intended, we'll use the following data on our tests:
```javascript
const user1 = { _id: 1, name: { first: "john", last: "smith" } };
const user2 = { _id: 2, name: { first: "john", last: "appleseed" } };
const account1 = { balance: 500, bank: "abc", number: "123" };
```
First, let's add the users to the collection:
```javascript
db.users.createIndex(specification, options); // Unique Account
db.users.insertOne(user1); // { acknowledged: true, insertedId: 1 }
db.users.insertOne(user2); // MongoServerError: E11000 duplicate key error collection: test.users index: Unique Account dup key: { accounts.bank: null, accounts.number: null }
```
Pretty good. We haven't even started working with the accounts, and we already have an error. Let's see what is going on.
Analyzing the error message, it says we have a duplicate key for the index `Unique Account` with the value of `null` for the fields `accounts.bank` and `accounts.number`. This is due to how indexing works in MongoDB. When we insert a document in an indexed collection, and this document doesn't have one or more of the fields specified in the index, the value of the missing fields will be considered `null`, and an entry will be added to the index.
Using this logic to analyze our previous test, when we inserted `user1`, it didn't have the fields `accounts.bank` and `accounts.number` and generated an entry in the index `Unique Account` with the value of `null` for both. When we tried to insert the `user2` in the collection, we had the same behavior, and another entry in the index `Unique Account` would have been created if we hadn't specified this index as `unique`. More info about missing fields and unique indexes can be found in our docs.
The solution for this issue is to only index documents that have the fields `accounts.bank` and `accounts.number`. To accomplish that, we can specify a partial filter expression in our index options. Now we have a `Compound Multikey Unique Partial Index` (fancy name, huh? Who are we trying to impress here?) with the following specification and options:
```javascript
const specification = { "accounts.bank": 1, "accounts.number": 1 };
const optionsV2 = {
name: "Unique Account V2",
partialFilterExpression: {
"accounts.bank": { $exists: true },
"accounts.number": { $exists: true },
},
unique: true,
};
```
Back to our tests:
```javascript
// Cleaning our environment
db.users.drop({}); // Delete documents and indexes definitions
/* Tests */
db.users.createIndex(specification, optionsV2); // Unique Account V2
db.users.insertOne(user1); // { acknowledged: true, insertedId: 1 }
db.users.insertOne(user2); // { acknowledged: true, insertedId: 2 }
```
Our new index implementation worked, and now we can insert those two users without accounts. Let's test account duplication, starting with the same account for two different users:
```javascript
// Cleaning the collection
db.users.deleteMany({}); // Delete only documents, keep indexes definitions
db.users.insertMany([user1, user2]);
/* Test */
db.users.updateOne({ _id: user1._id }, { $push: { accounts: account1 } }); // { ... matchedCount: 1, modifiedCount: 1 ...}
db.users.updateOne({ _id: user2._id }, { $push: { accounts: account1 } }); // MongoServerError: E11000 duplicate key error collection: test.users index: Unique Account V2 dup key: { accounts.bank: "abc", accounts.number: "123" }
```
We couldn't insert the same account into different users as we expected. Now, we'll try the same account for the same user.
```javascript
// Cleaning the collection
db.users.deleteMany({}); // Delete only documents, keep indexes definitions
db.users.insertMany([user1, user2]);
/* Test */
db.users.updateOne({ _id: user1._id }, { $push: { accounts: account1 } }); // { ... matchedCount: 1, modifiedCount: 1 ...}
db.users.updateOne({ _id: user1._id }, { $push: { accounts: account1 } }); // { ... matchedCount: 1, modifiedCount: 1 ...}
db.users.findOne({ _id: user1._id }); /*{
_id: 1,
name: { first: 'john', last: 'smith' },
accounts: [
{ balance: 500, bank: 'abc', number: '123' },
{ balance: 500, bank: 'abc', number: '123' }
]
}*/
```
When we don't expect things to work, they do. Again, another error was caused by not knowing or considering how indexes work in MongoDB. Reading about unique constraints in the MongoDB documentation, we learn that MongoDB indexes don't add duplicate entries for strictly equal values that point to the same document. Considering this, when we inserted `account1` for the second time on our user, an index entry wasn't created, so we don't have duplicate values in the index.
Some of you more knowledgeable on MongoDB may think that using $addToSet instead of $push would resolve our problem. Not this time, young padawan. The `$addToSet` function would consider all the fields in the account's document, but as we specified at the beginning of our journey, an account must be unique and identifiable by the fields `bank` and `number`.
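To make that concrete, here is a quick illustrative sketch (not part of the test flow above): because `$addToSet` compares entire elements, an account that differs only in `balance` is still treated as new.

```javascript
// Illustrative only: $addToSet deduplicates on the whole element, not on bank + number.
db.users.updateOne(
  { _id: user1._id },
  { $addToSet: { accounts: { balance: 500, bank: "abc", number: "123" } } }
); // Not added again if an identical element already exists.
db.users.updateOne(
  { _id: user1._id },
  { $addToSet: { accounts: { balance: 9000, bank: "abc", number: "123" } } }
); // Added: same bank and number but a different balance, so we still end up with a duplicate account.
```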
Okay, what can we do now? Our index has a ton of options and compound names, and our application doesn't behave as we hoped.
A simple way out of this situation is to change how our update function is structured, changing its filter parameter to match only the user's documents where the account we want to insert isn't in the `accounts` array.
```javascript
// Cleaning the collection
db.users.deleteMany({}); // Delete only documents, keep indexes definitions
db.users.insertMany([user1, user2]);
/* Test */
const bankFilter = {
$not: { $elemMatch: { bank: account1.bank, number: account1.number } }
};
db.users.updateOne(
{ _id: user1._id, accounts: bankFilter },
{ $push: { accounts: account1 } }
); // { ... matchedCount: 1, modifiedCount: 1 ...}
db.users.updateOne(
{ _id: user1._id, accounts: bankFilter },
{ $push: { accounts: account1 } }
); // { ... matchedCount: 0, modifiedCount: 0 ...}
db.users.findOne({ _id: user1._id }); /*{
_id: 1,
name: { first: 'john', last: 'smith' },
accounts: [ { balance: 500, bank: 'abc', number: '123' } ]
}*/
```
Problem solved. We tried to insert the same account for the same user, and it didn't insert, but it also didn't error out.
This behavior doesn't meet our expectations because it doesn't make it clear to the user that this operation is prohibited. Another point of concern is that this solution considers that every time a new account is inserted in the database, it'll use the correct update filter parameters.
We've worked in some companies and know that as people come and go, some knowledge about the implementation is lost, interns will try to reinvent the wheel, and some nasty shortcuts will be taken. We want a solution that will error out in any case and stop even the most unscrupulous developer/administrator who dares to change data directly on the production database 😱.
MongoDB schema validation for the win.
A quick note before we go down this rabbit hole: MongoDB best practices recommend implementing schema validation at the application level and using MongoDB schema validation as a backstop.
In MongoDB schema validation, it's possible to use the operator `$expr` to write an aggregation expression to validate the data of a document when it has been inserted or updated. With that, we can write an expression to verify if the items inside an array are unique.
After some consideration, we get the following expression:
```javascript
const accountsSet = {
$setIntersection: {
$map: {
input: "$accounts",
in: { bank: "$$this.bank", number: "$$this.number" }
},
},
};
const uniqueAccounts = {
$eq: [{ $size: "$accounts" }, { $size: accountsSet }],
};
const accountsValidator = {
$expr: {
$cond: {
if: { $isArray: "$accounts" },
then: uniqueAccounts,
else: true,
},
},
};
```
It can look a little scary at first, but we can go through it.
The first operation we have inside of `$expr` is a `$cond`. When the logic specified in the `if` field results in `true`, the logic within the field `then` will be executed. When the result is `false`, the logic within the `else` field will be executed.
Using this knowledge to interpret our code: when the accounts array exists in the document, `{ $isArray: "$accounts" }`, we execute the logic within `uniqueAccounts`. When the array doesn't exist, we return `true`, signaling that the document passed the schema validation.
Inside the `uniqueAccounts` variable, we verify whether the $size of two things is $eq. The first is the size of the array field `$accounts`; the second is the size of `accountsSet`, which is generated by the $setIntersection function. If the two arrays have the same size, the logic returns `true` and the document passes validation. Otherwise, the logic returns `false`, the document fails validation, and the operation errors out.
The $setIntersection function performs a set operation on the array passed to it, removing duplicate entries. That array is generated by a $map function, which maps each account in `$accounts` to a document containing only the fields `bank` and `number`. So, if two accounts share the same `bank` and `number`, the mapped set collapses to fewer elements than the original array, the sizes differ, and validation fails.
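If you want to see the expression in action outside of schema validation, one quick way (a sketch reusing the variables defined above in mongosh) is to evaluate it per document in an aggregation:

```javascript
// Returns true for documents whose accounts are unique by bank + number (or that have no accounts array).
db.users.aggregate([
  { $project: { hasUniqueAccounts: accountsValidator.$expr } }
])
```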
Let's see if this is witchcraft or science:
```javascript
// Cleaning the collection
db.users.drop({}); // Delete documents and indexes definitions
db.createCollection("users", { validator: accountsValidator });
db.users.createIndex(specification, optionsV2);
db.users.insertMany([user1, user2]);
/* Test */
db.users.updateOne({ _id: user1._id }, { $push: { accounts: account1 } }); // { ... matchedCount: 1, modifiedCount: 1 ...}
db.users.updateOne(
{ _id: user1._id },
{ $push: { accounts: account1 } }
); /* MongoServerError: Document failed validation
Additional information: {
failingDocumentId: 1,
details: {
operatorName: '$expr',
specifiedAs: {
'$expr': {
'$cond': {
if: { '$and': '$accounts' },
then: { '$eq': [ [Object], [Object] ] },
else: true
}
}
},
reason: 'expression did not match',
expressionResult: false
}
}*/
```
Mission accomplished! Now, our data is protected against those who dare to make changes directly in the database.
To get to our desired behavior, we reviewed MongoDB indexes with the `unique` option, how to add safety guards to our collection with a combination of parameters in the filter part of an update function, and how to use MongoDB schema validation to add an extra layer of security to our data. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn about how to handle unique documents in an array and some of the surrounding MongoDB unique index quirks.",
"contentType": "Tutorial"
} | Unique Indexes Quirks and Unique Documents in an Array of Documents | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/zero-hero-mrq | created | # From Zero to Hero with MrQ
> The following content is based on a recent episode of the MongoDB Podcast. Want to hear the full conversation? Head over to the episode page!
When you think of online gambling, what do you imagine? Big wins? Even bigger losses? Whatever view you might have in your mind, MrQ is here to revolutionize the industry.
MrQ is redefining the online casino gaming industry. The data-driven technology company saw its inception in September 2015 and officially launched in 2018. CTO Iulian Dafinoiu speaks of humble beginnings for MrQ — it was bootstrapped, with no external investment. And to this day, the company maintains a focus on building a culture of value- and vision-led teams.
MrQ wanted to become, in a sense, the Netflix of casinos. Perfecting a personalized user experience is at the heart of everything they do. The idea is to give players as much data as possible to make the right decisions. MrQ’s games don’t promise life-changing wins. As Dafinoiu puts it, you win some and you lose some. In fact, you might win, but you’ll definitely lose.
Gambling is heavily commoditized, and players expect to play the same games each time — ones that they have a personal connection with. MrQ aims to keep it all fun for their players with an extensive gaming catalog of player favorites, shifting the perception of what gambling should always be: enjoyable. But they’re realists and know that this can happen only if players are in control and everything is transparent.
At the same time, they had deeper goals around the data they were using.
>”The mindset was always to not be an online casino, but actually be a kind of data-driven technology company that operates in the gambling space.”
## The challenge
In the beginning, MrQ struggled with the availability of player data and real-time events. There was a poor back office system and technical implementations. The option to scale quickly and seamlessly was a must, especially as the UK-based company strives to expand into other countries and markets, within a market that’s heavily regulated, which can be a hindrance to compliance.
Behind the curtains, Dafinoiu started with Postgres but quickly realized this wasn’t going to give MrQ the freedom to scale how they wanted to.
>”I couldn’t dedicate a lot of time to putting servers together, managing the way they kind of scale, creating replica sets or even shards, which was almost impossible for MariaDB or Postgres, at the time. I couldn’t invest a lot of time into that."
## The solution
After realizing the shortcomings of Postgres, MrQ switched to MongoDB due to its ease and scalability. In the beginning, it was just Dafinoiu managing everything. He needed something that could almost do it for him. Thus, MongoDB became their primary database technology. It’s their primary source of truth and can scale horizontally without blinking twice. Dafinoiu saw that the schema flexibility is a good fit and the initial performance was strong. Initially, they used it on-premise but then migrated to Atlas, our multi-cloud database service.
Aside from MongoDB, MrQ uses Java and Kotlin for their backend system, React and JSON for the front end, and Kafka for real-time events.
With a tech stack that allows for more effortless growth, MrQ is looking toward a bright future.
## Next steps for MrQ
Dafinoiu came to MrQ with 13 years of experience as a software engineer. More than seven years into his journey with the company, he’s looking to take their more than one million players, 700 games, and 40 game providers to the next level. They’re actively working on moving into other territories and have a goal of going global this year, with MrQ+.
>”There’s a lot of compliance and regulations around it because you need to acquire new licenses for almost every new market that you want to go into."
Internally, the historically small development studio will continue to prioritize slow but sustainable growth, with workplace culture always at the forefront. For their customers, MrQ plans to continue using the magic of machine learning to provide a stellar experience. They want to innovate by creating their own games and even move into the Bingo space, making it a social experience for all ages with a chat feature and different versions, iterations, and interpretations of the long-time classic. Payments will also be faster and more stable. Overall, players can expect MrQ to continue reinforcing its place as one of the top destinations for online casino gaming.
Want to hear more from Iulian Dafinoiu about his journey with MrQ and how the platform interacts with MongoDB? Head over to our podcast and listen to the full episode. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "MrQ is redefining the online casino gaming industry. Learn more about where the company comes from and where it's going, from CTO Iulian Dafinoiu.",
"contentType": "Article"
} | From Zero to Hero with MrQ | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/serverless-development-aws-lambda-mongodb-atlas-using-java | created | # Serverless Development with AWS Lambda and MongoDB Atlas Using Java
So you need to build an application that will scale with demand and a database to scale with it? It might make sense to explore serverless functions, like those offered by AWS Lambda, and a cloud database like MongoDB Atlas.
Serverless functions are great because you can implement very specific logic in the form of a function and the infrastructure will scale automatically to meet the demand of your users. This will spare you from having to spend potentially large amounts of money on always on, but not always needed, infrastructure. Pair this with an elastically scalable database like MongoDB Atlas, and you've got an amazing thing in the works.
In this tutorial, we're going to explore how to create a serverless function with AWS Lambda and MongoDB, but we're going to focus on using Java, one of the available AWS Lambda runtimes.
## The requirements
To be successful with this tutorial, there are a few requirements that must be met prior to continuing.
- Must have an AWS Lambda compatible version of Java installed and configured on your local computer.
- Must have a MongoDB Atlas instance deployed and configured.
- Must have an Amazon Web Services (AWS) account.
- Must have Gradle or Maven, but Gradle will be the focus for dependency management.
For the sake of this tutorial, the instance size or tier of MongoDB Atlas is not too important. In fact, an M0 instance, which is free, will work fine. You could also use a serverless instance which pairs nicely with the serverless architecture of AWS Lambda. Since the Atlas configuration is out of the scope of this tutorial, you'll need to have your user rules and network access rules in place already. If you need help configuring MongoDB Atlas, consider checking out the getting started guide.
Going into this tutorial, you might start with the following boilerplate AWS Lambda code for Java:
```java
package example;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

public class Handler implements RequestHandler<Map<String, String>, Void> {

    @Override
    public Void handleRequest(Map<String, String> event, Context context) {
        // Code will be in here...
        return null;
    }
}
```
You can use a popular development IDE like IntelliJ, but it doesn't matter, as long as you have access to Gradle or Maven for building your project.
Speaking of Gradle, the following can be used as boilerplate for our tasks and dependencies:
```groovy
plugins {
id 'java'
}
group = 'org.example'
version = '1.0-SNAPSHOT'
repositories {
mavenCentral()
}
dependencies {
testImplementation platform('org.junit:junit-bom:5.9.1')
testImplementation 'org.junit.jupiter:junit-jupiter'
implementation 'com.amazonaws:aws-lambda-java-core:1.2.2'
implementation 'com.amazonaws:aws-lambda-java-events:3.11.1'
implementation 'org.slf4j:slf4j-log4j12:1.7.36'
runtimeOnly 'com.amazonaws:aws-lambda-java-log4j2:1.5.1'
}
test {
useJUnitPlatform()
}
task buildZip(type: Zip) {
into('lib') {
from(jar)
from(configurations.runtimeClasspath)
}
}
build.dependsOn buildZip
```
Take note that we do have our AWS Lambda dependencies included as well as a task for bundling everything into a ZIP archive when we build.
With the baseline AWS Lambda function in place, we can focus on the MongoDB development side of things.
## Installing, configuring, and connecting to MongoDB Atlas with the MongoDB driver for Java
To get started, we're going to need the MongoDB driver for Java available to us. This dependency can be added to our project's **build.gradle** file:
```groovy
dependencies {
// Previous boilerplate dependencies ...
implementation 'org.mongodb:bson:4.10.2'
implementation 'org.mongodb:mongodb-driver-sync:4.10.2'
}
```
The above two lines indicate that we want to use the driver for interacting with MongoDB and we also want to be able to interact with BSON.
With the driver and related components available to us, let's revisit the Java code we saw earlier. In this particular example, the Java code will be found in a **src/main/java/example/Handler.java** file.
```java
package example;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import org.bson.BsonDocument;
import org.bson.Document;
import org.bson.conversions.Bson;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
public class Handler implements RequestHandler<Map<String, String>, Void> {
private final MongoClient mongoClient;
public Handler() {
mongoClient = MongoClients.create(System.getenv("MONGODB_ATLAS_URI"));
}
@Override
public Void handleRequest(Map<String, String> event, Context context) {
MongoDatabase database = mongoClient.getDatabase("sample_mflix");
MongoCollection collection = database.getCollection("movies");
// More logic here ...
return null;
}
}
```
In the above code, we've imported a few classes, but we've also made some changes pertaining to how we plan to interact with MongoDB.
The first thing you'll notice is our use of the `Handler` constructor method:
```java
public Handler() {
mongoClient = MongoClients.create(System.getenv("MONGODB_ATLAS_URI"));
}
```
We're establishing our client, not our connection, outside of the handler function itself. We're doing this so our connections can be reused and not established on every invocation, which would potentially overload us with too many concurrent connections. We're also referencing an environment variable for our MongoDB Atlas URI string. This will be set later within the AWS Lambda portal.
It's bad practice to hard-code your URI string into your application. Use a configuration file or environment variable whenever possible.
Next up, we have the function logic where we grab a reference to our database and collection:
```java
@Override
public Void handleRequest(Map<String, String> event, Context context) {
MongoDatabase database = mongoClient.getDatabase("sample_mflix");
MongoCollection collection = database.getCollection("movies");
// More logic here ...
return null;
}
```
Because this example was meant to only be enough to get you going, we're using the sample datasets that are available for MongoDB Atlas users. It doesn't really matter what you use for this example as long as you've got a collection with some data.
We're on our way to being successful with MongoDB and AWS Lambda!
## Querying data from MongoDB when the serverless function is invoked
With the client configuration in place, we can focus on interacting with MongoDB. Before we do that, a few things need to change to the design of our function:
```java
public class Handler implements RequestHandler<Map<String, String>, List<Document>> {
private final MongoClient mongoClient;
public Handler() {
mongoClient = MongoClients.create(System.getenv("MONGODB_ATLAS_URI"));
}
@Override
public List<Document> handleRequest(Map<String, String> event, Context context) {
MongoDatabase database = mongoClient.getDatabase("sample_mflix");
MongoCollection collection = database.getCollection("movies");
// More logic here ...
return null;
}
}
```
Notice that the implemented `RequestHandler` now uses `List<Document>` instead of `Void`. The return type of the `handleRequest` function has also been changed from `Void` to `List<Document>` to support returning an array of documents back to the requesting client.
While you could do a POJO approach in your function, we're going to use `Document` instead.
If we want to query MongoDB and return the results, we could do something like this:
```java
@Override
public List<Document> handleRequest(Map<String, String> event, Context context) {
MongoDatabase database = mongoClient.getDatabase("sample_mflix");
MongoCollection collection = database.getCollection("movies");
Bson filter = new BsonDocument();
if(event.containsKey("title") && !event.get("title").isEmpty()) {
filter = Filters.eq("title", event.get("title"));
}
List<Document> results = new ArrayList<>();
collection.find(filter).limit(5).into(results);
return results;
}
```
In the above example, we are checking to see if the user input data `event` contains a property "title" and if it does, use it as part of our filter. Otherwise, we're just going to return everything in the specified collection.
Speaking of returning everything, the sample data set is rather large, so we're actually going to limit the results to five documents or less. Also, instead of using a cursor, we're going to dump all the results from the `find` operation into a `List` which we're going to return back to the requesting client.
We didn't do much in terms of data validation, and our query was rather simple, but it is a starting point for bigger and better things.
## Deploy the Java application to AWS Lambda
The project for this example is complete, so it is time to get it bundled and ready to go for deployment within the AWS cloud.
Since we're using Gradle for this project and we have a task defined for bundling, execute the build script doing something like the following:
```bash
./gradlew build
```
If everything built properly, you should have a **build/distributions/\*.zip** file. The name of that file will depend on all the naming you've used throughout your project.
With that file in hand, go to the AWS dashboard for Lambda and create a new function.
There are three things you're going to want to do for a successful deployment:
1. Add the environment variable for the MongoDB Atlas URI.
2. Upload the ZIP archive.
3. Rename the "Handler" information to reflect your actual project.
Within the AWS Lambda dashboard for your new function, click the "Configuration" tab followed by the "Environment Variables" navigation item. Add your environment variable information and make sure the key name matches the name you used in your code.
We used `MONGODB_ATLAS_URI` in the code, and the actual value would look something like this:
```
mongodb+srv://<username>:<password>@examples.170lwj0.mongodb.net/?retryWrites=true&w=majority
```
Just remember to use your actual username, password, and instance URL.
Next, you can upload your ZIP archive from the "Code" tab of the dashboard.
When the upload completes, on the "Code" tab, look for the "Runtime Settings" section and choose to edit it. In our example, the package name was **example**, the Java file was named **Handler**, and the function with the logic was named **handleRequest**. With this in mind, our "Handler" should be **example.Handler::handleRequest**. If you're using different naming, make sure the handler value reflects it; otherwise, Lambda won't know what to do when invoked.
Take the function for a spin!
Using the "Test" tab, try invoking the function with no user input and then invoke it using the following:
```json
{
"title": "Batman"
}
```
You should see different results reflecting what was added in the code.
## Conclusion
You just saw how to create a serverless function with AWS Lambda that interacts with MongoDB. In this particular example, Java was the star of the show, but similar logic and steps can be applied for any of the other supported AWS Lambda runtimes or MongoDB drivers.
If you have questions or want to see how others are using MongoDB Atlas with AWS Lambda, check out the MongoDB Community Forums.
| md | {
"tags": [
"Atlas",
"Java",
"Serverless"
],
"pageDescription": "Learn how to build and deploy a serverless function to AWS Lambda that communicates with MongoDB using the Java programming language.",
"contentType": "Tutorial"
} | Serverless Development with AWS Lambda and MongoDB Atlas Using Java | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/full-text-search-mobile-app-mongodb-realm | created | # How to Do Full-Text Search in a Mobile App with MongoDB Realm
Full-text search is an important feature in modern mobile applications, as it allows you to quickly and efficiently access information within large text datasets. This is fundamental for certain app categories that deal with large amounts of text documents, like news and magazines apps and chat and email applications.
We are happy to introduce full-text search (FTS) support for Realm — a feature long requested by our developers. While traditional search with string matching returns exact occurrences, FTS returns results that contain the words from the query, but respecting word boundaries. For example, looking for the word “cat” with FTS will return only text containing exactly that word, while a traditional search will return also text containing words like “catalog” and “advocating”. Additionally, it’s also possible to specify words that should *not* be present in the result texts. Another important addition with the Realm-provided FTS is speed: As the index is created beforehand, searches on it are very fast compared to pure string matching.
In this tutorial, we are giving examples using FTS with the .NET SDK, but FTS is also available in the Realm SDK for Kotlin, Dart, and JS, and will soon be available for Swift and Obj-C.
Later, we will show a practical example, but for now, let us take a look at what you need in order to use the new FTS search with the .NET Realm SDK:
1. Add the `[Indexed(IndexType.FullText)]` attribute on the string property to create an index for searching.
2. Running queries
    1. To run Language-Integrated Query (LINQ) queries, use `QueryMethods.FullTextSearch`. For example: `realm.All<Book>().Where(b => QueryMethods.FullTextSearch(b.Summary, "fantasy novel"))`
    2. To run `Filter` queries, use the `TEXT` operator. For example: `realm.All<Book>().Filter("Summary TEXT $0", "fantasy novel");`
Additionally, words in the search phrase can be prepended with a "-" to indicate that certain words should not be present. For example: `realm.All<Book>().Where(b => QueryMethods.FullTextSearch(b.Summary, "fantasy novel -rings"))`
## Search example
In this example, we will be creating a realm with book summaries indexed and searchable by the full-text search. First, we’ll create the object schema for the books and index on the summary property:
```csharp
public partial class Book : IRealmObject
{
[PrimaryKey]
public string Name { get; set; } = null!;
[Indexed(IndexType.FullText)]
public string Summary { get; set; } = null!;
}
```
Next, we’ll define a few books with summaries and add those to the realm:
```csharp
// ..
var animalFarm = new Book
{
Name = "Animal Farm",
Summary = "Animal Farm is a novel that tells the story of a group of farm animals who rebel against their human farmer, hoping to create a society where the animals can be equal, free, and happy. Ultimately, the rebellion is betrayed, and the farm ends up in a state as bad as it was before."
};
var lordOfTheRings = new Book
{
Name = "Lord of the Rings",
Summary = "The Lord of the Rings is an epic high-fantasy novel by English author and scholar J. R. R. Tolkien. Set in Middle-earth, the story began as a sequel to Tolkien's 1937 children's book The Hobbit, but eventually developed into a much larger work."
};
var lordOfTheFlies = new Book
{
Name = "Lord of the Flies",
Summary = "Lord of the Flies is a novel that revolves around a group of British boys who are stranded on an uninhabited island and their disastrous attempts to govern themselves."
};
var realm = Realm.GetInstance();
realm.Write(() =>
{
realm.Add(animalFarm);
realm.Add(lordOfTheFlies);
realm.Add(lordOfTheRings);
});
```
And finally, we are ready for searching the summaries as follows:
```csharp
var books = realm.All<Book>();
// Returns all books with summaries containing both "novel" and "lord"
var result = books.Where(b => QueryMethods.FullTextSearch(b.Summary, "novel lord"));
// Equivalent query using `Filter`
result = books.Filter("Summary TEXT $0", "novel lord");
// Returns all books with summaries containing "novel", but not "rings"
result = books.Where(b => QueryMethods.FullTextSearch(b.Summary, "novel -rings"));
```
## Additional information
A few important things to keep in mind when using full-text search:
- Only string properties are valid for an FTS index, also on embedded objects. A collection of strings cannot be indexed.
- Indexes spanning multiple properties are not supported. For example, if you have a `Book` object, with `Name` and `Summary` properties, you cannot declare a single index that covers both, but you can have one index per property.
- Doing an FTS lookup for a phrase across multiple properties must be done using a combination of two expressions (i.e., trying to find `red ferrari` where `red` appears in property A and `ferrari` in property B must be done with `(A TEXT 'red') AND (B TEXT 'ferrari')`); see the sketch after this list.
- FTS only supports languages that use ASCII and Latin-1 character sets (most western languages). Only sequences of (alphanumeric) characters from these sets will be tokenized and indexed. All others will be considered white space.
- Searching is case- and diacritics-insensitive, so “Garcon” matches “garçon”.
We understand there are additional features to FTS we could work to add. Please give us feedback and head over to our [community forums! | md | {
"tags": [
"Realm",
"C#"
],
"pageDescription": "Learn how to add Full-Text Search (FTS) to your mobile applications using C# with Realm and MongoDB.",
"contentType": "Tutorial"
} | How to Do Full-Text Search in a Mobile App with MongoDB Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/leverage-event-driven-architecture-mongodb-databricks | created | # How to Leverage an Event-Driven Architecture with MongoDB and Databricks
Follow along with this tutorial to get a detailed view of how to leverage MongoDB Atlas App Services in addition to Databricks model building and deployment capabilities to fuel data-driven strategies with real-time events data. Let’s get started!
## The basics
We’re going to use a MongoDB Atlas M10 cluster as the backend service for the solution. If you are not familiar with MongoDB Atlas yet, you can follow along with the Introduction to MongoDB course to start with the basics of cluster configuration and management.
## Data collection
The solution is based on data that mimics a collection from an event-driven architecture ingestion from an e-commerce website storefront. We’re going to use a synthetic dataset to represent what we would receive in our cloud database coming from a Kafka stream of events. The data source can be found on Kaggle.
The data is in a tabular format. When converted into an object suitable for MongoDB, it will look like this:
```json
{
"_id": {
"$oid": "63c557ddcc552f591375062d"
},
"event_time": {
"$date": {
"$numberLong": "1572566410000"
}
},
"event_type": "view",
"product_id": "5837166",
"category_id": "1783999064103190764",
"brand": "pnb",
"price": 22.22,
"user_id": "556138645",
"user_session": "57ed222e-a54a-4907-9944-5a875c2d7f4f"
}
```
The event-driven architecture is very simple. It is made up of only four different events that a user can perform on the e-commerce site:
| **event_type** | **description** |
| ------------------ | --------------------------------------------------------- |
| "view" | A customer views a product on the product detail page. |
| "cart" | A customer adds a product to the cart. |
| "remove_from_cart" | A customer removes a product from the cart. |
| "purchase" | A customer completes a transaction of a specific product. |
The data in the Kaggle dataset is made of 4.6 million documents, which we will store in a database named **"ecom_events"** and under the collection **"cosmetics".** This collection represents all the events happening in a multi-category store during November 2019.
We’ve chosen this date specifically because it will contain behavior corresponding to Black Friday promotions, so it will surely showcase price changes and thus, it will be more interesting to evaluate the price elasticity of products during this time.
## Aggregate data in MongoDB
Using the powerful MongoDB Atlas Aggregation Pipeline, you can shape your data any way you need. We will shape the events in an aggregated view that will give us a “purchase log” so we can have historical prices and total quantities sold by product. This way, we can feed a linear regression model to get the best possible fit of a line representing the relationship between price and units sold.
Below, you’ll find the different stages of the aggregation pipeline:
1. **Match**: We are only interested in purchasing events, so we run a match stage for the event_type key having the value 'purchase'.
```json
{
'$match': {
'event_type': 'purchase'
}
}
```
2. **Group**: We are interested in knowing how many times a particular product was bought in a day and at what price. Therefore, we group by all the relevant keys, while we also do a data type transformation for the “event_time”, and we compute a new field, “total_sales”, to achieve daily total sales at a specific price point.
```json
{
'$group': {
'_id': {
'event_time': {
'$dateToString': {
'format': '%Y-%m-%d',
'date': '$event_time'
}
},
'product_id': '$product_id',
'price': '$price',
'brand': '$brand',
'category_code': '$category_code'
},
'total_sales': {
'$sum': 1
}
}
}
```
3. **Project**: Next, we run a project stage to get rid of the object nesting resulting after the group stage. (Check out the MongoDB Compass Aggregation Pipeline Builder as you will be able to see the result of each one of the stages you add in your pipeline!)
```json
{
'$project': {
'total_sales': 1,
'event_time': '$_id.event_time',
'product_id': '$_id.product_id',
'price': '$_id.price',
'brand': '$_id.brand',
'category_code': '$_id.category_code',
'_id': 0
}
}
```
4. **Group, Sort, and Project:** We need just one object that will have the historic sales of a product during the time, a sort of time series data log computing aggregates over time. Notice how we will also run a data transformation on the ‘$project’ stage to get the ‘revenue’ generated by that product on that specific day. To achieve this, we need to group, sort, and project as such:
```json
{
'$group': {
'_id': '$product_id',
'sales_history': {
'$push': '$$ROOT'
}
}
},
{
'$sort': {
'sales_history': -1
}
},
{
'$project': {
'product_id': '$_id',
'event_time': '$sales_history.event_time',
'price': '$sales_history.price',
'brand': '$sales_history.brand',
'category_code': '$sales_history.category_code',
'total_sales': '$sales_history.total_sales',
'revenue': {
'$map': {
'input': '$sales_history',
'as': 'item',
'in': {
'$multiply': [
'$$item.price', '$$item.total_sales'
]
}
}
}
}
}
```
5. **Out**: The last stage of the pipeline is to push our properly shaped objects to a new collection called “purchase_log”. This collection will serve as the base to feed our model, and the aggregation pipeline will be the baseline of a trigger function further along to automate the generation of such log every time there’s a purchase, but in that case, we will use a $merge stage.
```json
{
'$out': 'purchase_log'
}
```
With this aggregation pipeline, we are effectively transforming our data to the needed purchase log to understand the historic sales by the price of each product and start building our dashboard for category leads to understand product sales and use that data to compute the price elasticity of demand of each one of them.
## Intelligence layer: Building your model and deploying it to a Databricks endpoint
The goal of this stage is to be able to compute the price elasticity of demand of each product in real-time. Using Databricks, you can easily start up a cluster and attach your model-building Notebook to it.
On your Notebook, you can import MongoDB data using the MongoDB Connector for Spark, and you can also take advantage of MLflow's custom Python model support to write your own scoring logic, such as the script below:
```python
# define a custom model
class MyModel(mlflow.pyfunc.PythonModel):
def predict(self, context, model_input):
return self.my_custom_function(model_input)
def my_custom_function(self, model_input):
import json
import numpy as np
import pandas as pd
from pandas import json_normalize
#transforming data from JSON to pandas dataframe
data_frame = pd.json_normalize(model_input)
data_frame = data_frame.explode(["event_time", "price", "total_sales"]).drop(["category_code", "brand"], axis=1)
data_frame = data_frame.reset_index(drop=True)
#Calculating slope
slope = ( (data_frame.price*data_frame.total_sales).mean() - data_frame.price.mean()*data_frame.total_sales.mean() ) / ( (((data_frame.price)**2).mean()) - (data_frame.price.mean())**2)
price_elasticity = (slope)*(data_frame.price.mean()/data_frame.total_sales.mean())
return price_elasticity
```
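For reference, the function above computes a standard point estimate of price elasticity: the ordinary least squares slope of daily units sold on price, scaled by the ratio of mean price to mean quantity. In the notation below, P is `price` and Q is `total_sales`:

$$
\hat{\beta} = \frac{\overline{PQ} - \bar{P}\,\bar{Q}}{\overline{P^{2}} - \bar{P}^{2}},
\qquad
E \approx \hat{\beta}\,\frac{\bar{P}}{\bar{Q}}
$$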
You can also log the experiments and then register them as models so they can then be served as endpoints in the UI:
Logging the model as an experiment directly from the Notebook:
```python
#Logging model as a experiment
my_model = MyModel()
with mlflow.start_run():
model_info = mlflow.pyfunc.log_model(artifact_path="model", python_model=my_model)
```
You can check the logs of all the experiments associated with a certain Notebook.
From the model page, you can click on “deploy model” and you’ll get an endpoint URL.
Once you have tested your model endpoint, it’s time to orchestrate your application to achieve real-time analytics.
## Orchestrating your application
For this challenge, we'll use MongoDB triggers and functions so that, every time there's a purchase event, we aggregate the data for just the purchased product and recalculate its price elasticity by passing its purchase log to the Databricks endpoint in an HTTP POST call.
### Aggregating data after each purchase
First, you will need to set up an event stream that can capture changes in consumer behavior and price changes in real-time, so it will aggregate and update your purchase_log data.
By leveraging MongoDB App Services, you can build event-driven applications and integrate services in the cloud. So for this use case, we would like to set up a **Trigger** that will “listen” for any new “purchase” event in the cosmetics collection, such as you can see in the below screenshots. To get you started on App Services, you can check out the documentation.
After clicking on “Add Trigger,” you can configure it to execute only when there’s a new insert in the collection:
Scrolling down the page, you can also configure the function that will be triggered:
Such functions can be defined (and tested) in the function editor. The function we’re using simply retrieves data from the cosmetics collection, performs some data processing on the information, and saves the result in a new collection.
```javascript
exports = async function() {
const collection = context.services.get("mongodb-atlas").db('ecom_events').collection('cosmetics');
// Retrieving the last purchase event document
let lastItemArr = [];
try {
lastItemArr = await collection.find({event_type: 'purchase'}, { product_id: 1 }).sort({ _id: -1 }).limit(1).toArray();
}
catch (error) {
console.error('An error occurred during find execution:', error);
}
console.log(JSON.stringify(lastItemArr));
// Defining the product_id of the last purchase event document
var lastProductId = lastItemArr.length > 0 ? lastItemArr[0].product_id : null;
console.log(JSON.stringify(lastProductId));
console.log(typeof lastProductId);
if (!lastProductId) {
return null;
}
// Filtering the collection to get only the documents that match the same product_id as the last purchase event
let lastColl = [];
lastColl = await collection.find({"product_id": lastProductId}).toArray();
console.log(JSON.stringify(lastColl));
// Defining the aggregation pipeline for modeling a purchase log triggered by the purchase events.
const agg = [
{
'$match': {
'event_type': 'purchase',
'product_id': lastProductId
}
}, {
'$group': {
'_id': {
'event_time': '$event_time',
'product_id': '$product_id',
'price': '$price',
'brand': '$brand',
'category_code': '$category_code'
},
'total_sales': {
'$sum': 1
}
}
}, {
'$project': {
'total_sales': 1,
'event_time': '$_id.event_time',
'product_id': '$_id.product_id',
'price': '$_id.price',
'brand': '$_id.brand',
'category_code': '$_id.category_code',
'_id': 0
}
}, {
'$group': {
'_id': '$product_id',
'sales_history': {
'$push': '$$ROOT'
}
}
}, {
'$sort': {
'sales_history': -1
}
}, {
'$project': {
'product_id': '$_id',
'event_time': '$sales_history.event_time',
'price': '$sales_history.price',
'brand': '$sales_history.brand',
'category_code': '$sales_history.category_code',
'total_sales': '$sales_history.total_sales',
'revenue': {
'$map': {
'input': '$sales_history',
'as': 'item',
'in': {
'$multiply': [
'$$item.price', '$$item.total_sales'
]
}
}
}
}
}
, {
'$merge': {
'into': 'purchase_log',
'on': '_id',
'whenMatched': 'merge',
'whenNotMatched': 'insert'
}
}
];
// Running the aggregation
const purchaseLog = await collection.aggregate(agg);
const log = await purchaseLog.toArray();
return log;
};
```
The above function is meant to shape the data from the last product_id item purchased into the historic purchase_log needed to compute the price elasticity. As you can see in the code below, the result creates a document with historical price and total purchase data:
```json
{
"_id": {
"$numberInt": "5837183"
},
"product_id": {
"$numberInt": "5837183"
},
"event_time": [
"2023-05-17"
],
"price": [
{
"$numberDouble": "6.4"
}
],
"brand": [
"runail"
],
"category_code": [],
"total_sales": [
{
"$numberLong": "101"
}
],
"revenue": [
{
"$numberDouble": "646.4000000000001"
}
]
}
```
Note how we implement the **$merge** stage so we make sure to not overwrite the previous collection and just upsert the data corresponding to the latest bought item.
### Computing the price elasticity
The next step is to process the event stream and calculate the price elasticity of demand for each product. For this, you may set up a trigger so that every time there’s an insert or replace in the “purchase_log” collection, we will do a post-HTTP request for retrieving the price elasticity.
Configure the trigger to execute every time the collection registers an insert or replace of documents.
The trigger will execute a function such as the one below:
```javascript
exports = async function(changeEvent) {
// Defining a variable for the full document of the last purchase log in the collection
const { fullDocument } = changeEvent;
console.log("Received doc: " + fullDocument.product_id);
// Defining the collection to get
const collection = context.services.get("mongodb-atlas").db("ecom_events").collection("purchase_log");
console.log("It passed test 1");
// Fail proofing
if (!fullDocument) {
throw new Error('Error: could not get fullDocument from context');
}
console.log("It passed test 2");
if (!collection) {
throw new Error('Error: could not get collection from context');
}
console.log("It passed test 3");
//Defining the connection variables
const ENDPOINT_URL = "YOUR_ENDPOINT_URL";
const AUTH_TOKEN = "BASIC_TOKEN";
// Defining data to pass it into Databricks endpoint
const data = {"inputs": fullDocument]};
console.log("It passed test 4");
// Fetching data to the endpoint using http.post to get price elasticity of demand
try {
const res = await context.http.post({
"url": ENDPOINT_URL,
"body": JSON.stringify(data),
"encodeBodyAsJSON": false,
"headers": {
"Authorization": [AUTH_TOKEN],
"Content-Type": ["application/json"]
}
});
console.log("It passed test 5");
if (res.statusCode !== 200) {
throw new Error(`Failed to fetch data. Status code: ${res.statusCode}`);
}
console.log("It passed test 6");
// Logging response test
const responseText = await res.body.text();
console.log("Response body:", responseText);
// Parsing response from endpoint
const responseBody = JSON.parse(responseText);
const price_elasticity = responseBody.predictions;
console.log("It passed test 7 with price elasticity: " + price_elasticity);
//Updating price elasticity of demand for specific document on the purchase log collection
await collection.updateOne({"product_id": fullDocument.product_id}, {$push:{"price_elasticity": price_elasticity}} );
console.log("It updated the product_id " + fullDocument.product_id + "successfully, adding price elasticity " + price_elasticity );
}
catch (err) {
console.error(err);
throw err;
}
};
```
## Visualize data with MongoDB Charts
Finally, you will need to visualize the data to make it easier for stakeholders to understand the price elasticity of demand for each product. You can use a visualization tool like MongoDB Charts to create dashboards and reports that show the price elasticity of demand over time and how it is impacted by changes in price, product offerings, and consumer behavior.
## Evolving your apps
The new variable “price_elasticity” can be easily passed to the collections that nurture your PIMS, allowing developers to build another set of rules based on these values to automate a full-fledged dynamic pricing tool.
It can also be embedded into your applications. Say your category leads use an e-commerce CMS to manually adjust the prices of different products; in this case, they could instead define rules based on the price elasticity of demand to automate price setting.
The same data can be used as a feature for forecasting total sales and creating a recommended price point for net revenue.
In conclusion, this framework can be applied to almost any real-time analytics use case you can think of, and combined with any scenario where machine learning serves as a source of intelligent, automated decision-making.
You can find all the code used in the GitHub repository, and drop by the Community Forum with any further questions, comments, or feedback!
"tags": [
"MongoDB",
"Python",
"JavaScript",
"Spark"
],
"pageDescription": "Learn how to develop using an event-driven architecture that leverages MongoDB Atlas and Databricks.",
"contentType": "Tutorial"
} | How to Leverage an Event-Driven Architecture with MongoDB and Databricks | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/mongodb-bigquery-pipeline-using-confluent | created | # Streaming Data from MongoDB to BigQuery Using Confluent Connectors
Many enterprise customers of MongoDB and Google Cloud have the core operation workload running on MongoDB and run their analytics on BigQuery. To make it seamless to move the data between MongoDB and BigQuery, MongoDB introduced Google Dataflow templates. Though these templates cater to most of the common use cases, there is still some effort required to set up the change stream (CDC) Dataflow template. Setting up the CDC requires users to create their own custom code to monitor the changes happening on their MongoDB Atlas collection. Developing custom codes is time-consuming and requires a lot of time for development, support, management, and operations.
The additional effort required to set up CDC for the MongoDB to BigQuery Dataflow templates can be avoided by using Confluent Cloud. Confluent is a full-scale data platform capable of continuous, real-time processing, integration, and data streaming across any infrastructure. Confluent provides pluggable, declarative data integration through its connectors. With Confluent’s MongoDB source connectors, the process of creating and deploying a module for CDC can be eliminated. Confluent Cloud provides a MongoDB Atlas source connector that can be easily configured from Confluent Cloud, which will read the changes from the MongoDB source and publish those changes to a topic. Reading from MongoDB as the source is only half of the solution; it is complemented by a Confluent BigQuery sink connector that reads the changes published to the topic and writes them to the BigQuery table.
This article explains how to set up a MongoDB cluster, a Confluent cluster, and the Confluent MongoDB Atlas source connector for reading changes from your MongoDB cluster, as well as a BigQuery dataset and the Confluent BigQuery sink connector.
As a prerequisite, we need a MongoDB Atlas cluster, Confluent Cloud cluster, and Google Cloud account. If you don’t have the accounts, the next sections will help you understand how to set them up.
## Set up your MongoDB Atlas cluster
To set up your first MongoDB Atlas cluster, you can register for MongoDB either from Google Marketplace or from the registration page. Once registered for MongoDB Atlas, you can set up your first free tier Shared M0 cluster. Follow the steps in the MongoDB documentation to configure the database user and network settings for your cluster.
Once the cluster and access setup is complete, we can load some sample data to the cluster. Navigate to “browse collection” from the Atlas homepage and click on “Create Database.” Name your database “Sample_Company” and collection “Sample_Employee.”
Insert your first document into the database:
```
{
  "Name": "Jane Doe",
  "Address": {
    "Phone": { "$numberLong": "999999" },
    "City": "Wonderland"
  }
}
```
## Set up a BigQuery dataset on Google Cloud
As a prerequisite for setting up the pipeline, we need to create a dataset in the same region as that of the Confluent cluster. Please go through the Google documentation to understand how to create a dataset for your project. Name your dataset “Sample_Dataset.”
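If you prefer the command line over the console, the dataset can also be created with the `bq` CLI that ships with the Google Cloud SDK. In the sketch below, `your-project-id` is a placeholder, and the location must match the region you plan to use for the Confluent cluster.
```
bq --location=US mk --dataset your-project-id:Sample_Dataset
```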
## Set up the Confluent Cloud cluster and connectors
After setting up the MongoDB and BigQuery datasets, Confluent will be the platform to build the data pipeline between these platforms.
To sign up using Confluent Cloud, you can either go to the Confluent website or register from Google Marketplace. New signups receive $400 to spend during their first 30 days and a credit card is not required. To create the cluster, you can follow the first step in the documentation. **One important thing to consider is that the region of the cluster should be the same region of the GCP BigQuery cluster.**
### Set up your MongoDB Atlas source connector on Confluent
Depending on the settings, it may take a few minutes to provision your cluster, but once the cluster has provisioned, we can get the sample data from MongoDB cluster to the Confluent cluster.
Confluent’s MongoDB Atlas Source connector helps to read the change stream data from the MongoDB database and write it to the topic. This connector is fully managed by Confluent and you don’t need to operate it. To set up a connector, navigate to Confluent Cloud and search for the MongoDB Atlas source connector under “Connectors.” The connector documentation provides the steps to provision the connector.
Below is the sample configuration for the MongoDB source connector setup.
1. For **Topic selection**, leave the prefix empty.
2. Generate **Kafka credentials** and click on “Continue.”
3. Under Authentication, provide the details:
1. Connection host: Only provide the MongoDB Hostname in format “mongodbcluster.mongodb.net.”
2. Connection user: MongoDB connection user name.
3. Connection password: Password of the user being authenticated.
    4. Database name: **Sample_Company** and collection name: **Sample_Employee**.
4. Under configuration, select the output Kafka record format as **JSON_SR** and click on “Continue.”
5. Leave sizing to default and click on “Continue.”
6. Review and click on “Continue.”
```
{
"name": "MongoDbAtlasSourceConnector",
"config": {
"connector.class": "MongoDbAtlasSource",
"name": "MongoDbAtlasSourceConnector",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "****************",
"kafka.api.secret": "****************************************************************",
"connection.host": "mongodbcluster.mongodb.net",
"connection.user": "testuser",
"connection.password": "*********",
"database": "Sample_Company",
"collection": "Sample_Employee",
"output.data.format": "JSON_SR",
"publish.full.document.only": "true",
"tasks.max": "1"
}
}
```
### Set up Confluent Cloud: BigQuery sink connector
After setting up our BigQuery, we need to provision a sink connector to sink the data from Confluent Cluster to Google BigQuery. The Confluent Cloud to BigQuery Sink connector can stream table records from Kafka topics to Google BigQuery. The table records are streamed at high throughput rates to facilitate analytical queries in real time.
To set up the BigQuery sink connector, follow the steps in their documentation.
```
{
"name": "BigQuerySinkConnector_0",
"config": {
"topics": "AppEngineTest.emp",
"input.data.format": "JSON_SR",
"connector.class": "BigQuerySink",
"name": "BigQuerySinkConnector_0",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "****************",
"kafka.api.secret": "****************************************************************",
"keyfile": "******************************************************************************
—--
***************************************",
"project": "googleproject-id",
"datasets": "Sample_Dataset",
"auto.create.tables": "true",
"auto.update.schemas": "true",
"tasks.max": "1"
}
}
```
To see the data being loaded to BigQuery, make some changes on the MongoDB collection. Any inserts and updates will be recorded from MongoDB and pushed to BigQuery.
Insert below document to your MongoDB collection using MongoDB Atlas UI. (Navigate to your collection and click on “INSERT DOCUMENT.”)
```
{
  "Name": "John Doe",
  "Address": {
    "Phone": { "$numberLong": "8888888" },
    "City": "Narnia"
  }
}
```
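Once the change has flowed through the pipeline, a quick way to confirm that the sink connector created a table and is receiving rows is to list the tables in the dataset with the `bq` CLI; the exact table name is derived from the Kafka topic, so it may differ in your setup.
```
bq ls your-project-id:Sample_Dataset
```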
## Summary
MongoDB and Confluent are positioned at the heart of many modern data architectures that help developers easily build robust and reactive data pipelines that stream events between applications and services in real time. In this example, we provided a template to build a pipeline from MongoDB to BigQuery on Confluent Cloud. Confluent Cloud provides more than 200 connectors to build such pipelines between many solutions. Although the endpoints may change, the general approach of using these connectors to build pipelines stays the same.
### What's next?
1. To understand the features of Confluent Cloud managed MongoDB sink and source connectors, you can watch this webinar.
2. Learn more about the BigQuery sink connector.
3. A data pipeline for MongoDB Atlas and BigQuery using Dataflow.
4. Set up your first MongoDB cluster using Google Marketplace.
5. Run analytics in BigQuery using BigQuery ML.
| md | {
"tags": [
"Atlas",
"Google Cloud",
"AI"
],
"pageDescription": "Learn how to set up a data pipeline from your MongoDB database to BigQuery using the Confluent connector.",
"contentType": "Tutorial"
} | Streaming Data from MongoDB to BigQuery Using Confluent Connectors | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/cheat-sheet | created | # MongoDB Cheat Sheet
First steps in the MongoDB World? This cheat sheet is filled with some handy tips, commands, and quick references to get you connected and CRUD'ing in no time!
- Get a free MongoDB cluster in MongoDB Atlas.
- Follow a course in MongoDB University.
## Updates
- September 2023: Updated for MongoDB 7.0.
## Table of Contents
- Connect MongoDB Shell
- Helpers
- CRUD
- Databases and Collections
- Indexes
- Handy commands
- Change Streams
- Replica Set
- Sharded Cluster
- Wrap-up
## Connect via `mongosh`
``` bash
mongosh # connects to mongodb://127.0.0.1:27017 by default
mongosh --host <host> --port <port> --authenticationDatabase admin -u <username> -p <password> # omit the password if you want a prompt
mongosh "mongodb://<username>:<password>@192.168.1.1:27017"
mongosh "mongodb://192.168.1.1:27017"
mongosh "mongodb+srv://cluster-name.abcde.mongodb.net/<database>" --apiVersion 1 --username <username> # MongoDB Atlas
```
- mongosh documentation.
🔝 Table of Contents 🔝
## Helpers
### Show Databases
``` javascript
show dbs
db // prints the current database
```
### Switch Database
``` javascript
use <database_name>
```
### Show Collections
``` javascript
show collections
```
### Run JavaScript File
``` javascript
load("myScript.js")
```
🔝 Table of Contents 🔝
## CRUD
### Create
``` javascript
db.coll.insertOne({name: "Max"})
db.coll.insertMany([{name: "Max"}, {name:"Alex"}]) // ordered bulk insert
db.coll.insertMany([{name: "Max"}, {name:"Alex"}], {ordered: false}) // unordered bulk insert
db.coll.insertOne({date: ISODate()})
db.coll.insertOne({name: "Max"}, {"writeConcern": {"w": "majority", "wtimeout": 5000}})
```
### Read
``` javascript
db.coll.findOne() // returns a single document
db.coll.find() // returns a cursor - show 20 results - "it" to display more
db.coll.find().pretty()
db.coll.find({name: "Max", age: 32}) // implicit logical "AND".
db.coll.find({date: ISODate("2020-09-25T13:57:17.180Z")})
db.coll.find({name: "Max", age: 32}).explain("executionStats") // or "queryPlanner" or "allPlansExecution"
db.coll.distinct("name")
// Count
db.coll.countDocuments({age: 32}) // alias for an aggregation pipeline - accurate count
db.coll.estimatedDocumentCount() // estimation based on collection metadata
// Comparison
db.coll.find({"year": {$gt: 1970}})
db.coll.find({"year": {$gte: 1970}})
db.coll.find({"year": {$lt: 1970}})
db.coll.find({"year": {$lte: 1970}})
db.coll.find({"year": {$ne: 1970}})
db.coll.find({"year": {$in: [1958, 1959]}})
db.coll.find({"year": {$nin: [1958, 1959]}})
// Logical
db.coll.find({name:{$not: {$eq: "Max"}}})
db.coll.find({$or: [{"year" : 1958}, {"year" : 1959}]})
db.coll.find({$nor: [{price: 1.99}, {sale: true}]})
db.coll.find({
$and: [
{$or: [{qty: {$lt :10}}, {qty :{$gt: 50}}]},
{$or: [{sale: true}, {price: {$lt: 5 }}]}
]
})
// Element
db.coll.find({name: {$exists: true}})
db.coll.find({"zipCode": {$type: 2 }})
db.coll.find({"zipCode": {$type: "string"}})
// Aggregation Pipeline
db.coll.aggregate([
{$match: {status: "A"}},
{$group: {_id: "$cust_id", total: {$sum: "$amount"}}},
{$sort: {total: -1}}
])
// Text search with a "text" index
db.coll.find({$text: {$search: "cake"}}, {score: {$meta: "textScore"}}).sort({score: {$meta: "textScore"}})
// Regex
db.coll.find({name: /^Max/}) // regex: starts by letter "M"
db.coll.find({name: /^Max$/i}) // regex case insensitive
// Array
db.coll.find({tags: {$all: ["Realm", "Charts"]}})
db.coll.find({field: {$size: 2}}) // impossible to index - prefer storing the size of the array & update it
db.coll.find({results: {$elemMatch: {product: "xyz", score: {$gte: 8}}}})
// Projections
db.coll.find({"x": 1}, {"actors": 1}) // actors + _id
db.coll.find({"x": 1}, {"actors": 1, "_id": 0}) // actors
db.coll.find({"x": 1}, {"actors": 0, "summary": 0}) // all but "actors" and "summary"
// Sort, skip, limit
db.coll.find({}).sort({"year": 1, "rating": -1}).skip(10).limit(3)
// Read Concern
db.coll.find().readConcern("majority")
```
- db.collection.find()
- Query and Projection Operators
- BSON types
- Read Concern
### Update
``` javascript
db.coll.updateOne({"_id": 1}, {$set: {"year": 2016, name: "Max"}})
db.coll.updateOne({"_id": 1}, {$unset: {"year": 1}})
db.coll.updateOne({"_id": 1}, {$rename: {"year": "date"} })
db.coll.updateOne({"_id": 1}, {$inc: {"year": 5}})
db.coll.updateOne({"_id": 1}, {$mul: {price: NumberDecimal("1.25"), qty: 2}})
db.coll.updateOne({"_id": 1}, {$min: {"imdb": 5}})
db.coll.updateOne({"_id": 1}, {$max: {"imdb": 8}})
db.coll.updateOne({"_id": 1}, {$currentDate: {"lastModified": true}})
db.coll.updateOne({"_id": 1}, {$currentDate: {"lastModified": {$type: "timestamp"}}})
// Array
db.coll.updateOne({"_id": 1}, {$push :{"array": 1}})
db.coll.updateOne({"_id": 1}, {$pull :{"array": 1}})
db.coll.updateOne({"_id": 1}, {$addToSet :{"array": 2}})
db.coll.updateOne({"_id": 1}, {$pop: {"array": 1}}) // last element
db.coll.updateOne({"_id": 1}, {$pop: {"array": -1}}) // first element
db.coll.updateOne({"_id": 1}, {$pullAll: {"array" :3, 4, 5]}})
db.coll.updateOne({"_id": 1}, {$push: {"scores": {$each: [90, 92]}}})
db.coll.updateOne({"_id": 2}, {$push: {"scores": {$each: [40, 60], $sort: 1}}}) // array sorted
db.coll.updateOne({"_id": 1, "grades": 80}, {$set: {"grades.$": 82}})
db.coll.updateMany({}, {$inc: {"grades.$[]": 10}})
db.coll.updateMany({}, {$set: {"grades.$[element]": 100}}, {multi: true, arrayFilters: [{"element": {$gte: 100}}]})
// FindOneAndUpdate
db.coll.findOneAndUpdate({"name": "Max"}, {$inc: {"points": 5}}, {returnNewDocument: true})
// Upsert
db.coll.updateOne({"_id": 1}, {$set: {item: "apple"}, $setOnInsert: {defaultQty: 100}}, {upsert: true})
// Replace
db.coll.replaceOne({"name": "Max"}, {"firstname": "Maxime", "surname": "Beugnet"})
// Write concern
db.coll.updateMany({}, {$set: {"x": 1}}, {"writeConcern": {"w": "majority", "wtimeout": 5000}})
```
### Delete
``` javascript
db.coll.deleteOne({name: "Max"})
db.coll.deleteMany({name: "Max"}, {"writeConcern": {"w": "majority", "wtimeout": 5000}})
db.coll.deleteMany({}) // WARNING! Deletes all the docs but not the collection itself and its index definitions
db.coll.findOneAndDelete({"name": "Max"})
```
🔝 Table of Contents 🔝
## Databases and Collections
### Drop
``` javascript
db.coll.drop() // removes the collection and its index definitions
db.dropDatabase() // double check that you are *NOT* on the PROD cluster... :-)
```
### Create Collection
``` javascript
// Create collection with a $jsonschema
db.createCollection("contacts", {
validator: {$jsonSchema: {
bsonType: "object",
required: ["phone"],
properties: {
phone: {
bsonType: "string",
description: "must be a string and is required"
},
email: {
bsonType: "string",
pattern: "@mongodb\.com$",
description: "must be a string and match the regular expression pattern"
},
status: {
enum: [ "Unknown", "Incomplete" ],
description: "can only be one of the enum values"
}
}
}}
})
```
### Other Collection Functions
``` javascript
db.coll.stats()
db.coll.storageSize()
db.coll.totalIndexSize()
db.coll.totalSize()
db.coll.validate({full: true})
db.coll.renameCollection("new_coll", true) // 2nd parameter to drop the target collection if exists
```
🔝 Table of Contents 🔝
## Indexes
### List Indexes
``` javascript
db.coll.getIndexes()
db.coll.getIndexKeys()
```
### Create Indexes
``` javascript
// Index Types
db.coll.createIndex({"name": 1}) // single field index
db.coll.createIndex({"name": 1, "date": 1}) // compound index
db.coll.createIndex({foo: "text", bar: "text"}) // text index
db.coll.createIndex({"$**": "text"}) // wildcard text index
db.coll.createIndex({"userMetadata.$**": 1}) // wildcard index
db.coll.createIndex({"loc": "2d"}) // 2d index
db.coll.createIndex({"loc": "2dsphere"}) // 2dsphere index
db.coll.createIndex({"_id": "hashed"}) // hashed index
// Index Options
db.coll.createIndex({"lastModifiedDate": 1}, {expireAfterSeconds: 3600}) // TTL index
db.coll.createIndex({"name": 1}, {unique: true})
db.coll.createIndex({"name": 1}, {partialFilterExpression: {age: {$gt: 18}}}) // partial index
db.coll.createIndex({"name": 1}, {collation: {locale: 'en', strength: 1}}) // case insensitive index with strength = 1 or 2
db.coll.createIndex({"name": 1 }, {sparse: true})
```
### Drop Indexes
``` javascript
db.coll.dropIndex("name_1")
```
### Hide/Unhide Indexes
``` javascript
db.coll.hideIndex("name_1")
db.coll.unhideIndex("name_1")
```
- Indexes documentation
🔝 Table of Contents 🔝
## Handy commands
``` javascript
use admin
db.createUser({"user": "root", "pwd": passwordPrompt(), "roles": "root"]})
db.dropUser("root")
db.auth( "user", passwordPrompt() )
use test
db.getSiblingDB("dbname")
db.currentOp()
db.killOp(123) // opid
db.fsyncLock()
db.fsyncUnlock()
db.getCollectionNames()
db.getCollectionInfos()
db.printCollectionStats()
db.stats()
db.getReplicationInfo()
db.printReplicationInfo()
db.hello()
db.hostInfo()
db.shutdownServer()
db.serverStatus()
db.getProfilingStatus()
db.setProfilingLevel(1, 200) // 0 == OFF, 1 == ON with slowms, 2 == ON
db.enableFreeMonitoring()
db.disableFreeMonitoring()
db.getFreeMonitoringStatus()
db.createView("viewName", "sourceColl", [{$project:{department: 1}}])
```
🔝 Table of Contents 🔝
## Change Streams
``` javascript
watchCursor = db.coll.watch( [ { $match : {"operationType" : "insert" } } ] )
while (!watchCursor.isExhausted()){
if (watchCursor.hasNext()){
print(tojson(watchCursor.next()));
}
}
```
🔝 Table of Contents 🔝
## Replica Set
``` javascript
rs.status()
rs.initiate({"_id": "RS1",
members:
{ _id: 0, host: "mongodb1.net:27017" },
{ _id: 1, host: "mongodb2.net:27017" },
{ _id: 2, host: "mongodb3.net:27017" }]
})
rs.add("mongodb4.net:27017")
rs.addArb("mongodb5.net:27017")
rs.remove("mongodb1.net:27017")
rs.conf()
rs.hello()
rs.printReplicationInfo()
rs.printSecondaryReplicationInfo()
rs.reconfig(config)
rs.reconfigForPSASet(memberIndex, config, { options } )
db.getMongo().setReadPref('secondaryPreferred')
rs.stepDown(20, 5) // (stepDownSecs, secondaryCatchUpPeriodSecs)
```
🔝 Table of Contents 🔝
## Sharded Cluster
``` javascript
db.printShardingStatus()
sh.status()
sh.addShard("rs1/mongodb1.example.net:27017")
sh.shardCollection("mydb.coll", {zipcode: 1})
sh.moveChunk("mydb.coll", { zipcode: "53187" }, "shard0019")
sh.splitAt("mydb.coll", {x: 70})
sh.splitFind("mydb.coll", {x: 70})
sh.startBalancer()
sh.stopBalancer()
sh.disableBalancing("mydb.coll")
sh.enableBalancing("mydb.coll")
sh.getBalancerState()
sh.setBalancerState(true/false)
sh.isBalancerRunning()
sh.startAutoMerger()
sh.stopAutoMerger()
sh.enableAutoMerger()
sh.disableAutoMerger()
sh.updateZoneKeyRange("mydb.coll", {state: "NY", zip: MinKey }, { state: "NY", zip: MaxKey }, "NY")
sh.removeRangeFromZone("mydb.coll", {state: "NY", zip: MinKey }, { state: "NY", zip: MaxKey })
sh.addShardToZone("shard0000", "NYC")
sh.removeShardFromZone("shard0000", "NYC")
```
🔝 Table of Contents 🔝
## Wrap-up
I hope you liked my little but - hopefully - helpful cheat sheet. Of course, this list isn't exhaustive at all. There are a lot more commands, but I'm sure you will find them in the MongoDB documentation.
If you feel like I forgot a critical command in this list, please send me a tweet and I will make sure to fix it.
Check out our free courses on MongoDB University if you are not too sure what some of the above commands are doing.
>
>
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
>
>
🔝 Table of Contents 🔝
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "MongoDB Cheat Sheet by MongoDB for our awesome MongoDB Community <3.",
"contentType": "Quickstart"
} | MongoDB Cheat Sheet | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/databricks-atlas-vector-search | created | # How to Implement Databricks Workflows and Atlas Vector Search for Enhanced Ecommerce Search Accuracy
In the vast realm of Ecommerce, customers' ability to quickly and accurately search through an extensive range of products is paramount. Atlas Vector Search is emerging as a turning point in this space, offering a refined approach to search that goes beyond mere keyword matching. Let's delve into its implementation using MongoDB Atlas, Atlas Vector Search, and Databricks.
### Prerequisites
* MongoDB Atlas cluster
* Databricks cluster
* python>=3.7
* pip3
* Node.js and npm
* GitHub repo for AI-enhanced search and vector search (code is bundled up for clarity)
In a previous tutorial, Learn to Build AI-Enhanced Retail Search Solutions with MongoDB and Databricks, we showcased how the integration of MongoDB and Databricks provides a comprehensive solution for the retail industry by combining real-time data processing, workflow orchestration, machine learning, custom data functions, and advanced search capabilities as a way to optimize product catalog management and enhance customer interactions.
In this tutorial, we are going to be building the Vector Search solution on top of the codebase from the previous tutorial. Please check out the Github repository for the full solution.
The diagram below represents the Databricks workflow for indexing data from the atp (available to promise), images, prd_desc (product discount), prd_score (product score), and price collections. These collections are also part of the previously mentioned tutorial, so please refer back if you need to access them.
Within the MongoDB Atlas platform, we can use change streams and the MongoDB Connector for Spark to move data from the collections into a new collection called Catalog. From there, we will use a text transformer to create the **`Catalog Final Collection`**. This will enable us to create a corpus of indexed and vector embedded data that will be used later as the search dictionary. We’ll call this collection **`catalog_final_myn`**. This will be shown further along after we embed the product names.
The catalog final collection will include the available to promise status for each product, its images, the product discount, product relevance score, and price, along with the vectorized or embedded product name that we’ll point our vector search engine at.
With the image below, we explain what the Databricks workflow looks like. It consists of two jobs that are separated in two notebooks respectively. We’ll go over each of the notebooks below.
## Indexing and merging several collections into one catalog
The first step is to ingest data from the previously mentioned collections using the spark.readStream method from the MongoDB Connector for Spark. The code below is part of the notebook we’ll set as a job using Databricks Workflows. You can learn more about Databricks notebooks by following their tutorial.
```
atp = spark.readStream.format("mongodb").\
  option('spark.mongodb.connection.uri', MONGO_CONN).\
  option('spark.mongodb.database', "search").\
  option('spark.mongodb.collection', "atp_status_myn").\
  option('spark.mongodb.change.stream.publish.full.document.only','true').\
  option('spark.mongodb.aggregation.pipeline',[]).\
  option("forceDeleteTempCheckpointLocation", "true").load()

atp = atp.drop("_id")

atp.writeStream.format("mongodb").\
  option('spark.mongodb.connection.uri', MONGO_CONN).\
  option('spark.mongodb.database', "search").\
  option('spark.mongodb.collection', "catalog_myn").\
  option('spark.mongodb.operationType', "update").\
  option('spark.mongodb.upsertDocument', True).\
  option('spark.mongodb.idFieldList', "id").\
  option("forceDeleteTempCheckpointLocation", "true").\
  option("checkpointLocation", "/tmp/retail-atp-myn4/_checkpoint/").\
  outputMode("append").\
  start()
```
This part of the notebook reads data changes from the atp_status_myn collection in the search database, drops the _id field, and then writes (or updates) the processed data to the catalog_myn collection in the same database.
Notice how it’s reading from the `atp_status_myn` collection, which already has the one-hot encoding (boolean values indicating whether the product is available or not) from the previous tutorial. This way, we make sure that we only embed the data from the products that are available in our stock.
Please refer to the full notebook in our Github repository if you want to learn more about all the data ingestion and transformations conducted during this stage.
## Encoding text as vectors and building the final catalog collection
Using a combination of Python libraries and PySpark operations to process data from the Catalog MongoDB collection, we’ll transform it, vectorize it, and write the transformed data back to the Final Catalog collection. On top of this, we’ll build our application search business logic.
We start by using the %pip magic command, which is specific to Jupyter notebooks and IPython environments. The necessary packages are:
* **pymongo:** A Python driver for MongoDB.
* **tqdm:** A library to display progress bars.
* **sentence-transformers:** A library for state-of-the-art sentence, text, and image embeddings.
First, let’s use pip to install these packages in our Databricks notebook:
```
%pip install pymongo tqdm sentence-transformers
```
We continue the notebook with the following code:
```
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
```
Here we load a pre-trained model from the sentence-transformers library. This model will be used to convert text into embeddings or vectors.
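The notebook later relies on a `get_vec` helper to vectorize product titles. The repository contains the actual implementation; the snippet below is only a minimal sketch of what such a helper could look like when registered as a Spark UDF around the model loaded above.
```
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, FloatType

# Encode a product title into a 384-dimensional embedding using the sentence-transformers model.
@udf(returnType=ArrayType(FloatType()))
def get_vec(text):
    if text is None:
        return None
    return [float(x) for x in model.encode(text)]
```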
The next step is to bring in the data from the catalog collection in the MongoDB Atlas search database. This is a continuation of the same notebook:
```
catalog_status = spark.readStream.format("mongodb").\
option('spark.mongodb.connection.uri', MONGO_CONN).\
option('spark.mongodb.database', "search").\
option('spark.mongodb.collection', "catalog_myn").\
option('spark.mongodb.change.stream.publish.full.document.only','true').\
option('spark.mongodb.aggregation.pipeline',[]).\
option("forceDeleteTempCheckpointLocation", "true").load()
```
With this code, we set up a structured streaming read from the **`catalog_myn`** collection in the **`search`** database of MongoDB. The resulting data is stored in the **`catalog_status`** DataFrame in Spark. The read operation is configured to fetch the full document from MongoDB's change stream and does not apply any aggregation.
The notebook code continues with:
```
# Calculating a new column called discountedPrice using pyspark.sql.functions (imported as F)
catalog_status = catalog_status.withColumn("discountedPrice", F.col("price") * F.col("pred_price"))
# One-hot encoding of the atp status column
catalog_status = catalog_status.withColumn("atp", (F.col("atp").cast("boolean") & F.lit(1).cast("boolean")).cast("integer"))
# Generating embeddings of the product titles with the get_vec function
catalog_status = catalog_status.withColumn("vec", get_vec("title"))
#Dropping _id column and creating a new final catalog collection with checkpointing
catalog_status = catalog_status.drop("_id")
catalog_status.writeStream.format("mongodb").\
option('spark.mongodb.connection.uri', MONGO_CONN).\
option('spark.mongodb.database', "search").\
option('spark.mongodb.collection', "catalog_final_myn").\
option('spark.mongodb.operationType', "update").\
option('spark.mongodb.idFieldList', "id").\
option("forceDeleteTempCheckpointLocation", "true").\
option("checkpointLocation", "/tmp/retail-atp-myn5/_checkpoint/").\
outputMode("append").\
start()
```
With this last part of the code, we calculate a new column called discountedPrice as the product of the price and the predicted price factor. Then, we perform one-hot encoding on the atp status column, vectorize the title of the product, and merge everything back into a final catalog collection.
Now that we have our catalog collection with its proper embeddings, it’s time for us to build the Vector Search Index using MongoDB Atlas Search.
## Configuring the Atlas Vector Search index
Here we’ll define how data should be stored and indexed for efficient searching. To configure the index, you can insert the snippet in MongoDB Atlas by browsing to your cluster splash page and clicking on the “Search” tab:
Next, you can click over “Create Index.” Make sure you select “JSON Editor”:
Paste the JSON snippet from below into the JSON Editor. Make sure you select the correct database and collection! In our case, the collection name is **`catalog_final_myn`**. Please refer to the full code in the repository to see how the full index looks and how you can bring it together with the rest of parameters for the AI-enhanced search tutorial.
```
{
  "mappings": {
    "fields": {
      "vec": [
        {
          "dimensions": 384,
          "similarity": "cosine",
          "type": "knnVector"
        }
      ]
    }
  }
}
```
In the code above, the vec field is of type knnVector, designed for vector search. It indicates that each vector has 384 dimensions and uses cosine similarity to determine vector closeness. This is crucial for semantic search, where the goal is to find results that are contextually or semantically related.
By implementing these indexing parameters, we speed up retrieval times. This is especially important with high-dimensional vector data, where raw vectors consume a significant amount of storage and operations like similarity calculations are computationally expensive.
Instead of comparing a query vector with every vector in the dataset, indexing allows the system to compare with a subset, saving computational resources.
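To see the index in action, you can run a vector query with the `$search` aggregation stage. The snippet below is a minimal mongosh sketch, not the app's production query: it assumes `queryVector` holds a 384-dimensional embedding produced with the same sentence-transformers model, and that the index above was created with the default name.
```
db.catalog_final_myn.aggregate([
  {
    $search: {
      index: "default",            // assumed index name
      knnBeta: {
        vector: queryVector,       // embedding of the search phrase, e.g. "tan bags"
        path: "vec",
        k: 10
      }
    }
  },
  { $project: { title: 1, price: 1, score: { $meta: "searchScore" } } }
])
```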
## A quick example of improved search results
Browse over to our LEAFYY Ecommerce website, in which we will perform a search for the keywords ``tan bags``. You’ll get these results:
As you can see, you’ll first get results that match the specific tokenized keywords “tan” and “bags”. As a result, this will give you any product that contains any or both of those keywords in the product catalog collection documents.
However, not all the results are bags or of tan color. You can see shoes, wallets, a dress, and a pair of pants. This could be frustrating as a customer, prompting them to leave the site.
Now, enable vector search by clicking on the checkbox on the left of the magnifying glass icon in the search bar, and re-run the query “tan bags”. The results you get are in the image below:
As you can see from the screenshot, the results became more relevant for a consumer. Our search engine is able to identify similar products by understanding the context that “beige” is a similar color to “tan”, and therefore showcase these products as alternatives.
## Conclusion
By working with MongoDB Atlas and Databricks, we can create real-time data transformation pipelines. We achieve this by leveraging the MongoDB Connector for Spark to prepare our operational data for vectorization, and store it back into our MongoDB Atlas collections. This approach allows us to develop the search logic for our Ecommerce app with minimal operational overhead.
On top of that, Atlas Vector Search provides a robust solution for implementing advanced search features, making it easy to deliver a great search user experience for your customers. By understanding and integrating these tools, developers can create search experiences that are fast, relevant, and user-friendly.
Make sure to review the full code in our GitHub repository. Contact us to get a deeper understanding of how to build advanced search solutions for your Ecommerce business.
| md | {
"tags": [
"Atlas",
"Python",
"Node.js"
],
"pageDescription": "Learn how to implement Databricks Workflows and Atlas Vector Search for your Ecommerce accuracy.",
"contentType": "Tutorial"
} | How to Implement Databricks Workflows and Atlas Vector Search for Enhanced Ecommerce Search Accuracy | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/ruby/getting-started-atlas-ruby-on-rails | created |
<%= yield %>
| md | {
"tags": [
"Ruby",
"Atlas"
],
"pageDescription": "A tutorial showing how to get started with MongoDB Atlas and Ruby on Rails using the Mongoid driver",
"contentType": "Tutorial"
} | Getting Started with MongoDB Atlas and Ruby on Rails | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/atlas-databricks-pyspark-demo | created | # Utilizing PySpark to Connect MongoDB Atlas with Azure Databricks
Data processing is no easy feat, but with the proper tools, it can be simplified and can enable you to make the best data-driven decisions possible. In a world overflowing with data, we need the best methods to derive the most useful information.
The combination of MongoDB Atlas with Azure Databricks makes an efficient choice for big data processing. By connecting Atlas with Azure Databricks, we can extract data from our Atlas cluster, process and analyze the data using PySpark, and then store the processed data back in our Atlas cluster. Using Azure Databricks to analyze your Atlas data allows for access to Databricks’ wide range of advanced analytics capabilities, which include machine learning, data science, and areas of artificial intelligence like natural language processing! Processing your Atlas data with these advanced Databricks tools allows us to be able to handle any amount of data in an efficient and scalable way, making it easier than ever to gain insights into our data sets and enable us to make the most effective data-driven decisions.
This tutorial will show you how to utilize PySpark to connect Atlas with Databricks so you can take advantage of both platforms.
MongoDB Atlas is a scalable and flexible storage solution for your data while Azure Databricks provides the power of Apache Spark to work with the security and collaboration features that are available with a Microsoft Azure subscription. Apache Spark provides the Python interface for working with Spark, PySpark, which allows for an easy-to-use interface for developing in Python. To properly connect PySpark with MongoDB Atlas, the MongoDB Spark Connector is utilized. This connector ensures for seamless compatibility, as you will see below in the tutorial.
Our tutorial to combine the above platforms will consist of viewing and manipulating an Atlas cluster and visualizing our data from the cluster back in our PySpark console. We will be setting up both Atlas and Azure Databricks clusters, connecting our Databricks cluster to our IDE, and writing scripts to view and contribute to the cluster in our Atlas account. Let’s get started!
### Requirements
In order to successfully recreate this project, please ensure you have everything in the following list:
* MongoDB Atlas account.
* Microsoft Azure subscription (two-week free tier trial).
* Python 3.8+.
* GitHub Repository.
* Java on your local machine.
## Setting up a MongoDB Atlas cluster
Our first step is to set up a MongoDB Atlas cluster. Access the Atlas UI and follow these steps. For this tutorial, a free “shared” cluster is perfect. Create a database and name it “bookshelf” with a collection inside named “books”. To ensure ease for this tutorial, please allow for a connection from anywhere within your cluster’s network securities.
Once properly provisioned, your cluster will look like this:
Now we can set up our Azure Databricks cluster.
## Setting up an Azure Databricks cluster
Access the Azure Databricks page, sign in, and access the Azure Databricks tab. This is where you’ll create an Azure Databricks workspace.
For our Databricks cluster, a free trial works perfectly for this tutorial. Once the cluster is provisioned, you’ll only have two weeks to access it before you need to upgrade.
Hit “Review and Create” at the bottom. Once your workspace is validated, click “Create.” Once your deployment is complete, click on “Go to Resource.” You’ll be taken to your workspace overview. Click on “Launch Workspace” in the middle of the page.
This will direct you to the Microsoft Azure Databricks UI where we can create the Databricks cluster. On the left-hand of the screen, click on “Create a Cluster,” and then click “Create Compute” to access the correct form.
When creating your cluster, pay close attention to what your “Databricks runtime version” is. Continue through the steps to create your cluster.
We’re now going to install the libraries we need in order to connect to our MongoDB Atlas cluster. Head to the “Libraries” tab of your cluster, click on “Install New,” and select “Maven.” Hit “Search Packages” next to “Coordinates.” Search for `mongo` and select the `mongo-spark` package. Do the same thing with `xml` and select the `spark-xml` package. When done, your library tab will look like this:
## Utilizing Databricks-Connect
Now that we have our Azure Databricks cluster ready, we need to properly connect it to our IDE. We can do this through a very handy configuration named Databricks Connect. Databricks Connect allows for Azure Databricks clusters to connect seamlessly to the IDE of your choosing.
### Databricks configuration essentials
Before we establish our connection, let’s make sure we have our configuration essentials. This is available in the Databricks Connect tutorial on Microsoft’s website under “Step 2: Configure connection properties.” Please note these properties down in a safe place, as you will not be able to connect properly without them.
### Databricks-Connect configuration
Access the Databricks Connect page linked above to properly set up `databricks-connect` on your machine. Ensure that you are downloading the `databricks-connect` version that is compatible with your Python version and is the same as the Databricks runtime version in your Azure cluster.
>Please ensure prior to installation that you are working with a virtual environment for this project. Failure to use a virtual environment may cause PySpark package conflicts in your console.
Virtual environment steps in Python:
```
python3 -m venv name
```
Where the `name` is the name of your environment, so truly you can call it anything.
Our second step is to activate our virtual environment:
```
source name/bin/activate
```
And that’s it. We are now in our Python virtual environment. You can tell you’re in it when the environment name you chose appears in parentheses at the start of your prompt.
* * *
Continuing on...for our project, use this installation command:
```
pip install -U “databricks-connect==10.4.*”
```
Once fully downloaded, we need to set up our cluster configuration. Use the configure command and follow the instructions. This is where you will input your configuration essentials from our “Databricks configuration essentials” section.
Once finished, use this command to check if you’re connected to your cluster:
```
databricks-connect test
```
You’ll know you’re correctly configured when you see an “All tests passed” in your console.
Now, it’s time to set up our SparkSessions and connect them to our Atlas cluster.
## SparkSession + Atlas configuration
The creation of a SparkSession object is crucial for our tutorial because it provides a way to access all important PySpark features in one place. These features include: reading data, creating data frames, and managing the overall configuration of PySpark applications. Our SparkSession will enable us to read and write to our Atlas cluster through the data frames we create.
The full code is on our Github account, so please access it there if you would like to replicate this exact tutorial. We will only go over the code for some of the essentials of the tutorial below.
This is the SparkSession object we need to include. We are going to use a basic structure where we describe the application name, configure our “read” and “write” connectors to our `connection_string` (our MongoDB cluster connection string that we have saved safely as an environment variable), and configure our `mongo-spark-connector`. Make sure to use the correct `mongo-spark-connector` for your environment. For ours, it is version 10.0.3. Depending on your Python version, the `mongo-spark-connector` version might be different. To find which version is compatible with your environment, please refer to the MVN Repository documents.
```
# use environment variable for uri
load_dotenv()
connection_string: str = os.environ.get("CONNECTION_STRING")
# Create a SparkSession. Ensure you have the mongo-spark-connector included.
my_spark = SparkSession \
.builder \
.appName("tutorial") \
.config("spark.mongodb.read.connection.uri", connection_string) \
.config("spark.mongodb.write.connection.uri", connection_string) \
.config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector:10.0.3") \
.getOrCreate()
```
For more help on how to create a SparkSession object with MongoDB and for more details on the `mongo-spark-connector`, please view the documentation.
Our next step is to create two data frames, one to `write` a book to our Atlas cluster, and a second to `read` back all the books in our cluster. These data frames are essential; make sure to use the proper format or else they will not properly connect to your cluster.
Data frame to `write` a book:
```
add_books = my_spark \
.createDataFrame([("<title>", "<author>", <year>)], ["title", "author", "year"])
add_books.write \
.format("com.mongodb.spark.sql.DefaultSource") \
.option('uri', connection_string) \
.option('database', 'bookshelf') \
.option('collection', 'books') \
.mode("append") \
.save()
```
Data frame to `read` back our books:
```
# Create a data frame so you can read in your books from your bookshelf.
return_books = my_spark.read.format("com.mongodb.spark.sql.DefaultSource") \
.option('uri', connection_string) \
.option('database', 'bookshelf') \
.option('collection', 'books') \
.load()
# Show the books in your PySpark shell.
return_books.show()
```
Add in the book of your choosing under the `add_books` dataframe. Here, exchange the title, author, and year for the areas with the `< >` brackets. Once you add in your book and run the file, you’ll see that the logs are telling us we’re connecting properly and we can see the added books in our PySpark shell. This demo script was run six separate times to add in six different books. A picture of the console is below:
We can double-check our cluster in Atlas to ensure they match up:
## Conclusion
Congratulations! We have successfully connected our MongoDB Atlas cluster to Azure Databricks through PySpark, and we can `read` and `write` data straight to our Atlas cluster.
The skills you’ve learned from this tutorial will allow you to utilize Atlas’s scalable and flexible storage solution while leveraging Azure Databricks’ advanced analytics capabilities. This combination can allow developers to handle any amount of data in an efficient and scalable manner, while allowing them to gain insights into complex data sets to make exciting data-driven decisions!
Questions? Comments? Let’s continue the conversation over at the MongoDB Developer Community! | md | {
"tags": [
"Python",
"MongoDB",
"Spark"
],
"pageDescription": "This tutorial will show you how to connect MongoDB Atlas to Azure Databricks using PySpark. \n",
"contentType": "Tutorial"
} | Utilizing PySpark to Connect MongoDB Atlas with Azure Databricks | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/utilizing-collection-globbing-provenance-data-federation | created | # Utilizing Collection Globbing and Provenance in Data Federation
A common pattern for users of MongoDB running multi-tenant services is to model your data by splitting your different customers into different databases. This is an excellent strategy for keeping your various customers’ data separate from one another, as well as helpful for scaling in the future. But one downside to this strategy is that you can end up struggling to get a holistic view of your data across all of your customers. There are many ways to mitigate this challenge, and one of the primary ones is to copy and transform your data into another storage solution. However, this can lead to some unfortunate compromises. For example, you are now paying more to store your data twice. You now need to manage the copy and transformation process, which can become onerous as you add more customers. And lastly, and perhaps most importantly, you are now looking at a delayed state of your data.
To solve these exact challenges, we’re thrilled to announce two features that will completely transform how you use your cluster data and the ease with which you can remodel it. The first feature is called Provenance. This functionality allows you to tell Data Federation to inject fields into your documents during query time that indicate where they are coming from. For example, you can add the source collection on the Atlas cluster when federating across clusters or you can add the path from your AWS S3 bucket where the data is being read. The great thing is that you can now also query on these fields to only get data from the source of your choice!
The other feature we’re adding is a bit nuanced, and we are calling it “globbing.” For those of you familiar with Atlas Data Federation, you probably know about our “wildcard collections.” This functionality allows you to generate collection names based on the collections that exist in your underlying Atlas clusters or based on sections of paths to your files in S3. This is a handy feature to avoid having to explicitly define everything in your storage configuration. “Globbing” is somewhat similar, except that instead of dynamically generating new collections for each collection in your cluster, it will dynamically merge collections to give you a “global” view of your data automatically. To help illustrate this, I’m going to walk you through an example.
Imagine you are running a successful travel agency on top of MongoDB. For various reasons, you have chosen to store your customers data in different databases based on their location. (Maybe you are going to shard based on this and will have different databases in different regions for compliance purposes.)
This has worked well, but now you’d like to query your data based on this information and get a holistic view of your data across geographies in real time (without impacting your operational workloads). So let’s discuss how to solve this challenge!
## Prerequisites
In order to follow along with this tutorial yourself, you will need the following:
1. Experience with Atlas Data Federation.
2. An Atlas cluster with the sample data in it.
Here is how the data is modeled in my cluster (data in your cluster can be spread out among collections however your application requires):
* Cluster: MongoTravelServices
* Database: ireland
* Collection: user_feedback (8658 Documents)
* Collection: passengers
* Collection: flights
* Database: israel
* Collection: user_feedback (8658 Documents)
* Collection: passengers
* Collection: flights
* Database: usa
* Collection: user_feedback (8660 Documents)
* Collection: passengers
* Collection: flights
The goal here is to consolidate this data into one database, and then have each of the collections for user feedback, passengers, and flights represent the data stored in the collections from each database on the cluster. Lastly, we also want to be able to query on the “database” name as if it were part of our documents.
## Create a Federated Database instance
* The first thing you’ll need to do is navigate to the “Data Federation” tab on the left-hand side of your Atlas dashboard and then click “set up manually” in the "create new federated database" dropdown in the top right corner of the UI.
* Then, for this example, we’re going to manually edit the storage configuration as these capabilities are not yet available in the UI editor.
```
{
"databases":
{
"name": "GlobalVirtualDB",
"collections": [
{
"name": "user_feedback",
"dataSources": [
{
"collection": "user_feedback",
"databaseRegex": ".*", // This syntax triggers the globbing or combination of each collection named user_feedback in each database of the MongoTravelServices cluster.
"provenanceFieldName": "_provenance_data", // The name of the field where provenance data will be added.
"storeName": "MongoTravelServices"
}
]
}
],
"views": []
}
],
"stores": [
{
"clusterName": "MongoTravelServices",
"name": "MongoTravelServices",
"projectId": "5d9b6aba014b768e8241d442",
"provider": "atlas",
"readPreference": {
"mode": "secondary",
"tagSets": []
}
}
]
}
```
Now when you connect, you will see:
```
AtlasDataFederation GlobalVirtualDB> show dbs
GlobalVirtualDB 0 B
AtlasDataFederation GlobalVirtualDB> use GlobalVirtualDB
already on db GlobalVirtualDB
AtlasDataFederation GlobalVirtualDB> show tables
user_feedback
AtlasDataFederation GlobalVirtualDB>
```
And a simple count results in the count of all three collections globbed together:
```
AtlasDataFederation GlobalVirtualDB> db.user_feedback.countDocuments()
25976
AtlasDataFederation GlobalVirtualDB>
```
25976 is the sum of 8660 feedback documents from the USA, 8658 from Israel, and 8658 from Ireland.
And lastly, I can query on the provenance metadata using the field *“_provenance_data.databaseName”*:
```
AtlasDataFederation GlobalVirtualDB> db.user_feedback.findOne({"_provenance_data.databaseName": "usa"})
{
_id: ObjectId("63a471e1bb988608b5740f65"),
'id': 21037,
'Gender': 'Female',
'Customer Type': 'Loyal Customer',
'Age': 44,
'Type of Travel': 'Business travel',
'Class': 'Business',
…
'Cleanliness': 1,
'Departure Delay in Minutes': 50,
'Arrival Delay in Minutes': 55,
'satisfaction': 'satisfied',
'_provenance_data': {
'provider': 'atlas',
'clusterName': 'MongoTravelServices',
'databaseName': 'usa',
'collectionName': 'user_feedback'
}
}
AtlasDataFederation GlobalVirtualDB>
```
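Because the provenance fields behave like any other document fields, they can also drive aggregations. Here is a minimal sketch that breaks the combined feedback count down by source database, which should return the 8660/8658/8658 split mentioned above:
```
AtlasDataFederation GlobalVirtualDB> db.user_feedback.aggregate([
  { $group: { _id: "$_provenance_data.databaseName", count: { $sum: 1 } } }
])
```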
## In review
So, what have we done and what have we learned?
1. We saw how quickly and easily you can create a Federated Database in MongoDB Atlas.
2. We learned how you can easily combine and reshape data from your underlying Atlas clusters inside of Atlas Data Federation with Collection Globbing. Now, you can easily query one user_feedback collection and have it query data in the user_feedback collections in each database.
3. We saw how to add provenance data to our documents and query it.
### A couple of things to remember about Atlas Data Federation
1. Collection globbing is a new feature that applies to Atlas cluster sources and allows dynamic manipulation of source collections similar to “wildcard collections.”
2. Provenance allows you to include additional metadata with your documents. You can indicate that data federation should include additional attributes such as source cluster, database, collection, the source path in S3, and more.
3. Currently, this is only supported in the Data Federation JSON editor or via setting the Storage Configuration in the shell, not the visual storage configuration editor.
4. This is particularly powerful for multi-tenant implementations done in MongoDB.
To learn more about Atlas Data Federation and whether it would be the right solution for you, check out our documentation and tutorials or get started today. | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to model and transform your MongoDB Atlas Cluster data for real-time query-ability with Data Federation.",
"contentType": "Tutorial"
} | Utilizing Collection Globbing and Provenance in Data Federation | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/deploy-mongodb-atlas-aws-cdk-typescript | created | # How to Deploy MongoDB Atlas with AWS CDK in TypeScript
MongoDB Atlas, the industry’s leading developer data platform, simplifies application development and working with data for a wide variety of use cases, scales globally, and optimizes for price/performance as your data needs evolve over time. With Atlas, you can address the needs of modern applications faster to accelerate your go-to-market timelines, all while reducing data infrastructure complexity. Atlas offers a variety of features such as cloud backups, search, and easy integration with other cloud services.
AWS Cloud Development Kit (CDK) is a tool provided by Amazon Web Services (AWS) that allows you to define infrastructure as code using familiar programming languages such as TypeScript, JavaScript, Python, Java, Go, and C#.
MongoDB recently announced the GA for Atlas Integrations for CDK. This is an ideal use case for teams that want to leverage the TypeScript ecosystem and no longer want to manually provision AWS CloudFormation templates in YAML or JSON. Not a fan of TypeScript? No worries! MongoDB Atlas CDK Integrations also now support Python, Java, C#, and Go.
In this step-by-step guide, we will walk you through the entire process. Let's get started!
## Setup
Before we start, you will need to do the following:
- Open a MongoDB Atlas account
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
- Create a MongoDB Atlas Programmatic API Key (PAK)
- Install and configure an AWS Account + AWS CLI
- Store your MongoDB Atlas PAK in AWS Secret Manager
- Activate the below CloudFormation resources in the AWS region of your choice
- MongoDB::Atlas::Project
- MongoDB::Atlas::Cluster
- MongoDB::Atlas::DatabaseUser
- MongoDB::Atlas::ProjectIpAccessList
## Step 1: Install AWS CDK
The AWS CDK is an open-source software (OSS) development framework for defining cloud infrastructure as code and provisioning it through AWS CloudFormation. It provides high-level components that preconfigure cloud resources with proven defaults, so you can build cloud applications without needing to be an expert. You can install it globally using npm:
```bash
npm install -g aws-cdk
```
This command installs AWS CDK. The optional -g flag allows you to use it globally anywhere on your machine.
## Step 2: Bootstrap CDK
Next, we need to bootstrap our AWS environment to create the necessary resources to manage the CDK apps. The `cdk bootstrap` command creates an Amazon S3 bucket for storing files and a CloudFormation stack to manage the resources.
```bash
cdk bootstrap aws://ACCOUNT_NUMBER/REGION
```
Replace ACCOUNT_NUMBER with your AWS account number, and REGION with the AWS region you want to use.
## Step 3: Initialize a New CDK app
Now we can initialize a new CDK app using TypeScript. This is done using the `cdk init` command:
```bash
cdk init app --language typescript
```
This command initializes a new CDK app in TypeScript language. It creates a new directory with the necessary files and directories for a CDK app.
## Step 4: Install MongoDB Atlas CDK
To manage MongoDB Atlas resources, we will need a specific CDK module called awscdk-resources-mongodbatlas (see more details on this package on our Construct Hub page). Let's install it:
```bash
npm install awscdk-resources-mongodbatlas
```
This command installs the MongoDB Atlas CDK module, which will allow us to define and manage MongoDB Atlas resources in our CDK app.
## Step 5: Replace the generated file with AtlasBasic CDK L3 repo example
Feel free to start coding if you are already familiar with CDK. If it’s easier, you can leverage the AtlasBasic CDK resource example in our repo (also included below). This is a simple CDK Level 3 resource that deploys MongoDB Atlas project, cluster, database user, and project IP access list resources on your behalf. All you need to do is paste it into your “lib/YOUR_FILE.ts” file, replacing the generated file that is already there (which was created in Step 3).
Please make sure to replace the `export class CdkTestingStack extends cdk.Stack` line with the specific folder name used in your specific environment. No other changes are required.
```javascript
// This CDK L3 example creates a MongoDB Atlas project, cluster, databaseUser, and projectIpAccessList
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { AtlasBasic } from 'awscdk-resources-mongodbatlas';
interface AtlasStackProps {
readonly orgId: string;
readonly profile: string;
readonly clusterName: string;
readonly region: string;
readonly ip: string;
}
//Make sure to replace "CdkTestingStack" with your specific folder name used
export class CdkTestingStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const atlasProps = this.getContextProps();
const atlasBasic = new AtlasBasic(this, 'AtlasBasic', {
clusterProps: {
name: atlasProps.clusterName,
replicationSpecs:
{
numShards: 1,
advancedRegionConfigs: [
{
analyticsSpecs: {
ebsVolumeType: "STANDARD",
instanceSize: "M10",
nodeCount: 1
},
electableSpecs: {
ebsVolumeType: "STANDARD",
instanceSize: "M10",
nodeCount: 3
},
priority: 7,
regionName: atlasProps.region,
}]
}]
},
projectProps: {
orgId: atlasProps.orgId,
},
ipAccessListProps: {
accessList:[
{ ipAddress: atlasProps.ip, comment: 'My first IP address' }
]
},
profile: atlasProps.profile,
});
}
getContextProps(): AtlasStackProps {
const orgId = this.node.tryGetContext('orgId');
if (!orgId){
throw "No context value specified for orgId. Please specify via the cdk context."
}
const profile = this.node.tryGetContext('profile') ?? 'default';
const clusterName = this.node.tryGetContext('clusterName') ?? 'test-cluster';
const region = this.node.tryGetContext('region') ?? "US_EAST_1";
const ip = this.node.tryGetContext('ip');
if (!ip){
throw "No context value specified for ip. Please specify via the cdk context."
}
return {
orgId,
profile,
clusterName,
region,
ip
}
}
}
```
## Step 6: Compare the deployed stack with the current state
It's always a good idea to check what changes the CDK will make before actually deploying the stack. Use `cdk diff` command to do so:
```bash
cdk diff --context orgId="YOUR_ORG" --context ip="YOUR_IP"
```
Replace YOUR_ORG with your MongoDB Atlas organization ID and YOUR_IP with your IP address. This command shows the proposed changes between the deployed stack and the current state of your app, highlighting any resources that will be created, deleted, or modified. This is for review purposes only; no changes will be made to your infrastructure.
## Step 7: Deploy the app
Finally, if everything is set up correctly, you can deploy the app:
```bash
cdk deploy --context orgId="YOUR_ORG" --context ip="YOUR_IP"
```
Again, replace YOUR_ORG with your MongoDB Atlas organization ID and YOUR_IP with your IP address. This command deploys your app using AWS CloudFormation.
## (Optional) Step 8: Clean up the deployed resources
Once you're finished with your MongoDB Atlas setup, you might want to clean up the resources you've provisioned to avoid incurring unnecessary costs. You can destroy the resources you've created using the cdk destroy command:
```bash
cdk destroy --context orgId="YOUR_ORG" --context ip="YOUR_IP"
```
This command will destroy the CloudFormation stack associated with your CDK app, effectively deleting all the resources that were created during the deployment process.
Congratulations! You have just deployed MongoDB Atlas with AWS CDK in TypeScript. Next, head to YouTube for a full video step-by-step walkthrough and demo.
The MongoDB Atlas CDK resources are open-sourced under the Apache-2.0 license and we welcome community contributions. To learn more, see our contributing guidelines.
The fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace. Go build with MongoDB Atlas and the AWS CDK today! | md | {
"tags": [
"Atlas",
"TypeScript",
"AWS"
],
"pageDescription": "Learn how to quickly and easily deploy a MongoDB Atlas instance using AWS CDK with TypeScript.",
"contentType": "Tutorial"
} | How to Deploy MongoDB Atlas with AWS CDK in TypeScript | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/query-multiple-databases-with-atlas-data-federation | created | # How to Query from Multiple MongoDB Databases Using MongoDB Atlas Data Federation
Have you ever needed to make queries across databases, clusters, data centers, or even mix it with data stored in an AWS S3 blob? You probably haven't had to do all of these at once, but I'm guessing you've needed to do at least one of these at some point in your career. I'll also bet that you didn't know that this is possible (and easy) to do with MongoDB Atlas Data Federation! A federated database instance lets you configure multiple remote MongoDB deployments and run federated queries across all of the configured deployments.
**MongoDB Atlas Data Federation** allows you to perform queries across many MongoDB systems, including Clusters, Databases, and even AWS S3 buckets. Here's how **MongoDB Atlas Data Federation** works in practice.
Note: In this post, we will be demoing how to query from two separate databases. However, if you want to query data from two separate collections that are in the same database, I would personally recommend that you use the $lookup (aggregation pipeline) query. $lookup performs a left outer join to an unsharded collection in the same database to filter documents from the "joined" collection for processing. In this scenario, using a federated database instance is not necessary.
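To make that concrete, here is a minimal `$lookup` sketch. The `orders` and `customers` collections and their fields are hypothetical and only illustrate the shape of the stage when both collections live in the same database.

```javascript
// Hypothetical example: join "orders" to "customers" within ONE database.
// No federated database instance is needed for this case.
db.orders.aggregate([
  {
    $lookup: {
      from: "customers",         // the collection to join (same database)
      localField: "customerId",  // field in the orders documents
      foreignField: "_id",       // field in the customers documents
      as: "customer"             // output array of matching customer documents
    }
  }
])
```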
tl;dr: In this post, I will guide you through the process of creating and connecting to a virtual database in MongoDB Atlas, configuring paths to collections in two separate MongoDB databases stored in separate datacenters, and querying data from both databases using only a single query.
## Prerequisites
In order to follow along this tutorial, you need to:
- Create at least two M10 clusters in MongoDB Atlas. For this demo, I have created two databases deployed to separate Cloud Providers (AWS and GCP). Click here for information on setting up a new MongoDB Atlas cluster.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
- Ensure that each database has been seeded by loading sample data into our Atlas cluster.
- Have a Mongo Shell installed.
## Deploy a Federated Database Instance
First, make sure you are logged into MongoDB
Atlas. Next, select the Data Federation option on the left-hand navigation.
Create a Virtual Database
- Click “set up manually” in the "create new federated database" dropdown in the top right corner of the UI.
Click **Add Data Source** on the Data Federation Configuration page, and select **MongoDB Atlas Cluster**. Select your first cluster, input `sample_mflix` as the database and `theaters` as the collection. Do this again for your second cluster and input `sample_restaurants` as the database and `restaurants` as the collection. For this tutorial, we will be analyzing restaurant data and some movie theater sample data to determine the number of theaters and restaurants in each zip code.
Repeat the steps above to connect the data for your other cluster and data source.
Next, drag these new data stores into your federated database instance and click **save**. It should look like this.
## Connect to Your Federated Database Instance
The next thing we are going to need to do after setting up our federated database instance is to connect to it so we can start running queries on all of our data. First, click connect in the first box on the data federation overview page.
Click Add Your Current IP Address. Enter your IP address and an optional description, then click **Add IP Address**. In the **Create a MongoDB User** step of the dialog, enter a Username and a Password for your database user. (Note: You'll use this username and password combination to access data on your cluster.)
## Run Queries Against Your Virtual Database
You can run your queries any way you feel comfortable. You can use MongoDB Compass, the MongoDB Shell, connect to an application, or anything you see fit. For this demo, I'm going to be running my queries using the MongoDB Visual Studio Code plugin and leveraging its Playgrounds feature. For more information on using this plugin, check out this post on our Developer Hub.
Make sure you are using the connection string for your federated database instance and not for your individual MongoDB databases. To get the connection string for your new federated database instance, click the connect button on the MongoDB Atlas Data Federation overview page. Then click on Connect using **MongoDB Compass**. Copy this connection string to your clipboard. Note: You will need to add the password of the user that you authorized to access your virtual database here.
You're going to paste this connection string into the MongoDB Visual Studio Code plugin when you add a new connection.
Note: If you need assistance with getting started with the MongoDB Visual Studio Code Plugin, be sure to check out my post, How To Use The MongoDB Visual Studio Code Plugin, and the official documentation.
You can run operations using the MongoDB Query Language (MQL) which includes most, but not all, standard server commands. To learn which MQL operations are supported, see the MQL Support documentation.
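Before running the full aggregation below, a quick `findOne` is an easy way to confirm that your federated collection is reachable. This is just a sanity-check sketch using the default `VirtualDatabase0` and `VirtualCollection0` names; substitute your own names if you renamed them.

```javascript
// Sanity check from a Playground: fetch a single document from the
// federated collection to confirm the connection and data source paths work.
use('VirtualDatabase0');
db.VirtualCollection0.findOne();
```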
The following queries use the paths that you added to your Federated Database Instance during deployment.
For this query, I wanted to construct a unique aggregation that could only be used if both sample datasets were combined using federated query and MongoDB Atlas Data Federation. For this example, we will run a query to determine the number of theaters and restaurants in each zip code, by analyzing the `sample_restaurants.restaurants` and the `sample_mflix.theaters` datasets that were entered above in our clusters.
I want to make it clear that these data sources are still being stored in different MongoDB databases in completely different datacenters, but by leveraging MongoDB Atlas Data Federation, we can query all of our databases at once as if all of our data is in a single collection! The following query is only possible using federated search! How cool is that?
``` javascript
// MongoDB Playground
// Select the database to use. VirtualDatabase0 is the default name for a MongoDB Atlas Data Federation database. If you renamed your database, be sure to put in your virtual database name here.
use('VirtualDatabase0');
// We are connecting to `VirtualCollection0` since this is the default collection that MongoDB Atlas Data Federation calls your collection. If you renamed it, be sure to put in your virtual collection name here.
db.VirtualCollection0.aggregate([
// In the first stage of our aggregation pipeline, we extract and normalize the dataset to only extract zip code data from our dataset.
{
'$project': {
'restaurant_zipcode': '$address.zipcode',
'theater_zipcode': '$location.address.zipcode',
'zipcode': {
'$ifNull': [
'$address.zipcode', '$location.address.zipcode'
]
}
}
},
// In the second stage of our aggregation, we group the data based on the zip code it resides in. We also push each unique restaurant and theater into an array, so we can get a count of the number of each in the next stage.
// We are calculating the `total` number of theaters and restaurants by using the aggregator function on $group. This sums all the documents that share a common zip code.
{
'$group': {
'_id': '$zipcode',
'total': {
'$sum': 1
},
'theaters': {
'$push': '$theater_zipcode'
},
'restaurants': {
'$push': '$restaurant_zipcode'
}
}
},
// In the third stage, we get the size or length of the `theaters` and `restaurants` array from the previous stage. This gives us our totals for each category.
{
'$project': {
'zipcode': '$_id',
'total': '$total',
'total_theaters': {
'$size': '$theaters'
},
'total_restaurants': {
'$size': '$restaurants'
}
}
},
// In our final stage, we sort our data in descending order so that the zip codes with the most number of restaurants and theaters are listed at the top.
{
'$sort': {
'total': -1
}
}
])
```
This outputs the zip codes with the most theaters and restaurants.
``` json
[
{
"_id": "10003",
"zipcode": "10003",
"total": 688,
"total_theaters": 2,
"total_restaurants": 686
},
{
"_id": "10019",
"zipcode": "10019",
"total": 676,
"total_theaters": 1,
"total_restaurants": 675
},
{
"_id": "10036",
"zipcode": "10036",
"total": 611,
"total_theaters": 0,
"total_restaurants": 611
},
{
"_id": "10012",
"zipcode": "10012",
"total": 408,
"total_theaters": 1,
"total_restaurants": 407
},
{
"_id": "11354",
"zipcode": "11354",
"total": 379,
"total_theaters": 1,
"total_restaurants": 378
},
{
"_id": "10017",
"zipcode": "10017",
"total": 378,
"total_theaters": 1,
"total_restaurants": 377
}
]
```
## Wrap-Up
Congratulations! You just set up a Federated Database Instance that contains databases being run in different cloud providers. Then, you queried both databases using the MongoDB Aggregation pipeline by leveraging Atlas Data Federation and federated queries. This allows us to more easily run queries on data that is stored in multiple MongoDB database deployments across clusters, data centers, and even in different formats, including S3 blob storage.
Screenshot from the MongoDB Atlas Data Federation overview page showing the information for our new virtual database.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
## Additional Resources
- Getting Started with MongoDB Atlas Data Federation Docs
- Tutorial: Federated Queries and $out to AWS S3 | md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "Learn how to query from multiple MongoDB databases using MongoDB Atlas Data Federation.",
"contentType": "Tutorial"
} | How to Query from Multiple MongoDB Databases Using MongoDB Atlas Data Federation | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/delivering-near-real-time-single-view-customers-federated-database | created | # Delivering a Near Real-Time Single View into Customers with a Federated Database
So the data within your organization spans across multiple databases, database platforms, and even storage types, but you need to bring it together and make sense of the data that's dispersed. This is referred to as a Single View application and it is a common need for many organizations, so you're not alone!
With MongoDB Data Federation, you can seamlessly query, transform, and aggregate your data from one or more locations, such as within a MongoDB database, AWS S3 buckets, and even HTTP API endpoints. In other words, with Data Federation, you can use the MongoDB Query API to work with your data even if it doesn't exist within MongoDB.
What's a scenario where this might make sense?
Let's say you're in the automotive or supply chain industries. You have customer data that might exist within MongoDB, but your parts vendors run their own businesses external to yours. However, there's a need to pair the parts data with transactions for any particular customer. In this scenario, you might want to be able to create queries or views that bring each of these pieces together.
In this tutorial, we're going to see how quick and easy it is to work with MongoDB Data Federation to create custom views that might aid your sales and marketing teams.
## The prerequisites
To be successful with this tutorial, you should have the following or at least an understanding of the following:
- A MongoDB Atlas instance, M0 or better.
- An external data source, accessible within an AWS S3 bucket or an HTTP endpoint.
- Node.js 18+.
While you could have data ready to go for this tutorial, we're going to assume you need a little bit of help. With Node.js, we can get a package that will allow us to generate fake data. This fake data will act as our customer data within MongoDB Atlas. The external data source will contain our vendor data, something we need to access, but ultimately don't own.
To get down to the specifics, we'll be referencing Carvana data because it is available as a dataset on AWS. If you want to follow along exactly, load that dataset into your AWS S3 bucket. You can either expose the S3 bucket to the public, or configure access specifically for MongoDB. For this example, we'll just be exposing the bucket to the public so we can use HTTP.
## Understanding the Carvana dataset within AWS S3
If you choose to play around with the Carvana dataset that is available within the AWS marketplace, you'll notice that you're left with a CSV that looks like the following:
- vehicle_id
- stock_number
- year
- make
- model
- miles
- trim
- sold_price
- discounted_sold_price
- partnered_dealership
- delivery_fee
- earliest_delivery_date
- sold_date
Since this example is supposed to get you started, much of the data isn't too important to us, but the theme is. The most important data to us will be the **vehicle_id** because it should be a unique representation for any particular vehicle. The **vehicle_id** will be how we connect a customer to a particular vehicle.
With the Carvana data in mind, we can continue towards generating fake customer data.
## Generate fake customer data for MongoDB
While we could connect the Carvana data to a MongoDB federated database and perform queries, the example isn't particularly exciting until we add a different data source.
To populate MongoDB with fake data that makes sense and isn't completely random, we're going to use a tool titled mgeneratejs which can be installed with NPM.
If you don't already have it installed, execute the following from a command prompt:
```bash
npm install -g mgeneratejs
```
With the generator installed, we're going to need to draft a template of how the data should look. You can do this directly in the command line, but it might be easier just to create a shell file for it.
Create a **generate_data.sh** file and include the following:
```bash
mgeneratejs '{
"_id": "$oid",
"name": "$name",
"location": {
"address": "$address",
"city": {
"$choose": {
"from": "Tracy", "Palo Alto", "San Francsico", "Los Angeles" ]
}
},
"state": "CA"
},
"payment_preference": {
"$choose": {
"from": ["Credit Card", "Banking", "Cash", "Bitcoin" ]
}
},
"transaction_history": {
"$array": {
"of": {
"$choose": {
"from": ["2270123", "2298228", "2463098", "2488480", "2183400", "2401599", "2479412", "2477865", "2296988", "2415845", "2406021", "2471438", "2284073", "2328898", "2442162", "2467207", "2388202", "2258139", "2373216", "2285237", "2383902", "2245879", "2491062", "2481293", "2410976", "2496821", "2479193", "2129703", "2434249", "2459973", "2468197", "2451166", "2451181", "2276549", "2472323", "2436171", "2475436", "2351149", "2451184", "2470487", "2475571", "2412684", "2406871", "2458189", "2450423", "2493361", "2431145", "2314101", "2229869", "2298756", "2394023", "2501380", "2431582", "2490094", "2388993", "2489033", "2506533", "2411642", "2429795", "2441783", "2377402", "2327280", "2361260", "2505412", "2253805", "2451233", "2461674", "2466434", "2287125", "2505418", "2478740", "2366998", "2171300", "2431678", "2359605", "2164278", "2366343", "2449257", "2435175", "2413261", "2368558", "2088504", "2406398", "2362833", "2393989", "2178198", "2478544", "2290107", "2441142", "2287235", "2090225", "2463293", "2458539", "2328519", "2400013", "2506801", "2454632", "2386676", "2487915", "2495358", "2353712", "2421438", "2465682", "2483923", "2449799", "2492327", "2484972", "2042273", "2446226", "2163978", "2496932", "2136162", "2449304", "2149687", "2502682", "2380738", "2493539", "2235360", "2423807", "2403760", "2483944", "2253657", "2318369", "2468266", "2435881", "2510356", "2434007", "2030813", "2478191", "2508884", "2383725", "2324734", "2477641", "2439767", "2294898", "2022930", "2129990", "2448650", "2438041", "2261312", "2418766", "2495220", "2403300", "2323337", "2417618", "2451496", "2482895", "2356295", "2189971", "2253113", "2444116", "2378270", "2431210", "2470691", "2460896", "2426935", "2503476", "2475952", "2332775", "2453908", "2432284", "2456026", "2209392", "2457841", "2066544", "2450290", "2427091", "2426772", "2312503", "2402615", "2452975", "2382964", "2396979", "2391773", "2457692", "2158784", "2434491", "2237533", "2474056", "2474203", "2450595", "2393747", "2497077", "2459487", "2494952"]
}
},
"number": {
"$integer": {
"min": 1,
"max": 3
}
},
"unique": true
}
}
}
' -n 50
```
So what's happening in the above template?
It might be easier to have a look at a completed document based on the above template:
```json
{
"_id": ObjectId("64062d2db97b8ab3a8f20f8d"),
"name": "Amanda Vega",
"location": {
"address": "1509 Fuvzu Circle",
"city": "Tracy",
"state": "CA"
},
"payment_preference": "Credit Card",
"transaction_history": [
"2323337"
]
}
```
The script will create 50 documents. Many of the fields will be randomly generated with the exception of the `city`, `payment_preference`, and `transaction_history` fields. While these fields will be somewhat random, we're sandboxing them to a particular set of options.
Customers need to be linked to actual vehicles found in the Carvana data. The script adds one to three actual id values to each document. To narrow the scope, we'll imagine that the customers are locked to certain regions.
Import the output into MongoDB. You might consider creating a **carvana** database and a **customers** collection within MongoDB for this data to live.
## Create a multiple datasource federated database within MongoDB Atlas
It's time for the fun part! We need to create a federated database to combine both customer data that already lives within MongoDB and the Carvana data that lives on AWS S3.
Within MongoDB Atlas, click the **Data Federation** Tab.
Click “set up manually” in the "create new federated database" dropdown in the top right corner of the UI.
Then, add your data sources. Whether the Carvana data source comes directly from an AWS S3 integration or a public HTTP endpoint, it is up to you. The end result will be the same.
With the data sources available, create a database within your federated instance. Since the theme of this example is Carvana, it might make sense to create a **carvana** database and give each data source a proper collection name. The data living on AWS S3 might be called **sales** or **transactions** and the customer data might have a **customers** name.
What you name everything is up to you. When connecting to this federated instance, you'll only ever see the federated database name and federated collection names. Looking in, you won't notice any difference from connecting to any other MongoDB instance.
You can connect to your federated instance using the connection string it provides. It will look similar to a standard MongoDB Atlas connection string.
The above image was captured with MongoDB Compass. Notice the **sales** collection is the Carvana data on AWS S3 and it looks like any other MongoDB document?
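If you prefer to connect programmatically rather than through Compass, a minimal Node.js sketch looks like the following. The connection string is a placeholder for the one provided by your federated instance, and the database and collection names are the ones chosen above.

```javascript
// Minimal sketch: connect to the federated instance with the Node.js driver
// and read one document from each virtual collection. Replace the URI
// placeholder with the connection string from your federated instance.
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://<username>:<password>@<your-federated-instance-host>/?ssl=true&authSource=admin");
  try {
    await client.connect();
    const db = client.db("carvana");
    console.log(await db.collection("sales").findOne());     // data living in S3
    console.log(await db.collection("customers").findOne()); // data living in Atlas
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```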
## Create a single view report with a MongoDB aggregation pipeline
Having all the data sources accessible from one location with Data Federation is great, but we can do better by providing users a single view that might make sense for their reporting needs.
A little imagination will need to be used for this example, but let's say we want a report that shows the amount of car types sold for every city. For this, we're going to need data from both the **customers** collection as well as the **carvana** collection.
Let's take a look at the following aggregation pipeline:
```json
[
    {
"$lookup": {
"from": "sales",
"localField": "transaction_history",
"foreignField": "vehicle_id",
"as": "transaction_history"
}
},
{
"$unwind": {
"path": "$transaction_history"
}
},
{
"$group": {
"_id": {
"city": "$location.city",
"vehicle": "$transaction_history.make"
},
"total_transactions": {
"$sum": 1
}
}
},
{
"$project": {
"_id": 0,
"city": "$_id.city",
"vehicle": "$_id.vehicle",
"total_transactions": 1
}
}
]
```
There are four stages in the above pipeline.
In the first stage, we want to expand the vehicle id values that are found in **customers** documents. Reference values are not particularly useful to us standalone so we do a join operation using the `$lookup` operator between collections. This leaves us with all the details for every vehicle alongside the customer information.
The next stage flattens the array of vehicle information using the `$unwind` operation. By the end of this, all results are flat and we're no longer working with arrays.
In the third stage we group the data. In this example, we are grouping the data based on the city and vehicle type and counting how many of those transactions occurred. By the end of this stage, the results might look like the following:
```json
{
"_id": {
"city": "Tracy",
"vehicle": "Honda"
},
"total_transactions": 4
}
```
In the final stage, we format the data into something a little more attractive using a `$project` operation. This leaves us with data that looks like the following:
```json
[
{
"city": "Tracy",
"vehicle": "Honda",
"total_transactions": 4
},
{
"city": "Tracy",
"vehicle": "Toyota",
"total_transactions": 12
}
]
```
The data can be manipulated any way we want, but for someone running a report of what city sells the most of a certain type of vehicle, this might be useful.
The aggregation pipeline above can be used in MongoDB Compass and would be nearly identical using several of the MongoDB drivers such as Node.js and Python. To get an idea of what it would look like in another language, here is an example of Java:
```java
Arrays.asList(new Document("$lookup",
new Document("from", "sales")
.append("localField", "transaction_history")
.append("foreignField", "vehicle_id")
.append("as", "transaction_history")),
new Document("$unwind", "$transaction_history"),
new Document("$group",
new Document("_id",
new Document("city", "$location.city")
.append("vehicle", "$transaction_history.make"))
.append("total_transactions",
new Document("$sum", 1L))),
new Document("$project",
new Document("_id", 0L)
.append("city", "$_id.city")
.append("vehicle", "$_id.vehicle")
.append("total_transactions", 1L)))
```
When using MongoDB Compass, aggregation pipelines can be output automatically to any supported driver language you want.
The person generating the report probably won't want to deal with aggregation pipelines or application code. Instead, they'll want to look at a view that is always up to date in near real-time.
Within the MongoDB Atlas dashboard, go back to the configuration area for your federated instance. You'll want to create a view, similar to how you created a federated database and federated collection.
Give the view a name and paste the aggregation pipeline into the box when prompted.
Refresh MongoDB Compass or whatever tool you're using and you should see the view. When you load the view, it should show your data as if you ran a pipeline — however, this time without running anything.
In other words, you’d be interacting with the view like you would any other collection — no queries or aggregations to constantly run or keep track of.
The view is automatically kept up to date behind the scenes using the pipeline you used to create it.
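From an application's perspective, reading the report is now just a normal query. For example, assuming you named the view `sales_by_city`, a mongosh query against it might look like this:

```javascript
// The view behaves like any other collection: there is no pipeline to run
// or keep in sync. "sales_by_city" is a hypothetical name; use whatever you
// named your view.
db.sales_by_city.find({ city: "Tracy" }).sort({ total_transactions: -1 })
```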
## Conclusion
With MongoDB Data Federation, you can combine data from numerous data sources and interact with it using standard MongoDB queries and aggregation pipelines. This allows you to create views and run reports in near real-time regardless where your data might live.
Have a question about Data Federation or aggregations? Check out the MongoDB Community Forums and learn how others are using them. | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to bring data together from different datasources for a near-realtime view into customer data using the MongoDB Federated Database feature.",
"contentType": "Tutorial"
} | Delivering a Near Real-Time Single View into Customers with a Federated Database | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/automated-continuous-data-copying-from-mongodb-to-s3 | created | # How to Automate Continuous Data Copying from MongoDB to S3
Modern always-on applications rely on automatic failover capabilities and real-time data access. MongoDB Atlas already supports automatic backups out of the box, but you might still want to copy your data into another location to run advanced analytics on your data or isolate your operational workload. For this reason, it can be incredibly useful to set up automatic continuous replication of your data for your workload.
In this post, we are going to set up a way to continuously copy data from a MongoDB database into an AWS S3 bucket in the Parquet data format by using MongoDB Atlas Database Triggers. We will first set up a Federated Database Instance using MongoDB Atlas Data Federation to consolidate a MongoDB database and our AWS S3 bucket. Next, we will set up a Trigger to automatically add a new document to a collection every minute, and another Trigger to automatically copy our data to our S3 bucket. Then, we will run a test to ensure that our data is being continuously copied into S3 from MongoDB. Finally, we’ll cover some items you’ll want to consider when building out something like this for your application.
Note: The values we use for certain parameters in this blog are for demonstration and testing purposes. If you plan on utilizing this functionality, we recommend you look at the “Production Considerations” section and adjust based on your needs.
## What is Parquet?
For those of you not familiar with Parquet, it's an amazing file format that does a lot of the heavy lifting to ensure blazing fast query performance on data stored in files. This is a popular file format in the Data Warehouse and Data Lake space as well as for a variety of machine learning tasks.
One thing we frequently see users struggle with is getting NoSQL data into Parquet as it is a columnar format. Historically, you would have to write some custom code to get the data out of the database, transform it into an appropriate structure, and then probably utilize a third-party library to write it to Parquet. Fortunately, with MongoDB Atlas Data Federation's $out to S3, you can now convert MongoDB Data into Parquet with little effort.
## Prerequisites
In order to follow along with this tutorial yourself, you will need to
do the following:
1. Create a MongoDB Atlas account, if you do not have one already.
2. Create an AWS account with privileges to create IAM Roles and S3 Buckets (to give Data Federation access to write data to your S3 bucket). Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
3. Install the AWS CLI.
4. Configure the AWS CLI.
5. *Optional*: Set up unified AWS access.
## Create a Federated Database Instance and Connect to S3
We need to set up a Federated Database Instance to copy our MongoDB data and utilize MongoDB Atlas Data Federation's $out to S3 to convert our MongoDB Data into Parquet and land it in an S3 bucket.
The first thing you'll need to do is navigate to "Data Federation" on the left-hand side of your Atlas Dashboard and then click “set up manually” in the "create new federated database" dropdown in the top right corner of the UI.
Then, you need to go ahead and connect your S3 bucket to your Federated Database Instance. This is where we will write the Parquet files. The setup wizard should guide you through this pretty quickly, but you will need access to your credentials for AWS.
>Note: For more information, be sure to refer to the documentation on deploying a Federated Database Instance for a S3 data store. (Be sure to give Atlas Data Federation "Read and Write" access to the bucket, so it can write the Parquet files there).
Select an AWS IAM role for Atlas.
- If you created a role that Atlas is already authorized to read and write to your S3 bucket, select this user.
- If you are authorizing Atlas for an existing role or are creating a new role, be sure to refer to the documentation for how to do this.
Enter the S3 bucket information.
- Enter the name of your S3 bucket. I named my bucket `mongodb-data-lake-demo`.
- Choose Read and write, to be able to write documents to your S3 bucket.
Assign an access policy to your AWS IAM role.
- Follow the steps in the Atlas user interface to assign an access policy to your AWS IAM role.
- Your role policy for read-only or read and write access should look similar to the following:
``` json
{
"Version": "2012-10-17",
"Statement":
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetBucketLocation"
],
"Resource": [
]
}
]
}
```
- Define the path structure for your files in the S3 bucket and click Next.
- Once you've connected your S3 bucket, we're going to create a simple data source to query the data in S3, so we can verify we've written the data to S3 at the end of this tutorial.
## Connect Your MongoDB Database to Your Federated Database Instance
Now, we're going to connect our Atlas Cluster, so we can write data from it into the Parquet files on S3. This involves picking the cluster from a list of clusters in your Atlas project and then selecting the databases and collections you'd like to create Data Sources from and dragging them into your Federated Database Instance.
Screenshot of the Add Data Source modal with collections selected.
## Create a MongoDB Atlas Trigger to Create a New Document Every Minute
Now that we have all of our data sources set up in our brand new Federated Database Instance, we can now set up a MongoDB Database Trigger to automatically generate new documents every minute for our continuous replication demo. **Triggers** allow you to execute server-side logic in response to database events or according to a schedule. Atlas provides two kinds of Triggers: **Database** and **Scheduled** triggers. We will use a **Scheduled** trigger to ensure that these documents are automatically archived in our S3 bucket.
1. Click the Atlas tab in the top navigation of your screen if you have not already navigated to Atlas.
2. Click Triggers in the left-hand navigation.
3. On the Overview tab of the Triggers page, click Add Trigger to open the trigger configuration page.
4. Enter these configuration values for our trigger:
And our Trigger function looks like this:
``` javascript
exports = function () {
const mongodb = context.services.get("NAME_OF_YOUR_ATLAS_SERVICE");
const db = mongodb.db("NAME_OF_YOUR DATABASE")
const events = db.collection("NAME_OF_YOUR_COLLECTION");
const event = events.insertOne(
{
time: new Date(),
aNumber: Math.random() * 100,
type: "event"
}
);
return JSON.stringify(event);
};
```
Lastly, click Run and check that your database is getting new documents inserted into it every 60 seconds.
## Create a MongoDB Atlas Trigger to Copy New MongoDB Data into S3 Every Minute
Alright, now is the fun part. We are going to create a new MongoDB Trigger that copies our MongoDB data every 60 seconds utilizing MongoDB Atlas Data Federation's $out to S3 aggregation pipeline. Create a new Trigger and use these configuration settings.
Your Trigger function will look something like this. But there's a lot going on, so let's break it down.
* First, we are going to connect to our new Federated Database Instance. This is different from the previous Trigger that connected to our Atlas database. Be sure to put your virtual database name in for `context.services.get`. You must connect to your Federated Database Instance to use $out to S3.
* Next, we are going to create an aggregation pipeline function to first query our MongoDB data that's more than 60 seconds old.
* Then, we will utilize the $out aggregate operator to replicate the data from our previous aggregation stage into S3.
* In the format, we're going to specify *parquet* and determine a maxFileSize and maxRowGroupSize.
    * *maxFileSize* is going to determine the maximum size each partition will be.
    * *maxRowGroupSize* is going to determine how records are grouped inside of the Parquet file in "row groups," which will impact performance when querying your Parquet files, similarly to file size.
* Lastly, we’re going to set our S3 path to match the value of the data.
``` javascript
exports = function () {
const service = context.services.get("NAME_OF_YOUR_FEDERATED_DATA_SERVICE");
const db = service.db("NAME_OF_YOUR_VIRTUAL_DATABASE")
const events = db.collection("NAME_OF_YOUR_VIRTUAL_COLLECTION");
  const pipeline = [
{
$match: {
"time": {
$gt: new Date(Date.now() - 60 * 60 * 1000),
$lt: new Date(Date.now())
}
}
}, {
"$out": {
"s3": {
"bucket": "mongodb-federated-data-demo",
"region": "us-east-1",
"filename": "events",
"format": {
"name": "parquet",
"maxFileSize": "10GB",
"maxRowGroupSize": "100MB"
}
}
}
}
];
return events.aggregate(pipeline);
};
```
If all is good, you should see your new Parquet document in your S3 bucket. I've enabled the AWS GUI to show you the versions so that you can see how it is being updated every 60 seconds automatically.
Screenshot from the AWS S3 management console showing the new events.parquet document that was generated by our $out trigger function.
## Production Considerations
Some of the configurations chosen above were done so to make it easy to set up and test, but if you’re going to use this in production, you’ll want to adjust them.
Firstly, this blog was set up with a “deltas” approach. This means that we are only copying the new documents from our collection into our Parquet files. Another approach would be to do a full snapshot, i.e., copying the entire collection into Parquet each time. The approach you take should depend on how much data is in your collection and what’s required by the downstream consumer.
Secondly, regardless of how much data you’re copying, ideally you want Parquet files to be larger, and for them to be partitioned based on how you’re going to query. Apache recommends row group sizes of 512MB to 1GB. You can go smaller depending on your requirements, but as you can see, you want larger files. The other consideration is if you plan to query this data in the parquet format, you should partition it so that it aligns with your query pattern. If you’re going to query on a date field, for instance, you might want each file to have a single day's worth of data.
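For example, a lightly modified version of the trigger's pipeline could write one file per day by computing the filename in the function body. This is only a sketch under those assumptions; the bucket, region, and field names match the ones used earlier in this tutorial, and you should size `maxFileSize`/`maxRowGroupSize` for your own workload.

```javascript
// Sketch: partition output by day by building the S3 filename from the
// current date, so each day's documents land in their own Parquet file.
const day = new Date().toISOString().slice(0, 10); // e.g. "2023-01-31"
const pipeline = [
  { $match: { time: { $gte: new Date(day) } } },    // only today's documents
  {
    $out: {
      s3: {
        bucket: "mongodb-federated-data-demo",
        region: "us-east-1",
        filename: `events/${day}/events`,           // e.g. events/2023-01-31/events
        format: { name: "parquet", maxFileSize: "1GB", maxRowGroupSize: "512MB" }
      }
    }
  }
];
```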
Lastly, depending on your needs, it may be appropriate to look into an alternative scheduling device to triggers, like Temporal or Apache Airflow.
## Wrap Up
In this post, we walked through how to set up an automated continuous replication from a MongoDB database into an AWS S3 bucket in the Parquet data format by using MongoDB Atlas Data Federation and MongoDB Atlas Database Triggers. First, we set up a new Federated Database Instance to consolidate a MongoDB database and our AWS S3 bucket. Then, we set up a Trigger to automatically add a new document to a collection every minute, and another Trigger to automatically back up these new automatically generated documents into our S3 bucket.
We also discussed how Parquet is a great format for your MongoDB data when you need to use columnar-oriented tools like Tableau for visualizations or Machine Learning frameworks that use Data Frames. Parquet can be quickly and easily converted into Pandas Data Frames in Python.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
Additional Resources:
- Data Federation: Getting Started Documentation
- $out S3 Data Lake Documentation | md | {
"tags": [
"Atlas",
"Parquet",
"AWS"
],
"pageDescription": "Learn how to set up a continuous copy from MongoDB into an AWS S3 bucket in Parquet.",
"contentType": "Tutorial"
} | How to Automate Continuous Data Copying from MongoDB to S3 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-apache-airflow | created | # Using MongoDB with Apache Airflow
While writing cron jobs to execute scripts is one way to accomplish data movement, as workflows become more complex, managing job scheduling becomes very difficult and error-prone. This is where Apache Airflow shines. Airflow is a workflow management system originally designed by Airbnb and open sourced in 2015. With Airflow, you can programmatically author, schedule, and monitor complex data pipelines. Airflow is used in many use cases with MongoDB, including:
* Machine learning pipelines.
* Automating database administration operations.
* Batch movement of data.
In this post, you will learn the basics of how to leverage MongoDB within an Airflow pipeline.
## Getting started
Apache Airflow consists of a number of installation steps, including installing a database and webserver. While it’s possible to follow the installation script and configure the database and services, the easiest way to get started with Airflow is to use Astronomer CLI. This CLI stands up a complete Airflow docker environment from a single command line.
Likewise, the easiest way to stand up a MongoDB cluster is with MongoDB Atlas. Atlas is not just a hosted MongoDB cluster. Rather, it’s an integrated suite of cloud database and data services that enable you to quickly build your applications. One service, Atlas Data Federation, is a cloud-native query processing service that allows users to create a virtual collection from heterogeneous data sources such as Amazon S3 buckets, MongoDB clusters, and HTTP API endpoints. Once defined, the user simply issues a query to obtain data combined from these sources.
For example, consider a scenario where you were moving data with an Airflow DAG into MongoDB and wanted to join cloud object storage - Amazon S3 or Microsoft Azure Blob Storage data with MongoDB as part of a data analytics application. Using MongoDB Atlas Data Federation, you create a virtual collection that contains a MongoDB cluster and a cloud object storage collection. Now, all your application needs to do is issue a single query and Atlas takes care of joining heterogeneous data. This feature and others like MongoDB Charts, which we will see later in this post, will increase your productivity and enhance your Airflow solution. To learn more about MongoDB Atlas Data Federation, check out the MongoDB.live webinar on YouTube, Help You Data Flow with Atlas Data Lake. For an overview of MongoDB Atlas, check out Intro to MongoDB Atlas in 10 mins | Jumpstart, available on YouTube.
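For example, once a federated database instance is configured, querying the combined data looks no different from querying a single collection. The names below are placeholders (`VirtualDatabase0` and `VirtualCollection0` are the defaults Atlas suggests); Atlas resolves where each document actually lives.

```javascript
// Sketch: one query against a virtual collection that Atlas Data Federation
// resolves across an Atlas cluster and S3. Names are placeholders.
use('VirtualDatabase0');
db.VirtualCollection0.find().limit(5);
```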
## Currency over time
In this post, we will create an Airflow workflow that queries an HTTP endpoint for a historical list of currency values versus the Euro. The data will then be inserted into MongoDB using the MongoHook and a chart will be created using MongoDB Charts. In Airflow, a hook is an interface to an external platform or database such as MongoDB. The MongoHook wraps the PyMongo Python Driver for MongoDB, unlocking all the capabilities of the driver within an Airflow workflow.
### Step 1: Spin up the Airflow environment
If you don’t have an Airflow environment already available, install the Astro CLI. Once it’s installed, create a directory for the project called “currency.”
**mkdir currency && cd currency**
Next, create the Airflow environment using the Astro CLI.
**astro dev init**
This command will create a folder structure that includes a folder for DAGs, a Dockerfile, and other support files that are used for customizations.
### Step 2: Install the MongoDB Airflow provider
Providers help Airflow interface with external systems. To add a provider, modify the requirements.txt file and add the MongoDB provider.
**echo “apache-airflow-providers-mongo==3.0.0” >> requirements.txt**
Finally, start the Airflow project.
**astro dev start**
This simple command will start and configure the four docker containers needed for Airflow: a webserver, scheduler, triggerer, and Postgres database, respectively.
If your Airflow project was already running before you added the provider to requirements.txt, restart it so the new dependency is installed:

**astro dev restart**
Note: You can also manually install the MongoDB Provider using PyPi if you are not using the Astro CLI.
Note: The HTTP provider is already installed as part of the Astro runtime. If you did not use Astro, you will need to install the HTTP provider.
### Step 3: Creating the DAG workflow
One of the components that is installed with Airflow is a webserver. This is used as the main operational portal for Airflow workflows. To access, open a browser and navigate to http://localhost:8080. Depending on how you installed Airflow, you might see example DAGs already populated. Airflow workflows are referred to as DAGs (Directed Acyclic Graphs) and can be anything from the most basic job scheduling pipelines to more complex ETL, machine learning, or predictive data pipeline workflows such as fraud detection. These DAGs are Python scripts that give developers complete control of the workflow. DAGs can be triggered manually via an API call or the web UI. DAGs can also be scheduled for execution one time, recurring, or in any cron-like configuration.
Let’s get started exploring Airflow by creating a Python file, “currency.py,” within the **dags** folder using your favorite editor.
The following is the complete source code for the DAG.
```
import os
import json
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.bash import BashOperator
from airflow.providers.http.operators.http import SimpleHttpOperator
from airflow.providers.mongo.hooks.mongo import MongoHook
from datetime import datetime,timedelta
def on_failure_callback(**context):
print(f"Task {context'task_instance_key_str']} failed.")
def uploadtomongo(ti, **context):
try:
hook = MongoHook(mongo_conn_id='mongoid')
client = hook.get_conn()
db = client.MyDB
currency_collection=db.currency_collection
print(f"Connected to MongoDB - {client.server_info()}")
d=json.loads(context["result"])
currency_collection.insert_one(d)
except Exception as e:
        print(f"Error connecting to MongoDB -- {e}")
with DAG(
dag_id="load_currency_data",
schedule_interval=None,
start_date=datetime(2022,10,28),
catchup=False,
tags= ["currency"],
default_args={
"owner": "Rob",
"retries": 2,
"retry_delay": timedelta(minutes=5),
'on_failure_callback': on_failure_callback
}
) as dag:
t1 = SimpleHttpOperator(
task_id='get_currency',
method='GET',
endpoint='2022-01-01..2022-06-30',
headers={"Content-Type": "application/json"},
do_xcom_push=True,
dag=dag)
t2 = PythonOperator(
task_id='upload-mongodb',
python_callable=uploadtomongo,
op_kwargs={"result": t1.output},
dag=dag
)
t1 >> t2
```
### Step 4: Configure connections
When you look at the code, notice there are no connection strings within the Python file. Connection identifiers, as shown in the code snippet below, are placeholders for connection strings.
hook = MongoHook(mongo_conn_id='mongoid')
Connection identifiers and the connection configurations they represent are defined within the Connections tab of the Admin menu in the Airflow UI.
In this example, since we are connecting to MongoDB and an HTTP API, we need to define two connections. First, let’s create the MongoDB connection by clicking the “Add a new record” button.
This will present a page where you can fill out connection information. Select “MongoDB” from the Connection Type drop-down and fill out the following fields:
| | |
| --- | --- |
| Connection Id | mongoid |
| Connection Type | MongoDB |
| Host | XXXX..mongodb.net *(Place your MongoDB Atlas hostname here)* |
| Schema | MyDB *(e.g. the database in MongoDB)* |
| Login | *(Place your database username here)* |
| Password | *(Place your database password here)* |
| Extra | {"srv": true} |
Click “Save” and “Add a new record” to create the HTTP API connection.
Select “HTTP” for the Connection Type and fill out the following fields:
| | |
| --- | --- |
| Connection Id | http_default |
| Connection Type | HTTP |
| Host | api.frankfurter.app |
Note: Connection strings can also be stored in environment variables or stores securely using an external secrets back end, such as HashiCorp Vault or AWS SSM Parameter Store.
### Step 5: The DAG workflow
Click on the DAGs menu and then “load_currency_data.” You’ll be presented with a number of sub items that address the workflow, such as the Code menu that shows the Python code that makes up the DAG.
Clicking on Graph will show a visual representation of the DAG parsed from the Python code.
In our example, “get_currency” uses the SimpleHttpOperator to obtain a historical list of currency values versus the Euro.
```
t1 = SimpleHttpOperator(
task_id='get_currency',
method='GET',
endpoint='2022-01-01..2022-06-30',
headers={"Content-Type": "application/json"},
do_xcom_push=True,
dag=dag)
```
Airflow passes information between tasks using XComs. In this example, we store the return data from the API call to XCom. The next operator, “upload-mongodb,” uses the PythonOperator to call a python function, “uploadtomongo.”
```
t2 = PythonOperator(
task_id='upload-mongodb',
python_callable=uploadtomongo,
op_kwargs={"result": t1.output},
dag=dag
)
```
This function accesses the data stored in XCom and uses MongoHook to insert the data obtained from the API call into a MongoDB cluster.
```
def uploadtomongo(ti, **context):
try:
hook = MongoHook(mongo_conn_id='mongoid')
client = hook.get_conn()
db = client.MyDB
currency_collection=db.currency_collection
print(f"Connected to MongoDB - {client.server_info()}")
        d=json.loads(context["result"])
currency_collection.insert_one(d)
except Exception as e:
        print(f"Error connecting to MongoDB -- {e}")
```
Our example workflow is simple: execute one task and then another.
```
t1 >> t2
```
Airflow overloaded the “>>” bitwise operator to describe the flow of tasks. For more information, see “Bitshift Composition.”
Airflow can enable more complex workflows, such as the following:
Task execution can be conditional with multiple execution paths.
### Step 6: Scheduling the DAG
Airflow is known best for its workflow scheduling capabilities, and these are defined as part of the DAG definition.
```
with DAG(
dag_id="load_currency_data",
schedule=None,
start_date=datetime(2022,10,28),
catchup=False,
    tags=["currency"],
default_args={
"owner": "Rob",
"retries": 2,
"retry_delay": timedelta(minutes=5),
'on_failure_callback': on_failure_callback
}
) as dag:
```
The scheduling interval can be defined using a cron expression, a timedelta, or one of Airflow's presets, such as the one used in this example, “None.”
DAGs can be scheduled to start at a date in the past. If you’d like Airflow to catch up and execute the DAG as many times as would have been done within the start time and now, you can set the “catchup” property. Note: “Catchup” defaults to “True,” so make sure you set the value accordingly.
From our example, you can see just some of the configuration options available.
### Step 7: Running the DAG
You can execute a DAG ad-hoc through the web using the “play” button under the action column.
Once it’s executed, you can click on the DAG and Grid menu item to display the runtime status of the DAG.
In the example above, the DAG was run four times, all with success. You can view the log of each step by clicking on the task and then “Log” from the menu.
The log is useful for troubleshooting the task. Here we can see our output from the `print(f"Connected to MongoDB - {client.server_info()}")` command within the PythonOperator.
### Step 8: Exploring the data in MongoDB Atlas
Once we run the DAG, the data will be in the MongoDB Atlas cluster. Navigating to the cluster, we can see the “currency_collection” was created and populated with currency data.
### Step 9: Visualizing the data using MongoDB Charts
Next, we can visualize the data by using MongoDB Charts.
Note that the data stored in MongoDB from the API contains a subdocument for every day of the given period. A sample of this data is as follows:
```
{
_id: ObjectId("635b25bdcef2d967af053e2c"),
amount: 1,
base: 'EUR',
start_date: '2022-01-03',
end_date: '2022-06-30',
    rates: {
      '2022-01-03': {
        AUD: 1.5691,
        BGN: 1.9558,
        BRL: 6.3539,
        …
      },
      '2022-01-04': {
        AUD: 1.5682,
        BGN: 1.9558,
        BRL: 6.4174,
        …
      },
      …
    }
}
```
With MongoDB Charts, we can define an aggregation pipeline filter to transform the data into a format that will be optimized for chart creation. For example, consider the following aggregation pipeline filter:
```
[
  { $project: { rates: { $objectToArray: "$rates" } } },
  { $unwind: "$rates" },
  { $project: { _id: 0, "date": "$rates.k", "Value": "$rates.v" } }
]
```
This transforms the data into subdocuments that have two key value pairs of the date and values respectively.
```
{
    date: '2022-01-03',
    Value: {
      AUD: 1.5691,
      BGN: 1.9558,
      BRL: 6.3539,
      …
    }
},
{
    date: '2022-01-04',
    Value: {
      AUD: 1.5682,
      BGN: 1.9558,
      BRL: 6.4174,
      …
    }
}
```
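If you'd like to sanity-check the pipeline before pasting it into Charts, you can run the same stages directly against the collection the DAG writes to. This sketch uses mongosh and the `MyDB.currency_collection` names from the DAG above.

```javascript
// Run the same transformation from mongosh to confirm the shape of the
// documents Charts will receive.
db.getSiblingDB("MyDB").currency_collection.aggregate([
  { $project: { rates: { $objectToArray: "$rates" } } },
  { $unwind: "$rates" },
  { $project: { _id: 0, date: "$rates.k", Value: "$rates.v" } }
])
```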
We can add this aggregation pipeline filter into Charts and build out a chart comparing the US dollar (USD) to the Euro (EUR) over this time period.
For more information on MongoDB Charts, check out the YouTube video “Intro to MongoDB Charts (demo)” for a walkthrough of the feature.
## Summary
Airflow is an open-sourced workflow scheduler used by many enterprises throughout the world. Integrating MongoDB with Airflow is simple using the MongoHook. Astronomer makes it easy to quickly spin up a local Airflow deployment. Astronomer also has a registry that provides a central place for Airflow operators, including the MongoHook and MongoSensor.
## Useful resources
Learn more about Astronomer, and check out the MongoHook documentation. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to integrate MongoDB within your Airflow DAGs.",
"contentType": "Tutorial"
} | Using MongoDB with Apache Airflow | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/best-practices-google-cloud-functions-atlas | created | # Best Practices and a Tutorial for Using Google Cloud Functions with MongoDB Atlas
Serverless applications are becoming increasingly popular among developers. They provide a cost-effective and efficient way to handle application logic and data storage. Two of the most popular technologies that can be used together to build serverless applications are Google Cloud Functions and MongoDB Atlas.
Google Cloud Functions allows developers to run their code in response to events, such as changes in data or HTTP requests, without having to manage the underlying infrastructure. This makes it easy to build scalable and performant applications. MongoDB Atlas, on the other hand, provides a fully-managed, globally-distributed, and highly-available data platform. This makes it easy for developers to store and manage their data in a reliable and secure way.
In this article, we'll discuss three best practices for working with databases in Google Cloud Functions. First, we'll explore the benefits of opening database connections in the global scope. Then, we'll cover how to make your database operations idempotent to ensure data consistency in event-driven functions. Finally, we'll discuss how to set up a secure network connection to protect your data from unauthorized access. By following these best practices, you can build more reliable and secure event-driven functions that work seamlessly with your databases.
## Prerequisites
The minimal requirements for following this tutorial are:
* A MongoDB Atlas database with a database user and appropriate network configuration.
* A Google Cloud account with billing enabled.
* Cloud Functions, Cloud Build, Artifact Registry, Cloud Run, Logging, and Pub/Sub APIs enabled. Follow this link to enable the required APIs.
You can try the experiments shown in this article yourself. Both MongoDB Atlas and Cloud Functions offer a free tier which are sufficient for the first two examples. The final example — setting up a VPC network or Private Service Connect — requires setting up a paid, dedicated Atlas database and using paid Google Cloud features.
## Open database connections in the global scope
Let’s say that we’re building a traditional, self-hosted application that connects to MongoDB. We could open a new connection every time we need to communicate with the database and then immediately close that connection. But opening and closing connections adds an overhead both to the database server and to our app. It’s far more efficient to reuse the same connection every time we send a request to the database. Normally, we’d connect to the database using a MongoDB driver when we start the app, save the connection to a globally accessible variable, and use it to send requests. As long as the app is running, the connection will remain open.
To be more precise, when we connect, the MongoDB driver creates a connection pool. This allows for concurrent requests to communicate with the database. The driver will automatically manage the connections in the pool, creating new ones when needed and closing them when they’re idle. The pooling also limits the number of connections that can come from a single application instance (100 connections is the default).
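If the defaults don't fit your workload, the pool is configurable on the client. For example, with the Node.js driver (the URI is a placeholder for your Atlas connection string):

```javascript
// Sketch: cap the pool at 10 connections per client instance instead of the
// default 100, and close pooled connections that sit idle for 60 seconds.
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.ATLAS_URI, {
  maxPoolSize: 10,
  maxIdleTimeMS: 60000
});
```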
On the other hand, Cloud Functions are serverless. They’re very efficient at automatically scaling up when multiple concurrent requests come in, and down when the demand decreases.
By default, each function instance can handle only one request at a time. However, with Cloud Functions 2nd gen, you can configure your functions to handle concurrent requests. For example, if you set the concurrency parameter to 10, a single function instance will be able to work on a max of 10 requests at the same time. If we’re careful about how we connect to the database, the requests will take advantage of the connection pool created by the MongoDB driver. In this section, we’ll explore specific strategies for reusing connections.
By default, Cloud Functions can spin up to 1,000 new instances. However, each function instance runs in its own isolated execution context. This means that instances can’t share a database connection pool. That’s why we need to pay attention to the way we open database connections. If we have our concurrency parameter set to 1 and we open a new connection with each request, we will cause unnecessary overhead to the database or even hit the maximum connections limit.
That looks very inefficient! Thankfully, there’s a better way to do it. We can take advantage of the way Cloud Functions reuses already-started instances.
We mentioned earlier that Cloud Functions scale by spinning up new instances to handle incoming requests. Creating a brand new instance is called a “cold start” and involves the following steps:
1. Loading the runtime environment.
2. Executing the global (instance-wide) scope of the function.
3. Executing the body of the function defined as an “entry point.”
When the instance handles the request, it’s not closed down immediately. If we get another request in the next few minutes, chances are high it will be routed to the same, already “warmed” instance. But this time, only the “entry point” function will be invoked. And what’s more important is that the function will be invoked in the same execution environment. Practically, this means that everything we defined in the global scope can be reused — including a database connection! This will reduce the overhead of opening a new connection with every function invocation.
While we can take advantage of the global scope to store a reusable connection, there is no guarantee that a warm instance, and therefore its existing connection, will be reused for any given request.
Let’s test this theory! We’ll do the following experiment:
1. We’ll create two Cloud Functions that insert a document into a MongoDB Atlas database. We’ll also attach an event listener that logs a message every time a new database connection is created.
1. The first function will connect to Atlas in the function scope.
2. The second function will connect to Atlas in the global scope.
2. We’ll send 50 concurrent requests to each function and wait for them to complete. In theory, after spinning up a few instances, Cloud Functions will reuse them to handle some of the requests.
3. Finally, we’ll inspect the logs to see how many database connections were created in each case.
Before starting, go back to your Atlas deployment and locate your connection string. Also, make sure you’ve allowed access from anywhere in the network settings. This keeps the experiment simple, but for real workloads we strongly recommend establishing a secure connection instead, as covered in the final section of this article.
### Creating the Cloud Function with function-scoped database connection
We’ll use the Google Cloud console to conduct our experiment. Navigate to the Cloud Functions page and make sure you’ve logged in, selected a project, and enabled all required APIs. Then, click on **Create function** and enter the following configuration:
* Environment: **2nd gen**
* Function name: **create-document-function-scope**
* Region: **us-central1**
* Authentication: **Allow unauthenticated invocations**
Expand the **Runtime, build, connections and security settings** section and under **Runtime environment variables**, add a new variable **ATLAS_URI** with your MongoDB Atlas connection string. Don’t forget to replace the username and password placeholders with the credentials for your database user.
> Instead of adding your credentials as environment variables in clear text, you can easily store them as secrets in Secret Manager. Once you do that, you’ll be able to access them from your Cloud Functions.
Click **Next**. It’s time to add the implementation of the function. Open the `package.json` file from the left pane and replace its contents with the following:
```json
{
"dependencies": {
"@google-cloud/functions-framework": "^3.0.0",
"mongodb": "latest"
}
}
```
We’ve added the `mongodb` package as a dependency. This package is the MongoDB Node.js driver that we’ll use to connect to the database.
Now, switch to the **`index.js`** file and replace the default code with the following:
```javascript
// Global (instance-wide) scope
// This code runs once (at instance cold-start)
const { http } = require('@google-cloud/functions-framework');
const { MongoClient } = require('mongodb');
http('createDocument', async (req, res) => {
// Function scope
// This code runs every time this function is invoked
const client = new MongoClient(process.env.ATLAS_URI);
client.on('connectionCreated', () => {
console.log('New connection created!');
});
// Connect to the database in the function scope
try {
await client.connect();
const collection = client.db('test').collection('documents');
const result = await collection.insertOne({ source: 'Cloud Functions' });
if (result) {
console.log(`Document ${result.insertedId} created!`);
return res.status(201).send(`Successfully created a new document with id ${result.insertedId}`);
} else {
return res.status(500).send('Creating a new document failed!');
}
} catch (error) {
res.status(500).send(error.message);
}
});
```
Make sure the selected runtime is **Node.js 16** and for entry point, replace **helloHttp** with **createDocument**.
Finally, hit **Deploy**.
### Creating the Cloud Function with globally-scoped database connection
Go back to the list with functions and click **Create function** again. Name the function **create-document-global-scope**. The rest of the configuration should be exactly the same as in the previous function. Don’t forget to add an environment variable called **ATLAS_URI** for your connection string. Click **Next** and replace the **`package.json`** contents with the same code we used in the previous section. Then, open **`index.js`** and add the following implementation:
```javascript
// Global (instance-wide) scope
// This code runs once (at instance cold-start)
const { http } = require('@google-cloud/functions-framework');
const { MongoClient } = require('mongodb');
// Use lazy initialization to instantiate the MongoDB client and connect to the database
let client;
async function getConnection() {
if (!client) {
client = new MongoClient(process.env.ATLAS_URI);
client.on('connectionCreated', () => {
console.log('New connection created!');
});
// Connect to the database in the global scope
await client.connect();
}
return client;
}
http('createDocument', async (req, res) => {
// Function scope
// This code runs every time this function is invoked
const connection = await getConnection();
const collection = connection.db('test').collection('documents');
try {
const result = await collection.insertOne({ source: 'Cloud Functions' });
if (result) {
console.log(`Document ${result.insertedId} created!`);
return res.status(201).send(`Successfully created a new document with id ${result.insertedId}`);
} else {
return res.status(500).send('Creating a new document failed!');
}
} catch (error) {
res.status(500).send(error.message);
}
});
```
Change the entry point to **createDocument** and deploy the function.
As you can see, the only difference between the two implementations is where we connect to the database. To reiterate:
* The function that connects in the function scope will create a new connection on every invocation.
* The function that connects in the global scope will create new connections only on “cold starts,” allowing for some connections to be reused.
Let’s run our functions and see what happens! Click **Activate Cloud Shell** at the top of the Google Cloud console. Execute the following command to send 50 requests to the **create-document-function-scope** function:
```shell
seq 50 | xargs -Iz -n 1 -P 50 \
gcloud functions call \
create-document-function-scope \
--region us-central1 \
--gen2
```
You’ll be prompted to authorize Cloud Shell to use your credentials when executing commands. Click **Authorize**. After a few seconds, you should start seeing logs in the terminal window about documents being created. Wait until the command stops running — this means all requests were sent.
Then, execute the following command to get the logs from the function:
```shell
gcloud functions logs read \
create-document-function-scope \
--region us-central1 \
--gen2 \
--limit 500 \
| grep "New connection created"
```
We’re using `grep` to filter only the messages that are logged whenever a new connection is created. You should see that a whole bunch of new connections were created!
We can count them with the `wc -l` command:
```shell
gcloud functions logs read \
create-document-function-scope \
--region us-central1 \
--gen2 \
--limit 500 \
| grep "New connection created" \
| wc -l
```
You should see the number 50 printed in the terminal window. This confirms our theory that a connection is created for each request.
Let’s repeat the process for the **create-document-global-scope** function.
```shell
seq 50 | xargs -Iz -n 1 -P 50 \
gcloud functions call \
create-document-global-scope \
--region us-central1 \
--gen2
```
You should see log messages about created documents again. When the command’s finished, run:
```shell
gcloud functions logs read \
create-document-global-scope \
--region us-central1 \
--gen2 \
--limit 500 \
| grep "New connection created"
```
This time, you should see significantly fewer new connections. You can count them again with `wc -l`. We have our proof that establishing a database connection in the global scope is more efficient than doing it in the function scope.
We noted earlier that increasing the number of concurrent requests for a Cloud Function can help alleviate the database connections issue. Let’s expand a bit more on this.
### Concurrency with Cloud Functions 2nd gen and Cloud Run
By default, Cloud Functions can only process one request at a time. However, Cloud Functions 2nd gen are executed in a Cloud Run container. Among other benefits, this allows us to configure our functions to handle multiple concurrent requests. Increasing the concurrency capacity brings Cloud Functions closer to the way traditional server applications communicate with a database.
If your function instance supports concurrent requests, you can also take advantage of connection pooling. As a reminder, the MongoDB driver you’re using will automatically create and maintain a pool with connections that concurrent requests will use.
Depending on the use case and the amount of work your functions are expected to do, you can adjust:
* The concurrency settings of your functions.
* The maximum number of function instances that can be created.
* The maximum number of connections in the pool maintained by the MongoDB driver.
And as we proved, you should always declare your database connection in the global scope to persist it between invocations.
## Make your database operations idempotent in event-driven functions
You can enable retrying for your event-driven functions. If you do that, Cloud Functions will try executing your function again and again until it completes successfully or the retry period ends.
This functionality can be useful in many cases, namely when dealing with intermittent failures. However, if your function contains a database operation, executing it more than once can create duplicate documents or other undesired results.
Let’s consider the following example: The function **store-message-and-notify** is executed whenever a message is published to a specified Pub/Sub topic. The function saves the received message as a document in MongoDB Atlas and then uses a third-party service to send an SMS. However, the SMS service provider frequently fails and the function throws an error. We have enabled retries, so Cloud Functions tries executing our function again. If we weren’t careful with the implementation, we could duplicate the message in our database.
How do we handle such scenarios? How do we make our functions safe to retry? We have to ensure that the function is idempotent. Idempotent functions produce exactly the same result regardless of whether they were executed once or multiple times. If we insert a database document without a uniqueness check, we make the function non-idempotent.
Let’s give this scenario a try.
### Creating the event-driven non-idempotent Cloud Function
Go to Cloud Functions and start configuring a new function:
* Environment: **2nd gen**
* Function name: **store-message-and-notify**
* Region: **us-central1**
* Authentication: **Require authentication**
Then, click on **Add Eventarc Trigger** and select the following in the opened dialog:
* Event provider: **Cloud Pub/Sub**
* Event: **google.cloud.pubsub.topic.v1.messagePublished**
Expand **Select a Cloud Pub/Sub topic** and then click **Create a topic**. Enter **test-topic** for the topic ID, and then **Create topic**.
Finally, enable **Retry on failure** and click **Save trigger**. Note that the function will always retry on failure even if the failure is caused by a bug in the implementation.
Add a new environment variable called **ATLAS_URI** with your connection string and click **Next**.
Replace the **`package.json`** with the one we used earlier and then, replace the **`index.js`** file with the following implementation:
```javascript
const { cloudEvent } = require('@google-cloud/functions-framework');
const { MongoClient } = require('mongodb');
// Use lazy initialization to instantiate the MongoDB client and connect to the database
let client;
async function getConnection() {
if (!client) {
client = new MongoClient(process.env.ATLAS_URI);
await client.connect();
}
return client;
}
cloudEvent('processMessage', async (cloudEvent) => {
let message;
try {
const base64message = cloudEvent?.data?.message?.data;
message = Buffer.from(base64message, 'base64').toString();
} catch (error) {
console.error('Invalid message', cloudEvent.data);
return Promise.resolve();
}
try {
await store(message);
} catch (error) {
console.error(error.message);
throw new Error('Storing message in the database failed.');
}
if (!notify()) {
throw new Error('Notification service failed.');
}
});
async function store(message) {
const connection = await getConnection();
const collection = connection.db('test').collection('messages');
await collection.insertOne({
text: message
});
}
// Simulate a third-party service with a 50% fail rate
function notify() {
return Math.floor(Math.random() * 2);
}
```
Then, navigate to the Pub/Sub topic we just created and go to the **Messages** tab. Publish a few messages with different message bodies.
Navigate back to your Atlas deployments. You can inspect the messages stored in the database by clicking **Browse Collections** in your cluster tile and then selecting the **test** database and the **messages** collection. You’ll notice that some of the messages you just published are duplicated. This is because when the function is retried, we store the same message again.
One obvious way to try to fix the idempotency of the function is to switch the two operations. We could execute the `notify()` function first and then, if it succeeds, store the message in the database. But what happens if the database operation fails? If that was a real implementation, we wouldn’t be able to unsend an SMS notification. So, the function is still non-idempotent. Let’s look for another solution.
### Using the event ID and unique index to make the Cloud Function idempotent
Every time the function is invoked, the associated event is passed as an argument together with a unique ID. The event ID remains the same even when the function is retried. We can store the event ID as a field in the MongoDB document. Then, we can create a unique index on that field. That way, storing a message with a duplicate event ID will fail.
Connect to your database from the MongoDB Shell and execute the following command to create a unique index:
```shell
db.messages.createIndex({ "event_id": 1 }, { unique: true })
```
Then, click on **Edit** in your Cloud Function and replace the implementation with the following:
```javascript
const { cloudEvent } = require('@google-cloud/functions-framework');
const { MongoClient } = require('mongodb');
// Use lazy initialization to instantiate the MongoDB client and connect to the database
let client;
async function getConnection() {
if (!client) {
client = new MongoClient(process.env.ATLAS_URI);
await client.connect();
}
return client;
}
cloudEvent('processMessage', async (cloudEvent) => {
let message;
try {
const base64message = cloudEvent?.data?.message?.data;
message = Buffer.from(base64message, 'base64').toString();
} catch (error) {
console.error('Invalid message', cloudEvent.data);
return Promise.resolve();
}
try {
await store(cloudEvent.id, message);
} catch (error) {
// The error E11000: duplicate key error for the 'event_id' field is expected when retrying
if (error.message.includes('E11000') && error.message.includes('event_id')) {
console.log('Skipping retrying because the error is expected...');
return Promise.resolve();
}
console.error(error.message);
throw new Error('Storing message in the database failed.');
}
if (!notify()) {
throw new Error('Notification service failed.');
}
});
async function store(id, message) {
const connection = await getConnection();
const collection = connection.db('test').collection('messages');
await collection.insertOne({
event_id: id,
text: message
});
}
// Simulate a third-party service with a 50% fail rate
function notify() {
return Math.floor(Math.random() * 2);
}
```
Go back to the Pub/Sub topic and publish a few more messages. Then, inspect your data in Atlas, and you’ll see the new messages are not getting duplicated anymore.
There isn’t a one-size-fits-all solution to idempotency. For example, if you’re using update operations instead of insert, you might want to check out the `upsert` option and the `$setOnInsert` operator.
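For example, here is a hedged sketch (a hypothetical variation of the `store()` helper above, reusing its `getConnection()` function) of an idempotent upsert: the filter on the event ID means a retry re-runs the same upsert, and `$setOnInsert` only writes the message the first time through.
```javascript
async function storeIdempotently(id, message) {
  const connection = await getConnection();
  const collection = connection.db('test').collection('messages');
  await collection.updateOne(
    { event_id: id },                    // the event ID identifies the logical message
    {
      $setOnInsert: { text: message },   // written only when the document is first created
      $set: { last_attempt: new Date() } // safe to overwrite on every retry
    },
    { upsert: true }
  );
}
```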
## Set up a secure network connection
To ensure maximum security for your Atlas cluster and Google Cloud Functions, establishing a secure connection is imperative. Fortunately, you have several options available through Atlas that allow you to configure private networking.
One such option is to set up Network Peering between the MongoDB Atlas database and Google Cloud. Alternatively, you can create a private endpoint utilizing Private Service Connect. Both of these methods provide robust solutions for securing the connection.
It is important to note, however, that these features are not available for use with the free Atlas M0 cluster. To take advantage of these enhanced security measures, you will need to upgrade to a dedicated cluster at the M10 tier or higher.
## Wrap-up
In conclusion, Cloud Functions and MongoDB Atlas are a powerful combination for building efficient, scalable, and cost-effective applications. By following the best practices outlined in this article, you can ensure that your application is robust, performant, and able to handle any amount of traffic. From using proper indexes to securing your network, these tips will help you make the most of these two powerful tools and build applications that are truly cloud-native. So start implementing these best practices today and take your cloud development to the next level! If you haven’t already, you can subscribe to MongoDB Atlas and create your first free cluster right from the Google Cloud marketplace.
| md | {
"tags": [
"Atlas",
"Google Cloud"
],
"pageDescription": "In this article, we'll discuss three best practices for working with databases in Google Cloud Functions.",
"contentType": "Article"
} | Best Practices and a Tutorial for Using Google Cloud Functions with MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/stitch-aws-rekognition-images | created | # Using AWS Rekognition to Analyse and Tag Uploaded Images
>Please note: This article discusses Stitch. Stitch is now MongoDB Realm. All the same features and functionality, now with a new name. Learn more here. We will be updating this article in due course.
Computers can now look at a video or image and know what's going on and, sometimes, who's in it. Amazon Web Services (AWS) Rekognition gives your applications the eyes they need to label visual content. In the following, you can see how to use Rekognition along with MongoDB Stitch to supplement new content with information as it is inserted into the database.
You can easily detect labels or faces in images or videos in your MongoDB application using the built-in AWS service. Just add the AWS service and use the Stitch client to execute the AWS Rekognition request right from your React.js application or create a Stitch function and Trigger. In a recent Stitchcraft live coding session on my Twitch channel, I wanted to tag an image using label detection. I set up a trigger that executed a function after an image was uploaded to my S3 bucket and its metadata was inserted into a collection.
``` javascript
exports = function(changeEvent) {
const aws = context.services.get('AWS');
const mongodb = context.services.get("mongodb-atlas");
const insertedPic = changeEvent.fullDocument;
const args = {
Image: {
S3Object: {
Bucket: insertedPic.s3.bucket,
Name: insertedPic.s3.key
}
},
MaxLabels: 10,
MinConfidence: 75.0
};
return aws.rekognition()
.DetectLabels(args)
.then(result => {
return mongodb
.db('data')
.collection('picstream')
.updateOne({_id: insertedPic._id}, {$set: {tags: result.Labels}});
});
};
```
With just a couple of service calls, I was able to take an image, stored in S3, analyse it with Rekognition, and add the tags to its document. Want to see how it all came together? Watch the recording on YouTube with the Github repo in the description. Follow me on Twitch to join me and ask questions live.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace. | md | {
"tags": [
"MongoDB",
"JavaScript",
"AWS"
],
"pageDescription": "Use MongoDB with AWS Rekognition to tag and analyse images.",
"contentType": "Article"
} | Using AWS Rekognition to Analyse and Tag Uploaded Images | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/designing-strategy-develop-game-unity-mongodb | created | # Designing a Strategy to Develop a Game with Unity and MongoDB
When it comes to game development, you should probably have some ideas written down before you start writing code or generating assets. The same could probably be said about any kind of development, unless of course you're just messing around and learning something new.
So what should be planned before developing your next game?
Depending on the type of game, you're probably going to want a playable frontend, otherwise known as the game itself, some kind of backend if you want an online component such as multiplayer, leaderboards, or similar, and then possibly a web-based dashboard to get information at a glance if you're on the operational side of the game and not a player.
Adrienne Tacke, Karen Huaulme, and I (Nic Raboy) are in the process of building a game. We think Fall Guys: Ultimate Knockout is a very well-made game and thought it'd be interesting to create a tribute game that is a little more on the retro side, but with a lot of the same features. The game will be titled Plummeting People. This article explores the planning, design, and development process!
Take a look at the Jamboard we've created so far:
The above Jamboard was created during a planning stream on Twitch where the community participated. The content that follows is a summary of each of the topics discussed and helpful information towards planning the development of a game.
## Planning the Game Experience with a Playable Frontend
The game is what most will see and what most will ever care about. It should act as the driver to every other component that operates behind the scenes.
Rather than try to invade the space of an already great game that we enjoy (Fall Guys), we wanted to put our own spin on things by making it 2D rather than 3D. With Fall Guys being the basic idea behind what we wanted to accomplish, we needed to further break down what the game would need. We came to a few conclusions.
**Levels / Arenas**
We need a few arenas to be able to call it a game worth playing, but we didn't want it to be as thought out as the game that inspired our idea. At the end of the day, we wanted to focus more on the development journey than making a blockbuster hit.
Fall Guys, while considered a battle royale, is still a racing game at its core. So what kind of arenas would make sense in a 2D setting?
Our plan is to start with the simplest level concepts to save us from complicated game physics and engineering. There are two levels in particular that have basic collisions as the emphasis in Fall Guys. These levels include "Door Dash" and "Tip Toe" which focus on fake doors and disappearing floor tiles. Both of which have no rotational physics and nothing beyond basic collisions and randomization.
While we could just stick with two basic levels as our proof of concept, we have a goal for a team arena such as scoring goals at soccer (football).
**Assets**
The arena concepts are important, but in order to execute, game assets will be necessary.
We're considering the following game assets a necessary part of our game:
- Arena backgrounds
- Obstacle images
- Player images
- Sound effects
- Game music
To maintain the spirit of the modern battle royale game, we thought player customizations were a necessary component. This means we'll need customized sprites with different outfits that can be unlocked throughout the gameplay experience.
**Gameplay Physics and Controls**
Level design and game assets are only part of a game. They are quite meaningless unless paired with the user interaction component. The user needs to be able to control the player, interact with other players, and interact with obstacles in the arena. For this we'll need to create our own gameplay logic using the assets that we create.
## Maintaining an Online, Multiplayer Experience with a Data Driven Backend
We envision the bulk of our work around this tribute game will be on the backend. Moving around on the screen and interacting with obstacles is not too difficult of a task as demonstrated in a previous tutorial that I wrote.
Instead, the online experience will require most of our attention. Our first round of planning came to the following conclusions:
**Real-Time Interaction with Sockets**
When the player does anything in the game, it needs to be processed by the server and broadcasted to other players in the game. This needs to be real-time and sockets is probably the only logical solution to this. If the server is managing the sockets, data can be stored in the database about the players, and the server can also validate interactions to prevent cheating.
**Matchmaking Players with Games**
When the game is live, there will be simultaneous games in operation, each with their own set of players. We'll need to come up with a matchmaking solution so that players can only be added to a game that is accepting players and these players must fit certain criteria.
The matchmaking process might serve as a perfect opportunity to use aggregation
pipelines within MongoDB. For example, let's say that you have 5 wins and 1000 losses. You're not a very good player, so you probably shouldn't end up in a match with a player that has 1000 wins and 5 losses. These are things that we can plan for from a database level.
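As a hedged sketch of that idea (assuming a hypothetical `players` collection with `username`, `wins`, and `losses` fields), an aggregation pipeline could bucket players by win ratio so that matchmaking only draws opponents from the same bucket:
```javascript
db.players.aggregate([
  {
    // Compute a win ratio, guarding against players with no games yet.
    $addFields: {
      winRatio: {
        $cond: [
          { $eq: [{ $add: ["$wins", "$losses"] }, 0] },
          0,
          { $divide: ["$wins", { $add: ["$wins", "$losses"] }] }
        ]
      }
    }
  },
  {
    // Group players into skill buckets that matchmaking can draw from.
    $bucket: {
      groupBy: "$winRatio",
      boundaries: [0, 0.25, 0.5, 0.75, 1.01],
      default: "unranked",
      output: { players: { $push: "$username" }, count: { $sum: 1 } }
    }
  }
]);
```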
**User Profile Stores**
User profile stores are one of the most common components for any online game. These store information about the player such as the name and billing information for the player as well as gaming statistics. Just imagine that everything you do in a game will end up in a record for your player.
So what might we store in a user profile store? What about the following?:
- Unlocked player outfits
- Wins, losses, experience points
- Username
- Play time
The list could go on endlessly.
The user profile store will have to be carefully planned because it is the baseline for anything data related in the game. It will affect the matchmaking process, leaderboards, historical data, and so much more.
To get an idea of what we're putting into the user profile store, check out a recorded Twitch stream we did on the topic.
**Leaderboards**
Since this is a competitive game, it makes sense to have a leaderboard. However this leaderboard can be a little more complicated than just your name and your experience points. What if we wanted to track who has the most wins, losses, steps, play time, etc.? What if we wanted to break it down further to see who was the leader in North America, Europe, or Asia? We could use MongoDB geospatial queries around the location of players.
As long as we're collecting game data for each player, we can come up with some interesting leaderboard ideas.
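For instance, here is a hedged sketch of a regional leaderboard query, assuming a hypothetical `players` collection that stores each player's last known position as GeoJSON in a `location` field alongside a `wins` counter:
```javascript
// A 2dsphere index enables geospatial queries on the GeoJSON location field.
db.players.createIndex({ location: "2dsphere" });

// Top 10 players (by wins) whose last known location falls inside a rough
// bounding polygon; a real app would use proper regional boundary data.
db.players.find({
  location: {
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [[
          [-170, 15], [-50, 15], [-50, 75], [-170, 75], [-170, 15]
        ]]
      }
    }
  }
}).sort({ wins: -1 }).limit(10);
```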
**Player Statistics**
We know we're going to want to track wins and losses for each player, but we might want to track more. For example, maybe we want to track how many steps a player took in a particular arena, or how many times they fell. This information could be later passed through an aggregation pipeline in MongoDB to determine a rank or level which could be useful for matchmaking and leaderboards.
**Player Chat**
Would it be an online multiplayer game without some kind of chat? We were thinking that while a player was in matchmaking, they could chat with each other until the game started. This chat data would be stored in MongoDB and we could implement Atlas Search functionality to look for signs of abuse, foul language, etc., that might appear throughout the chat.
## Generating Reports and Logical Metrics with an Admin Dashboard
As an admin of the game, we're going to want to collect information to make the game better. Chances are we're not going to want to analyze that information from within the game itself or with raw queries against the database.
For this, we're probably going to want to create dashboards, reports, and other useful tools to work with our data on a regular basis. Here are some things that we were thinking about doing:
**MongoDB Atlas Charts**
If everything has been running smooth with the game and the data-collection of the backend, we've got data, so we just need to visualize it. MongoDB Atlas Charts can take that data and help us make sense of it. Maybe we want to show a heatmap at different hours of the day for different regions around the world, or maybe we want to show a bar graph around player experience points. Whatever the reason may be, Atlas Charts would make sense in an admin dashboard setting.
**Offloading Historical Data**
Depending on the popularity of the game, data will be coming into MongoDB like a firehose. To help with scaling and pricing, it will make sense to offload historical data from our cluster to a cloud object storage in order to save on costs and improve our cluster's performance by removing historical data.
In MongoDB Atlas, the best way to do this is to enable Online Archive which allows you to set rules to automatically archive your data to a fully-managed cloud storage while retaining access to query that data.
You can also leverage MongoDB Atlas Data Lake to connect your own cloud storage - Amazon S3 or Microsoft Azure Blob Storage buckets - and run Federated Queries to access your entire data set using MQL and the Aggregation Framework.
## Conclusion
Like previously mentioned, this article is a starting point for a series of articles that are coming from Adrienne Tacke, Karen
Huaulme, and myself (Nic Raboy), around a Fall Guys tribute game that we're calling Plummeting People. Are we trying to compete with Fall Guys? Absolutely not! We're trying to show the thought process around designing and developing a game that leverages MongoDB and since Fall Guys is such an awesome game, we wanted to pay tribute to it.
The next article in the series will be around designing and developing the user profile store for the game. It will cover the data model, queries, and some backend server code for managing the future interactions between the game and the server.
Want to discuss this planning article or the Twitch stream that went with it? Join us in the community thread that we created. | md | {
"tags": [
"C#",
"Unity"
],
"pageDescription": "Learn how to design a strategy towards developing the next big online game that uses MongoDB.",
"contentType": "Tutorial"
} | Designing a Strategy to Develop a Game with Unity and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/upgrade-fearlessly-stable-api | created | # Upgrade Fearlessly with the MongoDB Stable API
Do you hesitate to upgrade MongoDB, for fear the new database will be incompatible with your existing code?
Once you've written and deployed your MongoDB application, you want to be able to upgrade your MongoDB database at will, without worrying that a behavior change will break your application. In the past, we've tried our best to ensure each database release is backward-compatible, while also adding new features. But sometimes we've had to break compatibility, because there was no other way to fix an issue or improve behavior. Besides, we didn't have a single definition of backward compatibility.
Solving this problem is more important now: We're releasing new versions four times a year instead of one, and we plan to go faster in the future. We want to help you upgrade frequently and take advantage of new features, but first you must feel confident you can upgrade safely. Ideally, you could immediately upgrade all your applications to the latest MongoDB whenever we release.
The MongoDB Stable API is how we will make this possible. The Stable API encompasses the subset of MongoDB commands that applications commonly use to read and write data, create collections and indexes, and so on. We commit to keeping these commands backward-compatible in new MongoDB versions. We can add new features (such as new command parameters, new aggregation operators, new commands, etc.) to the Stable API, but only in backward-compatible ways.
We follow this principle:
> For any API version V, if an application declares API version V and uses only behaviors in V, and it is deployed along with a specific version of an official driver, then it will experience no semantically significant behavior changes resulting from database upgrades so long as the new database supports V.
(What's a semantically **insignificant** behavior change? Examples include the text of some error message, the order of a query result if you **don't** explicitly sort it, or the performance of a particular query. Behaviors like these, which are not documented and don't affect correctness, may change from version to version.)
To use the Stable API, upgrade to the latest driver and create your application's MongoClient like this:
```PYTHON
client = MongoClient(
"mongodb://host/",
api={"version": "1", "strict": True})
```
For now, "1" is the only API version. Passing "strict": True means the database will reject all commands that aren't in the Stable API. For example, if you call replSetGetStatus, which isn't in the Stable API, you'll receive an error:
```js
{
"ok" : 0,
"errmsg" : "Provided apiStrict:true, but replSetGetStatus is not in API Version 1",
"code" : 323,
"codeName" : "APIStrictError"
}
```
Run your application's test suite with the new MongoClient options, see what commands and features you're using that are outside the Stable API, and migrate to versioned alternatives. For example, "mapreduce" is not in the Stable API but "aggregate" is. Once your application uses only the Stable API, you can redeploy it with the new MongoClient options, and be confident that future database upgrades won't affect your application.
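As a hedged illustration of such a migration (shown in mongosh syntax against a hypothetical `orders` collection), a map-reduce that counts documents per key can usually be rewritten as an `aggregate` call with `$group`:
```javascript
// Before: mapReduce is not part of API Version 1.
// db.orders.mapReduce(
//   function () { emit(this.customerId, 1); },
//   function (key, values) { return Array.sum(values); },
//   { out: "order_counts" }
// );

// After: aggregate is part of API Version 1 and produces the same counts.
db.orders.aggregate([
  { $group: { _id: "$customerId", count: { $sum: 1 } } },
  { $out: "order_counts" }
]);
```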
The mongosh shell now supports the Stable API too:
```bash
mongosh --apiVersion 1 --apiStrict
```
You may need to use unversioned features in some part of your application, perhaps temporarily while you are migrating to the Stable API, perhaps permanently. The **escape hatch** is to create a non-strict MongoClient and use it just for using unversioned features:
```PYTHON
# Non-strict client.
client = MongoClient(
"mongodb://host/",
api={"version": "1", "strict": False})
client.admin.command({"replSetGetStatus": 1})
```
The "strict" option is false by default, I'm just being explicit here. Use this non-strict client for the few unversioned commands your application needs. Be aware that we occasionally make backwards-incompatible changes in these commands.
The only API version that exists today is "1", but in the future we'll release new API versions. This is exciting for us: MongoDB has a few warts that we had to keep for compatibility's sake, but the Stable API gives us a safe way to remove them. Consider the following:
```PYTHON
client = MongoClient("mongodb://host")
client.test.collection.insertOne({"a": [1]})
# Strangely, this matches the document above.
result = client.test.collection.findOne(
{"a.b": {"$ne": null}})
```
It's clearly wrong that `{"a": [1]}` matches the query `{"a.b": {"$ne": null}}`, but we can't fix this behavior, for fear that users' applications rely on it. The Stable API gives us a way to safely fix this. We can provide cleaner query semantics in Version 2:
```PYTHON
# Explicitly opt in to new behavior.
client = MongoClient(
"mongodb://host/",
api={"version": "2", "strict": True})
client.test.collection.insertOne({"a": [1]})
# New behavior: doesn't match document above.
result = client.test.collection.findOne(
{"a.b": {"$ne": null}})
```
Future versions of MongoDB will support **both** Version 1 and 2, and we'll maintain Version 1 for many years. Applications requesting the old or new versions can run concurrently against the same database. The default behavior will be Version 1 (for compatibility with old applications that don't request a specific version), but new applications can be written for Version 2 and get the new, obviously more sensible behavior.
Over time we'll deprecate some Version 1 features. That's a signal that when we introduce Version 2, those features won't be included. (Future MongoDB releases will support both Version 1 with deprecated features, and Version 2 without them.) When the time comes for you to migrate an existing application from Version 1 to 2, your first step will be to find all the deprecated features it uses:
```PYTHON
# Catch uses of features deprecated in Version 1.
client = MongoClient(
"mongodb://host/",
api={"version": "1",
"strict": True,
"deprecationErrors": True})
```
The database will return an APIDeprecationError whenever your code tries to use a deprecated feature. Once you've run your tests and fixed all the errors, you'll be ready to test your application with Version 2.
Version 2 might be a long way off, though. Until then, we're continuing to add features and make improvements in Version 1. We'll introduce new commands, new options, new aggregation operators, and so on. Each change to Version 1 will be an **extension** of the existing API, and it will never affect existing application code. With quarterly releases, we can improve MongoDB faster than ever before. Once you've upgraded to 5.0 and migrated your app to the Stable API, you can always use the latest release fearlessly.
You can try out the Stable API with the MongoDB 5.0 Release Candidate, which is available now from our Download Center.
## Appendix
Here's a list of commands included in API Version 1 in MongoDB 5.0. You can call these commands with version "1" and strict: true. (But of course, you can also call them without configuring your MongoClient's API version at all, just like before.) We won't make backwards-incompatible changes to any of these commands. In future releases, we may add features to these commands, and we may add new commands to Version 1.
* abortTransaction
* aggregate
* authenticate
* collMod
* commitTransaction
* create
* createIndexes
* delete
* drop
* dropDatabase
* dropIndexes
* endSessions
* explain (we won't make incompatible changes to this command's input parameters, although its output format may change arbitrarily)
* find
* findAndModify
* getMore
* hello
* insert
* killCursors
* listCollections
* listDatabases
* listIndexes
* ping
* refreshSessions
* saslContinue
* saslStart
* update
## Safe Harbor
The development, release, and timing of any features or functionality described for our products remains at our sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality.
| md | {
"tags": [
"MongoDB",
"Python"
],
"pageDescription": "With the Stable API, you can upgrade to the latest MongoDB releases without introducing backward-breaking app changes. Learn what it is and how to use it.",
"contentType": "Tutorial"
} | Upgrade Fearlessly with the MongoDB Stable API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/influence-search-result-ranking-function-scores-atlas-search | created | # Influence Search Result Ranking with Function Scores in Atlas Search
When it comes to natural language searching, it's useful to know how the order of the results for a query was determined. Exact matches might be obvious, but what about situations where not all the results were exact matches due to a fuzzy parameter, the `$near` operator, or something else?
This is where the document score becomes relevant.
Every document returned by a `$search` query in MongoDB Atlas Search is assigned a score based on relevance, and the documents included in a result set are returned in order from highest score to lowest.
You can choose to rely on the scoring that Atlas Search determines based on the query operators, or you can customize its behavior using function scoring and optimize it towards your needs. In this tutorial, we're going to see how the `function` option in Atlas Search can be used to rank results in an example.
Per the documentation, the `function` option allows the value of a numeric field to alter the final score of the document. You can specify the numeric field for computing the final score through an expression. With this in mind, let's look at a few scenarios where this could be useful.
Let's say that you have a review system like Yelp where the user needs to provide some search criteria such as the type of food they want to eat. By default, you're probably going to get results based on relevance to your search term as well as the location that you defined. In the examples below, I’m using the sample restaurants data available in MongoDB Atlas.
The `$search` query (expressed as an aggregation pipeline) to make this search happen in MongoDB might look like the following:
```json
[
{
"$search": {
"text": {
"query": "korean",
"path": [ "cuisine" ],
"fuzzy": {
"maxEdits": 2
}
}
}
},
{
"$project": {
"_id": 0,
"name": 1,
"cuisine": 1,
"location": 1,
"rating": 1,
"score": {
"$meta": "searchScore"
}
}
}
]
```
The above query is a two-stage aggregation pipeline in MongoDB. The first stage is searching for "korean" in the "cuisine" document path. A fuzzy factor is applied to the search so spelling mistakes are allowed. The document results from the first stage might be quite large, so in the second stage, we're specifying which fields to return for every document. This includes a search score that is not part of the original document, but part of the search results.
As a result, you might end up with the following results:
```json
[
{
"location": "Jfk International Airport",
"cuisine": "Korean",
"name": "Korean Lounge",
"rating": 2,
"score": 3.5087265968322754
},
{
"location": "Broadway",
"cuisine": "Korean",
"name": "Mill Korean Restaurant",
"rating": 4,
"score": 2.995847225189209
},
{
"location": "Northern Boulevard",
"cuisine": "Korean",
"name": "Korean Bbq Restaurant",
"rating": 5,
"score": 2.995847225189209
}
]
```
The default ordering of the documents returned is based on the `score` value in descending order. The higher the score, the closer your match.
It's very unlikely that you're going to want to eat at the restaurants that have a rating below your threshold, even if they match your search term and are within the search location. With the `function` option, we can assign a point system to the rating and perform some arithmetic to give better rated restaurants a boost in your results.
Let's modify the search query to look like the following:
```json
[
{
"$search": {
"text": {
"query": "korean",
"path": [ "cuisine" ],
"fuzzy": {
"maxEdits": 2
},
"score": {
"function": {
"multiply": [
{
"score": "relevance"
},
{
"path": {
"value": "rating",
"undefined": 1
}
}
]
}
}
}
}
},
{
"$project": {
"_id": 0,
"name": 1,
"cuisine": 1,
"location": 1,
"rating": 1,
"score": {
"$meta": "searchScore"
}
}
}
]
```
In the above two-stage aggregation pipeline, the part to pay attention to is the following:
```json
"score": {
"function": {
"multiply": [
{
"score": "relevance"
},
{
"path": {
"value": "rating",
"undefined": 1
}
}
]
}
}
```
What we're saying in this part of the `$search` query is that we want to take the relevance score that we had already seen in the previous example and multiply it by whatever value is in the `rating` field of the document. This means that the score will potentially be higher if the rating of the restaurant is higher. If the restaurant does not have a rating, then we use a default multiplier value of 1.
If we run this query on the same data as before, we might now get results that look like this:
```json
[
{
"location": "Northern Boulevard",
"cuisine": "Korean",
"name": "Korean Bbq Restaurant",
"rating": 5,
"score": 14.979236125946045
},
{
"location": "Broadway",
"cuisine": "Korean",
"name": "Mill Korean Restaurant",
"rating": 4,
"score": 11.983388900756836
},
{
"location": "Jfk International Airport",
"cuisine": "Korean",
"name": "Korean Lounge",
"rating": 2,
"score": 7.017453193664551
}
]
```
So now, while "Korean BBQ Restaurant" might be further in terms of location, it appears higher in our result set because the rating of the restaurant is higher.
Increasing the score based on rating is just one example. Another scenario could be to give search result priority to restaurants that are sponsors. A `function` multiplier could be used based on the sponsorship level.
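As a hedged sketch, assuming each restaurant document carried a numeric `sponsorshipLevel` field (for example, 1 for non-sponsors and higher values for paying sponsors), the same `function` scoring pattern could boost sponsored results:
```javascript
db.restaurants.aggregate([
  {
    $search: {
      text: {
        query: "korean",
        path: ["cuisine"],
        score: {
          function: {
            multiply: [
              { score: "relevance" },
              // Fall back to 1 so unsponsored restaurants keep their base score.
              { path: { value: "sponsorshipLevel", undefined: 1 } }
            ]
          }
        }
      }
    }
  },
  {
    $project: {
      _id: 0,
      name: 1,
      cuisine: 1,
      sponsorshipLevel: 1,
      score: { $meta: "searchScore" }
    }
  }
]);
```
Results would then surface sponsored restaurants above otherwise equally relevant ones, without hiding non-sponsored matches entirely.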
Let's look at a different use case. Say you have an e-commerce website that is running a sale. To push search products that are on sale higher in the list than items that are not on sale, you might use a `constant` score in combination with a relevancy score.
An aggregation that supports the above example might look like the following:
```
db.products.aggregate([
{
"$search": {
"compound": {
"should": [
{
"text": {
"path": "promotions",
"query": "July4Sale",
"score": {
"constant": {
"value": 1
}
}
}
}
],
"must": [
{
"text": {
"path": "name",
"query": "bose headphones"
}
}
]
}
}
},
{
"$project": {
"_id": 0,
"name": 1,
"promotions": 1,
"score": { "$meta": "searchScore" }
}
}
]);
```
To get into the nitty gritty of the above two-stage pipeline, the first stage uses the compound operator for searching. We're saying that the search results `must` satisfy "bose headphones" and if the result-set `should` contain "July4Sale" in the `promotions` path, then add a `constant` of one to the score for that particular result item to boost its ranking.
The `should` operator doesn't require its contents to be satisfied, so you could end up with headphone results that are not part of the "July4Sale." Those result items just won't have their score increased by any value, and therefore would show up lower down in the list. The second stage of the pipeline just defines which fields should exist in the response.
## Conclusion
Being able to customize how search result sets are scored can help you deliver more relevant content to your users. While we looked at a couple examples around the `function` option with the `multiply` operator, there are other ways you can use function scoring, like replacing the value of a missing field with a constant value or boosting the results of documents with search terms found in a specific path. You can find more information in the Atlas Search documentation.
Don't forget to check out the MongoDB Community Forums to learn about what other developers are doing with Atlas Search. | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Learn how to influence the score of your Atlas Search results using a variety of operators and options.",
"contentType": "Tutorial"
} | Influence Search Result Ranking with Function Scores in Atlas Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/javascript/locator-app-code-example | created | # Find our Devices - A locator app built using Realm
## INTRODUCTION
This Summer, MongoDB hosted 112 interns, spread across departments such as MongoDB Cloud, Atlas, and Realm. These interns have worked on a vast array of projects using the MongoDB platform and technologies. One such project was created by two Software Engineering interns, José Pedro Martins and Linnea Jansson, on the MongoDB Realm team.
Using MongoDB Realm and React Native, they built an app to log and display the location and movement of a user’s devices in real-time on a map. Users can watch as their device’s position on the map updates in response to how its physical location changes in real life. Additionally, users can join groups and view the live location of devices owned by other group members.
In this article, I look forward to demonstrating the app’s features, discussing how it uses MongoDB Realm, and reviewing some noteworthy scenarios which arose during its development.
## APP OVERVIEW
The project, called *Find Our Devices*, is an app for iOS and Android which allows users to view the live location of their devices on a map. The demo video above demonstrates some key features and shows off the intuitive UI. Users can track multiple devices by installing the app, logging in with their email, and adding the current device to their account.
For each device, a new pin is added to the map to indicate the device’s location. This feature is perfect if one of your devices has been lost or stolen, as you can easily track the location of your iOS and Android devices from one app. Instead of using multiple apps to track devices on android and iOS, the user can focus on retrieving their device. Indeed, if you’re only interested in the location of one device, you can instantly find its location by selecting it from a dropdown menu.
Additionally, users can create groups with other users. In these groups, users can see both the location of their devices and the location of other group members' devices. Group members can also invite other users by inputting their email. If a user accepts an invitation, their devices' locations begin to sync to the map. They can also view the live location of other members’ devices on the group map.
This feature is fantastic for families or groups of friends travelling abroad. If somebody gets lost, their location is still visible to everyone in the group, provided they have network connectivity. Alternatively, logistics companies could use the app to track their fleets. If each driver installs the app, HQ could quickly find the location of any vehicle in the fleet and predict delays or suggest alternative routes to drivers. If users want privacy, they can disable location sharing at any time, or leave the group.
## USES OF REALM
This app was built using the MongoDB RealmJS SDK and React-Native and utilises many of Realm’s features. For example, the authentication process of registration, logging in, and logging out is handled using Realm Email/Password authentication. Additionally, Realm enables a seamless data flow while updating device locations in groups, as demonstrated by the diagram below:
As a device moves, Realm writes the location to Atlas, provided the device has network connectivity. If the device doesn’t have network connectivity, Realm will sync the data into Atlas when the device is back online. Once the data is in Atlas, Realm will propagate the changes to the other users in the group. Upon receiving the new data, a change listener in the app is notified of this update in the device's location. As a result, the pin’s position on the map will update and users in the group can see the device’s new location.
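As a purely illustrative sketch (not the team's actual code), a Realm JS change listener on the synced device objects might look something like this, assuming an already-opened synced realm with a `Device` object model and a hypothetical `refreshMapPins` UI helper:
```javascript
// The results collection is live: Realm Sync keeps it up to date with Atlas.
const devices = realm.objects('Device');

// The listener fires whenever any synced Device changes, for example when
// another group member's phone writes a new location.
devices.addListener(() => {
  refreshMapPins(devices); // hypothetical helper that redraws the map pins
});
```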
Another feature of Realm used in this project is shared realms. In the Realm task tracker tutorial, available here, all users in a group have read/write permission to the group partition. The developers allowed this, as group members were trusted to change any data in the group's shared resources. Indeed, this was encouraged, as it allowed team members to edit tasks created by other team members and mark them as completed. In this app, users couldn't be given write permissions to the shared realm, because write access would have let group members modify other users' locations. The solution to this problem is shown in the diagram below. Group members only have read permissions for the shared realm, allowing them to read others' locations, but not edit them. You can learn more about Realm partitioning strategies here.
## FIXING A SECURITY VULNERABILITY
Several difficult scenarios and edge cases came up during the development process. For example, in the initial version, users could write to the [GroupMembership](https://github.com/realm/FindOurDevices/blob/0b118053a3956d4415d40d9c059f6802960fc484/app/models/GroupMembership.js) class. The intention was that this permission would allow members to join new groups and write their new membership to Atlas from Realm. Unfortunately, this permission also created a security vulnerability, as the client could edit the *GroupMembership.groupId* value to anything they wanted. If they edited this value to another group's ID value, this change would be synced to Atlas, as the user had write permission to this class. Malicious users could use this vulnerability to join a group without an invitation and snoop on the group members' locations.
Due to the serious ethical issues posed by this vulnerability, a fix needed to be found. Ultimately, the solution was to split the Device partition from the User partition and retract write permissions from the User class, as shown in the diagram below. Thanks to this amendment, users could no longer edit their *GroupMembership.groupId* value. As such, malicious actors could no longer join groups for which they had no invitation. Additionally, each device is now responsible for updating its location, as the Device partition is now separate from the User partition, with write permissions.
## CONCLUSION
In this blog post, we discussed a fascinating project built by two Realm interns this year. More specifically, we explored the functionality and use cases of the project, looked at how the project used MongoDB Realm, and examined a noteworthy security vulnerability that arose during development.
If you want to learn more about the project or dive into the code, you can check out the backend repository here and the frontend repository here. You can also build the project yourself by following the instructions in the ReadMe files in the two repositories. Alternatively, if you'd like to learn more about MongoDB, you can visit our community forums, sign up for MongoDB University, or sign up for the MongoDB newsletter! | md | {
"tags": [
"JavaScript",
"Realm",
"iOS",
"Android"
],
"pageDescription": "Build an example mobile application using realm for iOS and Android",
"contentType": "Code Example"
} | Find our Devices - A locator app built using Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-hackathon-experience | created | # The MongoDB Realm Hackathon Experience
With Covid19 putting an end to in-person events, we wanted to engage directly with developers utilizing the recently announced MongoDB Realm public preview, and so the Realm Hackathon was conceived. This would be MongoDB's first digital Hackathon and we were delighted with the response. In the end, we ended up with nearly 300 registrations, which culminated in 23 teams coming together over the course of a day and half of learning, experimenting, and above all, having fun! The teams were predominantly European given the timezone of the Hackathon, but we did have participants from the US and also the Asia Pacific region, too.
During the Hackathon, we engaged in
- Team forming
- Idea pitching
- Q&A with the Realm Engineering team behind many of the Realm SDKs
- and of course, developing!
With 23 teams, there was a huge variation in concepts and ideas put forward for the Hackathon. From Covid19-influenced apps to chatbots to inventory tracking apps, the variety was superb. On the final morning, all teams had an opportunity to pitch their apps competitively and we (the judges) were highly impressed with the ingenuity, use of Realm, and the scope of what the teams accomplished in a 24-hour period. In the end, there can only be one winner, and we were delighted to award that title to Team PurpleBlack.
Team PurpleBlack created a MongoDB Realm-based mobile asset maintenance solution. Effective asset maintenance is critical to the success of any utility company. The solution included an offline iOS app for field technicians, a MongoDB Charts dashboard, and email notifications for administrators. Santaneel and Srinivas impressed with their grasp of Realm and their ambition to build a solution leveraging not only Realm but MongoDB Atlas, Charts, and Triggers. So, we asked Team PurpleBlack to share their experience in their own words, and we're thrilled to share this with you.
>Guest post - by Santaneel Pyne of Team PurpleBlack - The MongoDB Realm Hackathon Experience!
## THE MOTIVATION
Hackathons are always a fantastic experience. They are fun, exciting, and enriching all at the same time. This July, I participated in the first Realm Hackathon organised by MongoDB. Earlier in the year, while I was going through a list of upcoming Hackathons, I came across the Realm Hackathon. I was keen on participating in this hackathon as this was about building offline mobile apps. I am a Solution Architect working with On Device Solutions, and enterprise mobile apps are a key focus area for me. For the hackathon, I had teamed up with Srinivas Divakarla from Cognizant Technology Solutions. He is a technical lead and an experienced Swift developer. We named our team PurpleBlack. It is just another random name. Neither of us had any experience with MongoDB Realm. This was going to be our opportunity to learn. We went ahead with an open mind without too many expectations.
## THE 'VIRTUAL' EXPERIENCE
This was our first fully online hackathon experience. The hackathon was spread across two days and it was hosted entirely on Zoom. The first day was the actual hack day and the next day was for presentations and awards. There were a couple of introductory sessions held earlier in the week to provide all participants a feel of the online hackathon. After the first session, we created our accounts in cloud.mongodb.com and made sure we had access to all the necessary tools and SDKs as mentioned during the introductory session. On the day of the hackathon, we joined the Zoom meeting and were greeted by the MongoDB team. As with any good hackathon, a key takeaway is interaction with the experts. It was no different in this case. We met the Realm experts - Kraen Hansen, Eduardo Lopez, Lee Maguire, Andrew Morgan, and Franck Franck. They shared their experience and answered questions from the participants.
By the end of the expert sessions, all participants were assigned a team. Each team was put into a private Zoom breakout room. The organisers and the Realm experts were in the Main Zoom room. We could toggle between the breakout room and the Main room when needed. It took us some time to get used to this. We started our hacking session with an end-to-end plan and distributed the work between ourselves. I took the responsibility of configuring the back-end components of our solution, like the cluster, collections, Realm app configurations, user authentication, functions, triggers, and charts. Srinivas was responsible for building the iOS app using the iOS SDK. Before we started working on our solution, we had allocated some time to understand the end-to-end architecture and underlying concepts. We achieved this by following the task tracker iOS app tutorial. We had spent a lot of time on this tutorial, but it was worth it as we were able to re-use several components from the task tracker app. After completing the tutorial, we felt confident working on our solution. We were able to quickly complete all the backend components and then
started working on the iOS application. Once we were able to sync data between the app and the MongoDB collections, we were like, "BINGO!" We then added two features that we had not planned for earlier: email notifications and embedded charts. We rounded off Day 1 by putting the finishing touches on our presentation.
Day 2 started with the final presentations and demos from all the teams. Everyone was present in the Main Zoom room. Each team had five minutes to present. The presentations and demos from all the teams were great. This added a bit of pressure on us as we were slotted to present at the end. When our turn finally arrived, I breezed through the presentation and then the demo. The demo went smoothly and I was able to showcase all the features we had built.
Next was the countdown to the award ceremony. The panel of judges went into a breakout room to select the winner. When the judges were back, they announced PurpleBlack as the winner of the first MongoDB Realm Hackathon!!
## OUR IDEA
Team PurpleBlack created a MongoDB Realm-based mobile asset maintenance solution. Effective asset maintenance is critical to the success of any utility company. The solution included an offline iOS app for field technicians, a MongoDB Charts dashboard, and email notifications for Maintenance Managers or Administrators. Field technicians will download all relevant asset data into the mobile app during the initial synchronization. Later, when they are in a remote area without connectivity, they can scan a QR code fixed to an asset to view the asset details. Once the asset details are confirmed, an issue can be created against the identified asset. Finally, when the technicians are back online, the Realm mobile app will automatically synchronize all new issues with MongoDB Atlas. Functions and triggers help to send email notifications to an Administrator in case any high-priority issue is created. Administrators can view the charts dashboard to keep track of all issues created and take follow-up actions.
To summarise, our solution included the following features:
- iOS app based on Realm iOS SDK
- Secure user authentication using email-id and password
- MongoDB Atlas as the cloud datastore
- MongoDB Charts and embedded charts using the embedding SDK
- Email notifications via the SendGrid API using Realm functions and triggers
A working version of our iOS project can be found in our GitHub repo.
This project is based on the Task Tracker app with some tweaks that helped us build the features we wanted. In our app, we wanted to download two objects into the same Realm - Assets and Issues. This means when a user successfully logs into the app, all assets and issues available in MongoDB Atlas will be downloaded to the client. Initially, a list of issues is displayed.
From the issue list screen, the user can create a new issue by tapping the + button. Upon clicking this button, the app opens the camera to scan a barcode/QR code. The code will be the same as the asset ID of an asset. If the user scans an asset that is available in the Realm, then there is a successful match and the user can proceed to the next screen to create an issue against that asset. We illustrate how this is accomplished with the code below:
``` Swift
func scanCompleted(code: String)
{
currentBarcode = code
// pass the scanned barcode to the CreateIssueViewController and Query MongoDB Realm
let queryStr: String = "assetId == '"+code+"'";
print(queryStr);
print("issues that contain assetIDs: \(assets.filter(queryStr).count)");
if(assets.filter(queryStr).count > 0 ){
scanner?.requestCaptureSessionStopRunning()
self.navigationController!.pushViewController(CreateIssueViewController(code: currentBarcode!, configuration: realm.configuration), animated: true);
} else {
self.showToast(message: "No Asset found for the scanned code", seconds: 0.6)
}
}
```
In the next screen, the user can create a new issue against the identified asset.
To find out the asset details, the Asset object from Realm must be queried with the asset ID:
``` Swift
required init(with code: String, configuration: Realm.Configuration) {
// Ensure the realm was opened with sync.
guard let syncConfiguration = configuration.syncConfiguration else {
fatalError("Sync configuration not found! Realm not opened with sync?");
}
let realm = try! Realm(configuration: configuration)
let queryStr: String = "assetId == '"+code+"'";
scannedAssetCode = code
assets = realm.objects(Asset.self).filter(queryStr)
// Partition value must be of string type.
partitionValue = syncConfiguration.partitionValue.stringValue!
super.init(nibName: nil, bundle: nil)
}
```
Once the user submits the new issue, it is then written to the Realm:
``` Swift
func submitDataToRealm(){
print(form.values())
// Create a new Issue with the text that the user entered.
let issue = Issue(partition: self.partitionValue)
let createdByRow: TextRow? = form.rowBy(tag: "createdBy")
let descriptionRow: TextRow? = form.rowBy(tag: "description")
let priorityRow: SegmentedRow<String>? = form.rowBy(tag: "priority")
let issueIdRow: TextRow? = form.rowBy(tag: "issueId")
issue.issueId = issueIdRow?.value ?? ""
issue.createdBy = createdByRow?.value ?? ""
issue.desc = descriptionRow?.value ?? ""
issue.priority = priorityRow?.value ?? "Low"
issue.status = "Open"
issue.assetId = self.scannedAssetCode
try! self.realm.write {
// Add the Issue to the Realm. That's it!
self.realm.add(issue)
}
self.navigationController!.pushViewController(TasksViewController( assetRealm: self.realm), animated: true);
}
```
The new entry is immediately synced with MongoDB Atlas and is available in the Administrator dashboard built using MongoDB Charts.
## WRAPPING UP
Winning the first MongoDB Realm hackathon was a bonus for us. We had registered for this hackathon just to experience the app-building process with Realm. Both of us had our share of the "wow" moments throughout the hackathon. What stood out at the end was the ease with which we were able to build new features once we understood the underlying concepts. We want to continue this learning journey and explore MongoDB Realm further.
Follow these links to learn more -
- GitHub Repo for Project
- Realm Tutorial
- Charts Examples
- Sending Emails with MongoDB Stitch and SendGrid
To learn more, ask questions, leave feedback, or simply connect with other MongoDB developers, visit our community forums. Come to learn. Stay to connect.
>Getting started with Atlas is easy. Sign up for a free MongoDB Atlas account to start working with all the exciting new features of MongoDB, including Realm and Charts, today! | md | {
"tags": [
"Realm"
],
"pageDescription": "In July, MongoDB ran its first digital hackathon for Realm. Our winners, team \"PurpleBlack,\" share their experience of the Hackathon in this guest post.",
"contentType": "Article"
} | The MongoDB Realm Hackathon Experience | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/building-generative-ai-applications-vector-search-open-source-models | created | # Building Generative AI Applications Using MongoDB: Harnessing the Power of Atlas Vector Search and Open Source Models
Artificial intelligence is at the core of what's being heralded as the fourth industrial revolution. There is a fundamental change happening in the way we live and the way we work, and it's happening right now. While AI and its applications across businesses are not new, recently, generative AI has become a hot topic worldwide with the incredible success of ChatGPT, the popular chatbot from OpenAI. It reached 100 million monthly active users in two months, becoming the fastest-growing consumer application.
In this blog, we will talk about how you can leverage the power of large language models (LLMs), the transformative technology powering ChatGPT, on your private data to build transformative AI-powered applications using MongoDB and Atlas Vector Search. We will also walk through an example of building a semantic search using Python, machine learning models, and Atlas Vector Search for finding movies using natural language queries. For instance, finding “Funny movies with lead characters that are not human” involves a semantic search that understands the meaning and intent behind the query to retrieve relevant movie recommendations, rather than simply matching keywords present in the dataset.
Using vector embeddings, you can leverage the power of LLMs for use cases like semantic search, recommendation systems, anomaly detection, and customer support chatbots that are grounded in your private data.
## What are vector embeddings?
A vector is a list of floating point numbers (representing a point in an n-dimensional embedding space) and captures semantic information about the text it represents. For instance, an embedding for the string "MongoDB is awesome" using an open source LLM model called `all-MiniLM-L6-v2` would consist of 384 floating point numbers and look like this:
```
-0.018378766253590584, -0.004090079106390476, -0.05688102915883064, 0.04963553324341774, …..
....
0.08254531025886536, -0.07415960729122162, -0.007168072275817394, 0.0672200545668602]
```
Note: Later in the tutorial, we will cover the steps to obtain vector embeddings like this.
## What is vector search?
Vector search is a capability that allows you to find related objects that have a semantic similarity. This means searching for data based on meaning rather than the keywords present in the dataset.
Vector search uses machine learning models to transform unstructured data (like text, audio, and images) into numeric representations (called vector embeddings) that capture the intent and meaning of that data. Then, it finds related content by comparing the distances between these vector embeddings, using approximate k nearest neighbor (approximate KNN) algorithms. The most commonly used method for measuring the distance between these vectors is calculating the cosine similarity between two vectors.
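To make that distance comparison concrete, here is a minimal sketch (not part of the original tutorial flow) of how cosine similarity between two embedding vectors can be computed in plain Python:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity = dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two tiny, made-up 3-dimensional "embeddings" purely for illustration
print(cosine_similarity([0.1, 0.2, 0.3], [0.1, 0.25, 0.29]))  # close to 1.0 -> semantically similar
```

A score close to 1 means the two pieces of content are semantically similar, while scores near 0 (or negative) indicate unrelated content. Atlas Vector Search performs this kind of comparison for you at scale, so you never have to implement it by hand.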
## What is Atlas Vector Search?
Atlas Vector Search is a fully managed service that simplifies the process of effectively indexing high-dimensional vector data within MongoDB and being able to perform fast vector similarity searches. With Atlas Vector Search, you can use MongoDB as a standalone vector database for a new project or augment your existing MongoDB collections with vector search functionality.
Having a single solution that can take care of your operational application data as well as vector data eliminates the complexities of using a standalone system just for vector search functionality, such as data transfer and infrastructure management overhead. With Atlas Vector Search, you can use the powerful capabilities of vector search in any major public cloud (AWS, Azure, GCP) and achieve massive scalability and data security out of the box while being enterprise-ready with provisions like SOC 2 compliance.
## Semantic search for movie recommendations
For this tutorial, we will be using a movie dataset containing over 23,000 documents in MongoDB. We will be using the `all-MiniLM-L6-v2` model from HuggingFace for generating the vector embeddings during index time as well as query time, but you can apply the same concepts using a dataset and model of your own choice. You will need a Python notebook or IDE, a MongoDB Atlas account, and a HuggingFace account for a hands-on experience.
For a movie database, various kinds of content — such as the movie description, plot, genre, actors, user comments, and the movie poster — can be easily converted into vector embeddings. In a similar manner, the user query can be converted into vector embedding, and then the vector search can find the most relevant results by finding the nearest neighbors in the embedding space.
### Step 1: Connect to your MongoDB instance
To create a MongoDB Atlas cluster, first, you need to create a MongoDB Atlas account if you don't already have one. Visit the MongoDB Atlas website and click on “Register.”
For this tutorial, we will be using the sample data pertaining to movies. The “sample_mflix” database contains a “movies” collection where each document contains fields like title, plot, genres, cast, directors, etc.
You can also connect to your own collection if you have your own data that you would like to use.
You can use an IDE of your choice or a Python notebook for following along. You will need to install the `pymongo` package prior to executing this code, which can be done via `pip install pymongo`.
```python
import pymongo
client = pymongo.MongoClient("")  # paste your Atlas connection string between the quotes
db = client.sample_mflix
collection = db.movies
```
Note: In production environments, it is not recommended to hard code your database connection string in the way shown, but for the sake of a personal demo, it is okay.
You can check your dataset in the Atlas UI.
### Step 2: Set up the embedding creation function
There are many options for creating embeddings, like calling a managed API, hosting your own model, or having the model run locally.
In this example, we will be using the HuggingFace inference API to use a model called all-MiniLM-L6-v2. HuggingFace is an open-source platform that provides tools for building, training, and deploying machine learning models. We are using them as they make it easy to use machine learning models via APIs and SDKs.
To use open-source models on Hugging Face, go to https://huggingface.co/. Create a new account if you don’t have one already. Then, to retrieve your Access token, go to Settings > “Access Tokens.” Once in the “Access Tokens” section, create a new token by clicking on “New Token” and give it a “read” right. Then, you can get the token to authenticate to the Hugging Face inference API:
You can now define a function that will be able to generate embeddings. Note that this is just a setup and we are not running anything yet.
```python
import requests

hf_token = ""  # paste your Hugging Face access token here
embedding_url = "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2"

def generate_embedding(text: str) -> list[float]:
    response = requests.post(
        embedding_url,
        headers={"Authorization": f"Bearer {hf_token}"},
        json={"inputs": text})

    if response.status_code != 200:
        raise ValueError(f"Request failed with status code {response.status_code}: {response.text}")

    return response.json()
```
Now you can test out generating embeddings using the function we defined above.
```python
generate_embedding("MongoDB is awesome")
```
The output of this function will look like this:
*(Screenshot: verify the output of the generate_embedding function.)*
Note: The HuggingFace Inference API is free (to begin with) and is meant for quick prototyping, with strict rate limits. You can consider setting up a paid “HuggingFace Inference Endpoint” using the steps described in the Bonus Suggestions section. This will create a private deployment of the model for you.
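If you would rather avoid the Inference API's rate limits entirely, another option (not covered in the main flow of this tutorial) is to run the same model locally with the `sentence-transformers` package. The function below is a minimal sketch, assuming you have run `pip install sentence-transformers`; it produces the same 384-dimensional vectors as the API-based version:

```python
from sentence_transformers import SentenceTransformer

# Downloads the model weights the first time it runs (roughly 80 MB).
local_model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def generate_embedding_locally(text: str) -> list[float]:
    # encode() returns a numpy array; convert it to a plain list so it can be stored in MongoDB.
    return local_model.encode(text).tolist()

print(len(generate_embedding_locally("MongoDB is awesome")))  # 384
```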
### Step 3: Create and store embeddings
Now, we will execute an operation to create a vector embedding for the data in the "plot" field in our movie documents and store it in the database. As described in the introduction, creating vector embeddings using a machine learning model is necessary for performing a similarity search based on intent.
In the code snippet below, we are creating vector embeddings for 50 documents in our dataset that have the field “plot.” We will store the newly created vector embeddings in a field called "plot_embedding_hf," but you can name this field anything you want.
When you are ready, you can execute the code below.
```python
for doc in collection.find({'plot': {"$exists": True}}).limit(50):
    doc['plot_embedding_hf'] = generate_embedding(doc['plot'])
    collection.replace_one({'_id': doc['_id']}, doc)
```
Note: In this case, we are storing the vector embedding in the original collection (that is alongside the application data). This could also be done in a separate collection.
Once this step completes, you can verify in your database that a new field “plot_embedding_hf” has been created for some of the documents in the collection.
Note: We are restricting this to just 50 documents to avoid running into rate-limits on the HuggingFace inference API. If you want to do this over the entire dataset of 23,000 documents in our sample_mflix database, it will take a while, and you may need to create a paid “Inference Endpoint” as described in the optional setup above.
### Step 4: Create a vector search index
Now, we will head over to Atlas Search and create an index. First, click the “search” tab on your cluster and click on “Create Search Index.”
![Search tab within the Cluster page with a focus on “Create Search Index”][1]
This will lead to the “Create a Search Index” configuration page. Select the “JSON Editor” and click “Next.”
![Search tab “Create Search Index” experience with a focus on “JSON Editor”][2]
Now, perform the following three steps on the "JSON Editor" page:
1. Select the database and collection on the left. For this tutorial, it should be sample_mflix/movies.
2. Enter the Index Name. For this tutorial, we are choosing to call it `PlotSemanticSearch`.
3. Enter the configuration JSON (given below) into the text editor. The field name should match the name of the embedding field created in Step 3 (for this tutorial, it should be `plot_embedding_hf`), and the dimensions should match those of the chosen model (for this tutorial, 384). The chosen value for the "similarity" field ("dotProduct") is equivalent to cosine similarity in our case, because the `all-MiniLM-L6-v2` model produces normalized vectors.
For a description of the other fields in this configuration, you can check out our [Vector Search documentation.
Then, click “Next” and click “Create Search Index” button on the review page.
``` json
{
"type": "vectorSearch,
"fields": {
"path": "plot_embedding_hf",
"dimensions": 384,
"similarity": "dotProduct",
"type": "vector"
}]
}
```
![Search Index Configuration JSON Editor with arrows pointing at the database and collection name, as well as the JSON editor][3]
### Step 5: Query your data
Once the index is created, you can query it using the “$vectorSearch” stage in an MQL aggregation pipeline.
> Support for the '$vectorSearch' aggregation pipeline stage is available with MongoDB Atlas 6.0.11 and 7.0.2.
In the query below, we will search for four recommendations of movies whose plots matches the intent behind the query “imaginary characters from outer space at war”.
Execute the Python code block described below, in your chosen IDE or notebook.
```python
query = "imaginary characters from outer space at war"
results = collection.aggregate([
{"$vectorSearch": {
"queryVector": generate_embedding(query),
"path": "plot_embedding_hf",
"numCandidates": 100,
"limit": 4,
"index": "PlotSemanticSearch",
}}
});
for document in results:
print(f'Movie Name: {document["title"]},\nMovie Plot: {document["plot"]}\n')
```
The output will look like this:
*(Screenshot: the output of the vector search query.)*
Note: To find out more about the various parameters (like ‘$vectorSearch’, ‘numCandidates’, and ‘limit’), you can check out the Atlas Vector Search documentation.
This will return the movies whose plots most closely match the intent behind the query “imaginary characters from outer space at war.”
**Note:** As you can see, the results above are not very accurate, since we only embedded 50 movie documents. If the entire movie dataset of 23,000+ documents were embedded, the query “imaginary characters from outer space at war” would return the results below. The formatted results below show the title, the plot, and a rendering of the image for the movie poster.
### Conclusion
In this tutorial, we demonstrated how to use HuggingFace Inference APIs, how to generate embeddings, and how to use Atlas Vector search. We also learned how to build a semantic search application to find movies whose plots most closely matched the intent behind a natural language query, rather than searching based on the existing keywords in the dataset. We also demonstrated how efficient it is to bring the power of machine learning models to your data using the Atlas Developer Data Platform.
> If you prefer learning by watching, check out the video version of this article!
:youtube[]{vid=wOdZ1hEWvjU}
## Bonus Suggestions
### HuggingFace Inference Endpoints
“HuggingFace Inference Endpoints” is the recommended way to easily create a private deployment of the model and use it for a production use case. As we discussed before, the ‘HuggingFace Inference API’ is meant for quick prototyping and has strict rate limits.
To create an ‘Inference Endpoint’ for a model on HuggingFace, follow these steps:
1. On the model page, click on "Deploy" and in the dropdown choose "Inference Endpoints."
2. Select the Cloud Provider of choice and the instance type on the "Create a new Endpoint" page. For this tutorial, you can choose the default of AWS and an instance type of CPU [small]. This would cost about $0.06/hour.
*(Screenshot: creating a new endpoint.)*
3. Now, click on the "Advanced configuration" and set the task type to "Sentence Embedding." This configuration is necessary to ensure that the endpoint returns the response from the model that is suitable for the embedding creation task.
[Optional] You can set the “Automatic Scale-to-Zero” option to “After 15 minutes with no activity” to ensure your endpoint is paused after a period of inactivity and you are not charged. Setting this configuration will, however, mean that the endpoint will be unresponsive after it’s been paused. It will take some time to come back online after you send requests to it again.
*(Screenshot: selecting a supported task type.)*
4. After this, you can click on “Create endpoint" and you can see the status as "Initializing."
5. Use the following Python function to generate embeddings.
Notice the difference in response format from the previous usage of “HuggingFace Inference API.”
```python
import requests

hf_token = ""  # paste your Hugging Face access token here
embedding_url = ""  # paste your Inference Endpoint URL here

def generate_embedding(text: str) -> list[float]:
    response = requests.post(
        embedding_url,
        headers={"Authorization": f"Bearer {hf_token}"},
        json={"inputs": text})

    if response.status_code != 200:
        raise ValueError(f"Request failed with status code {response.status_code}: {response.text}")

    return response.json()["embeddings"]
```
### OpenAI embeddings
To use OpenAI for embedding generation, you can use the official `openai` package (install it using `pip install openai`).
You’ll need your OpenAI API key, which you can create on their website. Click on the account icon on the top right and select “View API keys” from the dropdown. Then, from the API keys page, click on "Create new secret key."
To generate the embeddings in Python, install the openAI package (`pip install openai`) and use the following code.
```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
model = "text-embedding-ada-002"

def generate_embedding(text: str) -> list[float]:
    resp = openai.Embedding.create(
        input=[text],
        model=model)

    return resp["data"][0]["embedding"]
```
### Azure OpenAI embedding endpoints
You can use Azure OpenAI endpoints by creating a deployment in your Azure account and using:
```python
import openai

# Assumes the openai package has already been configured for Azure OpenAI
# (api_type, api_base, api_version, api_key) and that deployment_id refers
# to your Azure OpenAI embedding model deployment.
def generate_embedding(text: str) -> list[float]:
    resp = openai.Embedding.create(
        deployment_id=deployment_id,
        input=[text])

    return resp["data"][0]["embedding"]
```
### Model input size limitations
Models have a limitation on the number of input tokens that they can handle. The limitation for OpenAI's `text-embedding-ada-002` model is 8,192 tokens. Splitting the original text into smaller chunks becomes necessary when creating embeddings for the data that exceeds the model's limit.
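As an illustration, here is one simple (and deliberately naive) way to split long text into overlapping word-based chunks before embedding each piece; this sketch is not from the tutorial, and real applications often use token-aware or sentence-aware splitters instead:

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 20) -> list[str]:
    # Split on whitespace and emit overlapping word windows so that context
    # is not completely lost at the chunk boundaries.
    words = text.split()
    chunks = []
    start = 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        start += max_words - overlap
    return chunks

# Each chunk can then be passed to generate_embedding() and stored as its own
# embedding, for example in an array field or in separate documents.
```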
## Get started today
Get started by creating a MongoDB Atlas account if you don't already have one. Just click on “Register.” MongoDB offers a free-forever Atlas cluster in the public cloud service of your choice.
To learn more about Atlas Vector Search, visit the product page or the documentation for creating a vector search index or running vector search queries.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta6bbbb7c921bb08c/65a1b3ecd2ebff119d6f491d/atlas-search-create-search-index.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte848f96fae511855/65a1b7cb1f2d0f12aead1547/atlas-vector-search-create-index-json.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt698150f3ea6e10f0/65a1b85eecc34e813110c5b2/atlas-search-vector-search-json-editor.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "Learn how to build generative AI (GenAI) applications by harnessing the power of MongoDB Atlas and Vector Search.",
"contentType": "Tutorial"
} | Building Generative AI Applications Using MongoDB: Harnessing the Power of Atlas Vector Search and Open Source Models | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/go/golang-multi-document-acid-transactions | created | # Multi-Document ACID Transactions in MongoDB with Go
The past few months have been an adventure when it comes to getting started with MongoDB using the Go programming language (Golang). We've explored everything from create, retrieve, update, and delete (CRUD) operations to data modeling to change streams. To bring this series to a solid finish, we're going to take a look at a popular requirement that a lot of organizations have, and that requirement is transactions.
So why would you want transactions?
There are some situations where you might need atomicity of reads and writes to multiple documents within a single collection or multiple collections. This isn't always a necessity, but in some cases, it might be.
Take the following for example.
Let's say you want to create documents in one collection that depend on documents in another collection existing. Or let's say you have schema validation rules in place on your collection. In the scenario that you're trying to create documents and the related document doesn't exist or your schema validation rules fail, you don't want the operation to proceed. Instead, you'd probably want to roll back to before it happened.
There are other reasons that you might use transactions, but you can use your imagination for those.
In this tutorial, we're going to look at what it takes to use transactions with Golang and MongoDB. Our example will rely more on schema validation rules passing, but it isn't a limitation.
## Understanding the Data Model and Applying Schema Validation
Since we've continued the same theme throughout the series, I think it'd be a good idea to have a refresher on the data model that we'll be using for this example.
In the past few tutorials, we've explored working with potential podcast data in various collections. For example, our Go data model looks something like this:
``` go
type Episode struct {
ID primitive.ObjectID `bson:"_id,omitempty"`
Podcast primitive.ObjectID `bson:"podcast,omitempty"`
Title string `bson:"title,omitempty"`
Description string `bson:"description,omitempty"`
Duration int32 `bson:"duration,omitempty"`
}
```
The fields in the data structure are mapped to MongoDB document fields through the BSON annotations. You can learn more about using these annotations in the previous tutorial I wrote on the subject.
While we had other collections, we're going to focus strictly on the `episodes` collection for this example.
Rather than coming up with complicated code for this example to demonstrate operations that fail or should be rolled back, we're going to use schema validation to force some operations to fail. Let's assume that no episode should be less than two minutes in duration; otherwise, it is not valid. Rather than implementing this check ourselves, we can use features baked into MongoDB.
Take the following schema validation logic:
``` json
{
"$jsonSchema": {
"additionalProperties": true,
"properties": {
"duration": {
"bsonType": "int",
"minimum": 2
}
}
}
}
```
The above logic would be applied using the MongoDB CLI or with Compass, but we're essentially saying that our schema for the `episodes` collection can contain any fields in a document, but the `duration` field must be an integer and it must be at least two. Could our schema validation be more complex? Absolutely, but we're all about simplicity in this example. If you want to learn more about schema validation, check out this awesome tutorial on the subject.
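Schema validation is enforced by the server, so the validator can be applied from any client, not just the Go driver used in this tutorial. As a reference, here is a minimal sketch (with a placeholder connection string) of applying it through the `collMod` database command using PyMongo; running the equivalent command from `mongosh` or configuring it in Compass works just as well:

```python
from pymongo import MongoClient

client = MongoClient("<your-connection-string>")  # placeholder, replace with your own URI
db = client.quickstart

# Apply the JSON schema validator shown above to the existing "episodes" collection.
db.command({
    "collMod": "episodes",
    "validator": {
        "$jsonSchema": {
            "additionalProperties": True,
            "properties": {
                "duration": {"bsonType": "int", "minimum": 2}
            }
        }
    },
    "validationAction": "error"  # reject writes that fail validation
})
```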
Now that we know the schema and what will cause a failure, we can start implementing some transaction code that will commit or roll back changes.
## Starting and Committing Transactions
Before we dive into starting a session for our operations and committing transactions, let's establish a base point in our project. Let's assume that your project has the following boilerplate MongoDB with Go code:
``` go
package main
import (
"context"
"fmt"
"os"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
// Episode represents the schema for the "Episodes" collection
type Episode struct {
ID primitive.ObjectID `bson:"_id,omitempty"`
Podcast primitive.ObjectID `bson:"podcast,omitempty"`
Title string `bson:"title,omitempty"`
Description string `bson:"description,omitempty"`
Duration int32 `bson:"duration,omitempty"`
}
func main() {
client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
if err != nil {
panic(err)
}
defer client.Disconnect(context.TODO())
database := client.Database("quickstart")
episodesCollection := database.Collection("episodes")
database.RunCommand(context.TODO(), bson.D{{"create", "episodes"}})
}
```
The collection must exist prior to working with transactions. When using `RunCommand` to create it, an error will be returned if the collection already exists. For this example, the error is not important to us since we just want the collection to exist, even if that means creating it here.
Now let's assume that you've correctly included the MongoDB Go driver as seen in a previous tutorial titled, How to Get Connected to Your MongoDB Cluster with Go.
The goal here will be to try to insert a document that complies with our schema validation as well as a document that doesn't so that we have a commit that doesn't happen.
``` go
// ...
func main() {
// ...
wc := writeconcern.New(writeconcern.WMajority())
rc := readconcern.Snapshot()
txnOpts := options.Transaction().SetWriteConcern(wc).SetReadConcern(rc)
session, err := client.StartSession()
if err != nil {
panic(err)
}
defer session.EndSession(context.Background())
err = mongo.WithSession(context.Background(), session, func(sessionContext mongo.SessionContext) error {
if err = session.StartTransaction(txnOpts); err != nil {
return err
}
result, err := episodesCollection.InsertOne(
sessionContext,
Episode{
Title: "A Transaction Episode for the Ages",
Duration: 15,
},
)
if err != nil {
return err
}
fmt.Println(result.InsertedID)
result, err = episodesCollection.InsertOne(
sessionContext,
Episode{
Title: "Transactions for All",
Duration: 1,
},
)
if err != nil {
return err
}
if err = session.CommitTransaction(sessionContext); err != nil {
return err
}
fmt.Println(result.InsertedID)
return nil
})
if err != nil {
if abortErr := session.AbortTransaction(context.Background()); abortErr != nil {
panic(abortErr)
}
panic(err)
}
}
```
In the above code, we start by defining the read and write concerns that will give us the desired level of isolation in our transaction. To learn more about the available read and write concerns, check out the documentation.
After defining the transaction options, we start a session which will encapsulate everything we want to do with atomicity. After, we start a transaction that we'll use to commit everything in the session.
A `Session` represents a MongoDB logical session and can be used to enable casual consistency for a group of operations or to execute operations in an ACID transaction. More information on how they work in Go can be found in the documentation.
Inside the session, we are doing two `InsertOne` operations. The first would succeed because it doesn't violate any of our schema validation rules. It will even print out an object id when it's done. However, the second operation will fail because it is less than two minutes. The `CommitTransaction` won't ever succeed because of the error that the second operation created. When the `WithSession` function returns the error that we created, the transaction is aborted using the `AbortTransaction` function. For this reason, neither of the `InsertOne` operations will show up in the database.
## Using a Convenient Transactions API
Starting and committing transactions from within a logical session isn't the only way to work with ACID transactions using Golang and MongoDB. Instead, we can use what might be thought of as a more convenient transactions API.
Take the following adjustments to our code:
``` go
// ...
func main() {
// ...
wc := writeconcern.New(writeconcern.WMajority())
rc := readconcern.Snapshot()
txnOpts := options.Transaction().SetWriteConcern(wc).SetReadConcern(rc)
session, err := client.StartSession()
if err != nil {
panic(err)
}
defer session.EndSession(context.Background())
callback := func(sessionContext mongo.SessionContext) (interface{}, error) {
result, err := episodesCollection.InsertOne(
sessionContext,
Episode{
Title: "A Transaction Episode for the Ages",
Duration: 15,
},
)
if err != nil {
return nil, err
}
result, err = episodesCollection.InsertOne(
sessionContext,
Episode{
Title: "Transactions for All",
Duration: 2,
},
)
if err != nil {
return nil, err
}
return result, err
}
_, err = session.WithTransaction(context.Background(), callback, txnOpts)
if err != nil {
panic(err)
}
}
```
Instead of using `WithSession`, we are now using `WithTransaction`, which handles starting a transaction, executing some application code, and then committing or aborting the transaction based on the success of that application code. Not only that, but retries can happen for specific errors if certain operations fail.
## Conclusion
You just saw how to use transactions with the MongoDB Go driver. While in this example we used schema validation to determine if a commit operation succeeds or fails, you could easily apply your own application logic within the scope of the session.
If you want to catch up on other tutorials in the getting started with Golang series, you can find some below:
- How to Get Connected to Your MongoDB Cluster with Go
- Creating MongoDB Documents with Go
- Retrieving and Querying MongoDB Documents with Go
- Updating MongoDB Documents with Go
- Deleting MongoDB Documents with Go
- Modeling MongoDB Documents with Native Go Data Structures
- Performing Complex MongoDB Data Aggregation Queries with Go
- Reacting to Database Changes with MongoDB Change Streams and Go
Since transactions bring this tutorial series to a close, keep a lookout for more tutorials that focus on more niche and interesting topics that apply everything that was taught while getting started.
"tags": [
"Go"
],
"pageDescription": "Learn how to accomplish ACID transactions and logical sessions with MongoDB and the Go programming language (Golang).",
"contentType": "Quickstart"
} | Multi-Document ACID Transactions in MongoDB with Go | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/awslambda-pymongo | created | # How to Use PyMongo to Connect MongoDB Atlas with AWS Lambda
Picture a developer’s paradise: a world where instead of fussing over hardware complexities, we are free to focus entirely on running and executing our applications. With the combination of AWS Lambda and MongoDB Atlas, this vision becomes a reality.
Armed with AWS Lambda’s pay-per-execution structure and MongoDB Atlas’ unparalleled scalability, developers will truly understand what it means for their applications to thrive without the hardware limitations they might be used to.
This tutorial will take you through how to properly set up an Atlas cluster, connect it to AWS Lambda using MongoDB’s Python Driver, write an aggregation pipeline on our data, and return our wanted information. Let’s get started.
### Prerequisites for success
* MongoDB Atlas Account
* AWS Account; Lambda access is necessary
* GitHub repository
* Python 3.8+
## Create an Atlas Cluster
Our first step is to create an Atlas cluster. Log into the Atlas UI and follow the steps to set it up. For this tutorial, the free tier is recommended, but any tier will work!
Please ensure that the cloud provider picked is AWS. It's also necessary to pick a secure username and password, and to configure proper IP address access, so that we will have the authorization we need later on in this tutorial.
Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
Once your cluster is up and running, click the ellipses next to the Browse Collections button and download the `sample dataset`. Your finished cluster will look like this:
Once our cluster is provisioned, let’s set up our AWS Lambda function.
## Creating an AWS Lambda function
Sign into your AWS account and search for “Lambda” in the search bar. Hit the orange “Create function” button at the top right side of the screen, and you’ll be taken to the image below. Here, make sure to first select the “Author from scratch” option. Then, we want to select a name for our function (AWSLambdaDemo), the runtime (3.8), and our architecture (x86_64).
Hit the orange “Create function” button on the bottom right to continue. Once your function is created, you’ll see a page with your function overview above and your code source right below.
Now, we are ready to set up our connection from AWS Lambda to our MongoDB cluster.
Because we are going to be using PyMongo, an external dependency, we will work in Visual Studio Code instead of editing directly in the Lambda code source. AWS Lambda has a limited set of pre-installed libraries and dependencies, so in order to get around this and incorporate PyMongo, we will need to package our code in a special way. Due to this “workaround,” this will not be a typical tutorial with testing at every step: we will first have to download our dependencies and upload our code to Lambda before we can confirm that it works, instead of relying on a typical `requirements.txt` file. More on that below.
## AWS Lambda and MongoDB cluster connection
Now we are ready to establish a connection between AWS Lambda and our MongoDB cluster!
Create a new directory on your local machine and name it
`awslambda-demo`.
Let’s install `pymongo`. As said above, Lambda doesn’t have every library available. So, we need to download `pymongo` at the root of our project. We can do it by working with .zip file archives:
In the terminal, enter our `awslambda-demo` directory:
```
cd awslambda-demo
```
Create a new directory where your dependencies will live:
```
mkdir dependencies
```
Install `pymongo` directly in your `dependencies` package:
```
pip install --target ./dependencies pymongo
```
Open Visual Studio Code, open the `awslambda-demo` directory, and create a new Python file named `lambda_function.py`. This is where the heart of our connection will be.
Insert the code below in our `lambda_function.py`. Here, we are setting things up to check that we are able to connect to our Atlas cluster. Please keep in mind that since we are incorporating our environment variables in a later step, you will not be able to connect just yet. We have copied the `lambda_handler` definition from our Lambda code source and have edited it to insert one document containing my full name into a new “test” database and “test” collection. It is best practice to construct our MongoClient outside of our `lambda_handler` because establishing a connection and performing authentication is relatively expensive, and Lambda will re-use this instance across invocations.
```
import os
from pymongo import MongoClient
client = MongoClient(host=os.environ.get("ATLAS_URI"))
def lambda_handler(event, context):
    # Name of database
    db = client.test

    # Name of collection
    collection = db.test

    # Document to add inside
    document = {"first name": "Anaiya", "last name": "Raisinghani"}

    # Insert document
    result = collection.insert_one(document)

    if result.inserted_id:
        return "Document inserted successfully"
    else:
        return "Failed to insert document"
```
If this is properly inserted in AWS Lambda, we will see “Document inserted successfully” and in MongoDB Atlas, we will see the creation of our “test” database and collection along with the single document holding the name “Anaiya Raisinghani.” Please keep in mind we will not be seeing this yet since we haven’t configured our environment variables and will be doing this a couple steps down.
Now, we need to create a .zip file, so we can upload it in our Lambda function and execute our code. Create a .zip file at the root:
```
cd dependencies
zip -r ../deployment.zip *
```
This creates a `deployment.zip` file in your project directory.
Now, we need to add in our `lambda_function.py` file to the root of our .zip file:
```
cd ..
zip deployment.zip lambda_function.py
```
Once you have your .zip file, access your AWS Lambda function screen, click the “Upload from” button, and select “.zip file” on the right hand side of the page:
Upload your .zip file and you should see the code from your `lambda_function.py` in your “Code Source”:
Let’s configure our environment variables. Select the “Configuration” tab and then select the “Environment Variables” tab. Here, put in your “ATLAS_URI” string. To access your connection string, please follow the instructions in our docs.
Once you have your Environment Variables in place, we are ready to run our code and see if our connection works. Hit the “Test” button. If it’s the first time you’re hitting it, you’ll need to name your event. Keep everything else on the default settings. You should see this page with our “Execution results.” Our document has been inserted!
When we double-check in Atlas, we can see that our new database “test” and collection “test” have been created, along with our document with “Anaiya Raisinghani.”
This means our connection works and we are capable of inserting documents from AWS Lambda to our MongoDB cluster. Now, we can take things a step further and input a simple aggregation pipeline!
## Aggregation pipeline example
For our pipeline, let’s change our code to connect to our `sample_restaurants` database and `restaurants` collection. We are going to be incorporating our aggregation pipeline to find a sample size of five American cuisine restaurants that are located in Brooklyn, New York. Let’s dive right in!
Since we have our `pymongo` dependency downloaded, we can directly incorporate our aggregation pipeline into our code source. Change your `lambda_function.py` to look like this:
```
import os
from pymongo import MongoClient
connect = MongoClient(host=os.environ.get("ATLAS_URI"))
def lambda_handler(event, context):
    # Choose our "sample_restaurants" database and our "restaurants" collection
    database = connect.sample_restaurants
    collection = database.restaurants

    # This is our aggregation pipeline
    pipeline = [
        # We are finding American restaurants in Brooklyn
        {"$match": {"borough": "Brooklyn", "cuisine": "American"}},
        # We only want 5 out of our 20k+ documents
        {"$limit": 5},
        # We don't want all the details; project only what you need
        {"$project": {"_id": 0, "name": 1, "borough": 1, "cuisine": 1}}
    ]

    # Run the pipeline
    result = list(collection.aggregate(pipeline))

    # Print the result
    for restaurant in result:
        print(restaurant)
```
Here, we are using `$match` to find all the American cuisine restaurants located in Brooklyn. We are then using `$limit` to return only five documents out of our 20k+ documents. Next, we are using `$project` to show only the fields we want: the “borough”, the “cuisine”, and the “name” of the restaurant. Then, we are executing our pipeline and printing out our results.
Click on “Deploy” to ensure our changes have been deployed to the code environment. After the changes are deployed, hit “Test.” We will get a sample size of five Brooklyn American restaurants as the result in our console:
*(Screenshot: results from our aggregation pipeline shown in the AWS Lambda console.)*
Our aggregation pipeline was successful!
## Conclusion
This tutorial provided you with hands-on experience connecting a MongoDB Atlas database to AWS Lambda. We also got an inside look at how to write to a cluster from Lambda, how to read back information from an aggregation pipeline, and how to properly configure our dependencies when using Lambda. Hopefully, you are now ready to take advantage of AWS Lambda and MongoDB to create the best applications without worrying about external infrastructure.
If you enjoyed this tutorial and would like to learn more, please check out our MongoDB Developer Center and YouTube channel.
| md | {
"tags": [
"Atlas",
"Python",
"AWS",
"Serverless"
],
"pageDescription": "Learn how to leverage the power of AWS Lambda and MongoDB Atlas in your applications. ",
"contentType": "Tutorial"
} | How to Use PyMongo to Connect MongoDB Atlas with AWS Lambda | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-schema-migration | created | # Migrating Your iOS App's Realm Schema in Production
## Introduction
Murphy's law dictates that as soon as your mobile app goes live, you'll receive a request to add a new feature. Then another. Then another.
This is fine if these features don't require any changes to your data schema. But, that isn't always the case.
Fortunately, Realm has built-in functionality to make schema migration easier.
This tutorial will step you through updating an existing mobile app to add some new features that require changes to the schema. In particular, we'll look at the Realm migration code that ensures that no existing data is lost when the new app versions are rolled out to your production users.
We'll use the Scrumdinger app that I modified in a previous post to show how Apple's sample Swift app could be ported to Realm. The starting point for the app can be found in this branch of our Scrumdinger repo and the final version is in this branch.
Note that the app we're using for this post doesn't use Atlas Device Sync. If it did, then the schema migration process would be very different—that's covered in Migrating Your iOS App's **Synced** Realm Schema in Production.
## Prerequisites
This tutorial has a dependency on Realm-Cocoa 10.13.0+.
## Baseline App/Realm Schema
As a reminder, the starting point for this tutorial is the "realm" branch of the Scrumdinger repo.
There are two Realm model classes that we'll extend to add new features to Scrumdinger. The first, DailyScrum, represents one scrum:
``` swift
class DailyScrum: Object, ObjectKeyIdentifiable {
@Persisted var title = ""
@Persisted var attendeeList = RealmSwift.List<String>()
@Persisted var lengthInMinutes = 0
@Persisted var colorComponents: Components?
@Persisted var historyList = RealmSwift.List<History>()
var color: Color { Color(colorComponents ?? Components()) }
var attendees: [String] { Array(attendeeList) }
var history: [History] { Array(historyList) }
...
}
```
The second, [History, represents the minutes of a meeting from one of the user's scrums:
``` swift
class History: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var date: Date?
@Persisted var attendeeList = List<String>()
@Persisted var lengthInMinutes: Int = 0
@Persisted var transcript: String?
var attendees: [String] { Array(attendeeList) }
...
}
```
We can use Realm Studio to examine the contents of our Realm database after the `DailyScrum` and `History` objects have been created:
Accessing Realm Data on iOS Using Realm Studio explains how to locate and open the Realm files from your iOS simulator.
## Schema Change #1—Mark Scrums as Public/Private
The first new feature we've been asked to add is a flag to indicate whether each scrum is public or private:
This feature requires the addition of a new `Bool` named `isPublic` to DailyScrum:
``` swift
class DailyScrum: Object, ObjectKeyIdentifiable {
@Persisted var title = ""
@Persisted var attendeeList = RealmSwift.List<String>()
@Persisted var lengthInMinutes = 0
@Persisted var isPublic = false
@Persisted var colorComponents: Components?
@Persisted var historyList = RealmSwift.List<History>()
var color: Color { Color(colorComponents ?? Components()) }
var attendees: [String] { Array(attendeeList) }
var history: [History] { Array(historyList) }
...
}
```
Remember that our original version of Scrumdinger is already in production, and the embedded Realm database is storing instances of `DailyScrum`. We don't want to lose that data, and so we must migrate those objects to the new schema when the app is upgraded.
Fortunately, Realm has built-in functionality to automatically handle the addition and deletion of fields. When adding a field, Realm will use a default value (e.g., `0` for an `Int`, and `false` for a `Bool`).
If we simply upgrade the installed app with the one using the new schema, then we'll get a fatal error. That's because we need to tell Realm that we've updated the schema. We do that by setting the schema version to 1 (the version defaulted to 0 for the original schema):
``` swift
@main
struct ScrumdingerApp: SwiftUI.App {
var body: some Scene {
WindowGroup {
NavigationView {
ScrumsView()
.environment(\.realmConfiguration,
Realm.Configuration(schemaVersion: 1))
}
}
}
}
```
After upgrading the app, we can use Realm Studio to confirm that our `DailyScrum` object has been updated to initialize `isPublic` to `false`:
## Schema Change #2—Store The Number of Attendees at Each Meeting
The second feature request is to show the number of attendees in the history from each meeting:
We could calculate the count every time that it's needed, but we've decided to calculate it just once and then store it in our History object in a new field named `numberOfAttendees`:
``` swift
class History: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var date: Date?
@Persisted var attendeeList = List<String>()
@Persisted var numberOfAttendees = 0
@Persisted var lengthInMinutes: Int = 0
@Persisted var transcript: String?
var attendees: [String] { Array(attendeeList) }
...
}
```
We increment the schema version to 2. Note that the schema version applies to all Realm objects, and so we have to set the version to 2 even though this is the first time that we've changed the schema for `History`.
If we leave it to Realm to initialize `numberOfAttendees`, then it will set it to 0—which is not what we want. Instead, we provide a `migrationBlock` which initializes new fields based on the old schema version:
``` swift
@main
struct ScrumdingerApp: SwiftUI.App {
var body: some Scene {
WindowGroup {
NavigationView {
ScrumsView()
.environment(\.realmConfiguration, Realm.Configuration(
schemaVersion: 2,
migrationBlock: { migration, oldSchemaVersion in
if oldSchemaVersion < 1 {
// Could init the `DailyScrum.isPublic` field here, but the default behavior of setting
// it to `false` is what we want.
}
if oldSchemaVersion < 2 {
migration.enumerateObjects(ofType: History.className()) { oldObject, newObject in
let attendees = oldObject!["attendeeList"] as? RealmSwift.List
newObject!["numberOfAttendees"] = attendees?.count ?? 0
}
}
if oldSchemaVersion < 3 {
// TODO: This is where you'd add you're migration code to go from version
// to version 3 when you next modify the schema
}
}
))
}
}
}
}
```
Note that all other fields are migrated automatically.
It's up to you how you use data from the previous schema to populate fields in the new schema. E.g., if you wanted to combine `firstName` and `lastName` from the previous schema to populate a `fullName` field in the new schema, then you could do so like this:
``` swift
migration.enumerateObjects(ofType: Person.className()) { oldObject, newObject in
let firstName = oldObject!["firstName"] as! String
let lastName = oldObject!["lastName"] as! String
newObject!["fullName"] = "\(firstName) \(lastName)"
}
```
We can't know which "old version" of the schema will already be installed on a user's device when it's upgraded to the latest version (some users may skip some versions), and so the `migrationBlock` must handle all previous versions. Best practice is to process the incremental schema changes sequentially:
* `oldSchemaVersion < 1` : Process the delta between v0 and v1
* `oldSchemaVersion < 2` : Process the delta between v1 and v2
* `oldSchemaVersion < 3` : Process the delta between v2 and v3
* ...
Realm Studio shows that our code has correctly initialized `numberOfAttendees`:
*(Screenshot: Realm Studio showing that the numberOfAttendees field has been set to 2 – matching the number of attendees in the meeting history.)*
## Conclusion
It's almost inevitable that any successful mobile app will need some schema changes after it's gone into production. Realm makes adapting to those changes simple, ensuring that users don't lose any of their existing data when upgrading to new versions of the app.
For changes such as adding or removing fields, all you need to do as a developer is to increment the version with each new deployed schema. For more complex changes, you provide code that computes the values for fields in the new schema using data from the old schema.
This tutorial stepped you through adding two new features that both required schema changes. You can view the final app in the new-schema branch of the Scrumdinger repo.
## Next Steps
This post focused on schema migration for an iOS app. You can find some more complex examples in the repo.
If you're working with an app for a different platform, then you can find instructions in the docs:
* Node.js
* Android
* iOS
* .NET
* React Native
If you've any questions about schema migration, or anything else related to Realm, then please post them to our community forum. | md | {
"tags": [
"Realm",
"Swift",
"iOS"
],
"pageDescription": "Learn how to safely update your iOS app's Realm schema to support new functionality—without losing any existing data",
"contentType": "Tutorial"
} | Migrating Your iOS App's Realm Schema in Production | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/node-crud-tutorial | created | # MongoDB and Node.js Tutorial - CRUD Operations
In the first post in this series, I walked you through how to connect to a MongoDB database from a Node.js script, retrieve a list of databases, and print the results to your console. If you haven't read that post yet, I recommend you do so and then return here.
>
>
>This post uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.
>
>Click here to see a previous version of this post that uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.
>
>
Now that we have connected to a database, let's kick things off with the CRUD (create, read, update, and delete) operations.
If you prefer video over text, I've got you covered. Check out the video
in the section below. :-)
>
>
>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
>
>
Here is a summary of what we'll cover in this post:
- Learn by Video
- How MongoDB Stores Data
- Setup
- Create
- Read
- Update
- Delete
- Wrapping Up
## Learn by Video
I created the video below for those who prefer to learn by video instead of text. You might also find this video helpful if you get stuck while trying the steps in the text-based instructions below.
Here is a summary of what the video covers:
- How to connect to a MongoDB database hosted on MongoDB Atlas from inside of a Node.js script (01:00)
- How MongoDB stores data in documents and collections (instead of rows and tables) (08:22)
- How to create documents using `insertOne()` and `insertMany()` (11:47)
- How to read documents using `findOne()` and `find()` (17:16)
- How to update documents using `updateOne()` with and without `upsert` as well as `updateMany()` (24:46)
- How to delete documents using `deleteOne()` and `deleteMany()` (35:58)
:youtube[]{vid=fbYExfeFsI0}
Below are the links I mentioned in the video.
- GitHub Repo
- Back to Basics Webinar Recording
## How MongoDB Stores Data
Before we go any further, let's take a moment to understand how data is stored in MongoDB.
MongoDB stores data in BSON documents. BSON is a binary representation of JSON (JavaScript Object Notation) documents. When you read MongoDB documentation, you'll frequently see the term "document," but you can think of a document as simply a JavaScript object. For those coming from the SQL world, you can think of a document as being roughly equivalent to a row.
MongoDB stores groups of documents in collections. For those with a SQL background, you can think of a collection as being roughly equivalent to a table.
Every document is required to have a field named `_id`. The value of `_id` must be unique for each document in a collection, is immutable, and can be of any type other than an array. MongoDB will automatically create an index on `_id`. You can choose to make the value of `_id` meaningful (rather than a somewhat random ObjectId) if you have a unique value for each document that you'd like to be able to quickly search.
In this blog series, we'll use the sample Airbnb listings dataset. The `sample_airbnb` database contains one collection: `listingsAndReviews`. This collection contains documents about Airbnb listings and their reviews.
Let's take a look at a document in the `listingsAndReviews` collection. Below is part of an Extended JSON representation of a BSON document:
``` json
{
"_id": "10057447",
"listing_url": "https://www.airbnb.com/rooms/10057447",
"name": "Modern Spacious 1 Bedroom Loft",
"summary": "Prime location, amazing lighting and no annoying neighbours. Good place to rent if you want a relaxing time in Montreal.",
"property_type": "Apartment",
"bedrooms": {"$numberInt":"1"},
"bathrooms": {"$numberDecimal":"1.0"},
"amenities": "Internet","Wifi","Kitchen","Heating","Family/kid friendly","Washer","Dryer","Smoke detector","First aid kit","Safety card","Fire extinguisher","Essentials","Shampoo","24-hour check-in","Hangers","Iron","Laptop friendly workspace"],
}
```
For more information on how MongoDB stores data, see the [MongoDB Back to Basics Webinar that I co-hosted with Ken Alger.
## Setup
To make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.
1. Download a copy of template.js.
2. Open `template.js` in your favorite code editor.
3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.
4. Save the file as `crud.js`.
You can run this file by executing `node crud.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.
## Create
Now that we know how to connect to a MongoDB database and we understand how data is stored in a MongoDB database, let's create some data!
### Create One Document
Let's begin by creating a new Airbnb listing. We can do so by calling Collection's insertOne(). `insertOne()` will insert a single document into the collection. The only required parameter is the new document (of type object) that will be inserted. If our new document does not contain the `_id` field, the MongoDB driver will automatically create an `_id` for the document.
Our function to create a new listing will look something like the following:
``` javascript
async function createListing(client, newListing){
const result = await client.db("sample_airbnb").collection("listingsAndReviews").insertOne(newListing);
console.log(`New listing created with the following id: ${result.insertedId}`);
}
```
We can call this function by passing a connected MongoClient as well as an object that contains information about a listing.
``` javascript
await createListing(client,
{
name: "Lovely Loft",
summary: "A charming loft in Paris",
bedrooms: 1,
bathrooms: 1
}
);
```
The output would be something like the following:
``` none
New listing created with the following id: 5d9ddadee415264e135ccec8
```
Note that since we did not include a field named `_id` in the document, the MongoDB driver automatically created an `_id` for us. The `_id` of the document you create will be different from the one shown above. For more information on how MongoDB generates `_id`, see Quick Start: BSON Data Types - ObjectId.
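As an aside, the driver also exports `ObjectId` if you ever want to generate or inspect one yourself. Here is a quick, hedged sketch (not part of the tutorial's script):

``` javascript
// Create a new ObjectId and print its 24-character hex representation.
const { ObjectId } = require("mongodb");

const id = new ObjectId();
console.log(id.toHexString());
```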
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Create Multiple Documents
Sometimes you will want to insert more than one document at a time. You could choose to repeatedly call `insertOne()`. The problem is that, depending on how you've structured your code, you may end up waiting for each insert operation to return before beginning the next, resulting in slow code.
Instead, you can choose to call Collection's insertMany(). `insertMany()` will insert an array of documents into your collection.
One important option to note for `insertMany()` is `ordered`. If `ordered` is set to `true`, the documents will be inserted in the order given in the array. If any of the inserts fail (for example, if you attempt to insert a document with an `_id` that is already being used by another document in the collection), the remaining documents will not be inserted. If `ordered` is set to `false`, the documents may not be inserted in the order given in the array. MongoDB will attempt to insert all of the documents in the given array—regardless of whether any of the other inserts fail. By default, `ordered` is set to `true`.
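If you want inserts to continue past individual failures, you could pass the `ordered` option yourself. Here is a minimal sketch (not part of this tutorial's repo) that assumes a `client` connection and an array of documents named `newListings`, like the one the function below receives:

``` javascript
// With { ordered: false }, MongoDB attempts every document in the array,
// even if some fail (for example, because of a duplicate _id).
// Failed inserts surface as an error you can catch; the valid documents are still inserted.
try {
    const result = await client.db("sample_airbnb")
        .collection("listingsAndReviews")
        .insertMany(newListings, { ordered: false });
    console.log(`${result.insertedCount} new listing(s) created`);
} catch (e) {
    console.log(`Some inserts failed: ${e.message}`);
}
```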
Let's write a function to create multiple Airbnb listings.
``` javascript
async function createMultipleListings(client, newListings){
const result = await client.db("sample_airbnb").collection("listingsAndReviews").insertMany(newListings);
console.log(`${result.insertedCount} new listing(s) created with the following id(s):`);
console.log(result.insertedIds);
}
```
We can call this function by passing a connected MongoClient and an array of objects that contain information about listings.
``` javascript
await createMultipleListings(client, [
{
name: "Infinite Views",
summary: "Modern home with infinite views from the infinity pool",
property_type: "House",
bedrooms: 5,
bathrooms: 4.5,
beds: 5
},
{
name: "Private room in London",
property_type: "Apartment",
bedrooms: 1,
bathroom: 1
},
{
name: "Beautiful Beach House",
summary: "Enjoy relaxed beach living in this house with a private beach",
bedrooms: 4,
bathrooms: 2.5,
beds: 7,
last_review: new Date()
}
]);
```
Note that not every document has the same fields, which is perfectly OK. (I'm guessing that those who come from the SQL world will find this incredibly uncomfortable, but it really will be OK 😊.) When you use MongoDB, you get a lot of flexibility in how to structure your documents. If you later decide you want to add schema validation rules so you can guarantee your documents have a particular structure, you can.
The output of calling `createMultipleListings()` would be something like the following:
``` none
3 new listing(s) created with the following id(s):
{
'0': 5d9ddadee415264e135ccec9,
'1': 5d9ddadee415264e135cceca,
'2': 5d9ddadee415264e135ccecb
}
```
Just like the MongoDB Driver automatically created the `_id` field for us when we called `insertOne()`, the Driver has once again created the `_id` field for us when we called `insertMany()`.
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
## Read
Now that we know how to **create** documents, let's **read** one!
### Read One Document
Let's begin by querying for an Airbnb listing in the listingsAndReviews collection
by name.
We can query for a document by calling Collection's findOne(). `findOne()` will return the first document that matches the given query. Even if more than one document matches the query, only one document will be returned.
`findOne()` has only one required parameter: a query of type object. The query object can contain zero or more properties that MongoDB will use to find a document in the collection. If you want to query all documents in a collection without narrowing your results in any way, you can simply send an empty object.
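For example, an empty query object returns whichever document the driver finds first. A quick, hedged sketch, assuming the same `client` used throughout this post:

``` javascript
// Passing {} matches every document, so findOne() simply returns the first one it finds.
const anyListing = await client.db("sample_airbnb")
    .collection("listingsAndReviews")
    .findOne({});
console.log(anyListing);
```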
Since we want to search for an Airbnb listing with a particular name, we will include the name field in the query object we pass to `findOne()`:
``` javascript
findOne({ name: nameOfListing })
```
Our function to find a listing by querying the name field could look something like the following:
``` javascript
async function findOneListingByName(client, nameOfListing) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews").findOne({ name: nameOfListing });
if (result) {
console.log(`Found a listing in the collection with the name '${nameOfListing}':`);
console.log(result);
} else {
console.log(`No listings found with the name '${nameOfListing}'`);
}
}
```
We can call this function by passing a connected MongoClient as well as the name of a listing we want to find. Let's search for a listing named "Infinite Views" that we created in an earlier section.
``` javascript
await findOneListingByName(client, "Infinite Views");
```
The output should be something like the following.
``` none
Found a listing in the collection with the name 'Infinite Views':
{
_id: 5da9b5983e104518671ae128,
name: 'Infinite Views',
summary: 'Modern home with infinite views from the infinity pool',
property_type: 'House',
bedrooms: 5,
bathrooms: 4.5,
beds: 5
}
```
Note that the `_id` of the document in your database will not match the `_id` in the sample output above.
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Read Multiple Documents
Now that you know how to query for one document, let's discuss how to query for multiple documents at a time. We can do so by calling Collection's find().
Similar to `findOne()`, the first parameter for `find()` is the query object. You can include zero to many properties in the query object.
Let's say we want to search for all Airbnb listings that have minimum numbers of bedrooms and bathrooms. We could do so by making a call like the following:
``` javascript
client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
);
```
As you can see above, we have two properties in our query object: one for bedrooms and one for bathrooms. We can leverage the $gte comparison query operator to search for documents that have bedrooms greater than or equal to a given number. We can do the same to satisfy our minimum number of bathrooms requirement. MongoDB provides a variety of other comparison query operators that you can utilize in your queries. See the official documentation for more details.
The query above will return a Cursor. A Cursor allows traversal over the result set of a query.
You can also use Cursor's functions to modify what documents are included in the results. For example, let's say we want to sort our results so that those with the most recent reviews are returned first. We could use Cursor's sort() function to sort the results using the `last_review` field. We could sort the results in descending order (indicated by passing -1 to `sort()`) so that listings with the most recent reviews will be returned first. We can now update our existing query to look like the following.
``` javascript
const cursor = client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
).sort({ last_review: -1 });
```
The above query matches 192 documents in our collection. Let's say we don't want to process that many results inside of our script. Instead, we want to limit our results to a smaller number of documents. We can chain another of Cursor's functions to our existing query: limit(). As the name implies, `limit()` will set the limit for the cursor. We can now update our query to only return a certain number of results.
``` javascript
const cursor = client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
).sort({ last_review: -1 })
.limit(maximumNumberOfResults);
```
We could choose to iterate over the cursor to get the results one by one. Instead, if we want to retrieve all of our results in an array, we can call Cursor's toArray() function. Now our code looks like the following:
``` javascript
const cursor = client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
).sort({ last_review: -1 })
.limit(maximumNumberOfResults);
const results = await cursor.toArray();
```
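As an aside, if you'd rather process documents one at a time instead of buffering them all into an array, the cursor itself is async-iterable. A minimal sketch (assuming a cursor that hasn't already been drained by `toArray()`):

``` javascript
// Stream each matching document as it arrives instead of calling toArray().
for await (const listing of cursor) {
    console.log(listing.name);
}
```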
Now that we have our query ready to go, let's put it inside an asynchronous function and add functionality to print the results.
``` javascript
async function findListingsWithMinimumBedroomsBathroomsAndMostRecentReviews(client, {
minimumNumberOfBedrooms = 0,
minimumNumberOfBathrooms = 0,
maximumNumberOfResults = Number.MAX_SAFE_INTEGER
} = {}) {
const cursor = client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
).sort({ last_review: -1 })
.limit(maximumNumberOfResults);
const results = await cursor.toArray();
if (results.length > 0) {
console.log(`Found listing(s) with at least ${minimumNumberOfBedrooms} bedrooms and ${minimumNumberOfBathrooms} bathrooms:`);
results.forEach((result, i) => {
console.log();
console.log(`${i + 1}. name: ${result.name}`);
console.log(` _id: ${result._id}`);
console.log(` bedrooms: ${result.bedrooms}`);
console.log(` bathrooms: ${result.bathrooms}`);
console.log(` most recent review date: ${new Date(result.last_review).toDateString()}`);
});
} else {
console.log(`No listings found with at least ${minimumNumberOfBedrooms} bedrooms and ${minimumNumberOfBathrooms} bathrooms`);
}
}
```
We can call this function by passing a connected MongoClient as well as an object with properties indicating the minimum number of bedrooms, the minimum number of bathrooms, and the maximum number of results.
``` javascript
await findListingsWithMinimumBedroomsBathroomsAndMostRecentReviews(client, {
minimumNumberOfBedrooms: 4,
minimumNumberOfBathrooms: 2,
maximumNumberOfResults: 5
});
```
If you've created the documents as described in the earlier section, the output would be something like the following:
``` none
Found listing(s) with at least 4 bedrooms and 2 bathrooms:
1. name: Beautiful Beach House
_id: 5db6ed14f2e0a60683d8fe44
bedrooms: 4
bathrooms: 2.5
most recent review date: Mon Oct 28 2019
2. name: Spectacular Modern Uptown Duplex
_id: 582364
bedrooms: 4
bathrooms: 2.5
most recent review date: Wed Mar 06 2019
3. name: Grace 1 - Habitat Apartments
_id: 29407312
bedrooms: 4
bathrooms: 2.0
most recent review date: Tue Mar 05 2019
4. name: 6 bd country living near beach
_id: 2741869
bedrooms: 6
bathrooms: 3.0
most recent review date: Mon Mar 04 2019
5. name: Awesome 2-storey home Bronte Beach next to Bondi!
_id: 20206764
bedrooms: 4
bathrooms: 2.0
most recent review date: Sun Mar 03 2019
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
## Update
We're halfway through the CRUD operations. Now that we know how to **create** and **read** documents, let's discover how to **update** them.
### Update One Document
Let's begin by updating a single Airbnb listing in the listingsAndReviews collection.
We can update a single document by calling Collection's updateOne(). `updateOne()` has two required parameters:
1. `filter` (object): the Filter used to select the document to update. You can think of the filter as essentially the same as the query param we used in findOne() to search for a particular document. You can include zero properties in the filter to search for all documents in the collection, or you can include one or more properties to narrow your search.
2. `update` (object): the update operations to be applied to the document. MongoDB has a variety of update operators you can use such as `$inc`, `$currentDate`, `$set`, and `$unset` among others. See the official documentation for a complete list of update operators and their descriptions.
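As a quick aside, here is a hypothetical sketch (not part of this tutorial's flow) of what a call using a couple of those other operators might look like:

``` javascript
// Increment a numeric field and remove another field in a single update.
// The listing name and field choices here are purely illustrative.
await client.db("sample_airbnb").collection("listingsAndReviews").updateOne(
    { name: "Lovely Loft" },
    { $inc: { bedrooms: 1 }, $unset: { summary: "" } }
);
```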
`updateOne()` also has an optional `options` param. See the updateOne() docs for more information on these options.
`updateOne()` will update the first document that matches the given query. Even if more than one document matches the query, only one document will be updated.
Let's say we want to update an Airbnb listing with a particular name. We can use `updateOne()` to achieve this. We'll include the name of the listing in the filter param. We'll use the $set update operator to set new values for new or existing fields in the document we are updating. When we use `$set`, we pass a document that contains fields and values that should be updated or created. The document that we pass to `$set` will not replace the existing document; any fields that are part of the original document but not part of the document we pass to `$set` will remain as they are.
Our function to update a listing with a particular name would look like the following:
``` javascript
async function updateListingByName(client, nameOfListing, updatedListing) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.updateOne({ name: nameOfListing }, { $set: updatedListing });
console.log(`${result.matchedCount} document(s) matched the query criteria.`);
console.log(`${result.modifiedCount} document(s) was/were updated.`);
}
```
Let's say we want to update our Airbnb listing that has the name "Infinite Views." We created this listing in an earlier section.
``` javascript
{
_id: 5db6ed14f2e0a60683d8fe42,
name: 'Infinite Views',
summary: 'Modern home with infinite views from the infinity pool',
property_type: 'House',
bedrooms: 5,
bathrooms: 4.5,
beds: 5
}
```
We can call `updateListingByName()` by passing a connected MongoClient, the name of the listing, and an object containing the fields we want to update and/or create.
``` javascript
await updateListingByName(client, "Infinite Views", { bedrooms: 6, beds: 8 });
```
Executing this command results in the following output.
``` none
1 document(s) matched the query criteria.
1 document(s) was/were updated.
```
Now our listing has an updated number of bedrooms and beds.
``` json
{
_id: 5db6ed14f2e0a60683d8fe42,
name: 'Infinite Views',
summary: 'Modern home with infinite views from the infinity pool',
property_type: 'House',
bedrooms: 6,
bathrooms: 4.5,
beds: 8
}
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Upsert One Document
One of the options you can choose to pass to `updateOne()` is upsert. Upsert is a handy feature that allows you to update a document if it exists or insert a document if it does not.
For example, let's say you wanted to ensure that an Airbnb listing with a particular name had a certain number of bedrooms and bathrooms. Without upsert, you'd first use `findOne()` to check if the document existed. If the document existed, you'd use `updateOne()` to update the document. If the document did not exist, you'd use `insertOne()` to create the document. When you use upsert, you can combine all of that functionality into a single command.
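Without upsert, that check-then-write logic might look something like the following sketch (illustrative only, assuming the same `nameOfListing` and `updatedListing` variables the function below receives; the tutorial's actual approach follows):

``` javascript
// The manual approach that upsert replaces: look the document up, then update or insert.
const collection = client.db("sample_airbnb").collection("listingsAndReviews");
const existing = await collection.findOne({ name: nameOfListing });
if (existing) {
    await collection.updateOne({ name: nameOfListing }, { $set: updatedListing });
} else {
    await collection.insertOne({ name: nameOfListing, ...updatedListing });
}
```

Note that this two-step version isn't atomic, which is another reason the single upsert call is handy.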
Our function to upsert a listing with a particular name can be basically identical to the function we wrote above with one key difference: We'll pass `{upsert: true}` in the `options` param for `updateOne()`.
``` javascript
async function upsertListingByName(client, nameOfListing, updatedListing) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.updateOne({ name: nameOfListing },
{ $set: updatedListing },
{ upsert: true });
console.log(`${result.matchedCount} document(s) matched the query criteria.`);
if (result.upsertedCount > 0) {
console.log(`One document was inserted with the id ${result.upsertedId._id}`);
} else {
console.log(`${result.modifiedCount} document(s) was/were updated.`);
}
}
```
Let's say we aren't sure if a listing named "Cozy Cottage" is in our collection or, if it does exist, if it holds old data. Either way, we want to ensure the listing that exists in our collection has the most up-to-date data. We can call `upsertListingByName()` with a connected MongoClient, the name of the listing, and an object containing the up-to-date data that should be in the listing.
``` javascript
await upsertListingByName(client, "Cozy Cottage", { name: "Cozy Cottage", bedrooms: 2, bathrooms: 1 });
```
If the document did not previously exist, the output of the function would be something like the following:
``` none
0 document(s) matched the query criteria.
One document was inserted with the id 5db9d9286c503eb624d036a1
```
We have a new document in the listingsAndReviews collection:
``` json
{
_id: 5db9d9286c503eb624d036a1,
name: 'Cozy Cottage',
bathrooms: 1,
bedrooms: 2
}
```
If we discover more information about the "Cozy Cottage" listing, we can use `upsertListingByName()` again.
``` javascript
await upsertListingByName(client, "Cozy Cottage", { beds: 2 });
```
And we would see the following output.
``` none
1 document(s) matched the query criteria.
1 document(s) was/were updated.
```
Now our document has a new field named "beds."
``` json
{
_id: 5db9d9286c503eb624d036a1,
name: 'Cozy Cottage',
bathrooms: 1,
bedrooms: 2,
beds: 2
}
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Update Multiple Documents
Sometimes you'll want to update more than one document at a time. In this case, you can use Collection's updateMany(). Like `updateOne()`, `updateMany()` requires that you pass a filter of type object and an update of type object. You can choose to include options of type object as well.
Let's say we want to ensure that every document has a field named `property_type`. We can use the $exists query operator to search for documents where the `property_type` field does not exist. Then we can use the $set update operator to set the `property_type` to "Unknown" for those documents. Our function will look like the following.
``` javascript
async function updateAllListingsToHavePropertyType(client) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.updateMany({ property_type: { $exists: false } },
{ $set: { property_type: "Unknown" } });
console.log(`${result.matchedCount} document(s) matched the query criteria.`);
console.log(`${result.modifiedCount} document(s) was/were updated.`);
}
```
We can call this function with a connected MongoClient.
``` javascript
await updateAllListingsToHavePropertyType(client);
```
Below is the output from executing the previous command.
``` none
3 document(s) matched the query criteria.
3 document(s) was/were updated.
```
Now our "Cozy Cottage" document and all of the other documents in the Airbnb collection have the `property_type` field.
``` json
{
_id: 5db9d9286c503eb624d036a1,
name: 'Cozy Cottage',
bathrooms: 1,
bedrooms: 2,
beds: 2,
property_type: 'Unknown'
}
```
Listings that contained a `property_type` before we called `updateMany()` remain as they were. For example, the "Spectacular Modern Uptown Duplex" listing still has `property_type` set to `Apartment`.
``` json
{
_id: '582364',
listing_url: 'https://www.airbnb.com/rooms/582364',
name: 'Spectacular Modern Uptown Duplex',
property_type: 'Apartment',
room_type: 'Entire home/apt',
bedrooms: 4,
beds: 7
...
}
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
## Delete
Now that we know how to **create**, **read**, and **update** documents, let's tackle the final CRUD operation: **delete**.
### Delete One Document
Let's begin by deleting a single Airbnb listing in the listingsAndReviews collection.
We can delete a single document by calling Collection's deleteOne(). `deleteOne()` has one required parameter: a filter of type object. The filter is used to select the document to delete. You can think of the filter as essentially the same as the query param we used in findOne() and the filter param we used in updateOne(). You can include zero properties in the filter to search for all documents in the collection, or you can include one or more properties to narrow your search.
`deleteOne()` also has an optional `options` param. See the deleteOne() docs for more information on these options.
`deleteOne()` will delete the first document that matches the given query. Even if more than one document matches the query, only one document will be deleted. If you do not specify a filter, the first document found in natural order will be deleted.
Let's say we want to delete an Airbnb listing with a particular name. We can use `deleteOne()` to achieve this. We'll include the name of the listing in the filter param. We can create a function to delete a listing with a particular name.
``` javascript
async function deleteListingByName(client, nameOfListing) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.deleteOne({ name: nameOfListing });
console.log(`${result.deletedCount} document(s) was/were deleted.`);
}
```
Let's say we want to delete the Airbnb listing we created in an earlier section that has the name "Cozy Cottage." We can call `deleteListingByName()` by passing a connected MongoClient and the name "Cozy Cottage."
``` javascript
await deleteListingByName(client, "Cozy Cottage");
```
Executing the command above results in the following output.
``` none
1 document(s) was/were deleted.
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Delete Multiple Documents
Sometimes you'll want to delete more than one document at a time. In this case, you can use Collection's deleteMany(). Like `deleteOne()`, `deleteMany()` requires that you pass a filter of type object. You can choose to include options of type object as well.
Let's say we want to remove documents that have not been updated recently. We can call `deleteMany()` with a filter that searches for documents that were scraped prior to a particular date. Our function will look like the following.
``` javascript
async function deleteListingsScrapedBeforeDate(client, date) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.deleteMany({ "last_scraped": { $lt: date } });
console.log(`${result.deletedCount} document(s) was/were deleted.`);
}
```
To delete listings that were scraped prior to February 15, 2019, we can call `deleteListingsScrapedBeforeDate()` with a connected MongoClient and a Date instance that represents February 15.
``` javascript
await deleteListingsScrapedBeforeDate(client, new Date("2019-02-15"));
```
Executing the command above will result in the following output.
``` none
606 document(s) was/were deleted.
```
Now only recently scraped documents are in our collection.
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
## Wrapping Up
We covered a lot today! Let's recap.
We began by exploring how MongoDB stores data in documents and collections. Then we learned the basics of creating, reading, updating, and deleting data.
Continue on to the next post in this series, where we'll discuss how you can analyze and manipulate data using the aggregation pipeline.
Comments? Questions? We'd love to chat with you in the MongoDB Community.
| md | {
"tags": [
"JavaScript",
"MongoDB"
],
"pageDescription": "Learn how to execute the CRUD (create, read, update, and delete) operations in MongoDB using Node.js in this step-by-step tutorial.",
"contentType": "Quickstart"
} | MongoDB and Node.js Tutorial - CRUD Operations | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/5-year-atlas-anniversary-episode-1-on-ramp | created | # Atlas 5-Year Anniversary Podcast Series Episode 1 - Onramp to Atlas
My name is Michael Lynn, and I’m a developer advocate at MongoDB.
I’m excited to welcome you to this, the first in a series of episodes created to celebrate the five year anniversary of the launch of MongoDB Atlas, our Database as a Service Platform.
In this series, my co-hosts, Jesse Hall, and Nic Raboy will talk with some of the people responsible for building, and launching the platform that helped to transform MongoDB as a company.
Beginning with Episode 1, the Onramp to Atlas, we talk with Sahir Azam, Chief Product Officer, and Andrew Davidson, VP of Product, about the strategic shift from a software company to a software-as-a-service business.
In episode 2, Zero to Database as a Service, we’ll chat with Cailin Nelson, SVP of Engineering, and Cory Mintz, VP of Engineering - about Atlas as a product and how it was built and launched.
In episode 3, we’ll Go Mobile, talking with Alexander Stigsen, Founder of the Realm Mobile Database which has become a part of the Atlas Platform.
In episode 4, we’ll wrap the series up with a panel discussion and review some of our valued customer comments about the platform.
Thanks so much for tuning in and reading. Please take a moment to subscribe for more episodes, and if you enjoy what you hear, please don't forget to provide a comment and a rating to help us continue to improve.
Without further ado, here is the transcript of episode one of this series.
Sahir: [00:00:00] Hi Everyone. My name is Sahir Azam and I'm the chief product officer at Mongo DB. Welcome to the Mongo DB podcast.
Mike: [00:00:07] Okay. Today, we're going to be talking about MongoDB Atlas and the journey that has taken place to bring us to this point, the five-year anniversary of the launch of MongoDB Atlas. And I'm joined in the studio today by a couple of guests. And we'll start by introducing Sahir Azam, chief product officer at MongoDB.
Sahir, welcome to the show. It's great to have you on the podcast.
Sahir: [00:00:31] Hey, Hey Mike. Great to be here.
Mike: [00:00:33] Terrific. And we're also joined by Andrew Davidson. Andrew is vice-president of product cloud products at Mongo DB. Is it, do I have that right?
Andrew: [00:00:41] That's right? Good to be here, Mike. How you doin?
Mike: [00:00:44] Doing great. It's great to have you on the show. And of course, my co-host for the day is Jesse Hall, also known as codeSTACKr. Welcome back to the show, Jesse.
It's great to have you on
Jesse: [00:00:54] I'm fairly new here. So I'm excited to hear about the, history of Atlas
Mike: [00:00:58] Fantastic. Yeah. W we're gonna, we're gonna get into that. But before we do, Sahir, I guess we'll have you maybe introduce yourself to the audience, talk a little bit about who you are and what you do.
Sahir: [00:01:09] Yeah. So, I mentioned earlier, I run the product organization at Mongo and as part of my core focus, I think about the products we build the roadmaps of those products and how they serve customers and ultimately help us grow our business. And I've been with the company for about five years.
Coincidentally, I was recruited to lead, basically, the transition of the business from an open source enterprise software company to becoming a SaaS vendor. And so I came on right before the launch of Atlas. Andrew on the line here certainly has the history of how Atlas came to be even prior to me joining.
But, uh, it's been a heck of a ride.
Mike: [00:01:46] Fantastic. Well, Andrew, that brings us to you. Let folks know who you are and what you do.
Andrew: [00:01:52] sure. Yeah. Similar to Sahir, I focus on product management, but a really more specifically focused on our cloud product suite. And if you think about it, that was something that five years ago, when we first launched Atlas was just an early kernel, a little bit of a startup inside of our broader company.
And so before that time, I was very focused on our traditional, more private cloud management experience. And it's really just been this amazing journey to really transform this company with Sahir and so many others into being a cloud company. So really excited to be here on this milestone.
Mike: [00:02:25] Fantastic. And Jesse, so you've been with MongoDB, I guess, relatively the least amount of time among the four of us, but maybe talk about your experience with MongoDB and cloud in general.
Jesse: [00:02:36] Yeah. So I've used it several times in some tutorials that I've created on the Atlas portion of it. Going through the onboarding experience and
learning how it actually works, how the command line and all of that was amazing to understand it from that perspective as well.
So, yeah, I'm excited to see how you took it from that to the cloud.
Mike: [00:02:58] Yeah. Yeah, me too. And if you think about the journey, MongoDB was a successful open source product. It was a project that was largely used by developers to increase agility. It represented a different way to store data, and it wasn't a perfect journey. There were some challenges early on, specifically around the uniqueness of the mechanism it uses to store data, which is different from traditional forms.
And. So I guess Andrew you've been here the longest over eight years. Talk about the challenges of transitioning from a software product to an online database, as a service.
Andrew: [00:03:37] Yeah. Sure. When you think back to where we were, say eight and a half years ago, to your point, we had this kind of almost new category of data experience for developers that gave them this natural way to interface with data in a way that was totally reflective of the way they wanted to think about their data, the objects in there. And we came in and revolutionized the world with this way of interfacing with data. And that's what led to MongoDB just exploding in popularity. It was just mind boggling to see millions of people every month experiencing MongoDB for the first time as pure open source software on their laptops.
But as we move forward over the years, we realized we could be this phenomenal database that gave developers exactly the way they want to interface with data. We could be incredibly scalable. We could go up to any level of scale with vertical and horizontal kind of linear cost economics, really built for cloud.
We could do all of that, but if our customers continued to have to self manage all of this software at scale, we realized, frankly, we might get left behind in the end. We might get beaten by databases that weren't as good, but that were delivered at a higher level of abstraction, as a fully managed service.
So we went all in as a company, recognizing we need to make this just so easy for people to get started and to go up to any level of scale. And that's really what Atlas was about. It was all about democratizing this incredible database, which had already democratized a new data model, but making it accessible for production use cases in the cloud, anywhere in the world.
And I think when you see what's happened today, with just millions of people who have now used Atlas, the same magnitude as the number who have used our self-managed software, it's just amazing to see how far we've come.
Mike: [00:05:21] Yeah. Yeah. It's been quite a ride, and it is interesting timing. Sahir, so you joined right around the same time. I think it was, I think, a couple of months prior to the launch of Atlas. Tell us about, like, your role early on.
Sahir: [00:05:36] Yeah, I think what attracted me to MongoDB in the first place was certainly the team. I knew there was a strong team here, and I absolutely knew of the sort of popularity and just disruption that the open source technology and database had created in the market, just as somebody being in IT and technology.
And certainly it'd be hard to miss. So I had a very kind of positive impression overall of the business, but the thing that really did it for me was the fact that the company was embarking on this strategic expansion to become a SaaS company and deliver this database as a service with Atlas, because I had certainly built, in my own mind, a sort of conviction that for open source companies, the right business model that would ultimately be most successful was distributing technology as a managed service so that it can reach global audiences and really democratize that experience, as Andrew mentioned.
So that was the most interesting challenge. And when I joined the company, I think the part everyone understands is, okay, it's a managed version of MongoDB, and there's a whole bunch of automation, elasticity, and pay as you go pricing and all of the things that you would expect in the early days from a managed service.
But the more interesting thing that I think is sometimes hidden away is how much it's really transformed MongoDB the company's go to market strategy as well. It's allowed us to really reach tens of thousands of customers and millions of developers worldwide. And that's a function of the fact that it's just so easy to get started.
You can start off on our free tier or as you start building your application and it scales just get going on a credit card and then ultimately engaged and, in a larger level with our organization, as you start to get to mission criticality and scale. That's really hard to do in a, a traditional sort of enterprise software model.
It's easy to do for large customers. It's not easy to do for the broad base of the mid-market and the SMB and the startups and the ecosystem. And together with the team, we put a lot of focus into thinking about how do we make sure we widen the funnel as much as possible and get as many developers to try Atlas as the default experience for using MongoDB, because we felt it was definitely the best way to use the technology, but also for us as a company, it was the most powerful way for us to scale our overall operations.
Mike: [00:07:58] Okay.
Jesse: [00:08:00] Okay.
Mike: [00:08:00] So obviously there's going to be some challenges early on in the minds of the early adopters. Now we've had some relatively large names. I don't know if we can say any names of customers that were early adopters, but there were obviously challenges around that. What are some of the challenges that were particularly difficult when you started to talk to some of these larger name companies?
What are some of the things that they were really concerned about early on?
Sahir: [00:08:28] Yeah, I'll take them a little bit, and Andrew, I'm sure you have thoughts on this as well. So I think, when we phased out sort of the strategy for Atlas in the early years, when we first launched, it's funny to think back, we were only on AWS and I think we were in maybe four or five regions at the time, if I remember correctly, and the first kind of six to 12 months was really optimized for, let's call it, lower end use cases where you could come in and you didn't necessarily have high-end requirements around security or compliance guarantees. And so I think the biggest barrier to entry for larger customers or more mission critical, sort of sensitive applications was that we ourselves had not yet gotten our own third-party compliance certifications, and there were certain enterprise level security capabilities like encryption, bring your own key encryption, things like private networking with peering on the cloud providers, that we just hadn't built yet on our roadmap.
And we wanted to make sure we prioritized correctly. So I think that was the internal factor. The external factor was, five years ago, it wasn't so obvious that for the large enterprise, database as a service would be the default way to consume databases in the cloud. Certainly there was some of that traction happening, but if you look at it compared to today, it was still early days.
And I laugh because early on, we probably got positively surprised by some early conservative enterprise names. Maybe Thermo Fisher was one of them. We had I want to say AstraZeneca, perhaps a couple of like really established brand names who are, bullish on the cloud, believed in Mongo DB as a key enabling technology.
And in many ways where those early partners with us in the enterprise segment were to help develop the maturity we needed to scale over time.
Mike: [00:10:23] Yeah,
Andrew: [00:10:23] I remember the, these this kind of wake up call moment where you realized the pace of being a cloud company is just so much higher than what we had traditionally been before, where it was, a bit more of a slow moving enterprise type of sales motion, where you have a very big, POC phase and a bunch of kind of setup time and months of delivery.
That whole model, though, was changing. The whole idea of Atlas was to enable our customers to very rapidly, and in a self-service manner, build amazing applications. And so you had people come in and, in a matter of hours, start to do really cool, amazing stuff. And sometimes we weren't even ready for that.
We weren't even ready to be responsive enough for them. So we had to develop these new muscles. Be on the pulse of what this type of new speed of customer expected. I remember in one of our earliest large-scale customers who would just take us to the limits, it was, we had, I think actually funny enough, multiple cricket league, fantasy sports apps out of India, they were all like just booming and popularity during the India premier league.
Mike: [00:11:25] Okay.
Andrew: [00:11:26] Cricket competition. And it was just like so crazy how many people were storming into this application, the platform at the same time and realizing that we had a platform that could, actually scale to their needs was amazing, but it was also this constant realization that every new level of scale, every kind of new rung is going to require us to build out new operational chops, new muscles, new maturity, and we're still, it's an endless journey, a customer today.
A thousand times bigger than what we could accommodate at that time. But I can imagine that the customers of five years from now will be yet another couple of orders of magnitude larger. And it's just going to keep challenging us. But now we're in this mindset of expecting that and always working to get to that next level, which is exciting.
Mike: [00:12:09] Yeah. I'm sure it hasn't always been a smooth ride. I'm sure there were some hiccups along the way. And maybe even around scale, you mentioned, we got surprised. Do you want to talk a little bit about maybe some of that massive uptake. Did we have trouble offering this product as a service?
Just based on the number of customers that we were able to sign up?
Sahir: [00:12:30] I'd say by and large, it's been a really smooth ride. I think one of the ones, the surprises that kind of I think is worth sharing
is we have. I think just under or close to 80 regions now in Atlas and the promise of the cloud at least on paper is endless scale and availability of resources, whether that be compute or networking or storage. That's largely true for most customers in major regions where the cloud providers are. But if you're in a region that's not a primary region or you've got a massive rollout where you need a lot of compute capacity, a lot of network capacity it's not suddenly available for you on demand all the time. There are supply chain data center or, resources backing all of this and our partners, do a really great job, obviously staying ahead of that demand, but there are sometimes constraints.
And so I think we reached a certain scale inflection point where we were consistently bumping up against the infrastructure cloud providers' limits in terms of availability of capacity. And we've worked with them on making sure our quotas were set properly and that we were treated as a special case, but there were definitely a couple of times where we had a new application launching for a customer and it's not like it was a quota we were hitting; there literally just weren't enough VMs and underlying physical infrastructure set up and available in those data centers. And so we had some teething pains working with our cloud provider friends to make sure that we were always projecting ahead with more and more, I think, of a forward look to them so that we can make sure we're not blocking our customers. Funny cloud learnings, I would say.
Mike: [00:14:18] Well, I guess that answers the question I was going to ask: why not build our own cloud? Why not build a massive data center and try and meet the demands with something like an Ops Manager tool and make that a service offering? But I guess that really answers the question, that the demand, the level of demand around the world, would be so difficult.
Was that ever a consideration though? Building our own
Sahir: [00:14:43] so ironically, we actually did run our own infrastructure in the early days for our cloud backup service. So we had spinning disks and
physical devices, our own colo space, and frankly, we just outgrew it. I think there's two factors for us. One, the database is pretty low in the stack, so to speak.
As an operational, transactional service, we need to be really close to where the application actually runs. And the power of what the hyperscale cloud providers have built is just immense reach. So now any small company can stand up a local site or a point of presence, so to speak, in any part of the world, across those different regions that they have.
And so the idea of having a single region that we perhaps had economies of scale in just doesn't make sense. We're very dispersed because of all the different regions we support across the major cloud providers and the need to be close to where the application is. So just given the dynamic of running a database as a service, it is really important that we sit in those major public cloud providers, right by the side of those customers. The other factor
is really just that we benefit from the innovation that the hyperscale cloud providers put out in the market themselves. There's higher levels of abstraction. We don't want to be sitting there; we have limited resources like any company. Would we rather spend the dollars on racking and stacking hardware and managing our own data center footprint and networking stack and all of that, or would we rather spend those resources
consuming as a service and then building more value for our customers? So the same things we engage with customers on, and why they choose Atlas, are very much true for us as we build our cloud platforms.
Andrew: [00:16:29] Yeah. If you think about it, MongoDB is really the only company that's giving developers this totally native data model that's so easy to get started with at the prototyping phase, where they can go up to any level of scale from there and can read and write across 80 regions across the big three cloud providers all over the world.
And for us to not stay laser-focused on that level of making developers able to build incredible global applications would just be to pull our focus away from really the most important thing for us, which is to be obsessed with that customer experience rather than the infrastructure building blocks in the backend, which of course we do optimize in close partnership with our cloud provider partners, to Sahir's point.
Jesse: [00:17:09] So along with all of these challenges to scale over time, there were also other competitors trying to do the same thing. So how does MongoDB continue to have a competitive advantage?
Sahir: [00:17:22] Yeah, I think it's a consistent investment in engineering, R&D, and innovation, right? If you look at the capabilities we've released, the core of the database, the things surrounding the database in Atlas, the new services that are integrated to simplify the architecture for applications, some of the newer things we have like search or Realm, or what we're doing with analytics with Atlas Data Lake.
I'll put our ability to push out more value and capability to customers against any competitor in the world. I think we've got a pretty strong track record there. But at a more kind of macro level, if you went back kind of five years ago to the launch of Atlas, most customers and developers had a trade-off to make: you either go with a technology that's very deep on functionality and best of breed.
So to speak in a particular domain. Like a Mongo DB, then you have to, that's typically all software, so you tend to have to operate it yourself, learn how to manage and scale and monitor and all those different things. Or you want to choose a managed service experience where you get, the ease of use of just getting started and scaling and having all the pay as you go kind of consumption models.
But those databases are nowhere close to as capable as the best of breed players. That was the state of the market five years ago. But now, fast forward to 2021 and going forward, customers no longer have to make that trade. You have multicloud and sort of database-as-a-service offerings, analytics-as-a-service offerings, from players that have not only best of breed capability that's a lot deeper than the first party services managed by the cloud providers, but are also delivered in this really amazing, scalable,
consumption-based model, so that trade-off is no longer there. And I think that's a key part of what drives our success: the fact that we have the best capabilities, the features and the cost that developers and organizations want, and we deliver it as a really fluid, elastic managed service.
And then guess what, for enterprises, especially multicloud is an increasingly strategic sort of characteristic they look for in their major providers, especially their data providers. And we're available on all three of the major public clouds with Atlas. That's a very unique proposition. No one else can offer that.
And so that's the thing that really drives in this
Mike: [00:19:38] Yeah.
Sahir: [00:19:39] powering, the acceleration of the Atlas business.
Mike: [00:19:42] Yeah. And so, Andrew, I wonder if for the folks that are not familiar with Atlas, the architecture you want to just give an overview of how Atlas works and leverages the multiple cloud providers behind the scenes.
Andrew: [00:19:56] Yeah, sure. Look, anyone who's not used MongoDB Atlas, I encourage you to just sign up right away. It's the kind of thing where, in just a matter of five minutes, you can deploy a free sandbox cluster and really start building your hello world experience, your hello world application, on top of MongoDB. The way Atlas really works is, essentially, we try and make it as simple as possible.
You sign up. Then you decide which cloud provider and which region in that cloud provider you want to launch your database cluster into, and you can choose between those 80 regions Sahir mentioned. Or you can do more advanced stuff: you can decide to go multi-region, you can decide to go even multicloud, all within the same database cluster.
And the key thing is that you can decide to start really tiny, even at the free level or at our dedicated cluster, starting at $60. Or you can go up to just massive scale sharded clusters that can power millions of concurrent users. And what's really exciting is you can transition those clusters between those states with no downtime.
At any time you can start single region and small and scale up or scale to multiple regions or scale to multiple clouds and each step of the way you're meeting whatever your latest business objectives are or whatever the needs of your application are. But in general, you don't have to completely reinvent the wheel and rearchitect your app each step of the way.
That's where MongoDB makes it just so easy for you to start at that prototyping level and then get up to the levels of scale. Now on the backend, Atlas does all of this with, of course, a huge amount of sophistication. There are dedicated virtual private clouds per customer, per region, for dedicated clusters.
You can connect into those clusters using VPC peering or Private Link, offering a variety of secure ways to connect without having to deal with public IP access lists. We also have a wide variety of authentication and authorization options, database auditing, like Sahir mentioned, bring your own key encryption, and even client-side field level encryption, which allows you to encrypt data before it even goes into the database for the subsets of your schema at the highest classification level.
So the whole philosophy here is to democratize making it easy to build applications in a privacy-optimized way, to really ultimately make it possible for millions of end consumers to have a better experience and use all these wonderful digital experiences that everyone's building out there.
Jesse: [00:22:09] So, Sahir, we talked about how, with just the MongoDB software, there was steady growth, right? But once we went to the cloud
with Atlas, the success of that, how did that impact our business?
Sahir: [00:22:20] Yeah, I think it's been obviously quite impactful in terms of just driving the acceleration of growth and continued success of MongoDB. We were fortunate, five, six years ago when Atlas was being built and launched, that our business was quite healthy. We were about a year out from IPO.
We had many enterprise customers that were choosing our commercial technology to power their mission-critical applications. That continues through today. So the idea of launching Atlas was certainly strategic; we saw where the market was going, and we knew this would in many ways be the flagship product for the company in the long term. But it was done out of sort of an offensive view to getting to market.
And so if you look at it now, Atlas is about 51% of our revenue. It's the fastest growing product in our portfolio. Atlas is no longer just a database; it's a whole data platform where we've collapsed a bunch of other capabilities in the architecture of an application, so it's much simpler for developers.
And over time we expect that 51% number is only going to continue to be a larger percentage of our business, but it's important to note: making sure that we deliver a powerful open source database to the market, and that we have an enterprise version of the software for applications or customers that aren't yet in the cloud, or may never go to the cloud for certain workloads, is super critical.
This sort of idea of run anywhere. And the reason why is, oftentimes, the timeline for modernizing an application. Let's say you're a large insurance provider or a bank or something. You've got thousands of these applications on legacy databases. There's an intense need to modernize
those to save costs and to unlock developer agility. That timeline of choosing a database: first of all, it's a decision that lasts typically seven to 10 years. So it's a long-term investment decision, but it's not always timed with a cloud model. So the idea is that if you're on premises, you can modernize to an amazing database like MongoDB, perhaps run it in Kubernetes, run it in virtual machines in your own data center.
But then, two years later, if that application needs to move to the cloud, it's just a seamless migration into Atlas on any cloud provider you choose. That's a very unique and powerful, compelling story, especially for large organizations, because what they don't want is to modernize or rewrite an application twice: once to get the value on premises, and then have to think about it again later if the app moves to the cloud. It's one seamless journey, and that hybrid model
of moving customers towards Atlas over time has really been a cohesive strategy. It's not just Atlas; it's open source and the enterprise version all seamlessly playing in a uniform model.
Mike: [00:25:04] Hmm. Fantastic. And I love the journey that Atlas has been on; it's really become a platform. It's no longer just a database as a service. It's really an indispensable tool that developers can use to increase agility. And I'm just looking back at the kind of steady drum beat of additional features that have been added to really transform Atlas into a platform, starting with the free tier and increasing the regions and the coverage and
client-side field level encryption. And just the list of features that have been added is pretty incredible. I think I would be remiss if I didn't ask both of you to maybe talk a little bit about the future. Obviously, there's things like, I don't know, invisibility of the service and AI and ML. What are some of the things that you're thinking about, I guess, without tipping your cards too much?
Talk about what's interesting to you in the future of cloud.
Andrew: [00:25:56] I'll take a quick pass, just because I love the question. To me, the most important thing for us to be laser-focused on going forward is to deliver a truly integrated, elegant experience for our end customers that is just differentiated, from a user experience perspective, from everything else that's out there.
And the platform is such a fundamental part of that being a possibility. It starts with that document data model, which is this superset data model that can express within it everything from key-value to, essentially, relational and object. And then it's about making it possible to access all of those different data models through a single developer-native interface, but then making it possible to drive different physical workloads on the backend of that.
And by workloads, I mean different ways of storing the data and different algorithms used to analyze that data: making it possible to do everything from operational transactional to those search use cases Sahir mentioned, to data lake and mobile synchronization, streaming, et cetera, making all of that easily accessible through that single elegant interface.
That is something that requires just constant focus on not adding new knobs, not adding new complex surface area, not adding millions of new permutations, but making it elegant and accessible to do all of these wonderful data models and workload types and expanding out from there. So you'll just see us keep focusing on that. Yeah.
Mike: [00:27:15] Fantastic. I'll just give a plug. This is the first in the series that we're calling On-Ramp to MongoDB Atlas. We're going to go a little bit deeper into the architecture. We're going to talk with some engineering folks. Then we're going to go into the mobile space and talk with Alexander Stigsen and talk a little bit about the Realm story.
And then we're going to wrap it up with a panel discussion where we'll actually have some customer comments, and we'll provide a little bit of detail into what the future might look like in that round table discussion with all of the guests. I just want to thank both of you for taking the time to chat with us, and I'll give you a space to mention anything else you'd like to talk about before we wrap the episode up. Sahir, anything?
Sahir: [00:27:54] Nothing really to add other than just a thank you. And it's been humbling to think about the fact that this product is growing so fast in five years, and it feels like we're just getting started. I would encourage everyone to keep an eye out for our annual user conference next month.
And some of the exciting announcements we have in Atlas and across the portfolio going forward; we're certainly not letting off the gas.
Mike: [00:28:15] Great. Any final words Andrew?
Andrew: [00:28:18] Yeah, I'll just say, MongoDB is very much a big tent community. Over a hundred thousand people are signing up for Atlas every month. We invest so much in making it easy to absorb, learn, dive into university courses, dive into our wonderful documentation, and build amazing things on us.
We're here to help and we look forward to seeing you on the platform.
Mike: [00:28:36] Fantastic. Jesse, any final words?
Jesse: [00:28:38] No, I just want to thank both of you for joining us. It's been great to hear about how it all got started, and I look forward to the next episodes.
Mike: [00:28:49] Right.
Sahir: [00:28:49] Thanks guys.
Mike: [00:28:50] Thank you.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "My name is Michael Lynn, and I’m a developer advocate at MongoDB.\n\nI’m excited to welcome you to this, the first in a series of episodes created to celebrate the five year anniversary of the launch of MongoDB Atlas, our Database as a Service Platform.\n\nIn this series, my co-hosts, Jesse Hall, and Nic Raboy will talk with some of the people responsible for building, and launching the platform that helped to transform MongoDB as a company.\n\nbeginning with Episode 1, the On ramp to Atlas talking with Sahir Azam, Chief Product Officer, and Andrew Davidson, VP of Product about the strategic shift from a software company to a software as a service business.\n\nIn episode 2, Zero to Database as a Service, we’ll chat with Cailin Nelson, SVP of Engineering, and Cory Mintz, VP of Engineering - about Atlas as a product and how it was built and launched.\n\nIn episode 3, we’ll Go Mobile, talking with Alexander Stigsen, Founder of the Realm Mobile Database which has become a part of the Atlas Platform. \n\nIn episode 4, we’ll wrap the series up with a panel discussion and review some of our valued customer comments about the platform. \n\nThanks so much for tuning in, please take a moment to subscribe for more episodes and if you enjoy what you hear, please don’t forget to provide a comment, and a rating to help us continue to improve.\n",
"contentType": "Podcast"
} | Atlas 5-Year Anniversary Podcast Series Episode 1 - Onramp to Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/mongodb-charts-embedding-sdk-react | created | # MongoDB Charts Embedding SDK with React
## Introduction
In the previous blog post of this series, we created a React website that was retrieving a list of countries using Axios and a REST API hosted in MongoDB Realm.
In this blog post, we will continue to build on this foundation and create a dashboard with COVID-19 charts, built with MongoDB Charts and embedded in a React website with the MongoDB Charts Embedding SDK.
To add some spice in the mix, we will use our list of countries to create a dynamic filter so we can filter all the COVID-19 charts by country.
You can see the **final result here** that I hosted in a MongoDB Realm application using the static hosting feature available.
## Prerequisites
The code of this project is available on GitHub in this repository.
```shell
git clone git@github.com:mongodb-developer/mongodb-charts-embedded-react.git
```
To run this project, you will need `node` and `npm` in a recent version. Here is what I'm currently using:
```shell
$ node -v
v14.17.1
$ npm -v
8.0.0
```
You can run the project locally like so:
```sh
$ cd mongodb-charts-embedded-react
$ npm install
$ npm start
```
In the next sections of this blog post, I will explain what we need to do to make this project work.
## Create a MongoDB Charts Dashboard
Before we can actually embed our charts in our custom React website, we need to create them in MongoDB Charts.
Here is the link to the dashboard I created for this website. It looks like this.
If you want to use the same data as me, check out this blog post about the Open Data COVID-19 Project and especially this section to duplicate the data in your own cluster in MongoDB Atlas.
As you can see in the dashboard, my charts are not filtered by country here. You can find the data of all the countries in the four charts I created.
## Enable the Filtering and the Embedding
To enable the filtering when I'm embedding my charts in my website, I must tell MongoDB Charts which field(s) I will be able to filter by, based on the fields available in my collection. Here, I chose to filter by a single field, `country`, and I chose to enable the unauthenticated access for this public blog post (see below).
In the `User Specified Filters` field, I added `country` and chose to use the JavaScript SDK option instead of the iFrame alternative that is less convenient to use for a React website with dynamic filters.
For each of the four charts, I need to retrieve the `Charts Base URL` (unique for a dashboard) and the `Charts IDs`.
Now that we have everything we need, we can go into the React code.
## React Website
### MongoDB Charts Embedding SDK
First things first: We need to install the MongoDB Charts Embedding SDK in our project.
```shell
npm i @mongodb-js/charts-embed-dom
```
It's already done in the project I provided above, but it isn't if you are following along from the first blog post.
### React Project
My React project is made with just two function components: `Dashboard` and `Chart`.
The `index.js` root of the project is just calling the `Dashboard` function component.
```js
import React from 'react';
import ReactDOM from 'react-dom';
import Dashboard from "./Dashboard";
ReactDOM.render(<Dashboard />, document.getElementById('root'));
```
The `Dashboard` is the central piece of the project:
```js
import './Dashboard.css';
import {useEffect, useState} from "react";
import axios from "axios";
import Chart from "./Chart";
const Dashboard = () => {
const url = 'https://webhooks.mongodb-stitch.com/api/client/v2.0/app/covid-19-qppza/service/REST-API/incoming_webhook/metadata';
const [countries, setCountries] = useState([]);
const [selectedCountry, setSelectedCountry] = useState("");
const [filterCountry, setFilterCountry] = useState({});
function getRandomInt(max) {
return Math.floor(Math.random() * max);
}
useEffect(() => {
axios.get(url).then(res => {
setCountries(res.data.countries);
const randomCountryNumber = getRandomInt(res.data.countries.length);
let randomCountry = res.data.countries[randomCountryNumber];
setSelectedCountry(randomCountry);
setFilterCountry({"country": randomCountry});
})
}, [])
useEffect(() => {
if (selectedCountry !== "") {
setFilterCountry({"country": selectedCountry});
}
}, [selectedCountry])
  return <div>
    <h1 className="title">MongoDB Charts</h1>
    <h2 className="title">COVID-19 Dashboard with Filters</h2>
    <div className="form">
      {countries.map(c => <span className="elem" key={c}>
        <input type="radio" name="country" value={c} onChange={() => setSelectedCountry(c)} checked={c === selectedCountry}/>
        {c}
      </span>)}
    </div>
    <div className="charts">
      {/* The chart IDs below are placeholders - replace them with the IDs from your own dashboard */}
      <Chart height={'600px'} width={'800px'} filter={filterCountry} chartId={'chart-id-1'}/>
      <Chart height={'600px'} width={'800px'} filter={filterCountry} chartId={'chart-id-2'}/>
      <Chart height={'600px'} width={'800px'} filter={filterCountry} chartId={'chart-id-3'}/>
      <Chart height={'600px'} width={'800px'} filter={filterCountry} chartId={'chart-id-4'}/>
    </div>
  </div>
};
export default Dashboard;
```
It's responsible for a few things:
- Retrieve the list of countries from the REST API using Axios (cf. the previous blog post).
- Select a random country in the list for the initial value.
- Update the filter when a new value is selected (randomly or manually).
- Use the list of countries (`countries.map(...)`) to build the list of radio buttons that update the filter.
- Call the `Chart` component once for each of the four charts with the appropriate props, including the filter and the chart ID.
As you may have noticed here, I'm using the same filter `filterCountry` for all the Charts, but nothing prevents me from using a custom filter for each Chart.
You may also have noticed a very minimalistic CSS file `Dashboard.css`. Here it is:
```css
.title {
text-align: center;
}
.form {
border: solid black 1px;
}
.elem {
overflow: hidden;
display: inline-block;
width: 150px;
height: 20px;
}
.charts {
text-align: center;
}
.chart {
border: solid #589636 1px;
margin: 5px;
display: inline-block;
}
```
The `Chart` component looks like this:
```js
import React, {useEffect, useRef, useState} from 'react';
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";
const Chart = ({filter, chartId, height, width}) => {
const sdk = new ChartsEmbedSDK({baseUrl: 'https://charts.mongodb.com/charts-open-data-covid-19-zddgb'});
const chartDiv = useRef(null);
const [rendered, setRendered] = useState(false);
const [chart] = useState(sdk.createChart({chartId: chartId, height: height, width: width, theme: "dark"}));
useEffect(() => {
chart.render(chartDiv.current).then(() => setRendered(true)).catch(err => console.log("Error during Charts rendering.", err));
}, [chart]);
useEffect(() => {
if (rendered) {
chart.setFilter(filter).catch(err => console.log("Error while filtering.", err));
}
}, [chart, filter, rendered]);
return <div className="chart" ref={chartDiv}/>;
};
export default Chart;
```
The `Chart` component isn't doing much. It's just responsible for rendering the Chart **once** when the page is loaded and reloading the chart if the filter is updated to display the correct data (thanks to React).
Note that the second useEffect (with the `chart.setFilter(filter)` call) shouldn't be executed if the chart isn't done rendering. So it's protected by the `rendered` state that is only set to `true` once the chart is rendered on the screen.
And voilà! If everything went as planned, you should end up with a (not very) beautiful website like this one.
## Conclusion
In this blog post, you learned how to embed MongoDB Charts into a React website using the MongoDB Charts Embedding SDK.
We also learned how to create dynamic filters for the charts using `useEffect()`.
We didn't learn how to secure the Charts with an authentication token, but you can learn how to do that in this documentation.
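For reference, the embedding SDK can pass such a token for you via a `getUserToken` callback. The sketch below assumes you have switched the charts to authenticated embedding, and `fetchChartsJwt()` is a hypothetical helper returning a JWT accepted by the auth provider you configured:

```js
// Hypothetical: fetchChartsJwt() must return a JWT valid for the auth provider
// configured on your charts (e.g., Custom JWT or an Atlas App Services user).
const securedSdk = new ChartsEmbedSDK({
  baseUrl: 'https://charts.mongodb.com/charts-open-data-covid-19-zddgb',
  getUserToken: () => fetchChartsJwt()
});
```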
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Atlas",
"React"
],
"pageDescription": "In this blog post, we are creating a dynamic dashboard using React and the MongoDB Charts Embedding SDK with filters.",
"contentType": "Tutorial"
} | MongoDB Charts Embedding SDK with React | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/paginations-time-series-collections-in-five-minutes | created | # Paginations 1.0: Time Series Collections in five minutes
# Paginations 1.0: Time-Series Collections in 5 Minutes
As someone who loves to constantly measure myself and everything around me, I was excited to see MongoDB add dedicated time-series collections in MongoDB 5.0. Previously, MongoDB had been great for handling time-series data, but only if you were prepared to write some fairly complicated insert and update code and use a complex schema. In 5.0, all the hard work is done for you, including lots of behind-the-scenes optimization.
Working with time-series data brings some interesting technical challenges for databases. Let me explain.
## What is time-series data?
Time-series data is where we have multiple related data points that have a time, a source, and one or more values. For example, I might be recording my speed on my bike and the gradient of the road, so I have the time, the source (me on that bike), and two data values (speed and gradient). The source would change if it was a different bike or another person riding it.
Time-series data is not simply any data that has a date component, but specifically data where we want to look at how values change over a period of time and so need to compare data for a given time window or windows. On my bike, am I slowing down over time on a ride? Or does my speed vary with the road gradient?
This means when we store time-series data, we usually want to retrieve or work with all data points for a time period, or all data points for a time period for one or more specific sources.
These data points tend to be small. A time is usually eight bytes, an identifier is normally only (at most) a dozen bytes, and a data point is more often than not one or more eight-byte floating point numbers. So, each "record" we need to store and access is perhaps 50 or 100 bytes in length.
## Why time-series data needs special handling
This is where dealing with time-series data gets interesting—at least, I think it's interesting. Most databases, MongoDB included, store data on disks, and those are read and written by the underlying hardware in blocks of typically 4, 8, or 32 KB at a time. Because of these disk blocks, the layers on top of the physical disks—virtual memory, file systems, operating systems, and databases—work in blocks of data too. MongoDB, like all databases, uses blocks of records when reading, writing, and caching. Unfortunately, this can make reading and writing these tiny little time-series records much less efficient.
This animation shows what happens when these records are simply inserted into a general purpose database such as MongoDB or an RDBMS.
As each record is received, it is stored sequentially in a block on the disk. To allow us to access them, we use two indexes: one with the unique record identifier, which is required for replication, and the other with the source and timestamp to let us find everything for a specific device over a time period.
This is fine for writing data. We have quick sequential writing and we can amortise disk flushes of blocks to get a very high write speed.
The issue arises when we read. In order to find the data about one device over a time period, we need to fetch many of these small records. Due to the way they were stored, the records we want are spread over multiple database blocks and disk blocks. For each block we have to read, we pay a penalty of having to read and process the whole block, using database cache space equivalent to the block size. This is a lot of wasted compute resources.
## Time-series specific collections
MongoDB 5.0 has specialized time-series collections optimized for this type of data, which we can use simply by adding two parameters when creating a collection.
```
db.createCollection("readings",
"time-series" :{ "timeField" : "timestamp",
"metaField" : "deviceId"}})
```
We don't need to change the code we use for reading or writing at all. MongoDB takes care of everything for us behind the scenes. This second animation shows how.
With a time-series collection, MongoDB organizes the writes so that data for the same source is stored in the same block, alongside other data points from a similar point in time. The blocks are limited in size (because so are disk blocks) and once we have enough data in a block, we will automatically create another one. The important point is that each block will cover one source and one span of time, and we have an index for each block to help us find that span.
Doing this means we can have much smaller indexes as we only have one unique identifier per block. We also only have one index per block, typically for the source and time range. This results in an overall reduction in index size of hundreds of times.
Not only that but by storing data like this, MongoDB is better able to apply compression. Over time, data for a source will not change randomly, so we can compress the changes in values that are co-located. This makes for a data size improvement of at least three to five times.
And when we come to read it, we can read it several times faster as we no longer need to read data, which is not relevant to our query just to get to the data we want.
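To make that concrete, here is a minimal sketch of what writing and reading such a collection could look like in the shell; the measurement field names are illustrative:

```
// Writes use the normal insert syntax - no time-series specific code is needed
db.readings.insertOne({
  timestamp: ISODate("2023-06-01T10:15:00Z"),
  deviceId: "bike-1",        // the metaField (the source)
  speedKph: 27.4,            // illustrative measurements
  gradientPct: 3.2
})

// Reads typically target one source over a time window
db.readings.find({
  deviceId: "bike-1",
  timestamp: { $gte: ISODate("2023-06-01T10:00:00Z"),
               $lt: ISODate("2023-06-01T11:00:00Z") }
})
```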
## Summing up time-series collections
And that, in a nutshell, is MongoDB time-series collections. I can just specify the time and source fields when creating a collection and MongoDB will reorganise my cycling data to make it three to five times smaller, as well as faster, to read and analyze.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "A brief, animated introduction to what Time-Series data is, why is challenging for traditional database structures and how MongoDB Time-Series Collections are specially adapted to managing this sort of data.",
"contentType": "Article"
} | Paginations 1.0: Time Series Collections in five minutes | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/triggers-tricks-preimage-cass | created | # Triggers Treats and Tricks: Cascade Document Delete Using Triggers Preimage
In this blog series, we are trying to inspire you with some reactive Realm trigger use cases. We hope these will help you bring your application pipelines to the next level.
Essentially, triggers are components in our Atlas projects/Realm apps that allow a user to define a custom function to be invoked on a specific event.
* **Database triggers:** We have triggers that can be triggered based on database events—like ``deletes``, ``inserts``, ``updates``, and ``replaces``—called database triggers.
* **Scheduled triggers**: We can schedule a trigger based on a ``cron`` expression via scheduled triggers.
* **Authentication triggers**: These triggers are only relevant for Realm authentication. They are triggered by one of the Realm auth providers' authentication events and can be configured only via a Realm application.
Relationships are an important part of any data design. Relational databases use primary and foreign key concepts to form those relationships when normalizing the data schema. Using those concepts allows a "cascading" delete, which means that deleting a primary key parent will delete the related siblings.
MongoDB allows you to form relationships in different ways—for example, by embedding documents or arrays inside a parent document. This allows the document to contain all of its relationships within itself and therefore it does the cascading delete out of the box. Consider the following example between a user and the assigned tasks of the user:
``` js
{
userId : "abcd",
username : "user1@example.com"
Tasks :
{ taskId : 1,
Details : ["write","print" , "delete"]
},
{ taskId : 1,
Details : ["clean","cook" , "eat"]
}
}
```
Deleting this document will delete all of its tasks.
However, in some design cases, we will want to separate the data of the relationship into Parent and Sibling collections—for example, ``games`` collection holding data for a specific game including ids referencing a ``quests`` collection holding a per game quest. As amount of quest data per game can be large and complex, we’d rather not embed it in ``games`` but reference:
**Games collection**
``` js
{
_id: ObjectId("60f950794a61939b6aac12a4"),
userId: 'xxx',
gameId: 'abcd-wxyz',
gameName: 'Crash',
quests: [
{
startTime: ISODate("2021-01-01T22:00:00.000Z"),
questId: ObjectId("60f94b7beb7f78709b97b5f3")
},
{
questId: ObjectId("60f94bbfeb7f78709b97b5f4"),
startTime: ISODate("2021-01-02T02:00:00.000Z")
}
]
}
```
Each game has a quest array with a start time of this quest and a reference to the quests collection where the quest data reside.
**Quests collection**
``` js
{
_id: ObjectId("60f94bbfeb7f78709b97b5f4"),
questName: 'War of fruits ',
userId: 'xxx',
details: {
lastModified: ISODate("2021-01-01T23:00:00.000Z"),
currentState: 'in-progress'
},
progressRounds: [ 'failed', 'failed', 'in-progress' ]
},
{
_id: ObjectId("60f94b7beb7f78709b97b5f3"),
questName: 'War of vegetable ',
userId: 'xxx',
details: {
lastModified: ISODate("2021-01-01T22:00:00.000Z"),
currentState: 'failed'
},
progressRounds: [ 'failed', 'failed', 'failed' ]
}
```
When a game gets deleted, we would like to purge the relevant quests in a cascading delete. This is where the **Preimage** trigger feature comes into play.
## Preimage Trigger Option
The Preimage option allows the trigger function to receive a snapshot of the deleted/modified document just before the change that triggered the function. This feature is enabled by enriching the oplog of the underlying replica set to store this snapshot as part of the change.
Read more on our [documentation.
In our case, we will use this feature to capture the parent deleted document full snapshot (games) and delete the related relationship documents in the sibling collection (quests).
## Building the Trigger
When we define the database trigger, we will point it to the relevant cluster and parent namespace to monitor and trigger when a document is deleted—in our case, ``GamesDB.games``.
To enable the “Preimage” feature, we will toggle Document Preimage to “ON” and specify our function to handle the cascade delete logic.
**deleteCascadingQuests - Function**
``` js
exports = async function(changeEvent) {
// Get deleted document preImage using "fullDocumentBeforeChange"
var deletedDocument = changeEvent.fullDocumentBeforeChange;
// Get sibling collection "quests"
const quests = context.services.get("mongodb-atlas").db("GamesDB").collection("quests");
// Delete all relevant quest documents.
deletedDocument.quests.map( async (quest) => {
await quests.deleteOne({_id : quest.questId});
})
};
```
As you can see, the function gets the fully deleted “games” document present in “changeEvent.fullDocumentBeforeChange” and iterates over the “quests” array. For each of those array elements, the function runs a “deleteOne” on the “quests” collection to delete the relevant quests documents.
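As a side note, since the function above issues one `deleteOne` per array element, a game with many quests means many round trips. A variation of the same idea (shown here as a sketch, not the exact code used above) collects the IDs and removes them with a single `deleteMany`:

``` js
exports = async function(changeEvent) {
  // Full snapshot of the deleted "games" document
  const deletedDocument = changeEvent.fullDocumentBeforeChange;
  const quests = context.services.get("mongodb-atlas").db("GamesDB").collection("quests");

  // Gather all referenced quest ids and delete them in one command
  const questIds = deletedDocument.quests.map(quest => quest.questId);
  await quests.deleteMany({ _id: { $in: questIds } });
};
```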
## Deleting the Parent Document
Now let's put our trigger to the test by deleting the game from the “games” collection:
Once the document was deleted, our trigger was fired and now the “quests” collection is empty as it had only quests related to this deleted game:
Our cascade delete works thanks to the trigger "Preimage" feature.
## Wrap Up
The ability to get a modified or deleted full document opens a new world of opportunities for trigger use cases and abilities. We showed one option to use this new feature here, but it can be used for many other scenarios, like tracking complex document state changes for auditing, or cleaning up image storage using the deleted metadata documents.
We suggest that you try this new feature considering your use case and look forward to the next trick along this blog series.
Want to keep going? Join the conversation over at our community forums! | md | {
"tags": [
"MongoDB"
],
"pageDescription": "In this article, we will show you how to use a preimage feature to perform cascading relationship deletes via a trigger - based on the deleted parent document.",
"contentType": "Article"
} | Triggers Treats and Tricks: Cascade Document Delete Using Triggers Preimage | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/5-ways-reduce-costs-atlas | created | # 5 Ways to Reduce Costs With MongoDB Atlas
Now more than ever, businesses are looking for ways to reduce or eliminate costs wherever possible. As a cloud service, MongoDB Atlas is a platform that enables enhanced scalability and reduces dependence on the kind of fixed costs businesses experience when they deploy on premises instances of MongoDB. This article will help you understand ways you can reduce costs with your MongoDB Atlas deployment.
## #1 Pause Your Cluster
Pausing a cluster essentially brings the cluster down so if you still have active applications depending on this cluster, it's probably not a good idea. However, pausing the cluster leaves the infrastructure and data in place so that it's available when you're ready to return to business. You can pause a cluster for up to 30 days but if you do not resume the cluster within 30 days, Atlas automatically resumes the cluster. Clusters that have been paused are billed at a different, lower rate than active clusters. Read more about pausing clusters in our documentation, or check out this great article by Joe Drumgoole, on automating the process of pausing and restarting your clusters.
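If you prefer to script this instead of using the UI, the Atlas Administration API lets you pause (and later resume) a cluster by patching its `paused` flag. The sketch below assumes API v1.0, a programmatic API key with the appropriate project role, and placeholder values in angle brackets:

```shell
# Pause a dedicated cluster (send "paused": false later to resume it)
curl --user "<PUBLIC_KEY>:<PRIVATE_KEY>" --digest \
     --header "Content-Type: application/json" \
     --request PATCH \
     --data '{ "paused": true }' \
     "https://cloud.mongodb.com/api/atlas/v1.0/groups/<PROJECT_ID>/clusters/<CLUSTER_NAME>"
```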
## #2 Scale Your Cluster Down
MongoDB Atlas was designed with scalability in mind and while scaling down is probably the last thing on our minds as we prepare for launching a Startup or a new application, it's a reality that we must all face.
Fortunately, the engineers at MongoDB who created MongoDB Atlas, our online database as a service, designed the solution with bidirectional scalability in mind. The process of scaling a MongoDB cluster changes the underlying infrastructure associated with the hosts on which your database resides. Scaling up to larger nodes in a cluster is the very same process as scaling down to smaller ones.
## #3 Enable Elastic Scalability
Another great feature of MongoDB Atlas is the ability to programmatically control the size of your cluster based on its use. MongoDB Atlas offers scalability of various components of the platform including Disk, and Compute. With compute auto-scaling, you have the ability to configure your cluster with a maximum and minimum cluster size. You can enable compute auto-scaling through either the UI or the public API. Auto-scaling is available on all clusters M10 and higher on Azure and GCP, and on all "General" class clusters M10 and higher on AWS. To enable auto-scaling from the UI, select the Auto-scale "Cluster tier" option, and choose a maximum cluster size from the available options.
Atlas analyzes the following cluster metrics to determine when to scale a cluster, and whether to scale the cluster tier up or down:
- CPU Utilization
- Memory Utilization
To learn more about how to monitor cluster metrics, see View Cluster Metrics.
Once you configure auto-scaling with both a minimum and a maximum cluster size, Atlas checks that the cluster would not be in a tier outside of your specified Cluster Size range. If the next lowest cluster tier is within your Minimum Cluster Size range, Atlas scales the cluster down to the next lowest tier if both of the following are true:
- The average CPU Utilization and Memory Utilization over the past 72 hours is below 50%, and
- The cluster has not been scaled down (manually or automatically) in the past 72 hours.
To learn more about downward auto-scaling behavior, see Considerations for Downward Auto-Scaling.
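For those configuring this through the public API rather than the UI, the relevant portion of the cluster definition looks roughly like the sketch below (Atlas Administration API v1.0; field names are to the best of my knowledge and the tier bounds are only examples):

```json
{
  "autoScaling": {
    "compute": { "enabled": true, "scaleDownEnabled": true }
  },
  "providerSettings": {
    "autoScaling": {
      "compute": { "minInstanceSize": "M10", "maxInstanceSize": "M40" }
    }
  }
}
```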
## #4 Cleanup and Optimize
You may also be leveraging old datasets that you no longer need. Conduct a thorough analysis of your clusters, databases, and collections to remove any duplicates, and old, outdated data. Also, remove sample datasets if you're not using them. Many developers will load these to explore and then leave them.
## #5 Terminate Your Cluster
As a last resort, you may want to remove your cluster by terminating it. Please be aware that terminating a cluster is a destructive operation: once you terminate a cluster, it is gone. If you want to get your data back online and available, you will need to restore it from a backup. You can restore backups from cloud provider snapshots or from continuous backups.
Be sure you download and secure your backups before terminating as you will no longer have access to them once you terminate.
I hope you found this information valuable and that it helps you reduce or eliminate unnecessary expenses. If you have questions, please feel free to reach out. You will find me in the MongoDB Community or on Twitter @mlynn. Please let me know if I can help in any way.
> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Explore five ways to reduce MongoDB Atlas costs.",
"contentType": "Article"
} | 5 Ways to Reduce Costs With MongoDB Atlas | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/rag-atlas-vector-search-langchain-openai | created | # RAG with Atlas Vector Search, LangChain, and OpenAI
With all the recent developments (and frenzy!) around generative AI, there has been a lot of focus on LLMs, in particular. However, there is also another emerging trend that many are unaware of: the rise of vector stores. Vector stores or vector databases play a crucial role in building LLM applications. This puts Atlas Vector Search in the vector store arena that has a handful of contenders.
The goal of this tutorial is to provide an overview of the key concepts of Atlas Vector Search as a vector store, and of LLMs and their limitations. We'll also look into an upcoming paradigm that is gaining rapid adoption called "retrieval-augmented generation" (RAG). We will also briefly discuss the LangChain framework, OpenAI models, and Gradio. Finally, we will tie everything together by actually using these concepts + architecture + components in a real-world application. By the end of this tutorial, readers will leave with a high-level understanding of the aforementioned concepts, and a renewed appreciation for Atlas Vector Search!
## **LLMs and their limitations**
**Large language models (LLMs)** are a class of deep neural network models that have been trained on vast amounts of text data, which enables them to understand and generate human-like text. LLMs have revolutionized the field of natural language processing, but they do come with certain limitations:
1. **Hallucinations**: LLMs sometimes generate factually inaccurate or ungrounded information, a phenomenon known as “hallucinations.”
2. **Stale data**: LLMs are trained on a static dataset that was current only up to a certain point in time. This means they might not have information about events or developments that occurred after their training data was collected.
3. **No access to users’ local data**: LLMs don’t have access to a user’s local data or personal databases. They can only generate responses based on the knowledge they were trained on, which can limit their ability to provide personalized or context-specific responses.
4. **Token limits**: LLMs have a maximum limit on the number of tokens (pieces of text) they can process in a single interaction. Tokens in LLMs are the basic units of text that the models process and generate. They can represent individual characters, words, subwords, or even larger linguistic units. For example, the token limit for OpenAI’s *gpt-3.5-turbo* is 4096.
**Retrieval-augmented generation (RAG)**
The **retrieval-augmented generation (RAG)** architecture was developed to address these issues. RAG uses vector search to retrieve relevant documents based on the input query. It then provides these retrieved documents as context to the LLM to help generate a more informed and accurate response. That is, instead of generating responses purely from patterns learned during training, RAG uses those relevant retrieved documents to help generate a more informed and accurate response. This helps address the above limitations in LLMs. Specifically:
- RAGs minimize hallucinations by grounding the model’s responses in factual information.
- By retrieving information from up-to-date sources, RAG ensures that the model’s responses reflect the most current and accurate information available.
- While RAG does not directly give LLMs access to a user’s local data, it does allow them to utilize external databases or knowledge bases, which can be updated with user-specific information.
- Also, while RAG does not increase an LLM’s token limit, it does make the model’s use of tokens more efficient by retrieving *only the most relevant documents* for generating a response.
This tutorial demonstrates how the RAG architecture can be leveraged with Atlas Vector Search to build a question-answering application against your own data.
## **Application architecture**
The architecture of the application follows the RAG flow described above: documents are embedded and stored in Atlas, a user's question is vectorized, Atlas Vector Search retrieves the most relevant documents, and those documents are passed as context to the LLM to generate an answer. To set it up:
1. Install the following packages:
```bash
pip3 install langchain pymongo bs4 openai tiktoken gradio requests lxml argparse unstructured
```
2. Create the OpenAI API key. This requires a paid account with OpenAI, with enough credits. OpenAI API requests stop working if credit balance reaches $0.
1. Save the OpenAI API key in the *key_param.py* file. The filename is up to you.
2. Optionally, save the MongoDB URI in the file, as well.
3. Create two Python scripts:
1. load_data.py: This script will be used to load your documents and ingest the text and vector embeddings, in a MongoDB collection.
2. extract_information.py: This script will generate the user interface and will allow you to perform question-answering against your data, using Atlas Vector Search and OpenAI.
4. Import the following libraries:
```python
from pymongo import MongoClient
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.document_loaders import DirectoryLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
import gradio as gr
from gradio.themes.base import Base
import key_param
```
**Sample documents**
In this tutorial, we will be loading three text files from a directory using the DirectoryLoader. These files should be saved to a directory named **sample_files.** The contents of these text files are as follows *(none of these texts contain PII or CI)*:
1. log_example.txt
```
2023-08-16T16:43:06.537+0000 I MONGOT 63528f5c2c4f78275d37902d-f5-u6-a0 BufferlessChangeStreamApplier] [63528f5c2c4f78275d37902d-f5-u6-a0 BufferlessChangeStreamApplier] Starting change stream from opTime=Timestamp{value=7267960339944178238, seconds=1692203884, inc=574}2023-08-16T16:43:06.543+0000 W MONGOT [63528f5c2c4f78275d37902d-f5-u6-a0 BufferlessChangeStreamApplier] [c.x.m.r.m.common.SchedulerQueue] cancelling queue batches for 63528f5c2c4f78275d37902d-f5-u6-a02023-08-16T16:43:06.544+0000 E MONGOT [63528f5c2c4f78275d37902d-f5-u6-a0 InitialSyncManager] [BufferlessInitialSyncManager 63528f5c2c4f78275d37902d-f5-u6-a0] Caught exception waiting for change stream events to be applied. Shutting down.com.xgen.mongot.replication.mongodb.common.InitialSyncException: com.mongodb.MongoCommandException: Command failed with error 286 (ChangeStreamHistoryLost): 'Executor error during getMore :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.' on server atlas-6keegs-shard-00-01.4bvxy.mongodb.net:27017.2023-08-16T16:43:06.545+0000 I MONGOT [indexing-lifecycle-3] [63528f5c2c4f78275d37902d-f5-u6-a0 ReplicationIndexManager] Transitioning from INITIAL_SYNC to INITIAL_SYNC_BACKOFF.2023-08-16T16:43:18.068+0000 I MONGOT [config-monitor] [c.x.m.config.provider.mms.ConfCaller] Conf call response has not changed. Last update date: 2023-08-16T16:43:18Z.2023-08-16T16:43:36.545+0000 I MONGOT [indexing-lifecycle-2] [63528f5c2c4f78275d37902d-f5-u6-a0 ReplicationIndexManager] Transitioning from INITIAL_SYNC_BACKOFF to INITIAL_SYNC.
```
2. chat_conversation.txt
```
Alfred: Hi, can you explain to me how compression works in MongoDB? Bruce: Sure! MongoDB supports compression of data at rest. It uses either zlib or snappy compression algorithms at the collection level. When data is written, MongoDB compresses and stores it compressed. When data is read, MongoDB uncompresses it before returning it. Compression reduces storage space requirements. Alfred: Interesting, that's helpful to know. Can you also tell me how indexes are stored in MongoDB? Bruce: MongoDB indexes are stored in B-trees. The internal nodes of the B-trees contain keys that point to children nodes or leaf nodes. The leaf nodes contain references to the actual documents stored in the collection. Indexes are stored in memory and also written to disk. The in-memory B-trees provide fast access for queries using the index.Alfred: Ok that makes sense. Does MongoDB compress the indexes as well?Bruce: Yes, MongoDB also compresses the index data using prefix compression. This compresses common prefixes in the index keys to save space. However, the compression is lightweight and focused on performance vs storage space. Index compression is enabled by default.Alfred: Great, that's really helpful context on how indexes are handled. One last question - when I query on a non-indexed field, how does MongoDB actually perform the scanning?Bruce: MongoDB performs a collection scan if a query does not use an index. It will scan every document in the collection in memory and on disk to select the documents that match the query. This can be resource intensive for large collections without indexes, so indexing improves query performance.Alfred: Thank you for the detailed explanations Bruce, I really appreciate you taking the time to walk through how compression and indexes work under the hood in MongoDB. Very helpful!Bruce: You're very welcome! I'm glad I could explain the technical details clearly. Feel free to reach out if you have any other MongoDB questions.
```
3. aerodynamics.txt
```
Boundary layer control, achieved using suction or blowing methods, can significantly reduce the aerodynamic drag on an aircraft's wing surface.The yaw angle of an aircraft, indicative of its side-to-side motion, is crucial for stability and is controlled primarily by the rudder.With advancements in computational fluid dynamics (CFD), engineers can accurately predict the turbulent airflow patterns around complex aircraft geometries, optimizing their design for better performance.
```
**Loading the documents**
1. Set the MongoDB URI, DB, Collection Names:
```python
client = MongoClient(key_param.MONGO_URI)
dbName = "langchain_demo"
collectionName = "collection_of_text_blobs"
collection = client[dbName][collectionName]
```
2. Initialize the DirectoryLoader:
```python
loader = DirectoryLoader( './sample_files', glob="./*.txt", show_progress=True)
data = loader.load()
```
3. Define the OpenAI Embedding Model we want to use for the source data. The embedding model is different from the language generation model:
```python
embeddings = OpenAIEmbeddings(openai_api_key=key_param.openai_api_key)
```
4. Initialize the VectorStore. Vectorise the text from the documents using the specified embedding model, and insert them into the specified MongoDB collection.
```python
vectorStore = MongoDBAtlasVectorSearch.from_documents( data, embeddings, collection=collection )
```
5. Create the following Atlas Search index on the collection, please ensure the name of your index is set to `default`:
```json
{
"fields": [{
"path": "embedding",
"numDimensions": 1536,
"similarity": "cosine",
"type": "vector"
}]
}
```
**Performing vector search using Atlas Vector Search**
1. Set the MongoDB URI, DB, and Collection Names:
```python
client = MongoClient(key_param.MONGO_URI)
dbName = "langchain_demo"
collectionName = "collection_of_text_blobs"
collection = client[dbName][collectionName]
```
2. Define the OpenAI Embedding Model we want to use. The embedding model is different from the language generation model:
```python
embeddings = OpenAIEmbeddings(openai_api_key=key_param.openai_api_key)
```
3. Initialize the Vector Store:
```python
vectorStore = MongoDBAtlasVectorSearch( collection, embeddings )
```
4. Define a function that **a) performs semantic similarity search using Atlas Vector Search** **(note that I am including this step only to highlight the differences between output of only semantic search** **vs** **output generated with RAG architecture using RetrieverQA)**:
```python
def query_data(query):
# Convert question to vector using OpenAI embeddings
# Perform Atlas Vector Search using Langchain's vectorStore
# similarity_search returns MongoDB documents most similar to the query
    docs = vectorStore.similarity_search(query, k=1)
as_output = docs[0].page_content
```
and, **b) uses a retrieval-based augmentation to perform question-answering on the data:**
```python
# Leveraging Atlas Vector Search paired with Langchain's QARetriever
# Define the LLM that we want to use -- note that this is the Language Generation Model and NOT an Embedding Model
# If it's not specified (for example like in the code below),
# then the default OpenAI model used in LangChain is OpenAI GPT-3.5-turbo, as of August 30, 2023
llm = OpenAI(openai_api_key=key_param.openai_api_key, temperature=0)
# Get VectorStoreRetriever: Specifically, Retriever for MongoDB VectorStore.
# Implements _get_relevant_documents which retrieves documents relevant to a query.
retriever = vectorStore.as_retriever()
# Load "stuff" documents chain. Stuff documents chain takes a list of documents,
# inserts them all into a prompt and passes that prompt to an LLM.
qa = RetrievalQA.from_chain_type(llm, chain_type="stuff", retriever=retriever)
# Execute the chain
retriever_output = qa.run(query)
# Return Atlas Vector Search output, and output generated using RAG Architecture
return as_output, retriever_output
```
5. Create a web interface for the app using Gradio:
```python
with gr.Blocks(theme=Base(), title="Question Answering App using Vector Search + RAG") as demo:
gr.Markdown(
"""
# Question Answering App using Atlas Vector Search + RAG Architecture
""")
textbox = gr.Textbox(label="Enter your Question:")
with gr.Row():
button = gr.Button("Submit", variant="primary")
with gr.Column():
output1 = gr.Textbox(lines=1, max_lines=10, label="Output with just Atlas Vector Search (returns text field as is):")
output2 = gr.Textbox(lines=1, max_lines=10, label="Output generated by chaining Atlas Vector Search to Langchain's RetrieverQA + OpenAI LLM:")
# Call query_data function upon clicking the Submit button
button.click(query_data, textbox, outputs=[output1, output2])
demo.launch()
```
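With both scripts in place, a typical workflow (using the file names chosen earlier) is to ingest the documents once and then start the app:

```shell
python3 load_data.py            # loads ./sample_files and writes text + embeddings to Atlas
python3 extract_information.py  # starts the Gradio question-answering UI
```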
## **Sample outputs**
The following screenshots show the outputs generated for various questions asked. Note that a purely semantic-similarity search returns the text contents of the source documents as is, while the output from the question-answering app using the RAG architecture generates precise answers to the questions asked.
**Log analysis example**
![Log analysis example][4]
**Chat conversation example**
![Chat conversation example][6]
**Sentiment analysis example**
![Sentiment analysis example][7]
**Precise answer retrieval example**
![Precise answer retrieval example][8]
## **Final thoughts**
In this tutorial, we have seen how to build a question-answering app to converse with your private data, using Atlas Vector Search as a vector store, while leveraging the retrieval-augmented generation architecture with LangChain and OpenAI.
Vector stores or vector databases play a crucial role in building LLM applications, and retrieval-augmented generation (RAG) is a significant advancement in the field of AI, particularly in natural language processing. By pairing these together, it is possible to build powerful AI-powered applications for various use-cases.
If you have questions or comments, join us in the [developer forums to continue the conversation!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb482d06c8f1f0674/65398a092c3581197ab3b07f/image3.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5f69c39c41bd7f0a/653a87b2b78a75040aa24c50/table1-largest.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta74135e3423e8b54/653a87c9dc41eb04079b5fee/table2-largest.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta4386370772f61ee/653ac0875887ca040ac36fdb/logQA.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb6e727cbcd4b9e83/653ac09f9d1704040afd185d/chat_convo.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e035f322fe53735/653ac88e5e9b4a0407a4d319/chat_convo-1.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc220de3c036fdda5/653ac0b7e47ab5040a0f43bb/sentiment_analysis.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt828a1fe4be4a6d52/653ac0cf5887ca040ac36fe0/precise_info_retrieval.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "Learn about Vector Search with MongoDB, LLMs, and OpenAI with the Python programming language.",
"contentType": "Tutorial"
} | RAG with Atlas Vector Search, LangChain, and OpenAI | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/attribute-pattern | created | # Building with Patterns: The Attribute Pattern
Welcome back to the Building with Patterns series. Last time we looked
at the Polymorphic Pattern which covers
situations when all documents in a collection are of similar, but not
identical, structure. In this post, we'll take a look at the Attribute
Pattern.
The Attribute Pattern is particularly well suited when:
- We have big documents with many similar fields but there is a subset of fields that share common characteristics and we want to sort or query on that subset of fields, *or*
- The fields we need to sort on are only found in a small subset of documents, *or*
- Both of the above conditions are met within the documents.
For performance reasons, to optimize our search we'd likely need many indexes to account for all of the subsets. Creating all of these indexes could reduce performance. The Attribute Pattern provides a good solution for these cases.
## The Attribute Pattern
Let's think about a collection of movies. The documents will likely have similar fields involved across all of the documents: title, director,
producer, cast, etc. Let's say we want to search on the release date. A
challenge that we face when doing so, is *which* release date? Movies
are often released on different dates in different countries.
``` javascript
{
title: "Star Wars",
director: "George Lucas",
...
release_US: ISODate("1977-05-20T01:00:00+01:00"),
release_France: ISODate("1977-10-19T01:00:00+01:00"),
release_Italy: ISODate("1977-10-20T01:00:00+01:00"),
release_UK: ISODate("1977-12-27T01:00:00+01:00"),
...
}
```
A search for a release date will require looking across many fields at
once. In order to quickly do searches for release dates, we'd need
several indexes on our movies collection:
``` javascript
{release_US: 1}
{release_France: 1}
{release_Italy: 1}
...
```
By using the Attribute Pattern, we can move this subset of information into an array and reduce the indexing needs. We turn this information into an array of key-value pairs:
``` javascript
{
title: "Star Wars",
director: "George Lucas",
...
releases:
{
location: "USA",
date: ISODate("1977-05-20T01:00:00+01:00")
},
{
location: "France",
date: ISODate("1977-10-19T01:00:00+01:00")
},
{
location: "Italy",
date: ISODate("1977-10-20T01:00:00+01:00")
},
{
location: "UK",
date: ISODate("1977-12-27T01:00:00+01:00")
},
...
],
...
}
```
Indexing becomes much more manageable by creating one index on the
elements in the array:
``` javascript
{ "releases.location": 1, "releases.date": 1}
```
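A query that targets a specific release can then use that single index. For example, to find everything released in France during 1977 (field names follow the example above):

``` javascript
db.movies.find({
  releases: {
    $elemMatch: {
      location: "France",
      date: { $gte: ISODate("1977-01-01T00:00:00Z"),
              $lt: ISODate("1978-01-01T00:00:00Z") }
    }
  }
})
```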
By using the Attribute Pattern, we can add organization to our documents for common characteristics and account for rare/unpredictable fields. For example, a movie released in a new or small festival. Further, moving to a key/value convention allows for the use of non-deterministic naming and the easy addition of qualifiers. For example, if our data collection was on bottles of water, our attributes might look something like:
``` javascript
"specs": [
{ k: "volume", v: "500", u: "ml" },
{ k: "volume", v: "12", u: "ounces" }
]
```
Here we break the information out into keys and values, "k" and "v," and add in a third field, "u," which allows for the units of measure to be stored separately.
``` javascript
{"specs.k": 1, "specs.v": 1, "specs.u": 1}
```
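Queries follow the same shape. For instance, assuming the documents live in a `bottles` collection, finding every bottle whose volume is recorded as 500 ml looks like this:

``` javascript
db.bottles.find({
  specs: { $elemMatch: { k: "volume", v: "500", u: "ml" } }
})
```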
## Sample use case
The Attribute Pattern is well suited for schemas that have sets of fields that have the same value type, such as lists of dates. It also works well when working with the characteristics of products. Some products, such as clothing, may have sizes that are expressed in small, medium, or large. Other products in the same collection may be expressed in volume. Yet others may be expressed in physical dimensions or weight.
A customer in the domain of asset management recently deployed their solution using the Attribute Pattern. The customer uses the pattern to store all characteristics of a given asset. These characteristics are seldom common across the assets or are simply difficult to predict at design time. Relational models typically use a complicated design process to express the same idea in the form of [user-defined fields.
While many of the fields in the product catalog are similar, such as name, vendor, manufacturer, country of origin, etc., the specifications, or attributes, of the item may differ. If your application and data access patterns rely on searching through many of these different fields at once, the Attribute Pattern provides a good structure for the data.
## Conclusion
The Attribute Pattern provides for easier indexing the documents, targeting many similar fields per document. By moving this subset of data into a key-value sub-document, we can use non-deterministic field names, add additional qualifiers to the information, and more clearly state the relationship of the original field and value. When we use the Attribute Pattern, we need fewer indexes, our queries become simpler to write, and our queries become faster.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.",
"contentType": "Tutorial"
} | Building with Patterns: The Attribute Pattern | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/kotlin/splash-screen-android | created | # Building Splash Screen Natively, Android 12, Kotlin
> In this article, we will explore and learn how to build a splash screen with SplashScreen API, which was introduced in Android 12.
## What is a Splash Screen?
It is the first view that is shown to a user as soon as you tap on the app icon. If you notice a blank white screen (for
a short moment) after tapping on your favourite app, it means it doesn't have a splash screen.
## Why/When Do I Need It?
Often, the splash screen is seen as a differentiator between normal and professional apps. Some use cases where a splash
screen fits perfectly are:
* When we want to download data before users start using the app.
* If we want to promote app branding and display your logo for a longer period of time, or just have a more immersive
experience that smoothly takes you from the moment you tap on the icon to whatever the app has to offer.
Until now, creating a splash screen was never straightforward and always required some amount of boilerplate code added
to the application, like creating SplashActivity with no view, adding a timer for branding promotion purposes, etc. With
SplashScreen API, all of this is set to go.
## Show Me the Code
### Step 1: Creating a Theme
Even for the new `SplashScreen` API, we need to create a theme, but in the `values-v31` folder, as a few parameters are
supported only in **Android 12**. Therefore, create a folder named `values-v31` under the `res` folder and add `theme.xml`
to it.
And before that, let’s break our splash screen into pieces for simplification.
* Point 1 represents the icon of the screen.
* Point 2 represents the background colour of the splash screen icon.
* Point 3 represents the background colour of the splash screen.
* Point 4 represents the space for branding logo if needed.
Now, let's assign some values to the corresponding keys that describe the different pieces of the splash screen.
```xml
<!-- These <item> entries go inside your splash screen theme's <style> element.
     The attribute-to-value mapping below is an assumption based on the points described above; adjust it to your design. -->
<item name="android:windowSplashScreenBackground">#FFFFFF</item>
<item name="android:windowSplashScreenIconBackgroundColor">#000000</item>
<item name="android:windowSplashScreenAnimatedIcon">@drawable/ic_realm_logo_250</item>
<item name="android:windowSplashScreenBrandingImage">@drawable/relam_horizontal</item>
```
In case you want to use the app icon (or don't have a separate icon) as `windowSplashScreenAnimatedIcon`, you can ignore this
parameter, and by default, it will take your app icon.
> **Tips & Tricks**: If your drawable icon is getting cropped on the splash screen, create an app icon from the image
> and then replace the content of `windowSplashScreenAnimatedIcon` drawable with the `ic_launcher_foreground.xml`.
>
> For `windowSplashScreenBrandingImage`, I couldn't find any alternative. Do share in the comments if you find one.
### Step 2: Add the Theme to Activity
Open AndroidManifest file and add a theme to the activity.
``` xml
<!-- Illustrative: the activity and theme names here are placeholders for your own -->
<activity
    android:name=".MainActivity"
    android:theme="@style/Theme.App.Starting"
    android:exported="true">
    ...
</activity>
```
In my view, there is no need for a new `activity` class for the splash screen, which traditionally was required. And now
we are all set for the new **Android 12** splash screen.
Adding animation to the splash screen is also a piece of cake. Just update the icon drawable with an
`AnimationDrawable` or `AnimatedVectorDrawable`, and set a custom parameter for the duration of the animation.
```xml
<item name="android:windowSplashScreenAnimationDuration">1000</item>
```
Earlier, I mentioned that the new API helps with the initial app data download use case, so let's see that in action.
In the splash screen activity, we can register an `addOnPreDrawListener`, which will help to hold off the first draw on the screen until the data is ready.
``` Kotlin
private lateinit var binding: ActivityMainBinding
private val viewModel: MainViewModel by viewModels()
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
addInitialDataListener()
loadAppView()
}
private fun addInitialDataListener() {
val content: View = findViewById(android.R.id.content)
        // The first draw stays held off; this listener keeps being called until it returns true
content.viewTreeObserver.addOnPreDrawListener {
return@addOnPreDrawListener viewModel.isAppReady.value ?: false
}
}
private fun loadAppView() {
binding = ActivityMainBinding.inflate(layoutInflater)
    setContentView(binding.root)
}
```
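One refinement worth considering (my own addition, not part of the original sample): register the listener as an object so it can remove itself once the data is ready. This assumes the same `viewModel.isAppReady` value used above.

``` Kotlin
private fun addInitialDataListener() {
    val content: View = findViewById(android.R.id.content)
    content.viewTreeObserver.addOnPreDrawListener(
        object : ViewTreeObserver.OnPreDrawListener {
            override fun onPreDraw(): Boolean {
                // Keep suppressing the first draw until the data is ready,
                // then remove the listener so drawing proceeds normally.
                return if (viewModel.isAppReady.value == true) {
                    content.viewTreeObserver.removeOnPreDrawListener(this)
                    true
                } else {
                    false
                }
            }
        }
    )
}
```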
> **Tips & Tricks**: While developing the splash screen, you can return `false` from the `addOnPreDrawListener`, so the next screen is not rendered and you can validate the splash screen easily.
### Summary
I really like the new `SplashScreen` API, which is very clean and easy to use, getting rid of SplashScreen activity
altogether. There are a few things I disliked, though.
1. The splash screen background supports only a single colour. We're still waiting for support for vector drawable backgrounds.
2. There is no design spec available for the icon and branding images, which makes this more of a hit-and-trial game. I still
couldn't fix the branding image in my example.
3. Last but not least, the SplashScreen UI-side feature (`theme.xml`) is only supported on Android 12 and above, so we
can't get rid of the old code for now.
You can also check out the complete working example from my GitHub repo. Note: Just running the code on the device will show
you a white screen. To see the example, close the app from the recent apps tray and then tap on the app icon again.
Github Repo link
Hope this was informative and enjoyed reading it.
| md | {
"tags": [
"Kotlin",
"Realm",
"Android"
],
"pageDescription": "In this article, we will explore and learn how to build a splash screen with SplashScreen API, which was introduced in Android 12.",
"contentType": "Code Example"
} | Building Splash Screen Natively, Android 12, Kotlin | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/migrate-azure-cosmosdb-mongodb-atlas-apache-kafka | created | # Migrate from Azure CosmosDB to MongoDB Atlas Using Apache Kafka
## Overview
When you are the best of breed, you have many imitators. MongoDB is no different in the database world. If you are reading this blog, you are most likely an Azure customer that ended up using CosmosDB.
You needed a database that could handle unstructured data in Azure and eventually realized CosmosDB wasn’t the best fit. Perhaps you found that it is too expensive for your workload, that it is not performing well, or that you simply have no confidence in the platform. You also might have tried using the MongoDB API and found that the queries you wanted to use simply don’t work in CosmosDB because it fails 67% of the compatibility tests.
Whatever the path you took to CosmosDB, know that you can easily migrate your data to MongoDB Atlas while still leveraging the full power of Azure. With MongoDB Atlas in Azure, there are no more failed queries, slow performance, and surprise bills from not optimizing your RDUs. MongoDB Atlas in Azure also gives you access to the latest releases of MongoDB and the flexibility to leverage any of the three cloud providers if your business needs change.
Note: When you originally created your CosmosDB, you were presented with these API options:
If you created your CosmosDB using Azure Cosmos DB API for MongoDB, you can use mongo tools such as mongodump, mongorestore, mongoimport, and mongoexport to move your data. The Azure CosmosDB Connector for Kafka Connect does not work with CosmosDB databases that were created for the Azure Cosmos DB API for MongoDB.
In this blog post, we will cover how to leverage Apache Kafka to move data from Azure CosmosDB Core (Native API) to MongoDB Atlas. While there are many ways to move data, using Kafka will allow you to not only perform a one-time migration but to stream data from CosmosDB to MongoDB. This gives you the opportunity to test your application and compare the experience so that you can make the final application change to MongoDB Atlas when you are ready. The complete example code is available in this GitHub repository.
## Getting started
You’ll need access to an Apache Kafka cluster. There are many options available to you, including Confluent Cloud, or you can deploy your own Apache Kafka via Docker as shown in this blog. Microsoft Azure also includes an event messaging service called Azure Event Hubs. This service provides a Kafka endpoint that can be used as an alternative to running your own Kafka cluster. Azure Event Hubs exposes the same Kafka Connect API, enabling the use of the MongoDB connector and the Azure CosmosDB connector with the Event Hubs service.
If you do not have an existing Kafka deployment, perform these steps. You will need docker installed on your local machine:
```
git clone https://github.com/RWaltersMA/CosmosDB2MongoDB.git
```
Next, build the docker containers.
```
docker-compose up -d --build
```
The docker compose script (docker-compose.yml) will stand up all the components you need, including Apache Kafka and Kafka Connect. Install the CosmosDB and MongoDB connectors.
## Configuring Kafka Connect
Modify the **cosmosdb-source.json** file and replace the placeholder values with your own.
```
{
"name": "cosmosdb-source",
"config": {
"connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
"tasks.max": "1",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"connect.cosmos.task.poll.interval": "100",
"connect.cosmos.connection.endpoint":
"https://****.documents.azure.com:443/",
"connect.cosmos.master.key": **"",**
"connect.cosmos.databasename": **"",**
"connect.cosmos.containers.topicmap": **"#”,**
"connect.cosmos.offset.useLatest": false,
"value.converter.schemas.enable": "false",
"key.converter.schemas.enable": "false"
}
}
```
Modify the **mongo-sink.json** file and replace the placeholder values with your own.
```
{"name": "mongo-sink",
"config": {
"connector.class":"com.mongodb.kafka.connect.MongoSinkConnector",
"tasks.max":"1",
"topics":"",
"connection.uri":"",
"database":"",
"collection":"",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter":"org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false",
"key.converter.schemas.enable": "false"
}}
```
Note: Before we configure Kafka Connect, make sure that your network settings on both CosmosDB and MongoDB Atlas will allow communication between these two services. In CosmosDB, select the Firewall and Virtual Networks. While the easiest configuration is to select “All networks,” you can provide a more secure connection by specifying the IP range from the Firewall setting in the Selected networks option. MongoDB Atlas Network access also needs to be configured to allow remote connections. By default, MongoDB Atlas does not allow any external connections. See Configure IP Access List for more information.
To configure our two connectors, make a REST API call to the Kafka Connect service:
```
curl -X POST -H "Content-Type: application/json" -d @cosmosdb-source.json http://localhost:8083/connectors
curl -X POST -H "Content-Type: application/json" -d @mongodb-sink.json http://localhost:8083/connectors
```
That’s it!
Provided the network and database access was configured properly, data from your CosmosDB should begin to flow into MongoDB Atlas. If you don’t see anything, here are some troubleshooting tips, along with a quick connector status check after the list:
* Try connecting to your MongoDB Atlas cluster using the mongosh tool from the server running the docker container.
* View the docker logs for the Kafka Connect service.
* Verify that you can connect to the CosmosDB instance using the Azure CLI from the server running the docker container.
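Beyond those checks, the Kafka Connect REST API can report the state of each connector and its tasks. A quick way to check, against the local Kafka Connect instance used above, is:

```
curl http://localhost:8083/connectors/cosmosdb-source/status
curl http://localhost:8083/connectors/mongo-sink/status
```

A `FAILED` state in the response usually includes a stack trace that points to the misconfigured setting.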
## Summary
In this post, we explored how to move data from CosmosDB to MongoDB using Apache Kafka. If you’d like to explore this method and other ways to migrate data, check out the five-part blog post on CosmosDB migration from Peerslands, the 2021 MongoDB partner of the year award winner. | md | {
"tags": [
"Atlas",
"JavaScript",
"Kafka"
],
"pageDescription": "Learn how to migrate your data in Azure CosmosDB to MongoDB Atlas using Apache Kafka.",
"contentType": "Tutorial"
} | Migrate from Azure CosmosDB to MongoDB Atlas Using Apache Kafka | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/paginations-why-choose-mongodb | created | # Paginations 2.0: Why I Would Choose MongoDB
# Paginations 2.0: Why I Would Choose MongoDB
I've been writing and designing large-scale, multi-user applications with database backends since 1995, as lead architect for intelligence management systems, text mining, and analytics platforms, and as a consultant working in retail and investment banking, mobile games, connected-car IoT projects, and country-scale document management. It's fair to say I've seen how a lot of applications are put together.
Now it's also reasonable to assume that as I work for MongoDB, I have some bias, but MongoDB isn't my first database, or even my first document database, and so I do have a fairly broad perspective. I'd like to share with you three features of MongoDB that would make it my first choice for almost all large, multi-user database applications.
## The Document Model
The Document model is a fundamental aspect of MongoDB. All databases store records—information about things that have named attributes and values for those attributes. Some attributes might have multiple values. In a tabular database, we break the record into multiple rows with a single scalar value for each attribute and have a way to relate those rows together to access the record.
The difference in a Document database is when we have multiple values for an attribute, we can retain those as part of a single record, storing, accessing, and manipulating them together. We can also group attributes together to compare and refer to them as a group. For example, all the parts of an address can be accessed as a single address field or individually.
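As a small illustration (this example document is mine, not from the article), a single customer record can hold multiple phone numbers and a grouped address, and queries can target the group or any part of it:

```js
db.customers.insertOne({
  name: "A. Customer",
  phones: ["+1-555-0100", "+1-555-0101"],  // a multi-value attribute kept inside the record
  address: {                               // grouped attributes
    street: "1 Main St",
    city: "Springfield",
    postcode: "12345"
  }
});

// Refer to a single part of the group...
db.customers.find({ "address.city": "Springfield" });
// ...or project the whole group as one field
db.customers.find({}, { address: 1 });
```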
Why does this matter? Well, being able to store an entire record co-located on disk and in memory has some huge advantages.
By having these larger, atomic objects to work with, there are some oft-quoted benefits, like making it easier for OO developers and reducing the computational overheads of accessing the whole record, but this misses a third, even more important benefit.
With the correct schema, documents reduce each database write operation to single atomic changes of one piece of data. This has two huge and related benefits.
By only requiring one piece of data to be examined for its current state and changed to a new state at a time, the period of time where the database state is unresolved is reduced to almost nothing. Effectively, there is no interaction between multiple writes to the database and none have to wait for another to complete, at least not beyond a single change to a single document.
If we have to use traditional transactions, whether in an RDBMS or MongoDB, to perform a change then all records concerned remain effectively locked until the transaction is complete. This greatly widens the window for contention and delay. Using the document model instead, you can remove all contention in your database and achieve far higher 'transactional' throughput in a multi-user system.
The second part of this is that when each write to the database can be treated as an independent operation, it makes it easy to horizontally scale the database to support large workloads as the state of a document on one server has no impact on your ability to change a document on another. Every operation can be parallelised.
Doing this does require you to design your schema correctly, though. Document databases are far from schemaless (a term MongoDB has not used for many years). In truth, it makes schema design even more important than in an RDBMS.
## Highly Available as Standard
The second reason I would choose to use MongoDB is that high-availability is at the heart of the database. MongoDB is designed so that a server can be taken offline instantly, at any time and there is no loss of service or data. This is absolutely fundamental to how all of MongoDB is designed. It doesn't rely on specialist hardware, third-party software, or add-ons. It allows for replacement of servers, operating systems, and even database versions invisibly to the end user, and even mostly to the developer. This goes equally for Atlas, where MongoDB can provide a multi-cloud database service at any scale that is resilient to the loss of an entire cloud provider, whether it’s Azure, Google, or Amazon. This level of uptime is unprecedented.
So, if I plan to develop a large, multi-user application I just want to know the database will always be there, zero downtime, zero data loss, end of story.
## Smart Update Capability
The third reason I would choose MongoDB is possibly the most surprising. Not all document databases are the same, and allow you to realise all the benefits of a document versus relational model, some are simply JSON stores or Key/Value stores where the value is some form of document.
MongoDB has powerful, specialised update operators capable of doing more than simply replacing a document or a value in the database. With MongoDB, you can, as part of a single atomic operation, verify the state of values in the document, compute the new value for any field based on it and any other fields, sort and truncate arrays when adding to them, and, should you require it, automatically create a new document rather than modify an existing one.
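As a sketch of what a single update like that can look like (my own example using standard update operators, not code from any particular application): it checks the current balance, computes the new one, and maintains a sorted, truncated array of recent transactions, all in one atomic operation.

```js
db.accounts.updateOne(
  { _id: "acct-42", balance: { $gte: 50 } },   // verify the current state first
  {
    $inc: { balance: -50 },                    // compute the new value from the old one
    $push: {
      recentTransactions: {
        $each: [{ amount: -50, at: new Date() }],
        $sort: { at: -1 },                     // keep the array sorted...
        $slice: 10                             // ...and truncated to the latest 10 entries
      }
    }
  },
  { upsert: false }                            // set to true to create the document if it is absent
);
```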
It is this "smart" update capability that makes MongoDB capable of being a principal, "transactional" database in large, multi-user systems versus a simple store of document shaped data.
These three features, at the heart of an end-to-end data platform, are what genuinely make MongoDB my personal first choice when I want to build a system to support many users with a snappy user experience, 24 hours a day, 365 days a year.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Distinguished Engineer and 25 year NoSQL veteran John Page explains in 5 minutes why MongoDB would be his first choice for building a multi-user application.",
"contentType": "Article"
} | Paginations 2.0: Why I Would Choose MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/bson-data-types-objectid | created | # Quick Start: BSON Data Types - ObjectId
In the database world, it is frequently important to have unique identifiers associated with a record. In a legacy, tabular database, these unique identifiers are often used as primary keys. In a modern database, such as MongoDB, we need a unique identifier in an `_id` field as a primary key as well. MongoDB provides an automatic unique identifier for the `_id` field in the form of an `ObjectId` data type.
For those that are familiar with MongoDB Documents, you've likely come across the `ObjectId` data type in the `_id` field. For those unfamiliar with MongoDB Documents, the `ObjectId` data type is automatically generated as a unique document identifier if no other identifier is provided. But what is an `ObjectId` field? What makes them unique? This post will unveil some of the magic behind the BSON ObjectId data type. First, though, what is BSON?
## Binary JSON (BSON)
Many programming languages have JavaScript Object Notation (JSON) support or similar data structures. MongoDB uses JSON documents to store records. However, behind the scenes, MongoDB represents these documents in a binary-encoded format called BSON. BSON provides additional data types and ordered fields to allow for efficient support across a variety of languages. One of these additional data types is ObjectId.
## Makeup of an ObjectId
Let's start with an examination of what goes into an ObjectId. If we take a look at the construction of the ObjectId value, in its current implementation, it is a 12-byte hexadecimal value. This 12-byte configuration is smaller than a typical universally unique identifier (UUID), which is, typically, 128-bits. Beginning in MongoDB 3.4, an ObjectId consists of the following values:
- 4-byte value representing the seconds since the Unix epoch,
- 5-byte random value, and
- 3-byte counter, starting with a random value.
With this makeup, ObjectIds are *likely* to be globally unique and unique per collection. Therefore, they make a good candidate for the unique requirement of the `_id` field. While the `_id` in a collection can be an auto-assigned `ObjectId`, it can be user-defined as well, as long as it is unique within a collection. Remember that if you aren't using a MongoDB generated `ObjectId` for the `_id` field, the application creating the document will have to ensure the value is unique.
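You can see this structure for yourself in the shell. For example (your generated values will differ):

``` js
var id = ObjectId()   // generate a new ObjectId
id                    // prints ObjectId("..."), 24 hex characters = 12 bytes
id.getTimestamp()     // the first 4 bytes, decoded as the creation time
```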
## History of ObjectId
The makeup of the ObjectId has changed over time. Through version 3.2, it consisted of the following values:
- 4-byte value representing the seconds since the Unix epoch,
- 3-byte machine identifier,
- 2-byte process id, and
- 3-byte counter, starting with a random value.
The change from including a machine-specific identifier and process id to a random value increased the likelihood that the `ObjectId` would be globally unique. These machine-specific 5-bytes of information became less likely to be random with the prevalence of Virtual Machines (VMs) that had the same MAC addresses and processes that started in the same order. While it still isn't guaranteed, removing machine-specific information from the `ObjectId` increases the chances that the same machine won't generate the same `ObjectId`.
## ObjectId Odds of Uniqueness
The randomness of the last eight bytes in the current implementation makes the likelihood of the same ObjectId being created pretty small. How small depends on the number of inserts per second that your application does. Let's do some quick math and look at the odds.
If we do one insert per second, the first four bytes of the ObjectId would change so we can't have a duplicate ObjectId. What are the odds though when multiple documents are inserted in the same second that *two* ObjectIds are the same? Since there are *eight* bits in a byte, and *eight* random bytes in our ObjectId (5 random + 3 random starting values), the denominator in our odds ratio would be 2^(8\*8), or 1.84467441x10^19. For those that have forgotten scientific notation, that's 18,446,744,100,000,000,000. Yes, that's correct, 18 quintillion and change. As a bit of perspective, the odds of being struck by lightning in the U.S. in a given year are 1 in 700,000, according to National Geographic. The odds of winning the Powerball Lottery jackpot are 1 in 292,201,338. The numerator in our odds equation is the number of documents per second. Even in a write-heavy system with 250 million writes/second, the odds are, while not zero, pretty good against duplicate ObjectIds being generated.
## Wrap Up
>Get started exploring BSON types, like ObjectId, with MongoDB Atlas today!
ObjectId is one data type that is part of the BSON Specification that MongoDB uses for data storage. It is a binary representation of JSON and includes other data types beyond those defined in JSON. It is a powerful data type that is incredibly useful as a unique identifier in MongoDB Documents. | md | {
"tags": [
"MongoDB",
"JavaScript"
],
"pageDescription": "MongoDB provides an automatic unique identifier for the _id field in the form of an ObjectId data type.",
"contentType": "Quickstart"
} | Quick Start: BSON Data Types - ObjectId | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/hidden-indexes | created | # Optimize and Tune MongoDB Performance with Hidden Indexes
MongoDB 4.4 is the biggest release of MongoDB to date and is available in beta right now. You can try it out in MongoDB Atlas or download the development release. There is so much new stuff to talk about, ranging from new features like custom aggregation expressions to improvements to existing functionality like refinable shard keys, and much more.
In this post, we are going to look at a new feature coming to MongoDB 4.4 that will help you better optimize and fine-tune the performance of your queries as your application evolves called hidden indexes.
Hidden indexes, as the name implies, allows you to hide an index from the query planner without removing it, allowing you to assess the impact of not using that specific index.
## Prerequisites
For this tutorial you'll need:
- MongoDB 4.4
## Hidden Indexes in MongoDB 4.4
Most database technologies, and MongoDB is no different, rely on indexes to speed up performance and efficiently execute queries. Without an index, MongoDB would have to perform a collection scan, meaning scanning every document in a collection to filter out the ones the query asked for.
With an index, and often times with a correct index, this process is greatly sped up. But choosing the right data to index is an art and a science of its own. If you'd like to learn a bit more about indexing best practices, check out this blog post. Building, maintaining, and dropping indexes can be resource-intensive and time-consuming, especially if you're working with a large dataset.
Hidden indexes is a new feature coming to MongoDB 4.4 that allows you to easily measure the impact an index has on your queries without actually deleting it and having to rebuild it if you find that the index is in fact required and improves performance.
The awesome thing about hidden indexes is that besides being hidden from the query planner, meaning they won't be used in the execution of the query, they behave exactly like a normal index would. This means that hidden indexes are still updated and maintained even while hidden (but this also means that a hidden index continues to consume disk space and memory so if you find that hiding an index does not have an impact on performance, consider dropping it), hidden unique indexes still apply the unique constraint to documents, and hidden TTL indexes still continue to expire documents.
There are some limitations on hidden indexes. The first is that you cannot hide the default `_id` index. The second is that you cannot perform a cursor.hint() on a hidden index to force MongoDB to use the hidden index.
## Creating Hidden Indexes in MongoDB
To create a hidden index in MongoDB 4.4 you simply pass a `hidden` parameter and set the value to `true` within the `db.collection.createIndex()` options argument. For a more concrete example, let's assume we have a `movies` collection that stores documents on individual films. The documents in this collection may look something like this:
```
{
"_id": ObjectId("573a13b2f29313caabd3ac0d"),
"title": "Toy Story 3",
"plot": "The toys are mistakenly delivered to a day-care center instead of the attic right before Andy leaves for college, and it's up to Woody to convince the other toys that they weren't abandoned and to return home.",
"genres": "Animation", "Adventure", "Comedy"],
"runtime": 103,
"metacritic": 92,
"rated": "G",
"cast": ["Tom Hanks", "Tim Allen", "Joan Cusack", "Ned Beatty"],
"directors": ["Lee Unkrich"],
"poster": "https://m.media-amazon.com/images/M/MV5BMTgxOTY4Mjc0MF5BMl5BanBnXkFtZTcwNTA4MDQyMw@@._V1_SY1000_SX677_AL_.jpg",
"year": 2010,
"type": "movie"
}
```
Now let's assume we wanted to create a brand new index on the title of the movie and we wanted it to be hidden by default. To do this, we'd execute the following command:
``` bash
db.movies.createIndex( { title: 1 }, { hidden: true })
```
This command will create a new index that will be hidden by default. This means that if we were to execute a query such as `db.movies.find({ "title" : "Toy Story 3" })` the query planner would perform a collection scan. Using MongoDB Compass, I'll confirm that that's what happens.
From the screenshot, we can see that `collscan` was used and that the actual query execution time took 8ms. If we navigate to the Indexes tab in MongoDB Compass, we can also confirm that we do have a `title_1` index created, that's consuming 315.4kb, and has been used 0 times.
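We can also confirm the hidden state from the shell; a hidden index reports `hidden: true` in its index specification (abbreviated output, assuming only the two indexes above exist):

``` bash
db.movies.getIndexes()
[
  { "v" : 2, "key" : { "_id" : 1 }, "name" : "_id_" },
  { "v" : 2, "key" : { "title" : 1 }, "name" : "title_1", "hidden" : true }
]
```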
This is the expected behavior as we created our index as hidden from the get-go. Next, we'll learn how to unhide the index we created and see if we get improved performance.
## Unhiding Indexes in MongoDB 4.4
To measure the impact an index has on our query performance, we'll unhide it. We have a couple of different options on how to accomplish this. We can, of course, use `db.runCommand()` in conjunction with `collMod`, but we also have a number of mongo shell helpers that I think are much easier and less verbose to work with. In this section, we'll use the latter.
To unhide an index, we can use the `db.collection.unhideIndex()` method passing in either the name of the index, or the index keys. Let's unhide our title index using the index keys. To do this we'll execute the following command:
``` bash
db.movies.unhideIndex({title: 1})
```
Our response will look like this:
If we were to execute our query to find **Toy Story 3** in MongoDB Compass now and view the Explain Plan, we'd see that instead of a `collscan` or collection scan our query will now use the `ixscan` or index scan, meaning it's going to use the index. We get the same results back, but now our actual query execution time is 0ms.
Additionally, if we look at our Indexes tab, we'll see that our `title_1` index was used one time.
## Working with Existing Indexes in MongoDB 4.4
When you create an index in MongoDB 4.4, by default it will be created with the `hidden` property set to false, which can be overwritten to create a hidden index from the get-go as we did in this tutorial. But what about existing indexes? Can you hide and unhide those? You betcha!
Just like the `db.collection.unhideIndex()` helper method, there is a `db.collection.hideIndex()` helper method, and it allows you to hide an existing index via its name or index keys. Or you can use the `db.runCommand()` in conjunction with `collMod`. Let's hide our title index, this time using the `db.runCommand()`.
``` bash
db.runCommand({
collMod : "movies"
index: {
keyPattern: {title:1},
hidden: true
}
})
```
Executing this command will once again hide our `title_1` index from the query planner so when we execute queries and search for movies by their title, MongoDB will perform the much slower `collscan` or collection scan.
## Conclusion
Hidden indexes in MongoDB 4.4 make it faster and more efficient for you to tune performance as your application evolves. Getting indexes right is one-half art, one-half science, and with hidden indexes you can make better and more informed decisions much faster.
Regardless of whether you use the hidden indexes feature or not, please be sure to create and use indexes in your collections as they will have a significant impact on your query performance. Check out the free M201 MongoDB University course to learn more about MongoDB performance and indexes.
>**Safe Harbor Statement**
>
>The development, release, and timing of any features or functionality
>described for MongoDB products remains at MongoDB's sole discretion.
>This information is merely intended to outline our general product
>direction and it should not be relied on in making a purchasing decision
>nor is this a commitment, promise or legal obligation to deliver any
>material, code, or functionality. Except as required by law, we
>undertake no obligation to update any forward-looking statements to
>reflect events or circumstances after the date of such statements.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to optimize and fine tune your MongoDB performance with hidden indexes.",
"contentType": "Tutorial"
} | Optimize and Tune MongoDB Performance with Hidden Indexes | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/designing-developing-2d-game-levels-unity-csharp | created | # Designing and Developing 2D Game Levels with Unity and C#
If you've been keeping up with the game development series that me (Nic Raboy) and Adrienne Tacke have been creating, you've probably seen how to create a user profile store for a game and move a player around on the screen with Unity.
To continue with the series, which is also being streamed on Twitch, we're at a point where we need to worry about designing a level for gameplay rather than just exploring a blank screen.
In this tutorial, we're going to see how to create a level, which can also be referred to as a map or world, using simple C# and the Unity Tilemap Editor.
To get a better idea of what we plan to accomplish, take a look at the following animated image.
You'll notice that we're moving a non-animated sprite around the screen. You might think at first glance that the level is one big image, but it is actually many tiles placed carefully within Unity. The edge tiles have collision boundaries to prevent the player from moving off the screen.
If you're looking at the above animated image and wondering where MongoDB fits into this, the short answer is that it doesn't. The game that Adrienne and I are building will leverage MongoDB, but some parts of the game development process such as level design won't need a database. We're attempting to tell a story with this series.
## Using the Unity Tilemap Editor to Draw 2D Game Levels
There are many ways to create a level for a game, but as previously mentioned, we're going to be using tilemaps. Unity makes this easy for us because the software provides a paint-like experience where we can draw tiles on the canvas using any available images that we load into the project.
For this example, we're going to use the following texture sheet:
Rather than creating a new project and repeating previously explained steps, we're going to continue where we left off from the previous tutorial. The **doordash-level.png** file should be placed in the **Assets/Textures** directory of the project.
While we won't be exploring animations in this particular tutorial, if you want the spritesheet used in the animated image, you can download it below:
The **plummie.png** file should be added to the project's **Assets/Textures** directory. To learn how to animate the spritesheet, take a look at a previous tutorial I wrote on the topic.
Inside the Unity editor, click on the **doordash-level.png** file that was added. We're going to want to do a few things before we can work with each tile as independent images.
- Change the sprite mode to **Multiple**.
- Define the actual **Pixels Per Unit** of the tiles in the texture packed image.
- Split the tiles using the **Sprite Editor**.
In the above image, you might notice that the **Pixels Per Unit** value is **255** while the actual tiles are **256**. By defining the tiles as one pixel smaller, we're attempting to remove any border between the tile images that might make the level look weird due to padding.
When using the **Sprite Editor**, make sure to slice the image by the cell size using the correct width and height dimensions of the tiles. For clarity, the tiles that I attached are 256x256 in resolution.
If you plan to use the spritesheet for the Plummie character, make sure to repeat the same steps for that spritesheet as well. It is important we have access to the individual images in a spritesheet rather than treating all the images as one single image.
With the images ready for use, let's focus on drawing the level.
Within the Unity menu, choose **Component -> Tilemap -> Tilemap** to add a new tilemap and parent grid object to the scene. To get the best results, we're going to want to layer multiple tilemaps on our scene. Right click on the **Grid** object in the scene and choose **2D Object -> Tilemap**. You'll want three tilemaps in total for this particular example.
We want multiple tilemap layers because it will add depth to the scene and more control. For example, one layer will represent the furthest part of our background, maybe dirt or floors. Another layer will represent any kind of decoration that will sit on top of the floors, say, for example, arrows. Then, the final tilemap layer might represent our walls or obstacles.
To make sure the layers get rendered in the correct order, the **Tilemap Renderer** for each tilemap should have a properly defined **Sorting Layer**. If continuing from the previous tutorial, you'll remember we had created a **Background** layer and a **GameObject** layer. These can be used, or you can continue to create and assign more. Just remember that the render order of the sorting layers is top to bottom, the opposite of what you'd experience in photo editing software like Adobe Photoshop.
The next step is to open the **Tile Palette** window within Unity. From the menu, choose **Window -> 2D -> Tile Palette**. The palette will be empty to start, but you'll want to drag your images either one at a time or multiple at a time into the window.
With images in the tile palette, they can be drawn on the scene like painting on a canvas. First click on the tile image you want to use and then choose the painting tool you want to use. You can paint on a tile-by-tile basis or paint multiple tiles at a time.
It is important that you have the proper **Active Tilemap** selected when drawing your tiles. This is important because of the order that each tile renders and any collision boundaries we add later.
Take a look at the following possible result:
Remember, we're designing a level, so this means that your tiles can exceed the view of the camera. Use your tiles to make your level as big and extravagant as you'd like.
Assuming we kept the same logic from the previous tutorial, Getting Started with Unity for Creating a 2D Game, we can move our player around in the level, but the player can exceed the screen. The player may still be a white box or the Plummie sprite depending on what you've chosen to do. Regardless, we want to make sure our layer that represents the boundaries acts as a boundary with collision.
## Adding Collision Boundaries to Specific Tiles and Regions on a Level
Adding collision boundaries to tiles in a tilemap is quite easy and doesn't require more than a few clicks.
Select the tilemap that represents our walls or boundaries and choose to **Add Component** in the inspector. You'll want to add both a **Tilemap Collider 2D** as well as a **Rigidbody 2D**. The **Body Type** of the **Rigidbody 2D** should be static so that gravity and other physics-related events are not applied.
After doing these short steps, the player should no longer be able to go beyond the tiles for this layer.
We can improve things!
Right now, every tile that is part of our tilemap with the **Tilemap Collider 2D** and **Rigidbody 2D** component has a full collision area around the tile. This is true even if the tiles are adjacent and parts of the tile can never be reached by the player. Imagine having four tiles creating a large square. Of the possible 16 collision regions, only eight can ever be interacted with. We're going to change this, which will greatly improve performance.
On the tilemap with the **Tilemap Collider 2D** and **Rigidbody 2D** components, add a **Composite Collider 2D** component. After adding, enable the **Used By Composite** field in the **Tilemap Collider 2D** component.
Just like that, there are fewer regions that are tracking collisions, which will boost performance.
## Following the Player While Traversing the 2D Game Level using C#
As of right now, we have our player, which might be a Plummie or might be a white pixel, and we have our carefully crafted level made from tiles. The problem is that our camera can only fit so much into view, which probably isn't the full scope of our level.
What we can do as part of the gameplay experience is have the camera follow the player as it traverses the level. We can do this with C#.
Select the **Main Camera** within the current scene. We're going to want to add a new script component.
Within the C# script that you'll need to attach, include the following code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class CameraPosition : MonoBehaviour
{
public Transform player;
void Start() {}
void Update()
{
transform.position = new Vector3(player.position.x, 0, -10);
}
}
```
In the above code, we are looking at the transform of another unrelated game object. We'll attach that game object in just a moment. Every time the frame updates, the position of the camera is updated to match the position of the player in the x-axis. In this example, we are fixing the y-axis and z-axis so we are only following the player in the left and right direction. Depending on how you've created your level, this might need to change.
Remember, this script should be attached to the **Main Camera** or whatever your camera is for the scene.
Remember the `player` variable in the script? You'll find it in the inspector for the camera. Drag your player object from the project hierarchy into this field and that will be the object that is followed by the camera.
Running the game will result in the camera being centered on the player. As the player moves through the tilemap level, so will the camera. If the player tries to collide with any of the tiles that have collision boundaries, motion will stop.
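If the hard snap to the player's position feels too rigid for your level, one common variation (my own tweak, not part of the series code) is to ease the camera toward the player every frame:

``` csharp
void LateUpdate()
{
    // Ease toward the player's x position instead of snapping to it
    Vector3 target = new Vector3(player.position.x, 0, -10);
    transform.position = Vector3.Lerp(transform.position, target, 5.0f * Time.deltaTime);
}
```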
## Conclusion
You just saw how to create a 2D world in Unity using tile images and the Unity Tilemap Editor. This is a very powerful tool because you don't have to create massive images to represent worlds and you don't have to worry about creating worlds with massive amounts of game objects.
The assets we used in this tutorial are based around a series that myself (Nic Raboy) and Adrienne Tacke are building titled Plummeting People. This series is on the topic of building a multiplayer game with Unity that leverages MongoDB. While this particular tutorial didn't include MongoDB, plenty of other tutorials in the series will.
If you feel like this tutorial skipped a few steps, it did. I encourage you to read through some of the previous tutorials in the series to catch up.
If you want to build Plummeting People with us, follow us on Twitch where we work toward building it live, every other week.
| md | {
"tags": [
"C#",
"Unity"
],
"pageDescription": "Learn how to use Unity tilemaps to create complex 2D worlds for your game.",
"contentType": "Tutorial"
} | Designing and Developing 2D Game Levels with Unity and C# | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/generate-mql-with-mongosh-and-openai | created | # Generating MQL Shell Commands Using OpenAI and New mongosh Shell
# Generating MQL Shell Commands Using OpenAI and New mongosh Shell
OpenAI is a fascinating and growing AI platform sponsored by Microsoft, allowing you to digest text cleverly to produce AI content with stunning results considering how small the “learning data set” you actually provide is.
MongoDB’s Query Language (MQL) is an intuitive language for developers to interact with MongoDB Documents. For this reason, I wanted to put OpenAI to the test of quickly learning the MongoDB language and using its overall knowledge to build queries from simple sentences. The results were more than satisfying to me. Github is already working on a project called Github copilot which uses the same OpenAI engine to code.
In this article, I will show you my experiment, including the game-changing capabilities of the new MongoDB Shell (`mongosh`) which can extend scripting with npm modules integrations.
## What is OpenAI and How Do I Get Access to It?
OpenAI is a unique project aiming to provide an API for many AI tasks built mostly on Natural Language Processing today. You can read more about their projects in this blog.
There are a variety of examples for its text processing capabilities.
If you want to use OpenAI, you will need to get a trial API key first by joining the waitlist on their main page. Once you are approved to get an API key, you will be granted about $18 for three months of testing. Each call in OpenAI is billed and this is something to consider when using in production. For our purposes, $18 is more than enough to test the most expensive engine named “davinci.”
Once you get the API key, you can use various clients to run their AI API from your script/application.
Since we will be using the new `mongosh` shell, I have used the JS API.
## Preparing the mongosh to Use OpenAI
First, we need to install the new shell, if you haven’t done it so far. On my Mac laptop, I just issued:
``` bash
brew install mongosh
```
Windows users should download the MSI installer from our download page and follow the Windows instructions.
Once my mongosh is ready, I can start using it, but before I do so, let’s install OpenAI JS, which we will import in the shell later on:
``` bash
$ mkdir openai-test
$ cd openai-test
Openai-test $ npm i openai-api
```
I’ve decided to use the Questions and Answers pattern, in the form of `Q: ` and `A: `, provided to the text-to-command completion API as the learning material about MongoDB queries for the AI engine. To better feed it, I placed the training questions and answers in a file called `AI-input.txt`, with the following content:
```
Q: What is the query syntax?
A: db.collection.find(, , )
Q: Query users collection for username with value "boy"
A: db.users.find({"username" : "boy"})
Q: Query users collection for username with value "girl"A: db.users.find({"username" : "girl"})
Q: Query users collection for username with age bigger than 16
A: db.users.find({"age" : {$gt : 16}})n;
Q: Query author collection for username with value "boy"
A: db.authors.find({"username" : "boy"})
Q:Query author collection for age lower than 7
A: db.authors.find({"age" : {$lt : 7}});
Q:insert a json document into collection authors with username equal to "girl"
A: db.authors.insert({"username" : "girl"}, {"age" : 10, "gender" : "female"})
Q: insert many documents into users collections
A: db.users.insertMany([{username : "john doe"}, {username : "joe doe"}]);
Q: The following aggregation uses $geoNear to find documents with a location at most 2 meters from the center [ -73.99279 , 40.719296 ] and a category equal to Parks.
A: db.places.aggregate([{ $geoNear: {near: { type: "Point", coordinates: [ -73.99279 , 40.719296 ] },distanceField: "dist.calculated",maxDistance: 2, query: { category: "Parks" },includeLocs: "dist.location", spherical: true}}])
Q: How to build atlas $search text query?
A: db.collection.aggregate({$search: {"index": "text": {"query": "","path": "", "fuzzy": ,"score": } }})
Q: What is the aggregate syntax?
A: db.collection.aggregate([,,], , );
Q: aggregate users collection to calculate salary sum per user
A: db.users.aggregate([{$group : { _id : "$username" , salary_sum : { $sum : "$salary" }}}]);
Q: aggregate person collection to calculate salary sum per person
A: db.persons.aggregate([{$group : { _id : "$person" , salary_sum : { $sum : "$salary" }}}]);
Q: Lookup users and orders collection
A: db.users.aggregate([{$lookup : {from: 'orders', localField : "_id", foreignField : "userId", as : "result" }} ]);
Q: What is the update syntax?
A:db.collection.update(query, update, options)
Q: How to edit collection sports where sportname is 'football' and match is 'england vs portugal' to score of '3-3' and date to current date?
A: db.sports.update({ sportname: "football", match: "england vs portugal"} , {$set : {score: "3-3" , date : new Date()}} })
Q: Query and atomically update collection zoo where animal is "bear" with a counter increment on eat field, if the data does not exist user upsert
A: db.zoo.findOneAndUpdate({animal : "bear"}, {$inc: { eat : 1 }} , {upsert : true})
```
We will use this file later in our code.
This way, the completion will be based on a similar pattern.
### Prepare Your Atlas Cluster
[MongoDB Atlas, the database-as-a-platform service, is a great way to have a running cluster in seconds with a sample dataset already there for our test. To prepare it, please use the following steps:
1. Create an Atlas account (if you don’t have one already) and use/start a cluster. For detailed steps, follow this documentation.
2. Load the sample data set.
3. Get your connection string.
Use the copied connection string, providing it to the `mongosh` binary to connect to the pre-populated Atlas cluster with sample data. Then, switch to `sample_restaurants`
database.
``` js
mongosh "mongodb+srv://:
@/sample_restaurants"
Using Mongosh : X.X.X
Using MongoDB: X.X.X
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
ATLAS atlas-ugld61-shard-0 [primary]> use sample_restaurants;
```
## Using OpenAI Inside the mongosh Shell
Now, we can build our `textToMql` function by pasting it into the `mongosh`. The function will receive a text sentence, use our generated OpenAI API key, and will try to return the best MQL command for it:
``` js
async function textToMql(query){
const OpenAI = require('openai-api');
const openai = new OpenAI(""); // add your OpenAI API key here
const fs = require('fs');
var data = await fs.promises.readFile('AI-input.txt', 'utf8');
const learningPath = data;
var aiInput = learningPath + "Q:" + query + "\nA:";
const gptResponse = await openai.complete({
engine: 'davinci',
prompt: aiInput,
"temperature": 0.3,
"max_tokens": 400,
"top_p": 1,
"frequency_penalty": 0.2,
"presence_penalty": 0,
"stop": ["\n"]
});
console.log(gptResponse.data.choices[0].text);
}
```
In the above function, we first load the OpenAI npm module and initiate a client with the relevant API key from OpenAI.
``` js
const OpenAI = require('openai-api');
const openai = new OpenAI("");
const fs = require('fs');
```
The new shell allows us to import built-in and external [modules to produce an unlimited flexibility with our scripts.
Then, we read the learning data from our `AI-input.txt` file. Finally we add our `Q: ` input to the end followed by the `A:` value which tells the engine we expect an answer based on the provided learningPath and our query.
This data will go over to an OpenAI API call:
``` js
const gptResponse = await openai.complete({
engine: 'davinci',
prompt: aiInput,
"temperature": 0.3,
"max_tokens": 400,
"top_p": 1,
"frequency_penalty": 0.2,
"presence_penalty": 0,
"stop": "\n"]
});
```
The call performs a completion API and gets the entire initial text as a `prompt` and receives some additional parameters, which I will elaborate on:
* `engine`: OpenAI supports a few AI engines which differ in quality and purpose as a tradeoff for pricing. The “davinci” engine is the most sophisticated one, according to OpenAI, and therefore is the most expensive one in terms of billing consumption.
* `temperature`: How creative will the AI be compared to the input we gave it? It can be between 0-1. 0.3 felt like a down-to-earth value, but you can play with it.
* `Max_tokens`: Describes the amount of data that will be returned.
* `Stop`: List of characters that will stop the engine from producing further content. Since we need to produce MQL statements, it will be one line based and “\n” is a stop character.
Once the content is returned, we parse the returned JSON and print it with `console.log`.
### Let's Put OpenAI to the Test with MQL
Once we have our function in place, we can try to produce a simple query to test it:
``` js
Atlas atlas-ugld61-shard-0 [primary] sample_restaurants> textToMql("query all restaurants where cuisine is American and name starts with 'Ri'")
db.restaurants.find({cuisine : "American", name : /^Ri/})
Atlas atlas-ugld61-shard-0 [primary] sample_restaurants> db.restaurants.find({cuisine : "American", name : /^Ri/})
[
{
_id: ObjectId("5eb3d668b31de5d588f4292a"),
address: {
building: '2780',
coord: [ -73.98241999999999, 40.579505 ],
street: 'Stillwell Avenue',
zipcode: '11224'
},
borough: 'Brooklyn',
cuisine: 'American',
grades: [
{
date: ISODate("2014-06-10T00:00:00.000Z"),
grade: 'A',
score: 5
},
{
date: ISODate("2013-06-05T00:00:00.000Z"),
grade: 'A',
score: 7
},
{
date: ISODate("2012-04-13T00:00:00.000Z"),
grade: 'A',
score: 12
},
{
date: ISODate("2011-10-12T00:00:00.000Z"),
grade: 'A',
score: 12
}
],
name: 'Riviera Caterer',
restaurant_id: '40356018'
}
...
```
Nice! We never taught the engine about the `restaurants` collection or how to filter with [regex operators but it still made the correct AI decisions.
Let's do something more creative.
``` js
Atlas atlas-ugld61-shard-0 primary] sample_restaurants> textToMql("Generate an insert many command with random fruit names and their weight")
db.fruits.insertMany([{name: "apple", weight: 10}, {name: "banana", weight: 5}, {name: "grapes", weight: 15}])
Atlas atlas-ugld61-shard-0 [primary]sample_restaurants> db.fruits.insertMany([{name: "apple", weight: 10}, {name: "banana", weight: 5}, {name: "grapes", weight: 15}])
{
acknowledged: true,
insertedIds: {
'0': ObjectId("60e55621dc4197f07a26f5e1"),
'1': ObjectId("60e55621dc4197f07a26f5e2"),
'2': ObjectId("60e55621dc4197f07a26f5e3")
}
}
```
Okay, now let's put it to the ultimate test: [aggregations!
``` js
Atlas atlas-ugld61-shard-0 primary] sample_restaurants> use sample_mflix;
Atlas atlas-ugld61-shard-0 [primary] sample_mflix> textToMql("Aggregate the count of movies per year (sum : 1) on collection movies")
db.movies.aggregate([{$group : { _id : "$year", count : { $sum : 1 }}}]);
Atlas atlas-ugld61-shard-0 [primary] sample_mflix> db.movies.aggregate([{$group : { _id : "$year", count : { $sum : 1 }}}]);
[
{ _id: 1967, count: 107 },
{ _id: 1986, count: 206 },
{ _id: '2006è2012', count: 2 },
{ _id: 2004, count: 741 },
{ _id: 1918, count: 1 },
{ _id: 1991, count: 252 },
{ _id: 1968, count: 112 },
{ _id: 1990, count: 244 },
{ _id: 1933, count: 27 },
{ _id: 1997, count: 458 },
{ _id: 1957, count: 89 },
{ _id: 1931, count: 24 },
{ _id: 1925, count: 13 },
{ _id: 1948, count: 70 },
{ _id: 1922, count: 7 },
{ _id: '2005è', count: 2 },
{ _id: 1975, count: 112 },
{ _id: 1999, count: 542 },
{ _id: 2002, count: 655 },
{ _id: 2015, count: 484 }
]
```
Now *that* is the AI power of MongoDB pipelines!
## DEMO
[asciicast](https://asciinema.org/a/424297)
## Wrap-Up
MongoDB's new shell allows us to script with enormous power like never before by utilizing npm external packages. Together with the power of OpenAI sophisticated AI patterns, we were able to teach the shell how to prompt text to accurate complex MongoDB commands, and with further learning and tuning, we can probably get much better results.
Try this today using the new MongoDB shell. | md | {
"tags": [
"MongoDB",
"AI"
],
"pageDescription": "Learn how new mongosh external modules can be used to generate MQL language via OpenAI engine. Transform simple text sentences into sophisticated queries. ",
"contentType": "Article"
} | Generating MQL Shell Commands Using OpenAI and New mongosh Shell | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/introduction-realm-sdk-android | created | # Introduction to the Realm SDK for Android
This is a beginner article where we introduce you to the Realm Android SDK, dive through its features, and illustrate development of the process with a demo application to get you started quickly.
In this article, you will learn how to set up an Android application with the Realm Android SDK, write basic queries to manipulate data, and you'll receive an introduction to Realm Studio, a tool designed to view the local Realm database.
> Pre-Requisites: You have created at least one app using Android Studio.

> **What is Realm?**
>
> Realm is an object database that is simple to embed in your mobile app. Realm is a developer-friendly alternative to mobile databases such as SQLite and CoreData.
Before we start, create an Android application. Feel free to skip the step if you already have one.
**Step 0**: Open Android Studio and then select Create New Project. For more information, you can visit the official Android website.
Now, let's get started on how to add the Realm SDK to your application.
**Step 1**: Add the gradle dependency to the **project** level **build.gradle** file:
``` kotlin
dependencies {
classpath "com.android.tools.build:gradle:$gradle_version"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath "io.realm:realm-gradle-plugin:10.4.0" // add this line
}
```
Also, add **mavenCentral** as a repository; it replaces **jCenter**, which was used for Realm 10.3.x and below.
``` kotlin
repositories {
google()
mavenCentral() // add this line
}
```
``` kotlin
allprojects {
repositories {
google()
mavenCentral() // add this line
}
}
```
**Step 2**: Add the Realm plugin to the **app** level **build.gradle** file:
``` kotlin
plugins {
id 'com.android.application'
id 'kotlin-android'
id 'kotlin-kapt' // add this line
id 'realm-android' // add this line
}
```
Keep in mind that order matters. You should add the **realm-android** plugin after **kotlin-kapt**.
We have completed setting up Realm in the project. Sync Gradle so that we can move to the next step.
**Step 3**: Initialize and create our first database:
The Realm SDK needs to be initialized before use. This can be done anywhere (application class, activity, or fragment) but to keep it simple, we recommend doing it in the application class.
``` kotlin
// Ready our SDK
Realm.init(this)
// Creating our db with custom properties
val config = RealmConfiguration.Builder()
.name("test.db")
.schemaVersion(1)
.build()
Realm.setDefaultConfiguration(config)
```
Now that we have the Realm SDK added to our project, let's explore basic CRUD (Create, Read, Update, Delete) operations. To do this, we'll create a small application, building on MVVM design principles.
The application counts the number of times the app has been opened, and we manipulate that count to illustrate each CRUD operation.
1. Create app view object when opened the first time — **C** R U D
2. Read app viewed counts—C **R** U D
3. Update app viewed counts—C R **U** D
4. Delete app viewed counts— C R U **D**
Once you have a good understanding of the basic operations, then it is fairly simple to apply this to complex data transformation as, in the end, they are nothing but collections of CRUD operations.
Before we get down to the actual task, it's nice to have background knowledge on how Realm works. Realm is built to help developers avoid common pitfalls, like heavy lifting on the main thread, and follow best practices, like reactive programming.
The default configuration of Realm allows programmers to read data on any thread but write only on a background thread. This configuration can be overridden with:
``` kotlin
Realm.init(this)
val config = RealmConfiguration.Builder()
.name("test.db")
.allowQueriesOnUiThread(false)
.schemaVersion(1)
.deleteRealmIfMigrationNeeded()
.build()
Realm.setDefaultConfiguration(config)
```
In this example, we keep `allowQueriesOnUiThread(true)` which is the default configuration.
Let's get started and create our object class `VisitInfo` which holds the visit count:
``` kotlin
import io.realm.RealmObject
import io.realm.annotations.PrimaryKey
import java.util.UUID

open class VisitInfo : RealmObject() {
    @PrimaryKey
    var id = UUID.randomUUID().toString()

    var visitCount: Int = 0
}
```
In the above snippet, you will notice that we have extended the class with `RealmObject`, which allows us to directly save the object into the Realm.
We can insert it into the Realm like this:
``` kotlin
val db = Realm.getDefaultInstance()
db.executeTransactionAsync { bgRealm ->
    // count holds the updated visit count, computed elsewhere in the app
    val info = VisitInfo().apply {
        visitCount = count
    }
    bgRealm.insert(info)
}
```
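One detail the snippet above glosses over is that every `Realm.getDefaultInstance()` call should eventually be balanced with a `close()`. A hedged sketch of one way to do that (the `saveVisit` helper, its `count` parameter, and the log tag are illustrative), using the success and error callbacks of the async transaction:
``` kotlin
import android.util.Log
import io.realm.Realm

// Illustrative helper: queue the write on Realm's managed background thread and
// close the instance we opened once the transaction has finished (or failed).
// The callbacks require this to be called from a Looper thread, e.g., the UI thread.
fun saveVisit(count: Int) {
    val db = Realm.getDefaultInstance()
    db.executeTransactionAsync(
        { bgRealm ->
            val info = VisitInfo().apply { visitCount = count }
            bgRealm.insert(info)
        },
        { db.close() }, // OnSuccess
        { error ->      // OnError
            Log.e("VisitInfo", "Failed to save visit", error)
            db.close()
        }
    )
}
```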
To read the object, we write our query as:
``` kotlin
val db = Realm.getDefaultInstance()
val visitInfo = db.where(VisitInfo::class.java).findFirst()
```
To update the object, we use:
``` kotlin
val db = Realm.getDefaultInstance()
// findFirst() returns null if no object exists yet, hence the safe call below
val visitInfo = db.where(VisitInfo::class.java).findFirst()
db.beginTransaction()
visitInfo?.apply {
    visitCount += count
}
db.commitTransaction()
```
And finally, to delete the object:
``` kotlin
val db = Realm.getDefaultInstance()
val visitInfo = db.where(VisitInfo::class.java).findFirst()
// Deletions, like any write, must happen inside a write transaction
db.executeTransaction {
    visitInfo?.deleteFromRealm()
}
```
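Since the sample app follows MVVM and Realm favors reactive patterns, you may prefer reading the count asynchronously and reacting to changes. A rough sketch, assuming it runs on a Looper thread such as the UI thread (the `observeVisitCount` name and log tag are illustrative):
``` kotlin
import android.util.Log
import io.realm.Realm
import io.realm.RealmChangeListener

// Illustrative reactive read, e.g., from a ViewModel. Keep the returned live
// object as a property so its change listener isn't garbage collected.
fun observeVisitCount(): VisitInfo {
    val db = Realm.getDefaultInstance()
    // findFirstAsync() returns a live proxy straight away; the listener fires
    // once the query completes and again whenever the object changes.
    val visitInfo = db.where(VisitInfo::class.java).findFirstAsync()
    visitInfo.addChangeListener(RealmChangeListener<VisitInfo> { obj ->
        if (obj.isValid) {
            Log.d("VisitInfo", "Visit count is now ${obj.visitCount}")
        }
    })
    return visitInfo
}
```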
By now, you will have figured out that it's very easy to perform any operation with Realm. You can also check out the GitHub repo for the complete application.
The next logical step is how to view data in the database. For that, let's introduce Realm Studio.
*Realm Studio is a developer tool for desktop operating systems that allows you to manage Realm database instances.*
Realm Studio is a very straightforward tool that helps you view your local Realm database file. You can install Realm Studio on macOS, Windows, or Linux.
Let's grab our database file from our emulator or real device.
Detailed steps are as follows:
**Step 1**: Go to Android Studio, open "Device File Explorer" from the right-side panel, and then select your emulator.
**Step 2**: Get the Realm file for our app. Open the folder named **data**, and then go to the **data** folder again. Next, look for the folder with your package name. Inside its **files** folder, look for the file named after the database you set up through the Realm SDK. In my case, it is **test.db**.
**Step 3**: To export, right-click on the file and select "Save As," and
then open the file in Realm Studio.
Notice that the visit count in the `VisitInfo` class (i.e., table) matches the number of times the application has been opened. That's all, folks. Hopefully this helps to solve the last piece of the puzzle.
If you're an iOS developer, please check out Accessing Realm Data on iOS Using Realm Studio.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Realm",
"Kotlin",
"Android"
],
"pageDescription": "Learn how to use the Realm SDK with Android.",
"contentType": "Tutorial"
} | Introduction to the Realm SDK for Android | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/computed-pattern | created | # Building with Patterns: The Computed Pattern
We've looked at various ways of optimally storing data in the **Building
with Patterns** series. Now, we're going to look at a different aspect
of schema design. Just storing data and having it available isn't,
typically, all that useful. The usefulness of data becomes much more
apparent when we can compute values from it. What's the total sales
revenue of the latest Amazon Alexa? How many viewers watched the latest
blockbuster movie? These types of questions can be answered from data
stored in a database but must be computed.
Running these computations every time they're requested becomes a highly
resource-intensive process, especially on huge datasets. CPU cycles, disk
access, and memory can all be consumed.
Think of a movie information web application. Every time we visit the
application to look up a movie, the page provides information about the
number of cinemas the movie has played in, the total number of people
who've watched the movie, and the overall revenue. If the application
has to constantly compute those values for each page visit, it could use
a lot of processing resources on popular movies.
Most of the time, however, we don't need to know those exact numbers. We
could do the calculations in the background and update the main movie
information document once in a while. These **computations** then allow
us to show a valid representation of the data without having to put
extra effort on the CPU.
## The Computed Pattern
The Computed Pattern is utilized when we have data that needs to be
computed repeatedly in our application. The Computed Pattern is also
utilized when the data access pattern is read intensive; for example, if
you have 1,000,000 reads per hour but only 1,000 writes per hour, doing
the computation at the time of a write would divide the number of
calculations by a factor of 1,000.
In our movie database example, we can do the computations based on all
of the screening information we have on a particular movie, compute the
result(s), and store them with the information about the movie itself.
In a low write environment, the computation could be done in conjunction
with any update of the source data. Where there are more regular writes,
the computations could be done at defined intervals - every hour for
example. Since we aren't interfering with the source data in the
screening information, we can continue to rerun existing calculations or
run new calculations at any point in time and know we will get correct
results.
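To make that concrete, here is a rough sketch of such a periodic computation written against the MongoDB Java driver from Kotlin. The database, collection, and field names (`cinema`, `screenings`, `movies`, `movieId`, `viewers`, `revenue`) are assumptions for illustration:
``` kotlin
import com.mongodb.client.MongoClients
import com.mongodb.client.model.Accumulators
import com.mongodb.client.model.Aggregates
import com.mongodb.client.model.MergeOptions

fun recomputeMovieTotals(connectionString: String) {
    MongoClients.create(connectionString).use { client ->
        val screenings = client.getDatabase("cinema").getCollection("screenings")

        // Sum viewers and revenue per movie, then $merge the totals into the
        // movies collection (matching on the grouped _id, i.e., the movie id)
        // so reads never have to recompute them.
        screenings.aggregate(
            listOf(
                Aggregates.group(
                    "\$movieId",
                    Accumulators.sum("totalViewers", "\$viewers"),
                    Accumulators.sum("totalRevenue", "\$revenue")
                ),
                Aggregates.merge("movies", MergeOptions())
            )
        ).toCollection() // drives the pipeline; $merge performs the writes
    }
}
```
Whether a job like this runs on a schedule or as part of the write path is exactly the trade-off discussed next.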
Other strategies for performing the computation could involve, for
example, adding a timestamp to the document to indicate when it was last
updated. The application can then determine when the computation needs
to occur. Another option might be to have a queue of computations that
need to be done. Selecting the update strategy is best left to the
application developer.
## Sample Use Case
The **Computed Pattern** can be utilized wherever calculations need to
be run against data. Datasets that need sums, such as revenue or
viewers, are a good example, but time series data, product catalogs,
single view applications, and event sourcing are prime candidates for
this pattern too.
This is a pattern that many customers have implemented. For example, one
customer runs massive aggregation queries on vehicle data and stores the
results for the server to display for the next few hours.
A publishing company compiles all kinds of data to create ordered lists
like the "100 Best...". Those lists only need to be regenerated once in
a while, while the underlying data may be updated at other times.
## Conclusion
This powerful design pattern allows for a reduction in CPU workload and
increased application performance. It can be utilized to apply a
computation or operation on data in a collection and store the result in
a document. This allows for the avoidance of the same computation being
done repeatedly. Whenever your system is performing the same
calculations repeatedly and you have a high read to write ratio,
consider the **Computed Pattern**.
We're over a third of the way through this **Building with Patterns**
series. Next time we'll look at the features and benefits of the Subset
Pattern
and how it can help with memory shortage issues.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.",
"contentType": "Tutorial"
} | Building with Patterns: The Computed Pattern | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-sync-in-use-with-swiftui-chat-app-meetup | created | # Realm Sync in Use — Building and Architecting a Mobile Chat App Meetup
Didn't get a chance to attend the Realm Sync in use - building and architecting a Mobile Chat App Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.
>Realm Sync in Use - Building and Architecting a Mobile Chat App
>
>:youtube[]{vid=npglFqQqODk}
In this meetup, Andrew Morgan, a Staff Engineer at MongoDB, will walk you through the thinking, architecture and design patterns used in building a Mobile Chat App on iOS using MongoDB Realm Sync. The Chat app is used as an example, but the principles can be applied to any mobile app where sync is required. Andrew will focus on the data architecture, both the schema and the partitioning strategy used and after this session, you will come away with the knowledge needed to design an efficient, performant, and robust data architecture for your own mobile app.
In this 70-minute recording, in the first 50 minutes or so, Andrew covers:
- Demo of the RChat App
- System/Network Architecture
- Data Modelling & Partitioning
- The Code - Integrating synced Realms in your SwiftUI App
And then we have about 20 minutes of live Q&A with our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!
Throughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So that you don't miss out in the future, join our Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.
To learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.
## Transcript
**Shane McAllister**: Hello, and welcome to the meetup. And we're really, really delighted that you could all join us here and we're giving people time to get on board. And so we have enough of a quorum of people here that we can get started. So first things first introductions. My name is Shane McAllister, and I'm a lead on the Developer Advocacy team here, particularly for Realm. And I'm joined today as, and we'll do introductions later by Andrew Morgan as well. Who's a staff engineer on the Developer Advocacy team, along with me. So, today we're doing this meetup, but it's the start of a series of meetups that we're doing, particularly in COVID, where everything's gone online. We understand that there's lots of events and lots of time pressures for people. We want to reach our core developer audience as easily as possible. So this is only our second meetup using our new live platform that we have.
**Shane McAllister**: And so very much thank you for coming. Thank you for registering and thank you for being here. But if you have registered and you certainly joined the Realm Global Community, it means that you will get notified of these future events instantly via email, as soon as we add them. So we have four more of these events coming over the next four weeks, four to six weeks as well too. And we'll discuss those a little bit at the end of the presentation. With regards to this platform, you're used to online platforms, this is a little bit different. We have chat over on the right-hand side of your window. Please use that throughout.
**Shane McAllister**: I will be monitoring that while Andrew is presenting and I will be trying to answer as much as I can in there, but we will using that as a function to go and do our Q&A at the end. And indeed, if you are up to it, we'd more than welcome to have you turn on your camera, turn on your mic and join us for that Q&A at the end of our sessions as well too. So just maybe mute your microphones if they're not already muted. We'll take care of that at the end. We'll open it out and everyone can get involved as well, too. So without further ado, let's get started. I'm delighted to hand over to Andrew Morgan.
**Andrew Morgan**: I'm going to talk through how you actually build a mobile app using Realm and in particular MongoDB Realm Sync. To make it a bit less dry. We're going to use an example app, which is called RChat, which is a very simple chat application. And if you like, it's a very simple version of WhatsApp or Slack. So the app is built for iOS using SwiftUI, but if you're building on Android, a lot of what I'm going to cover is still going to apply. So all the things about the data modeling the partitioning strategy, setting up the back end, when you should open and close Realms, et cetera, they're the same. Looking at the agenda. We're going to start off with a very quick demo of the app itself. So you understand what we're looking at. When we look at the code and the data model, we'll look at the components to make it up both the front end and the back end.
**Andrew Morgan**: One of the most important is how we came up with the data model and the partitioning strategy. So how partitioning works with Realm Sync? Why we use it? And how you actually come up with a strategy that's going to work for your app? Then we'll get to the code itself, both the front end iOS code, but also some stored procedures or triggers that we're running in the back end to stitch all of the data together. And finally, I promise I'll keep some time at the end so we can have a bit of interactive Q&A. Okay, so let's start with the demo.
**Andrew Morgan**: And so what we've got is a fairly simplistic chat app along the lines of WhatsApp or Slack, where you have the concept of users, chat rooms, and then that messages within those. So we've got three devices connected. You can register new users through the app, but using that radio button at the bottom, but for this, I'm going to use what I've created earlier. This will now authenticate the user with Realm back end. And you may notice as Rod came online, the presence updated in the other apps. So for example, in the buds here, you can see that the first two members are online currently, the middle one is the one I just logged in. And then the third one is still offline. You'll see later, but all of these interactions and what you're seeing there, they're all being implemented through the data changing. So we're not sending any rest messages around like that to say that this user is logged in, it's just the data behind the scenes changes and that gets reflected in the UI. And so we can create a new chat room.
**Andrew Morgan**: You can search for users, we've only got a handful here, so I'll just add Zippy and Jane. And then as I save, you should see that chat appear in their windows too. See they're now part of this group. And we can go in here and can send those messages and it will update the status. And then obviously you can go into there and people can send messages back. Okay. So it's a chat app, as you'd expect you can also do things like attach photos. Apologies, this is the Xcode beta the simulator is a little bit laggy on this. And then you can also share your location. So we'll do the usual kind of things you'd want to do in a chat room and then dive into the maps, et cetera. Okay. So with that, I think I can switch back to the slides and take a very quick look at what's going on behind the scenes from an architectural perspective.
**Andrew Morgan**: Let me get a pointer. So, this is the chat app you were seeing, the only time and so we've got the chat app, we've got the Realm embedded mobile database. We've got MongoDB Realm, which is the back end service. And then we've got MongoDB Atlas, which is the back end data store. So the only time the application interacts directly with Realm, the Realm service in the back end is when the users logging in or logging out or you're registering another user, the rest of the time, the application is just interacting with the local Realm database. And then that Realm database is synchronizing via the Realm service with other instances of the application. So for example, when I sent a message to Rod that just adds a chat message to the Realm database that synchronizes via MongoDB Realm Sync, and then that same day to get sent to the other Realm database, as well as a copy gets written to Atlas.
**Andrew Morgan**: So it's all data-driven. What we could do, which we haven't done yet is that same synchronization can also synchronize with Android applications. And also because the data is stored in Atlas, you can get at that same data through a web application, for example. So you only have to write your back end once and then all of these different platforms, your application can be put into those and work as it is.
**Andrew Morgan**: So the data model and partitioning. So first of all, Shane and I were laughing at this diagram earlier, trying to figure out how many years this picture has been in used by the Realm Team.
**Shane McAllister**: It's one of the evergreen ones I think Andrew. I think nobody wants to redesign it just yet. So we do apologize for the clip art nature of this slide.
**Andrew Morgan**: Yeah. Right. So, the big database cylinder, you can see here is MongoDB Atlas. And within there you have collections. If you're new to MongoDB, then a collection is analogous to a table in a relational database. And so in our shapes database, we've got collections for circles, stars, and triangles. And then each of those shapes within those collections, they've got an attribute called color. And what we've decided to do in this case is to use the color attribute as our partitioning key.
**Andrew Morgan**: So what that means is that every one of these collections, if they're going to be synced, they have to have a key called color. And when someone connects their Realm database and saying, they want to sync, they get to specify the value for the partitioning key. So for example, the top one specified that they want all of the blue objects. And so that means that they get, regardless of which collection they're in, they get all of the blue shapes. And so you don't have any control over whether you just synced the stars or just the triangles. You get all of the shapes because the partition is set to just the color. The other limitation or feature of this is that you don't get to choose that certain parts of the circle gets synchronized, but others don't. So it's all or nothing. You're either syncing all of the red objects in their entirety or you're not sinking the red objects in their entirety.
**Andrew Morgan**: So, why do we do this partitioning rather than just syncing everything to the mobile Realm database? One reason is space. You've obviously got constraints on how much storage you've got in your mobile device. And if, for example, you partitioned on user and you had a million users, you don't want every user's device to have to store data, all of those million users. And so you use the partitioning key to limit how much storage and network you're using for each of those devices. And the other important aspect is security. I don't necessarily want every other user to be able to see everything that's in my objects. And so this way you can control it, that on the server side, you make sure that when someone's logged in that they can only synchronize the objects that they're entitled to see. So, that's the abstract idea.
**Andrew Morgan**: Let's get back to our chat application use case, and we've got three top-level objects that we want to synchronize. The first one is the User. And so if I'm logged in as me, I want to be able to see all of the data. And I also want to be able to update it. So this data will include things like my avatarImage. It will include my userName. It will include a list of the conversations or the chat rooms that I'm currently a member of. So no one else needs to see all of that data. There's some of that data that I would like other people to be able to at least see, say, for example, my displayName and my avatarImage. I'd like other people to be able to see that. But if you think back to how the partitioning works and that it's all or nothing, I can either sync the entire User object or none of it at all.
**Andrew Morgan**: So what we have is, we have another representation of the User, which is called the Chatster. And it's basically a mirror of a subset of the data from the User. So it does include my avatar, for example, it does include my displayName. But what it won't include is things like the complete list of all of the chat rooms that I'm a member of, because other people have no business knowing that. And so for this one, I want the syncing rule to be that, anyone can read that data, but no one can update it.
**Andrew Morgan**: And then finally, we've got the ChatMessages themselves, and this has got a different access rules again, because we want all of the members within a chat room to be able to read them and also write new ones. And so we've got three top-level objects and they all have different access rules. But remember we can only have a single partitioning key. And that partitioning key has to be either a String, an objectID or a Long. And so to be able to get a bit more sophisticated in what we synchronized to which users, we actually cheat a little and instead, so a partitioning key, it's an attribute that we call partition. And within that, we basically have key value pairs. So for each of those types of objects, we can use a different key and value.
**Andrew Morgan**: So, for example, for the user objects or the user collection, we use the String, user=, and then \_id. So the \_id is what uniquely identifies the object or the document within the collection. So this way we can have it, that the rules on the server side will say that this partition will only sync if the currently logged in user has got the \_id that matches. For the Chatster it's a very simple rule. So we're effectively hard coding this to say, all-users equals all-the-users, but this could be anything. So this is just a string that if you see this the back ends knows that it can synchronize everything. And then for the ChatMessages the key is conversation and then the value is the conversation-id.
**Andrew Morgan**: I'll show you in code how that comes together. So this is what our data model looks like. As I said, we've got the three top-level objects. User, Chatster and ChatMessage. And if we zoom in you'll see that a User is actually, its got a bunch of attributes in the top-level object, but then it's got sub-objects, or when sorting MongoDB sub-documents. So it's got sub-objects. So, the users got a list of conversations. The conversation contains a list of members and a UserPreferences or their avatarImage, the displayName, and know that they do have an attribute called partition. And it's only the top level object that needs to have the partition attributes because everything else is a sub-object and it just gets dragged in.
**Andrew Morgan**: I would, we also have a UserPreference contains a Photo, which is a photo object. And then Chatster, which is our read-only publicly visible object. We've got the partition and every time we try and open a Realm for the Chatster objects, we just set it to the String, all-users equals all-the-users. So it's very similar, but it's a subset of the data that I'm happy to share with everyone. And then finally we have the ChatMessage which again, you can see it's a top-level object, so it has to have the partition attribute.
**Andrew Morgan**: So how do we enforce that people or the application front end only tries to open Realms for the partitions that it's enabled that ought to? We can do that through the Realm UI in the back end. We do it by specifying a rule for read-only Realms and read-write Realms. And so in each case, all I'm doing here is I'm saying that I'm going to call a Realm function. And when that functions is called, it's going to be given passed a parameter, which is the partition that they're trying to access. And then that's just the name of the function.
**Andrew Morgan**: And I'm not going to go through this in great detail, but this is a simplified version of the canWritePartition. So this is what the sides, if the application is asking to open a Realm to make changes to it, this is how I check if they're allowed access to that partition. So the first thing we do is we take the partition, which remember is that Key Value string. And we split it to get the Key and the Value for that Key. Then we just do a switch based on what has been used as the Key. If it's the "user" then we check that the partitionValue matches the \_id of the currently logged in user. And so that'll return true or false. The conversation is the most complex one. And for that one, it actually goes and reads the userDoc for this "user" and then checks whether this conversation.id is one that "user" is a member of. So, that's the most complex one. And then all users, so this is remember for the Chatster object, that always returns false, because the application is never allowed to make changes to those objects.
**Andrew Morgan**: So now we're looking at some of the Swift code, and this is the first of the classes that the Realm mobile database is using. So this is the top-level class for the Chatster object. The main things to note in here is so we're importing RealmSwift, which is the Realm Cocoa SDK. The Chatser it conforms to the object protocol, and that's actually RealmSwift.object. So, that's telling Realm that this is a class where the objects can be managed by the Realm mobile database. And for anyone who's used a SwiftUI, ObjectKeyIdentifiable protocol that's taking the place of identifiable. So it just gives each of these objects... But it means that Realm would automatically give each of these objects, an \_id that can be used by Swift UI when it's rendering views.
**Andrew Morgan**: And then the other thing to notice is for the partition, we're hard coding it to always be all-users equals all-the-users, because remember everyone can read all Chatster objects, and then we set things up. We've got the photo objects, for example which is an EmbeddedObject. So all of these things in there. And for doing the sync, you also have to provide a primary key. So again, that's something that Realm insists on. If you're saying that you've implemented the object protocol, taking a look at one of the EmbeddedObjects instead of being object, you implement, you conform to the embedded object protocol. So, that means two things. It means that when you're synchronizing objects, this is just synchronized within a top-level object. And the other nice thing is, this is the way that we implement cascading deletes. So if you deleted a Chatster object, then it will automatically delete all of the embedded photo objects. So that, that makes things a lot simpler.
**Andrew Morgan**: And we'll look quickly at the other top-level objects. We've got the User class. We give ourselves just a reminder that when we're working with this, we should set the partition to user equals. And then the value of this \_id field. And again, it's got userPreferences, which is an EmbeddedObject. Conversations are a little bit different because that's a List. So again, this is a Realm Cocoa List. So we could say RealmSwift.list here. So we've got a list of conversation objects. And then again, those conversation objects little displayName, unreadCount, and members is a List of members and so on and so on. And then finally just for the complete desk here, we've got the ChatMessage objects.
**Andrew Morgan**: Okay. So those of us with objects, but now we'll take a quick look at how you actually use the Realm Cocoa SDK from your Swift application code. As I said before the one interaction that the application has directly with the Realm back end is when you're logging in or logging out or registering a new user. And so that's what we're seeing here. So couple of things to note again, we are using Realm Cocoa we're using Combine, which for people not familiar with iOS development, it's the Swift event framework. So it's what you can use to have pipelines of operations where you're doing asynchronous work. So when I log in function, yes, the first thing we do is we actually create an instance of our Realm App. So this id that's something that you get from the Realm UI when you create your application.
**Andrew Morgan**: So that's just telling the front end application what back end application it's connecting to. So we can connect to Realm, we then log in, in this case, we're using email or username, password authentication. There's also anonymous, or you can use Java UTs there as well. So once this is successfully logged the user in, then if everything's been successful, then we actually send an event to a loginPublisher. So, that means that another, elsewhere we can listen to that Publisher. And when we're told someone's logged in, we can take on other actions. So what we're doing here is we're sending in the parameter that was passed into this stage, which in this case is going to be the user that's just logged in.
**Andrew Morgan**: Okay, and I just take a break now, because there's two ways or two main ways that you can open a Realm. And this is the existing way that up until the start of this week, you'd have to use all of the time, but it's, I've included here because it's still a useful way of doing it because this is still the way that you open Realm. If you're not doing it from within SwiftUI view. So this is the Publisher, we just saw the loginPublisher. So it receives the user. And when it receives the user it creates the configuration where it's setting up the partitionValue. So this is one that's going to match the partition attribute and we create the user equals, so a string with user equals and then the user.id.
**Andrew Morgan**: And then we use that to open a new Realm. And again, this is asynchronous. And so we send that Realm and it's been opened to this userRealmPublisher, which is yet another publisher that Combine will pass in that Realm once it's available. And then in here we store a copy of the user. So, this is actually in our AppState. So we create a copy of the user that we can use within the application. And that's actually the first user. So, when we created that Realm, it's on the users because we use the partition key that only matches a single user. There's only actually going to be one user object in this Realm. So we just say .first to receive that.
**Andrew Morgan**: Then, because we want to store this, and this is an object that's being managed by Realm. We create a Realm, transaction and store, update the user object to say that this user is now online. And so when I logged in, that's what made the little icon turn from red to green. It's the fact that I updated this, which is then synchronized back to the Realm back end and reflected in all the other Realm databases that are syncing.
**Andrew Morgan**: Okay. So there is now also asynchronous mode of opening it, that was how we had to open it all the way through our Swift code previously. But as of late on Monday, we actually have a new way of doing it. And I'm going to show you that here, which is a much more Swift UI friendly way of doing it. So, anyone who went to Jason's session a couple of weeks ago. This is using the functionality that he was describing there. Although if you're very observant, you may know that some of the names have been changed. So the syntax isn't exactly the same as you just described. So let's give a generic example of how you'd use this apologies to people who may be not familiar with Swift or Swift UI, but these are Swift UI views.
**Andrew Morgan**: So within our view, we're going to call a ChildView. So it's a sub view. And in there we pass through the environment, a realmConfiguration, and that configuration is going to be based on a partition. So we're going to give a string in this case, which is going to be the partition that we want to open, and then synchronize. In this case, the ChildView doesn't do anything interesting. All it does is called the GrandChildView, but it's important to note that how the environments work with Swift UI is they automatically get passed down the view hierarchy. So even though we're not actually passing into the environment for GrandChildView, it is inheriting it automatically.
**Andrew Morgan**: So within GrandChildView, we have an annotation so observed results. And what we're doing here is saying for the Realm that's been passed in, I want items to represent the results for all items. So item is a class. So all objects of the class item that are stored in those results. I want to store those as the items results set, and also I'm able to get to the Realm itself, and then we can pass those into, so we can iterate over all of those items and then call the NameView for each of those items. And it's been a long way getting to here, but this is where we can finally actually start using that item. So when we called NameView, we passed in the instance of item and we use this Realm annotation to say that it's an ObservedRealmObject when it's received in the NameView, and why that's important is it means that we don't have to explicitly open Realm transactions when we're working with that item in this View.
**Andrew Morgan**: So the TextField View, it takes the label, which is just the label of the TextField and binding to the $items.name. So it takes a binding to a string. So, TextField can actually update the data. It's not just displaying it, it lets the user input stuff. And so we can pass in a binding to our item. And so text fields can now update that without worrying about having to explicitly open transactions.
**Andrew Morgan**: So let's turn to our actual chat application. And so a top-level view is ContentView, and we do different things depending on whether you're logged in yet, but if you are logged in yet, then we call the ConversationListView, and we pass in a realmConfiguration where the partition is set user equals and then the \_id of the user. Then within the ConversationListView, which is represents what you see from the part of the application here. We've got a couple of things. The first is, so what have we got here? Yeah. So we do some stuff with the data. So, we display each of these cards for the conversations, with a bit I wanted to highlight is that when someone clicks on one of these cards, it actually follows a link to a ChatRoomView. And again, with the ChatRoomView, we pass in the configuration to say that it's this particular partition that we want to open a Realm for.
**Andrew Morgan**: And so once we're in there we, we get a copy of the userRealm and the reason we need a copy of the userRealm is because we're going to explicitly upgrade, update the unreadCount. We're going to set it to zero. So when we opened the conversation, we'll mark all of the messages as read. And because we're doing this explicitly rather than doing it via View, we do still need to do the transaction here. So that's why we received that. And then for each of these, so each of these is a ChatRoomBubble. So because we needed the userRealm in this View, we couldn't inject the ChatMessage View or the ChatMessage Realm into here. And so instead, rather than working with the ChatMessages in here, we have to pass, we have to have another subview where that subview is really just there to be able to pass in another partitionValue. So in this case, we're passing in the conversation equals then the id of the conversation.
**Andrew Morgan**: And so that means that in our ChatRoomBubblesView, we're actually going to receive all of the objects of type ChatMessage, which are in that partition. And the other thing we're doing differently in here is that when we get those results, we can also do things like sorting on them, which we do here. Or you can also add a filter on here, if you don't want this view to work with every single one of those chatMessages, but in our case, all of those chatMessages for this particular conversation. And so we do want to work with all of them, but for example, you could have a filter here that says, don't include any messages that are more older than five months, for example. And then we can loop over those chatMessages, pass them to the ChatBubbleView which is one of these.
**Andrew Morgan**: And the other thing we can do is you can actually observe those results. So when another user has a chatMessage, that will automatically appear in here because this result set automatically gets updated by Realm Sync. So the back end changes, there's another chatMessage it'll be added to that partition. So it appears in this Realm results set. And so this list will automatically be updated. So we don't have to do anything to make that happen. But what I do want to do is I want to scroll to the bottom of the list of these messages when that happens. So I explicitly set a NotificationToken to observe thosechatMessages. And so whenever it changes, I just scroll to the bottom.
**Andrew Morgan**: Then the other thing I can do from this view is, I can send new messages. And so when I do that, I just create, we received a new chatMessage and I just make sure that check a couple of things. Very importantly, I set the conversation id to the current conversation. So the chatMessages tag to say, it's part of this particular conversation. And then I just need to append it to those same results. So, that's the chats results set that we had at the top. So by appending it to that List, Realm will automatically update the local Realm database, which automatically synchronizes with the back end.
**Andrew Morgan**: Okay. So we've seen what's happening in the front end. But what we haven't seen is how was that user document or that user object traits in the first place? How was the Chatster object created when I have a new chatMessage? How do I update the unreadCount in all of the user objects? So that's all being done by Realm in the back end. So we've got a screen capture here of the Realm UI, and we're using Realm Triggers. So we've got three Realm Triggers. The first one is based on authentication. So when the user first logs in, so how it works is the user will register initially. And then they log in. And when they log in for the very first time, this Trigger will be hit and this code will be run. So this is a subset of the JavaScript code that makes up the Realm function.
**Andrew Morgan**: And all it's really doing here is, it is creating a new userDoc based on the user.id that just logged in, set stuff up in there and including setting that they're offline and that they've got no conversations. And then it inserts that into the userCollection. So now we have a userDoc in the userCollection and that user they'll also receive that user object in their application because they straight away after logging in, they opened up that user Realm. And so they'll now have a copy of their own userDoc.
**Andrew Morgan**: Then we've got a couple of database Triggers. So this one is hit every time a new chatMessage is added. And when that happens, that function will search for all of the users that have that conversation id in their list of conversations. And then it will increment the unreadCount value for that particular conversation within that particular user's document. And then finally, we've got the one that creates the Chatster document. So whenever a user document is created or updated, then this function will run and it will update the Chatser document. So it also always provides that read-only copy of a subset of the data. And the other thing that it does is that when a conversation has been added to a particular user, this function will go and update all of the other users that are part of that conversation. So that those user documents also reflect the fact that they're a part of that document.
**Andrew Morgan**: Okay. Um, so that was all the material I was going to go through. We've got a bunch of links here. So the application itself is available within the Realm Organization on GitHub. So that includes the back end Realm application as well as the iOS app. And it will also include the Android app once we've written it. And then there's the Realm Cocoa SDK the docs, et cetera. And if you do want to know more about implementing that application, then there's a blog post you can read. So that's the end-to-end instructions one, but that blog also refers to one that focuses specifically on the data model and partitioning. And then as Shane said, we've got the community forums, which we'd hope everyone would sign up for.
**Shane McAllister**: Super. Thank you, Andrew. I mean, it's amazing to see that in essence, this is something that, WhatsApp, entire companies are building, and we're able to put it demo app to show how it works under the hood. So really, really appreciate that. There are some questions Andrew, in the Q&A channel. There's some interesting conversations there. I'm going to take them, I suppose, as they came in. I'll talk through these unless anybody wants to open their mics and have a chat. There's been good, interesting conversations there. We'd go back to, I suppose, the first ones was that in essence, Richard brought this one up about presence. So you had a presence status icon there on the members of the chat, and how did that work was that that the user was logged in and the devices online or that the user is available? How were you managing that Andrew?
**Andrew Morgan**: Yeah. So, how it works is when a user logs in, we set it that that user is online. And so that will update the user document. That will then get synchronized through Realm Sync to the back end. And when it's received by the back end, it'll be written to the Atlas database and the database trigger will run. And so that database trigger will then replicate that present state to the users Chatser document. And then now going in the other direction, now that documents has changed in Atlas, Realm Sync will push that change to every instance of the application. And so the Swift UI and Realm code, when Realm is updated in the mobile app, that will automatically update the UI. And that's one of the beauties of working with Swift UI and Realm Cocoa is when you update the data model in the local Realm database, that will automatically get reflected in the UI.
**Andrew Morgan**: So you don't have to have any event code saying that, "When you receive this message or when you see this data change, make this change the UI" It happens automatically because the Realm objects really live within the application and because of the clever work that's been done in the Realm Cocoa SDK, when those changes are applied to the local copy of the data, it also notifies Swift UI that the views have to be updated to reflect the change. And then in terms of when you go offline if you explicitly log out it will set it to offline and you get the same process going through again. If you stay on, if you stay logged in, but you've had the app in the background for eight hours, or you can actually configure how long, then you'll get a notification saying, "Do you want to continue to stay, remain logged in Or do you want to log out?"
**Andrew Morgan**: The bit I haven't added, which would be needed in production is that when you force quit or the app crashes, then before you shut things down, just go and update the present state. And then the other presence thing you could do as well is in the back end, you could have a schedule trigger so that if someone has silently died somewhere if they've been online rate hours or something, you just mark them to show their offline.
**Shane McAllister**: Yeah. I think, I mean, presence is important, but I think the key thing for me is that, how much Realm does under the hood on behalf of you \[inaudible 00:43:14\] jumping on a little bit.
**Andrew Morgan**: With that particular one, I can do the demo. So for example, let's go on this window. You can see that, so this is Zippy is the puppet. So if you monitor Zippy, then I'm in the this is actually, I'll move this over. Because I need to expand this a little.
**Shane McAllister**: I have to point out. So Andrew's is in Maidenhead in England, this demo for those of you not familiar, there was a children TV program, sometime in the late '70s early '80s \[inaudible 00:43:54\]. So these are the characters from this TV program where they all the members in this chat app.
**Andrew Morgan**: Yeah. And I think in real life, there's actually a bit of a love triangle between three of them as well.
**Shane McAllister**: We won't go there. We won't go there.
**Andrew Morgan**: So, yeah. So, this is the data that's stored in, so I'll zoom in a little bit. This is the data that's stored in Atlas in the back end. And so if I manually go in and so you want to monitor Zippy status in the iPhone app, if I change that present state in the back end, then we should see thatZippy goes offline. So, again there's no code in there at all. All I've had to do is buying that present state into the Swift UI view.
**Shane McAllister**: That's a really good example. I think that be any stronger examples on doing something in the back end and it immediately reflect in the UI. I think it works really well. Kurt tied a question with regard to the partition Andrew. So, all the user I tried to run, this is a demo. We don't have a lot with users. In essence, If this was a real app, we could have 10 million user objects. How would we manage that? How would we go about that?
**Andrew Morgan**: Yeah. So, the reason I've got all, the reason I've done it like it's literally all users is because I want you to be able to search. I want you to be able to create a new chat room from the mobile app and be able to search through all of the users that are registered in the system. So that's another reason why we don't want the Chatster object to contain everything about user, because he wants it to be fairly compact so that it doesn't matter if you are storing a million of them. So ideally we just have the userName and the avatar in there. If you want you to go a step further, we could have another Chatser object with just the username. And also if it really did get to the stage where you've got hundreds of millions or something, or maybe for example in a Slack type environment where you want to have organizations that instead of having the user, instead of being all the users, you could actually have the old equals orgName as your partition key.
**Andrew Morgan**: So you could just synchronize your organization rather than absolutely everything. If there really was too many users that you didn't want them all in the front end, at that point, you'd start having to involve the back end when you wanted to add a new user to a chat room. And so you could call a Realm function, for example, do a query on the database to get that information.
**Shane McAllister**: Sure. Yeah, that makes sense. Okay, in terms of the chat that I was, this is our demo, we couldn't take care of it on a scale. In essence, these are the things that you would have to think about if you were paying to do something for yourself in this area. The other thing that Andrew was, you showed the very start you're using embedded data for that at the moment in the app. Is another way that we did in our coffee shop as well.
**Andrew Morgan**: Sorry. There was a bit of an echo because I think when I have my mic on and you're talking, I will mute it.
**Shane McAllister**: I'll repeat the question. So it was actually Richard who raised this was regarding the photos shared in the chat, Andrew, they shared within embedded data, as opposed to say how we did it in our oafish open source app with an Amazon S3, routine essentially that ran a trigger that ran in the background and we essentially passed the picture and just presented back a URL with the thumbnail.
**Andrew Morgan**: Yeah. In this one, I was again being a little bit lazy and we're actually storing the binary images within the objects and the documents. And so what we did with the oafish application is we had it that the original document was the original photo was uploaded to S3 and we replace it with an S3 link in the documents instead. And you can do that again through a Realm trigger. So every time a new photo document was added you could then so sorry, in this case it would be a subdocument within the ChatMessage, for example, then yeah. The Realm trigger. When you receive a new ChatMessage, it could go and upload that image to S3 and then just replace it with the URL.
**Andrew Morgan**: And to be honest that's why in the photo, I actually have the, I have a thumbnail as well as the full size image, because the idea is that the full-size image you move that to S3 and replace it with a link, but it can be handy to have the thumbnails so that you can still see those images when you're offline, because obviously for the front end application, if it's offline, then an S3 link isn't much use to you. You can't go and fetch it. So by having the thumbnail, as well as the full-size image, you've got that option of effectively archiving one, but not the thumbnail.
**Shane McAllister**: Perfect. Yeah. That makes a lot of sense. On a similar vein about being logged in, et cetera, as well, to curtail the question with regard, but if there's a user Realm that is open as long as you're logged in, and then you pass in an environment Realm partition, are they both open in that view?
**Andrew Morgan**: No, I think it'll still be, I believe it'll be one. Oh, yes. So both Realms. So if you, for example open to use a Realm and the chats to Realm then yes. Both of those Realms would be open simultaneously.
**Shane McAllister**: Okay. Okay, perfect. And I'm coming through and fairplay to Ian and for writing this. This is a long, long question. So, I do appreciate the time and effort and I hope I catch everything. And Ian, if you want me to open your mic and chime in to ask this question, by all means as well, that just let me know in the chat I'll happily do. So perhaps Andrew, you might scroll back up there in the question as well, too. So it was regarding fetching one object type across many partitions, many partition keys, actually. So, Ian he had a reminder list each shared with a different person, all the reminders in each list have a partition key that's unique for that chair and he wants to show the top-level of that. So we're just wondering how we would go about that or putting you on the spot here now, Andrew. But how would we manage that? Nope, you're muted again because of my feedback. Apologies.
**Andrew Morgan**: Okay. So yeah, I think that's another case where, so there is the functionality slash limitation that when you open a Realm, you can only open it specifying a single value for the partition key. And so if you wanted to display a docket objects from 50 different partitions, then the brute-force way is you have to go and to open 50 Realms, sort of each with a different partition id, but that's another example where you may make a compromise on the back end and decide you want to duplicate some data. And so in the similar way to, we have the Chatster objects that are visible all in a single partition, you could also have a partition, which just contains the list of list.
**Andrew Morgan**: So, you could, whenever someone creates a new list object, you could go and add that to another document that has the complete list of all of the lists. But, but yeah, this is why when you're using Realms Sync, figuring out your data model and your partitioning strategy is one of the first things, as soon as you've figured out the customer store for what you want the app to do. The next thing you want to do is figure out the data model and your partitioning strategy, because it will make a big difference in terms of how much storage you use and how performance is going to be.
**Shane McAllister**: So Ian, your mic is open to chime in on this. Or did we cover? You good? Maybe it's not open, this is the joy.
**Ian**: Do you hear me now?
**Shane McAllister**: Yes. MongoDB.
**Ian**: Yeah. I need to go think about the answer. So you, because I was used to using Realm before Realm Sync, so you didn't have any sharing, but you could fetch all the reminders that you wanted, whatever from any lists and just show them in a big list. I need to go think about the answer. How about \[inaudible 00:54:06\].
**Andrew Morgan**: Yeah, actually there's a third option that I didn't mention is Realm has functions. So, the triggers we looked at that actually implemented as Realm functions, which they're very simple, very lightweight equivalent to the AWS Lambda functions. And you can invoke those from the front end application. So if you wanted to, you could have a function which queries the Atlas database to get a list of all of the lists. And so then it would be a single call from the front end application to a function that runs in the back end. And then that function could then go and fetch whatever data you wanted from the database and send it back as a result.
**Ian**: But that wouldn't work if you're trying to be an offline first, for example.
**Andrew Morgan**: Yeah. Sort of that, that relies on online functionality, which is why is this, I always try and do it based on the data in Realm as much as possible, just because of that. That's the only way you get the offline first functionality. Yeah.
**Ian**: Cool. I just think about it. Thank you.
**Shane McAllister**: Perfect. Thank you Ian. And was there any other followups Ian?
**Andrew Morgan**: Actually, there's one more hack I just thought of. You can add, so you can only have a single partition key for a given Realm app, but I don't think there's any reason why you couldn't have multi, so you can have multiple Realm apps accessing the same Atlas database. And so if you could have the front end app actually open multiple Realm apps, then each of those Realm apps could use a different attribute for partitioning.
**Shane McAllister**: Great. Lets-
**Andrew Morgan**: So it's a bit hacky but that might work.
**Shane McAllister**: No worries. I'm throwing the floor open to Richard's if you're up to it, Richard, I enabled hosts for you. You had a number of questions there. Richard from \[inaudible 00:56:19\] is a longtime friend on Realm. Do you want to jump on Richard and go through those yourself or will I vocalize them for you? Oh, you're you're still muted, Richard.
**Richard**: Okay. Can you hear me now?
**Shane McAllister**: We can in deed.
**Richard**: Okay. I think you answered the question of very well about the image stuff. We've actually been playing around with the Amazon S3 snippets and it's a great way of, because often we need URLs for images and then the other big problem with storing images directly is you're limited to four megabytes, which seems to be the limit for any data object right on Realm. So but Andrew had a great pointer, which is to store your avatars because then you can get them in offline mode. That's actually been a problem with using Amazon S3. But what was the other questions I had, so are you guys going to deprecate the asyncOpen? Because, we've noticed some problems with it lately?
**Andrew Morgan**: Not, that I'm aware of.
**Richard**: Okay.
**Andrew Morgan**: It's because, I think there's still use cases for it. So, for example because when a user logs in, I'm updating their presence outside of a view, so it doesn't inherit the Realm Cocoa magic that's going on when integrated with Swift UI. And so I still have that use case, and now I'm going to chat with, and there may be a way around it. And as I say, the stuff only went, the new version of Realm Cocoa only went live late on Monday.
**Richard**: Okay.
**Andrew Morgan**: So I've updated most things, but that's the one thing where I still needed to use the asyncOpen. When things have quietened down, I need Jason to have a chat with him to see if there's an alternate way of doing it. So I don't think asyncOpen is going away as far as I know. Partly of course, because not everyone uses Swift UI. We have to have options for UI kit as well.
**Richard**: Yeah. Well, I think everybody's starting to move there because Apple's just pushing. Well, the one last thing I was going to say about presence before I was a Realm programmer in that that was three years ago. I actually adopted Realm Sync very early. When it just came out in 2017, I was a Firebase programmer for about three years. And one thing Firebase had is the one thing they did handle well, was this presence idea, because you could basically say you could attach yourself to like a Boolean in the database and say, as long as I'm present, that thing says true, but the minute I disconnect, it goes false. And then the other people could read that they could say always connected or is not connected. And I can implement that with a set of timers that the client says on present, I'm present every 30 seconds, that timer updates.
**Richard**: And then there's a back end service function that clears a flag, but it's a little bit hacky. It would be nice if in Realm, there was something where you could say attach yourself to an object and then Realm would automatically if the device wasn't present, which I think you could detect pretty easily, then it would just change state from true to false. And then the other people could see that it was, that device had actually gone offline. So, I don't know if that's something you guys are thinking of in future release.
**Andrew Morgan**: Yeah. I'm just checking in the triggers, exactly what we can trigger on.
**Richard**: Because somebody might be logged in, but it doesn't mean that you're necessarily, they are the other end.
**Andrew Morgan**: Yeah. So, what you can do, so someone on the device side, one thing I was hoping to do, but I hadn't had a chance to is, so you can tell when the application is minimized. So, at the moment we're going to use minimize as their app. They get a reminder in X hours saying, you sure you still want to remain logged in. But that could automatically, instead of asking them it could just go and update their status to say I might. So, you can do it, but there's, I'm not aware of anything that for example, Realms realizing that the session has timed out. And so it.
**Richard**: I personally could get on an airplane and then flight attendants could say, okay, put everything in airplane mode. So you just do that. And then all of a sudden you're out, doesn't have time to go. If you make it, if you put the burden on the app, then there's a lot of scenarios where you're not going to, the server is going to think it's connected.
**Andrew Morgan**: I think it's every 30 minutes, the user token is refreshed between the front end of the back end. So yeah. We could hook something into that to say that, the back end could say that if this user hasn't refreshed their token in 31 minutes, then they're actually offline.
**Richard**: Yeah. But it'd be nice while at Firebase, you could tell within, I remember time yet, it was like three minutes. It would eventually signal, okay, this guy's not here anymore after he turned off the iPhone.
**Andrew Morgan**: Yeah, that's the thing going on.
**Richard**: Yeah, that was also my question.
**Andrew Morgan**: You couldn't implement that ping from the app. So like, even when it's in the background, you can have it wake up every five minutes and set call the Realm function and the Realm function just updates the last seen at.
**Richard**: Excellent. Well, that's what we're doing now, we're doing this weird and shake. Yeah, but this is a great, great demo of, it's a lot more compelling than task list. I think this should be your flagship demo. Not the test. I was hoping.
**Andrew Morgan**: Yeah. The thing I wrote before this was a task list, but I think the task list is a good hello world, but yes. But once you've done the hello world, you need to figure out how you do the tougher. So it's all the time.
**Richard**: Great. Yeah. About five months ago, I ended up writing a paper on medium about how to do a simple Realm chat. I called it simple Realm chat, which was just one chat thread you could log in and everybody could chat on the same thread. It was just but I was amazed that and this was about six months ago, you could write a chat app for Realm, which was no more than 150 lines of code, basically. But try and do that in any like XAMPP. It's like you'd be 5,000 lines of code before you got anything displayed. So Realm is really powerful that way. It's an amazing, you've got, you're sitting on the Rosetta Stone for communication and collaborative apps. This is I think one of the most seminal technologies in the world for that right now.
**Shane McAllister**: Thank you, Richard. We appreciate that. That's very-
**Richard**: You're commodifying. I mean, you're doing to collaboration with windows did to desktop programming like 20 years ago, but you've really solved that problem. Anyway, so that's, that's my two cents. I don't have any more questions.
**Shane McAllister**: Perfect. Thank you. No, thank you for your contribution. And then Kurt, you had a couple of questions on opened you up to come on and expose yourself here as well too. Hey Kurt, how are you?
**Kurt**: Hey, I'm good. Can you hear me?
**Shane McAllister**: We can indeed, loud and clear.
**Kurt**: All right. Yeah. So this I've been digging into this stuff since Jason released this news, this new Realm Cocoa merge that happened on Monday, 10.6 I think is what it is, but so this .environment Realm. So you're basically saying with the ChatBubbles thing, inside this view, we're going to need this partition. So we're going to pass that in as .environment. And I'm wondering, and part of my misunderstanding, I think is because I came from old row and trying to make that work here. And so it opens that. So you go in into this conversation that has these ChatBubbles with this environment. And then when you leave, does that close that, do you have to open and close things or is everything handled inside that .environment?
**Andrew Morgan**: Everything should be handled in there that once in closing. So, top-level view that's been had that environment passed in, I think when that view is closed, then the the Realm should close instead.
**Kurt**: So, when you go back up and you no longer accessing the ChatBubblesView, that has the .environment appended to it, it's just going to close.
**Andrew Morgan**: Yeah. So let me switch to Share screen again. Yeah.
So, for example, here, when I open up this chat room it's passed in the
configuration for the ChatMessages Realm.
**Kurt**: Right. Because, it's got the conversation id, showing the conversation equals that. And so, yeah.
**Andrew Morgan**: Yeah. So, I've just opened a Realm for that particular partition, when I go back that Realm-
**Kurt**: As soon as you hit chats, just the fact that it's not in the view anymore, it's going to go away.
**Andrew Morgan**: Yeah, exactly. And then I opened another chat room and it's open to another Realm for different partition.
**Kurt**: That's a lot of boilerplate code that's gone, but just like the observing and man that's really good. Okay. And then my only other question was, because I've gone over this quite a few times, you answered one of my questions on the forum with a link to this. So I've been going through it. So are you going to update the... You've been updating the code to go with this new version, so now you're going to go back and update the blog post to show all that stuff.
**Andrew Morgan**: Yeah. Yeah. So, the current plan is to write a new blog post that explains what it takes to take advantage of the new features that were added on Monday. Because the other stuff still works. There's nothing wrong with the other stuff. And if, for example, you were using UIKit rather than SwiftUI, it is probably more useful than the current version of the app. We may change our mind at some point, but the current thinking is, let's have a new post that explains how to go from the old world to the new world.
**Kurt**: Okay. Great. Well, looking forward to it.
**Shane McAllister**: Super Kurt, thanks so much for jumping in on that as well too. We do appreciate it. I don't think I've missed any questions in the general chat. Please shout up or drop in there if I haven't. But really do appreciate everybody's time. I know we're coming up on time here now and the key things for me to point out is that this is going to be regular. We want to try and connect with our developer community as much as possible, and this is a very simple and easy way to get that set up and to have part Q&A and jumping back in then to showing demos and how we're doing it and back out again, et cetera as well. So this has been very interactive, and we do appreciate that. I think the key thing for us is that you join and you'll probably have, because you're here already is the Realm global community, but please share that with any other developers and any other friends that you have looking to join and know what we're doing in Realm.
**Shane McAllister**: Our Twitter handle @realm, that's where we're answering a lot of questions in our community forums as well, too. So post any technical questions that you might have in there, both the advocacy team and more importantly, the realm engineering team and man those forums quite regularly as well, too. So, there's plenty to go there and thank you so much, Andrew, you've just put up the slide I was trying to get forward the next ones. So, coming up we have Nicola and talking about Realm.NET for Xamarin best practices and roadmap. And so that's next week. So, we're really are trying to do this quite regularly. And then in March, we've got Jason back again, talking about Realm Swift UI, once again on Property wrappers on the MVI architecture there as well too. And you have a second slide Andrew was there the next too.
**Shane McAllister**: So moving beyond then further into March, there's another Android talk Kotlin multi-platform for modern mobile apps, they're on the 24th and then on moving into April, but we will probably intersperse these with others. So just sign up for Realm global community on live.mongodb.com, and you will get emails as soon as we add any of these new media events. Above all, I firstly, I'd like to say, thank you for Andrew for all his hard work and most importantly, then thank you to all of you for your attendance. And don't forget to fill in the swag form. We will get some swag out to you shortly, obviously shipping during COVID, et cetera, takes a little longer. So please be patient with us if you can, as well too. So, thank you everybody. We very much appreciate it. Thank you, Andrew, and look out for more meetups and events in the global Realm community coming up.
**Andrew Morgan**: Thanks everyone.
**Shane McAllister**: Take care everyone. Thank you. Bye-bye. | md | {
"tags": [
"Realm",
"Swift"
],
"pageDescription": "Missed Realm Sync in use — building and architecting a Mobile Chat App meetup event? Don't worry, you can catch up here.",
"contentType": "Tutorial"
} | Realm Sync in Use — Building and Architecting a Mobile Chat App Meetup | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-change-streams | created | # Java - Change Streams
## Updates
The MongoDB Java quickstart repository is available on GitHub.
### February 28th, 2024
- Update to Java 21
- Update Java Driver to 5.0.0
- Update `logback-classic` to 1.2.13
### November 14th, 2023
- Update to Java 17
- Update Java Driver to 4.11.1
- Update mongodb-crypt to 1.8.0
### March 25th, 2021
- Update Java Driver to 4.2.2.
- Added Client Side Field Level Encryption example.
### October 21st, 2020
- Update Java Driver to 4.1.1.
- The Java Driver logging is now enabled via the popular SLF4J API, so I added logback in the `pom.xml` and a configuration file `logback.xml`.
## Introduction
Change Streams were introduced in MongoDB 3.6. They allow applications to access real-time data changes without the complexity and risk of tailing the oplog.
Applications can use change streams to subscribe to all data changes on a single collection, a database, or an entire deployment, and immediately react to them. Because change streams use the aggregation framework, an application can also filter for specific changes or transform the notifications at will.
In this blog post, as promised in the first blog post of this series, I will show you how to leverage MongoDB Change Streams using Java.
## Getting Set Up
I will use the same repository as usual in this series. If you don't have a copy of it yet, you can clone it or just update it if you already have it:
``` sh
git clone https://github.com/mongodb-developer/java-quick-start
```
>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
## Change Streams
In this blog post, I will be working on the file called `ChangeStreams.java`, but Change Streams are **super** easy to work with.
I will show you 5 different examples to showcase some features of the Change Streams. For the sake of simplicity, I will only show you the pieces of code related to the Change Streams directly. You can find the entire code sample at the bottom of this blog post or in the GitHub repository.
For each example, you will need to start 2 Java programs in the correct order if you want to reproduce my examples.
- The first program is always the one that contains the Change Streams code.
- The second one will be one of the Java programs we already used in this Java blog post series. You can find them in the GitHub repository. They will generate MongoDB operations that we will observe in the Change Streams output.
### A simple Change Stream without filters
Let's start with the simplest Change Stream we can make:
``` java
MongoCollection<Grade> grades = db.getCollection("grades", Grade.class);
ChangeStreamIterable<Grade> changeStream = grades.watch();
changeStream.forEach((Consumer<ChangeStreamDocument<Grade>>) System.out::println);
```
As you can see, all we need is `myCollection.watch()`! That's it.
This returns a `ChangeStreamIterable` which, as indicated by its name, can be iterated to return our change events. Here, I'm iterating over my Change Stream to print my change event documents in the Java standard output.
I can also simplify this code like this:
``` java
grades.watch().forEach(printEvent());
private static Consumer<ChangeStreamDocument<Grade>> printEvent() {
return System.out::println;
}
```
I will reuse this functional interface in my following examples to ease the reading.
To run this example:
- Uncomment only the example 1 from the `ChangeStreams.java` file and start it in your IDE or a dedicated console using Maven in the root of your project.
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.ChangeStreams" -Dmongodb.uri="mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority"
```
- Start `MappingPOJO.java` in another console or in your IDE.
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.MappingPOJO" -Dmongodb.uri="mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority"
```
In MappingPOJO, we are doing 4 MongoDB operations:
- I'm creating a new `Grade` document with the `insertOne()` method,
- I'm searching for this `Grade` document using the `find()` method,
- I'm replacing this `Grade` entirely using the `findOneAndReplace()` method,
- and finally, I'm deleting this `Grade` using the `deleteOne()` method.
This is confirmed in the standard output from `MappingPOJO`:
``` javascript
Grade inserted.
Grade found: Grade{id=5e2b4a28c9e9d55e3d7dbacf, student_id=10003.0, class_id=10.0, scores=[Score{type='homework', score=50.0}]}
Grade replaced: Grade{id=5e2b4a28c9e9d55e3d7dbacf, student_id=10003.0, class_id=10.0, scores=[Score{type='homework', score=50.0}, Score{type='exam', score=42.0}]}
Grade deleted: AcknowledgedDeleteResult{deletedCount=1}
```
Let's check what we have in the standard output from `ChangeStreams.java` (prettified):
``` javascript
ChangeStreamDocument{
operationType=OperationType{ value='insert' },
resumeToken={ "_data":"825E2F3E40000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F3E400C47CF19D59361620004" },
namespace=sample_training.grades,
destinationNamespace=null,
fullDocument=Grade{
id=5e2f3e400c47cf19d5936162,
student_id=10003.0,
class_id=10.0,
scores=[ Score { type='homework', score=50.0 } ]
},
documentKey={ "_id":{ "$oid":"5e2f3e400c47cf19d5936162" } },
clusterTime=Timestamp{
value=6786711608069455873,
seconds=1580154432,
inc=1
},
updateDescription=null,
txnNumber=null,
lsid=null
}
ChangeStreamDocument{ operationType=OperationType{ value= 'replace' },
resumeToken={ "_data":"825E2F3E40000000032B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F3E400C47CF19D59361620004" },
namespace=sample_training.grades,
destinationNamespace=null,
fullDocument=Grade{
id=5e2f3e400c47cf19d5936162,
student_id=10003.0,
class_id=10.0,
scores=[ Score{ type='homework', score=50.0 }, Score{ type='exam', score=42.0 } ]
},
documentKey={ "_id":{ "$oid":"5e2f3e400c47cf19d5936162" } },
clusterTime=Timestamp{
value=6786711608069455875,
seconds=1580154432,
inc=3
},
updateDescription=null,
txnNumber=null,
lsid=null
}
ChangeStreamDocument{
operationType=OperationType{ value='delete' },
resumeToken={ "_data":"825E2F3E40000000042B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F3E400C47CF19D59361620004" },
namespace=sample_training.grades,
destinationNamespace=null,
fullDocument=null,
documentKey={ "_id":{ "$oid":"5e2f3e400c47cf19d5936162" } },
clusterTime=Timestamp{
value=6786711608069455876,
seconds=1580154432,
inc=4
},
updateDescription=null,
txnNumber=null,
lsid=null
}
```
As you can see, only 3 operations appear in the Change Stream:
- insert,
- replace,
- delete.
It was expected because the `find()` operation just reads a document from MongoDB. It doesn't change anything and thus doesn't generate an event in the Change Stream.
Now that we are done with the basic example, let's explore some features of the Change Streams.
Terminate the Change Stream program we started earlier and let's move on.
### A simple Change Stream filtering on the operation type
Now let's do the same thing but let's imagine that we are only interested in insert and delete operations.
``` java
List<Bson> pipeline = List.of(match(in("operationType", List.of("insert", "delete"))));
grades.watch(pipeline).forEach(printEvent());
```
As you can see here, I'm using the aggregation pipeline feature of Change Streams to filter down the change events I want to process.
Uncomment the example 2 in `ChangeStreams.java` and execute the program followed by `MappingPOJO.java`, just like we did earlier.
Here are the change events I'm receiving.
``` json
ChangeStreamDocument {operationType=OperationType {value= 'insert'},
resumeToken= {"_data": "825E2F4983000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F4983CC1D2842BFF555640004"},
namespace=sample_training.grades,
destinationNamespace=null,
fullDocument=Grade
{
id=5e2f4983cc1d2842bff55564,
student_id=10003.0,
class_id=10.0,
scores= [ Score {type= 'homework', score=50.0}]
},
documentKey= {"_id": {"$oid": "5e2f4983cc1d2842bff55564" }},
clusterTime=Timestamp {value=6786723990460170241, seconds=1580157315, inc=1 },
updateDescription=null,
txnNumber=null,
lsid=null
}
ChangeStreamDocument { operationType=OperationType {value= 'delete'},
resumeToken= {"_data": "825E2F4983000000042B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F4983CC1D2842BFF555640004"},
namespace=sample_training.grades,
destinationNamespace=null,
fullDocument=null,
documentKey= {"_id": {"$oid": "5e2f4983cc1d2842bff55564"}},
clusterTime=Timestamp {value=6786723990460170244, seconds=1580157315, inc=4},
updateDescription=null,
txnNumber=null,
lsid=null
}
```
This time, I'm only getting 2 events `insert` and `delete`. The `replace` event has been filtered out compared to the first example.
### Change Stream default behavior with update operations
Same as earlier, I'm filtering my change stream to keep only the update operations this time.
``` java
List<Bson> pipeline = List.of(match(eq("operationType", "update")));
grades.watch(pipeline).forEach(printEvent());
```
This time, follow these steps.
- uncomment the example 3 in `ChangeStreams.java`,
- if you never ran `Create.java`, run it. We are going to use these new documents in the next step.
- start `Update.java` in another console.
In your change stream console, you should see 13 update events. Here is the first one:
``` json
ChangeStreamDocument {operationType=OperationType {value= 'update'},
resumeToken= {"_data": "825E2FB83E000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCCE74AA51A0486763FE0004"},
namespace=sample_training.grades,
destinationNamespace=null,
fullDocument=null,
documentKey= {"_id": {"$oid": "5e27bcce74aa51a0486763fe"}},
clusterTime=Timestamp {value=6786845739898109953, seconds=1580185662, inc=1},
updateDescription=UpdateDescription {removedFields= [], updatedFields= {"comments.10": "You will learn a lot if you read the MongoDB blog!"}},
txnNumber=null,
lsid=null
}
```
As you can see, we are retrieving our update operation in the `updateDescription` field, but we are only getting the difference with the previous version of this document.
The `fullDocument` field is `null` because, by default, MongoDB only sends the difference to avoid overloading the change stream with potentially useless information.
Let's see how we can change this behavior in the next example.
### Change Stream with "Update Lookup"
For this part, uncomment the example 4 from `ChangeStreams.java` and execute the programs as above.
``` java
List<Bson> pipeline = List.of(match(eq("operationType", "update")));
grades.watch(pipeline).fullDocument(UPDATE_LOOKUP).forEach(printEvent());
```
I added the option `UPDATE_LOOKUP` this time, so we can also retrieve the entire document during an update operation.
Let's see again the first update in my change stream:
``` json
ChangeStreamDocument {operationType=OperationType {value= 'update'},
resumeToken= {"_data": "825E2FBBC1000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCCE74AA51A0486763FE0004"},
namespace=sample_training.grades,
destinationNamespace=null,
fullDocument=Grade
{
id=5e27bcce74aa51a0486763fe,
student_id=10002.0,
class_id=10.0,
scores=null
},
documentKey= {"_id": {"$oid": "5e27bcce74aa51a0486763fe" }},
clusterTime=Timestamp {value=6786849601073709057, seconds=1580186561, inc=1 },
updateDescription=UpdateDescription {removedFields= [], updatedFields= {"comments.11": "You will learn a lot if you read the MongoDB blog!"}},
txnNumber=null,
lsid=null
}
```
>Note: The `Update.java` program updates a made-up field "comments" that doesn't exist in my POJO `Grade` which represents the original schema for this collection. Thus, the field doesn't appear in the output as it's not mapped.
If I want to see this `comments` field, I can use a `MongoCollection<Document>` that isn't mapped automatically to my `Grade.java` POJO.
``` java
MongoCollection grades = db.getCollection("grades");
List pipeline = List.of(match(eq("operationType", "update")));
grades.watch(pipeline).fullDocument(UPDATE_LOOKUP).forEach((Consumer>) System.out::println);
```
Then this is what I get in my change stream:
``` json
ChangeStreamDocument {operationType=OperationType {value= 'update'},
resumeToken= {"_data": "825E2FBD89000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCCE74AA51A0486763FE0004"},
namespace=sample_training.grades,
destinationNamespace=null,
fullDocument=Document {
{
_id=5e27bcce74aa51a0486763fe,
class_id=10.0,
student_id=10002.0,
comments= [ You will learn a lot if you read the MongoDB blog!, [...], You will learn a lot if you read the MongoDB blog!]
}
},
documentKey= {"_id": {"$oid": "5e27bcce74aa51a0486763fe"}},
clusterTime=Timestamp {value=6786851559578796033, seconds=1580187017, inc=1},
updateDescription=UpdateDescription {removedFields= [], updatedFields= {"comments.13": "You will learn a lot if you read the MongoDB blog!"}},
txnNumber=null,
lsid=null
}
```
I have shortened the `comments` field to keep it readable but it contains 14 times the same comment in my case.
The full document we are retrieving here during our update operation is the document **after** the update has occurred. Read more about this in our documentation.
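If you also need the version of the document from **before** the update, more recent MongoDB servers (6.0+) and Java drivers (4.7+) can return pre-images as well. This isn't part of the quick start repository, so treat the snippet below as a minimal sketch: it assumes the `grades` collection has the `changeStreamPreAndPostImages` option enabled and that `FullDocumentBeforeChange` is imported from `com.mongodb.client.model.changestream`.

``` java
// Prerequisite (MongoDB 6.0+): enable pre/post-images on the collection, e.g. in mongosh:
// db.runCommand({ collMod: "grades", changeStreamPreAndPostImages: { enabled: true } })
List<Bson> pipeline = List.of(match(eq("operationType", "update")));
grades.watch(pipeline)
      .fullDocument(UPDATE_LOOKUP)                                       // document after the update
      .fullDocumentBeforeChange(FullDocumentBeforeChange.WHEN_AVAILABLE) // document before the update, when captured
      .forEach(event -> System.out.println(event.getFullDocumentBeforeChange()));
```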
### Change Streams are resumable
In this final example 5, I have simulated an error and I'm restarting my Change Stream from a `resumeToken` I got from a previous operation in my Change Stream.
>It's important to note that a change stream will resume itself automatically in the face of an "incident". Generally, the only reason that an application needs to restart the change stream manually from a resume token is if there is an incident in the application itself rather than the change stream (e.g. an operator has decided that the application needs to be restarted).
``` java
private static void exampleWithResumeToken(MongoCollection<Grade> grades) {
List<Bson> pipeline = List.of(match(eq("operationType", "update")));
ChangeStreamIterable<Grade> changeStream = grades.watch(pipeline);
MongoChangeStreamCursor<ChangeStreamDocument<Grade>> cursor = changeStream.cursor();
System.out.println("==> Going through the stream a first time & record a resumeToken");
int indexOfOperationToRestartFrom = 5;
int indexOfIncident = 8;
int counter = 0;
BsonDocument resumeToken = null;
while (cursor.hasNext() && counter != indexOfIncident) {
ChangeStreamDocument<Grade> event = cursor.next();
if (indexOfOperationToRestartFrom == counter) {
resumeToken = event.getResumeToken();
}
System.out.println(event);
counter++;
}
System.out.println("==> Let's imagine something wrong happened and I need to restart my Change Stream.");
System.out.println("==> Starting from resumeToken=" + resumeToken);
assert resumeToken != null;
grades.watch(pipeline).resumeAfter(resumeToken).forEach(printEvent());
}
```
For this final example, proceed the same as earlier. Uncomment part 5 (which just calls the method above) and start `ChangeStreams.java`, then `Update.java`.
This is the output you should get:
``` json
==> Going through the stream a first time & record a resumeToken
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCCE74AA51A0486763FE0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcce74aa51a0486763fe"}}, clusterTime=Timestamp{value=6786856975532556289, seconds=1580188278, inc=1}, updateDescription=UpdateDescription{removedFields=], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000022B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBA0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbba"}}, clusterTime=Timestamp{value=6786856975532556290, seconds=1580188278, inc=2}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.15": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000032B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBB0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbbb"}}, clusterTime=Timestamp{value=6786856975532556291, seconds=1580188278, inc=3}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000042B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBC0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbbc"}}, clusterTime=Timestamp{value=6786856975532556292, seconds=1580188278, inc=4}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000052B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBD0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbbd"}}, clusterTime=Timestamp{value=6786856975532556293, seconds=1580188278, inc=5}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000062B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBE0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbbe"}}, clusterTime=Timestamp{value=6786856975532556294, seconds=1580188278, inc=6}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000072B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBF0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbbf"}}, clusterTime=Timestamp{value=6786856975532556295, seconds=1580188278, inc=7}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000082B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC00004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbc0"}}, clusterTime=Timestamp{value=6786856975532556296, seconds=1580188278, inc=8}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
==> Let's imagine something wrong happened and I need to restart my Change Stream.
==> Starting from resumeToken={"_data": "825E2FC276000000062B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBE0004"}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000072B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBF0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbbf"}}, clusterTime=Timestamp{value=6786856975532556295, seconds=1580188278, inc=7}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000082B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC00004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbc0"}}, clusterTime=Timestamp{value=6786856975532556296, seconds=1580188278, inc=8}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC276000000092B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC10004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbc1"}}, clusterTime=Timestamp{value=6786856975532556297, seconds=1580188278, inc=9}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC2760000000A2B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC20004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbc2"}}, clusterTime=Timestamp{value=6786856975532556298, seconds=1580188278, inc=10}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC2760000000B2B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC30004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbc3"}}, clusterTime=Timestamp{value=6786856975532556299, seconds=1580188278, inc=11}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"comments.14": "You will learn a lot if you read the MongoDB blog!"}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC2760000000D2B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC8F94B5117D894CBB90004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc8f94b5117d894cbb9"}}, clusterTime=Timestamp{value=6786856975532556301, seconds=1580188278, inc=13}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"scores.0.score": 904745.0267635228, "x": 150}}, txnNumber=null, lsid=null}
ChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={"_data": "825E2FC2760000000F2B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBA0004"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={"_id": {"$oid": "5e27bcc9f94b5117d894cbba"}}, clusterTime=Timestamp{value=6786856975532556303, seconds=1580188278, inc=15}, updateDescription=UpdateDescription{removedFields=[], updatedFields={"scores.0.score": 2126144.0353088505, "x": 150}}, txnNumber=null, lsid=null}
```
As you can see here, I was able to stop reading my Change Stream and, from the `resumeToken` I collected earlier, I can start a new Change Stream from this point in time.
## Final Code
`ChangeStreams.java`:
``` java
package com.mongodb.quickstart;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.*;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import com.mongodb.quickstart.models.Grade;
import org.bson.BsonDocument;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;
import org.bson.conversions.Bson;
import java.util.List;
import java.util.function.Consumer;
import static com.mongodb.client.model.Aggregates.match;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Filters.in;
import static com.mongodb.client.model.changestream.FullDocument.UPDATE_LOOKUP;
import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;
public class ChangeStreams {
public static void main(String[] args) {
ConnectionString connectionString = new ConnectionString(System.getProperty("mongodb.uri"));
CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);
MongoClientSettings clientSettings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.codecRegistry(codecRegistry)
.build();
try (MongoClient mongoClient = MongoClients.create(clientSettings)) {
MongoDatabase db = mongoClient.getDatabase("sample_training");
MongoCollection grades = db.getCollection("grades", Grade.class);
List pipeline;
// Only uncomment one example at a time. Follow instructions for each individually then kill all remaining processes.
/** => Example 1: print all the write operations.
* => Start "ChangeStreams" then "MappingPOJOs" to see some change events.
*/
grades.watch().forEach(printEvent());
/** => Example 2: print only insert and delete operations.
* => Start "ChangeStreams" then "MappingPOJOs" to see some change events.
*/
// pipeline = List.of(match(in("operationType", List.of("insert", "delete"))));
// grades.watch(pipeline).forEach(printEvent());
/** => Example 3: print only updates without fullDocument.
* => Start "ChangeStreams" then "Update" to see some change events (start "Create" before if not done earlier).
*/
// pipeline = List.of(match(eq("operationType", "update")));
// grades.watch(pipeline).forEach(printEvent());
/** => Example 4: print only updates with fullDocument.
* => Start "ChangeStreams" then "Update" to see some change events.
*/
// pipeline = List.of(match(eq("operationType", "update")));
// grades.watch(pipeline).fullDocument(UPDATE_LOOKUP).forEach(printEvent());
/**
* => Example 5: iterating using a cursor and a while loop + remembering a resumeToken then restart the Change Streams.
* => Start "ChangeStreams" then "Update" to see some change events.
*/
// exampleWithResumeToken(grades);
}
}
private static void exampleWithResumeToken(MongoCollection<Grade> grades) {
List<Bson> pipeline = List.of(match(eq("operationType", "update")));
ChangeStreamIterable<Grade> changeStream = grades.watch(pipeline);
MongoChangeStreamCursor<ChangeStreamDocument<Grade>> cursor = changeStream.cursor();
System.out.println("==> Going through the stream a first time & record a resumeToken");
int indexOfOperationToRestartFrom = 5;
int indexOfIncident = 8;
int counter = 0;
BsonDocument resumeToken = null;
while (cursor.hasNext() && counter != indexOfIncident) {
ChangeStreamDocument<Grade> event = cursor.next();
if (indexOfOperationToRestartFrom == counter) {
resumeToken = event.getResumeToken();
}
System.out.println(event);
counter++;
}
System.out.println("==> Let's imagine something wrong happened and I need to restart my Change Stream.");
System.out.println("==> Starting from resumeToken=" + resumeToken);
assert resumeToken != null;
grades.watch(pipeline).resumeAfter(resumeToken).forEach(printEvent());
}
private static Consumer<ChangeStreamDocument<Grade>> printEvent() {
return System.out::println;
}
}
```
>Remember to uncomment only one Change Stream example at a time.
## Wrapping Up
Change Streams are very easy to use and set up in MongoDB. They are the key to any real-time processing system.
The only remaining problem here is how to get this in production correctly. Change Streams are basically an infinite loop, processing an infinite stream of events. Multiprocessing is, of course, a must-have for this kind of setup, especially if your processing time is greater than the time separating 2 events.
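If you do run a change stream yourself in production, one common pattern — not shown in the quick start repository — is to wrap the cursor in a retry loop that remembers the last resume token, so the stream can pick up where it left off after a transient failure. Here is a minimal sketch reusing the `grades` collection and `pipeline` from the examples above; `processEvent`, `saveToken`, and `loadToken` are hypothetical helpers you would implement yourself (for example, persisting the token to a dedicated collection).

``` java
BsonDocument resumeToken = loadToken(); // hypothetical helper: read the last persisted token, or null
while (true) {
    try {
        ChangeStreamIterable<Grade> stream = (resumeToken == null)
                ? grades.watch(pipeline)
                : grades.watch(pipeline).resumeAfter(resumeToken);
        for (ChangeStreamDocument<Grade> event : stream) {
            processEvent(event);                  // hypothetical business logic
            resumeToken = event.getResumeToken();
            saveToken(resumeToken);               // hypothetical helper: persist the token
        }
    } catch (MongoException e) {
        // transient error: log it, then loop around and resume from the last saved token
    }
}
```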
Correctly scaling a Change Stream data processing pipeline can be tricky. That's why you can also implement this easily using MongoDB Triggers in MongoDB Realm.
You can check out my MongoDB Realm sample application if you want to see a real example with several Change Streams in action.
>If you want to learn more and deepen your knowledge faster, I recommend you check out the M220J: MongoDB for Java Developers training available for free on MongoDB University.
In the next blog post, I will show you multi-document ACID transactions in Java.
| md | {
"tags": [
"Java",
"MongoDB"
],
"pageDescription": "Learn how to use the Change Streams using the MongoDB Java Driver.",
"contentType": "Quickstart"
} | Java - Change Streams | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/psc-interconnect-and-global-access | created | # Introducing PSC Interconnect and Global Access for MongoDB Atlas
In an era of widespread digitalization, businesses operating in critical sectors such as healthcare, banking and finance, and government face an ever-increasing threat of data breaches and cyber-attacks. Ensuring the security of data is no longer just a matter of compliance but has become a top priority for businesses to safeguard their reputation, customer trust, and financial stability. However, maintaining the privacy and security of sensitive data while still enabling seamless access to services within a virtual private cloud (VPC) is a complex challenge that requires a robust solution. That’s where Google Cloud Private Service Connect (PSC) for MongoDB Atlas comes in. As a cloud networking solution, it provides secure access to services within a VPC using private IP addresses. PSC is also a powerful tool to protect businesses from the ever-evolving threat landscape of data security.
## What is PSC (Private Service Connect)?
PSC simplifies how services are securely and privately consumed. It allows easy implementation of private endpoints for service consumers to connect privately to service producers across organizations and eliminates the need for virtual private cloud peering. PSC reduces the effort needed to set up private connectivity between MongoDB Atlas and a Google Cloud consumer project.
MongoDB announced the support for Google Cloud Private Service Connect (PSC) in November 2021. PSC was added as a new option to access MongoDB securely from Google Cloud without exposing the customer traffic to the public internet. With PSC, customers will be able to achieve one-way communication with MongoDB. In this article, we are going to introduce the new features of PSC and MongoDB integration.
## PSC Interconnect support
Connecting to MongoDB from on-prem machines is made easy with PSC Interconnect support. PSC Interconnect allows traffic from on-prem devices to reach PSC endpoints in the same region as the Interconnect. This is also a transparent update with no API changes.
No additional action is required from customers to start using their Interconnect with PSC. Once Interconnect support has been rolled out to the customer project, traffic from the Interconnect will be able to reach PSC endpoints and, in turn, access the data in MongoDB through the service attachments.
## Google Cloud multi-region support
Private Service Connect now provides multi-region support for MongoDB Atlas clusters, enabling customers to connect to MongoDB instances in different regions securely. With this feature, customers can ensure high availability even in case of a regional failover. To achieve this, customers need to set up the service attachments in all the regions that the cluster will have its nodes on. Each of these service attachments is in turn connected to a Google Cloud service endpoint.
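To make this more concrete, the consumer side of a PSC connection is created by reserving an internal IP address and pointing a forwarding rule at the service attachment URI for that region. The commands below are only an illustrative sketch — the names, region, network, subnet, and service attachment URI are placeholders, and Atlas provides a setup script that repeats this pattern for the number of endpoints each region requires — so follow the Atlas UI instructions for the authoritative steps.

```bash
# Reserve an internal IP address for the endpoint (placeholder names and values)
gcloud compute addresses create atlas-psc-ip \
    --region=us-central1 \
    --subnet=my-subnet

# Point a forwarding rule at the Atlas-provided service attachment to create the endpoint
gcloud compute forwarding-rules create atlas-psc-endpoint \
    --region=us-central1 \
    --network=my-vpc \
    --address=atlas-psc-ip \
    --target-service-attachment=projects/<atlas-project>/regions/us-central1/serviceAttachments/<attachment-name>
```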
## MongoDB multi-cloud support
Customers whose deployments span multiple regions across multiple clouds can now use PSC to connect to the Google Cloud nodes in their deployment. The additional requirement is to set up the corresponding private link endpoints for the nodes in the other clouds so that connections can be made to them from their respective cloud networks.
## Wrap-up
In conclusion, Private Service Connect has come a long way from its initial release. PSC for MongoDB Atlas now supports connections from on-prem environments over Interconnect, and it can securely reach multi-region clusters spread across Google Cloud regions as well as multi-cloud clusters using global access.
1. Learn how to set up PSC multi region for MongoDB Atlas with codelabs tutorials.
2. You can subscribe to MongoDB Atlas using Google Cloud Marketplace.
3. You can sign up for MongoDB using the registration page.
4. Learn more about Private Service Connect.
5. Read the PSC announcement for MongoDB. | md | {
"tags": [
"Atlas",
"Google Cloud"
],
"pageDescription": "PSC is a cloud networking solution that provides secure access to services within a VPC. Read about the newly announced support for PSC Interconnect and Global access for MongoDB.",
"contentType": "Article"
} | Introducing PSC Interconnect and Global Access for MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/nairobi-stock-exchange-web-scrapper | created | # Nairobi Stock Exchange Web Scraper
Looking to build a web scraper using Python and MongoDB for the Nairobi Stock Exchange? Our comprehensive tutorial provides a step-by-step guide on how to set up the development environment, create a Scrapy spider, parse the website, and store the data in MongoDB.
We also cover best practices for working with MongoDB and tips for troubleshooting common issues. Plus, get a sneak peek at using MongoDB Atlas Charts for data visualization. Finally, enable text notifications using the Africa's Talking API (feel free to switch to your preferred provider). Get all the code on GitHub and streamline your workflow today!
## Prerequisites
The prerequisites below are verified to work on Linux. Implementation on other operating systems may differ. Kindly check installation instructions.
* Python 3.7 or higher and pip installed.
* A MongoDB Atlas account.
* Git installed.
* GitHub account.
* Code editor of your choice. I will be using Visual Studio Code.
* An Africa's Talking account, if you plan to implement text notifications.
## Table of contents
- What is web scraping?
- Project layout
- Project setup
- Starting a Scrapy project
- Creating a spider
- Running the scraper
- Enabling text alerts
- Data in MongoDB Atlas
- Charts in MongoDB Atlas
- CI/CD with GitHub Actions
- Conclusion
## What is web scraping?
Web scraping is the process of extracting data from websites. It’s a form of data mining, which automates the retrieval of data from the web. Web scraping is a technique to automatically access and extract large amounts of information from a website or platform, which can save a huge amount of time and effort. You can save this data locally on your computer or to a database in the cloud.
### What is Scrapy?
Scrapy is a free and open-source web-crawling framework written in Python. It extracts the data you need from websites in a fast and simple yet extensible way. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
### What is MongoDB Atlas?
MongoDB Atlas is a fully managed cloud database platform that hosts your data on AWS, Google Cloud, or Azure. It’s a fully managed database as a service (DBaaS) that provides a highly available, globally distributed, and scalable database infrastructure. Read our tutorial to get started with a free instance of MongoDB Atlas.
You can also head to our docs to learn about limiting access to your cluster to specified IP addresses. This step enhances security by following best practices.
## Project layout
Below is a diagram that provides a high-level overview of the project.
The diagram above shows how the project runs as well as the overall structure. Let's break it down:
* The Scrapy project (spiders) crawls the data from the afx website (a data portal for stock data).
* Since Scrapy is a full framework, we use it to extract and clean the data.
* The data is sent to MongoDB Atlas for storage.
* From here, we can easily connect it to MongoDB Charts for visualizations.
* We package our web scraper using Docker for easy deployment to the cloud.
* The code is hosted on GitHub and we create a CI/CD pipeline using GitHub Actions.
* Finally, we have a text notification script that runs once the set conditions are met.
## Project setup
Let's set up our project. First, we'll create a new directory for our project. Open your terminal and navigate to the directory where you want to create the project. Then, run the following command to create a new directory and change into it.
```bash
mkdir nse-stock-scraper && cd nse-stock-scraper
```
Next, we'll create a virtual environment for our project. This will help us isolate our project dependencies from the rest of our system. Run the following command to create a virtual environment. We are using the built-in Python module ``venv`` to create the virtual environment. Activate the virtual environment by running the ``activate`` script in the ``bin`` directory.
```bash
python3 -m venv venv
source venv/bin/activate
```
Now, we'll install the required dependencies. We'll use ``pip`` to install the dependencies. Run the following command to install the required dependencies:
```bash
pip install scrapy "pymongo[srv]" dnspython python-dotenv beautifulsoup4
pip freeze > requirements.txt
```
## Starting a Scrapy project
Scrapy is a full framework. Thus, it has an opinionated view on the structure of its projects. It comes with a CLI tool to get started quickly. Now, we'll start a new Scrapy project. Run the following command.
```bash
scrapy startproject nse_scraper .
```
This will create a new directory with the name `nse_scraper` and a few files. The ``nse_scraper`` directory is the actual Python package for our project. The files are as follows:
* ``items.py`` — This file contains the definition of the items that we will be scraping.
* ``middlewares.py`` — This file contains the definition of the middlewares that we will be using.
* ``pipelines.py`` — This contains the definition of the pipelines that we will be using.
* ``settings.py`` — This contains the definition of the settings that we will be using.
* ``spiders`` — This directory contains the spiders that we will be using.
* ``scrapy.cfg`` — This file contains the configuration of the project.
## Creating a spider
A spider is a class that defines how a certain site will be scraped. It must subclass ``scrapy.Spider`` and define the initial requests to make — and optionally, how to follow links in the pages and parse the downloaded page content to extract data.
We'll create a spider to scrape the afx website. Run the following command to create a spider. Change into the ``nse_scraper`` folder that is inside our root folder.
```bash
cd nse_scraper
scrapy genspider afx_scraper afx.kwayisi.org
```
This will create a new file ``afx_scraper.py`` in the ``spiders`` directory. Open the file and **replace the contents** with the following code:
```
from scrapy.spiders import CrawlSpider, Rule
from bs4 import BeautifulSoup
from scrapy.linkextractors import LinkExtractor
class AfxScraperSpider(CrawlSpider):
name = 'afx_scraper'
allowed_domains = ['afx.kwayisi.org']
start_urls = ['https://afx.kwayisi.org/nse/']
user_agent = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36'
custom_settings = {
'DEPTH_LIMIT': 1,
'CLOSESPIDER_PAGECOUNT': 1
}
rules = (
Rule(LinkExtractor(deny='.html'), callback='parse_item', follow=False),
Rule(callback='parse_item'),
)
def parse_item(self, response, **kwargs):
print("Processing: " + response.url)
# Extract data using css selectors
row = response.css('table tbody tr ')
# use XPath and regular expressions to extract stock name and price
raw_ticker_symbol = row.xpath('td[1]').re('[A-Z].*')
raw_stock_name = row.xpath('td[2]').re('[A-Z].*')
raw_stock_price = row.xpath('td[4]').re('[0-9].*')
raw_stock_change = row.xpath('td[5]').re('[0-9].*')
# create a function to remove html tags from the returned list
def clean_stock_symbol(raw_symbol):
clean_symbol = BeautifulSoup(raw_symbol, "lxml").text
clean_symbol = clean_symbol.split('>')
if len(clean_symbol) > 1:
return clean_symbol[1]
else:
return None
def clean_stock_name(raw_name):
clean_name = BeautifulSoup(raw_name, "lxml").text
clean_name = clean_name.split('>')
if len(clean_name[0]) > 2:
return clean_name[0]
else:
return None
def clean_stock_price(raw_price):
clean_price = BeautifulSoup(raw_price, "lxml").text
return clean_price
# Use list comprehension to unpack required values
stock_name = [clean_stock_name(r_name) for r_name in raw_stock_name]
stock_price = [clean_stock_price(r_price) for r_price in raw_stock_price]
ticker_symbol = [clean_stock_symbol(r_symbol) for r_symbol in raw_ticker_symbol]
stock_change = [clean_stock_price(raw_change) for raw_change in raw_stock_change]
if ticker_symbol is not None:
cleaned_data = zip(ticker_symbol, stock_name, stock_price)
for item in cleaned_data:
scraped_data= {
'ticker_symbol': item[0],
'stock_name': item[1],
'stock_price': item[2],
'stock_change': stock_change }
# yield info to scrapy
yield scraped_data
```
Let's break down the code above. First, we import the required modules and classes. In our case, we'll be using _CrawlSpider_ and _Rule_ from _scrapy.spiders_ and _LinkExtractor_ from _scrapy.linkextractors_. We'll also be using _BeautifulSoup_ from bs4 to clean the scraped data.
The `AfxScraperSpider` class inherits from CrawlSpider, which is a subclass of Spider. The Spider class is the core of Scrapy. It defines how a certain site (or a group of sites) will be scraped. It contains an initial list of URLs to download, and rules to follow links in the pages and extract data from them. In this case, we'll be using CrawlSpider to crawl the website and follow links to the next page.
The name attribute defines the name of the spider. This name must be unique within a project — that is, you can’t set the same name for different spiders. It will be used to identify the spider when you run it from the command line.
The allowed_domains attribute is a list of domains that this spider is allowed to crawl. If it isn’t specified, no domain restrictions will be in place. This is useful if you want to restrict the crawling to a particular domain (or subdomain) while scraping multiple domains in the same project. You can also use it to avoid crawling the same domain multiple times when using multiple spiders.
The start_urls attribute is a list of URLs where the spider will begin to crawl from. If it isn't defined, the spider has no initial requests, unless you override the start_requests method to generate them yourself. In our case, we point it at the NSE listing page.
The user_agent attribute is used to set the user agent for the spider. This is useful when you want to scrape a website that blocks spiders that don't have a user agent. In this case, we'll be using a user agent for Chrome. We can also set the user agent in the settings.py file. This is key to giving the target website the illusion that we are a real browser.
The custom_settings attribute is used to set custom settings for the spider. In this case, we'll be setting _DEPTH_LIMIT_ to 1 and _CLOSESPIDER_PAGECOUNT_ to 1. The DEPTH_LIMIT setting limits the maximum depth the spider is allowed to crawl for any site; depth refers to the number of page(s) the spider is allowed to follow from the start URL. The CLOSESPIDER_PAGECOUNT setting closes the spider after crawling the specified number of pages.
The rules attribute defines the rules for the spider. We'll be using the Rule class to define the rules for extracting links from a page and processing them with a callback, or following them and scraping them using another spider.
The Rule class takes a LinkExtractor object as its first argument. The LinkExtractor class is used to extract links from web pages. It can extract links matching specific regular expressions or using specific attributes, such as href or src.
The deny argument prevents the extraction of links that match the specified expression. The callback argument specifies the callback function to be called on the response of the extracted links.
The follow argument specifies whether the extracted links should be followed or not. In our first rule, we deny `.html` links and process everything else with `parse_item` without following further links.
We then define a `parse_item` function that takes the response as an argument. The `parse_item` function parses the response and extracts the required data. We'll use the `xpath` method to extract the required data. The `xpath` method extracts data using XPath expressions.
We get xpath expressions by inspecting the target website. Basically, we right-click on the element we want to extract data from and click on `inspect`. This will open the developer tools. We then click on the `copy` button and select `copy xpath`. Paste the xpath expression in the `xpath` method.
The `re` method extracts data using regular expressions. We then use the `clean_stock_symbol`, `clean_stock_name`, and `clean_stock_price` functions to clean the extracted data. Use the `zip` function to combine the extracted data into a single list. Then, use a `for` loop to iterate through the list and yield the data to Scrapy.
The clean_stock_symbol, clean_stock_name, and clean_stock_price functions clean the extracted data. The clean_stock_symbol function takes the raw symbol as an argument and uses the BeautifulSoup class to clean it. It then splits the cleaned symbol into a list with the split method. If the list has more than one element, the function returns the second item; otherwise, it returns None.
The clean_stock_name function takes the raw name as an argument, cleans it with the BeautifulSoup class, and splits the cleaned name into a list. If the list has more than one element, it returns the first item; otherwise, it returns None. The clean_stock_price function takes the raw price as an argument, cleans it with the BeautifulSoup class, and returns the cleaned price.
The clean_stock_change function takes the raw change as an argument, cleans it with the BeautifulSoup class, and returns the cleaned value.
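A rough sketch of these helpers is shown below; the split delimiter and list indexes are assumptions based on the description above, so adjust them to match the raw strings your XPath expressions return:
```
# Sketch only: cleaning helpers as described above. The whitespace split is an
# assumption; the real delimiter depends on the raw strings being cleaned.
from bs4 import BeautifulSoup

def clean_stock_symbol(raw_symbol):
    cleaned = BeautifulSoup(raw_symbol, "html.parser").get_text()
    parts = cleaned.split()
    return parts[1] if len(parts) > 1 else None

def clean_stock_name(raw_name):
    cleaned = BeautifulSoup(raw_name, "html.parser").get_text()
    parts = cleaned.split()
    return parts[0] if len(parts) > 1 else None

def clean_stock_price(raw_price):
    return BeautifulSoup(raw_price, "html.parser").get_text()

def clean_stock_change(raw_change):
    return BeautifulSoup(raw_change, "html.parser").get_text()
```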
### Updating the items.py file
Inside the root of our project, we have the ``items.py`` file. An item is a container that holds the scraped data. It works similarly to a dictionary, with additional features such as declaring its fields and customizing its export. We'll subclass Scrapy's Item class, which provides the general mechanism for handling data from scraped pages, and use the Field class to declare our fields.
Add the following code to the _nse_scraper/items.py_ file:
```
from scrapy.item import Item, Field
class NseScraperItem(Item):
    # define the fields for your item here like:
    ticker_symbol = Field()
    stock_name = Field()
    stock_price = Field()
    stock_change = Field()
```
The NseScraperItem class defines our item. The ticker_symbol, stock_name, stock_price, and stock_change fields store the ticker symbol, stock name, stock price, and stock change, respectively. You can read more about items in the Scrapy documentation.
### Updating the pipelines.py file
Inside the root of our project, we have the ``pipelines.py`` file. A pipeline is a component that processes the items scraped by the spiders. It can clean, validate, and store the scraped data in a database. In Scrapy, an item pipeline is a plain Python class that implements a few well-known methods, such as process_item, open_spider, and close_spider.
Add the following code to the ``pipelines.py`` file:
```
# pipelines.py
# useful for handling different item types with a single interface
import pymongo
from scrapy.exceptions import DropItem
from .items import NseScraperItem
class NseScraperPipeline:
    collection = "stock_data"

    def __init__(self, mongodb_uri, mongo_db):
        self.db = None
        self.client = None
        self.mongodb_uri = mongodb_uri
        self.mongo_db = mongo_db
        if not self.mongodb_uri:
            raise ValueError("MongoDB URI not set")
        if not self.mongo_db:
            raise ValueError("Mongo DB not set")

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongodb_uri=crawler.settings.get("MONGODB_URI"),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'nse_data')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongodb_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def clean_stock_data(self, item):
        if item['ticker_symbol'] is None:
            raise DropItem('Missing ticker symbol in %s' % item)
        elif item['stock_name'] == 'None':
            raise DropItem('Missing stock name in %s' % item)
        elif item['stock_price'] == 'None':
            raise DropItem('Missing stock price in %s' % item)
        else:
            return item

    def process_item(self, item, spider):
        """
        process item and store to database
        """
        clean_stock_data = self.clean_stock_data(item)
        data = dict(NseScraperItem(clean_stock_data))
        print(data)
        # print(self.db[self.collection].insert_one(data).inserted_id)
        self.db[self.collection].insert_one(data)
        return item
```
First, we import the pymongo module and the DropItem class from scrapy.exceptions. Next, we import the NseScraperItem class from the items module.
The NseScraperPipeline class defines our pipeline. The collection variable stores the name of the collection we'll be using. The __init__ method initializes the pipeline with the mongodb_uri and mongo_db arguments, and raises a ValueError if either of them is not set.
The from_crawler class method builds an instance of the pipeline from the crawler settings. The open_spider method runs when the spider starts: it creates a MongoClient, stores it in the client variable, and uses it to connect to the database, which it stores in the db variable.
The close_spider method closes the client when the spider finishes. The clean_stock_data method validates the scraped item: if the ticker symbol, stock name, or stock price is missing, it raises DropItem; otherwise, it returns the item.
The process_item method processes the scraped data. It cleans the item with clean_stock_data, converts it to a dictionary with the dict function, prints it to the console, inserts it into the database, and returns the item.
### Updating the `settings.py` file
Inside the root of our project, we have the `settings.py` file. This file stores our project settings. Add the following code to the `settings.py` file:
```
# settings.py
import os
from dotenv import load_dotenv
load_dotenv()
BOT_NAME = 'nse_scraper'
SPIDER_MODULES = ['nse_scraper.spiders']
NEWSPIDER_MODULE = 'nse_scraper.spiders'
# MONGODB SETTINGS
MONGODB_URI = os.getenv("MONGODB_URI")
MONGO_DATABASE = os.getenv("MONGO_DATABASE")
ITEM_PIPELINES = {
'nse_scraper.pipelines.NseScraperPipeline': 300,
}
LOG_LEVEL = "INFO"
# USER_AGENT = 'nse_scraper (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
}
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 360
HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```
First, we import `os` and the `load_dotenv` function from the `dotenv` package. We then call `load_dotenv`, which loads the environment variables from the `.env` file.
Next, we set `BOT_NAME` to `nse_scraper` and point `SPIDER_MODULES` and `NEWSPIDER_MODULE` to `nse_scraper.spiders`. We then create the `MONGODB_URI` variable and set it to the `MONGODB_URI` environment variable, and the `MONGO_DATABASE` variable and set it to the `MONGO_DATABASE` environment variable.
After that, we register our pipeline by adding `nse_scraper.pipelines.NseScraperPipeline` to `ITEM_PIPELINES`, set `LOG_LEVEL` to `INFO`, and define `DEFAULT_REQUEST_HEADERS` as a dictionary. Finally, we enable HTTP caching by setting `HTTPCACHE_ENABLED` to `True`, `HTTPCACHE_EXPIRATION_SECS` to `360`, `HTTPCACHE_DIR` to `httpcache`, and `HTTPCACHE_STORAGE` to `scrapy.extensions.httpcache.FilesystemCacheStorage`.
## Project structure
The project structure is as follows:
```
nse_stock_scraper
├── nse_scraper
│   ├── __init__.py
│   ├── items.py
│   ├── middlewares.py
│   ├── pipelines.py
│   ├── settings.py
│   ├── stock_notification.py
│   └── spiders
│       ├── __init__.py
│       └── afx_scraper.py
├── README.md
├── LICENSE
├── requirements.txt
├── scrapy.cfg
├── .gitignore
└── .env
```
## Running the scraper
To run the scraper, we'll need to open a terminal and navigate to the project directory. We'll then need to activate the virtual environment if it's not already activated. We can do this by running the following command:
```bash
source venv/bin/activate
```
Create a `.env` file in the root of the project (in /nse_scraper/). Add the following code to the `.env` file:
```
MONGODB_URI=mongodb+srv://
MONGO_DATABASE=
at_username=
at_api_key=
mobile_number=
```
Add your **MongoDB URI**, database name, Africas Talking username, API key, and mobile number to the `.env` file. You can use the free tier of MongoDB Atlas. Get your URI from the Atlas dashboard, under the `Connect` button. It should look something like this:
```
mongodb+srv://:@.mongodb.net/?retryWrites=true&w=majority
```
We need to run the following command to run the scraper while in the project folder (`nse_scraper/`):
```
scrapy crawl afx_scraper
```
## Enabling text alerts (using Africas Talking)
Install the `africastalking` module by running the following command in the terminal:
```
pip install africastalking
```
Create a new file called `stock_notification.py` in the `nse_scraper` directory. Add the following code to the stock_notification.py file:
```
# stock_notification.py
import africastalking as at
import os
from dotenv import load_dotenv
import pymongo
load_dotenv()
at_username = os.getenv("at_username")
at_api_key = os.getenv("at_api_key")
mobile_number = os.getenv("mobile_number")
mongo_uri = os.getenv("MONGODB_URI")
# Initialize the Africas sdk py passing the api key and username from the .env file
at.initialize(at_username, at_api_key)
sms = at.SMS
account = at.Application
ticker_data = []

# Create a function to send a message containing the stock ticker and price
def stock_notification(message: str, number: int):
    try:
        response = sms.send(message, [number])
        print(account.fetch_application_data())
        print(response)
    except Exception as e:
        print(f" Houston we have a problem: {e}")

# Create a function to query MongoDB for the stock price of a given ticker
def stock_query():
    client = pymongo.MongoClient(mongo_uri)
    db = client["nse_data"]
    collection = db["stock_data"]
    # print(collection.find_one())
    ticker_data = collection.find_one({"ticker_symbol": "BAT"})
    print(ticker_data)
    stock_name = ticker_data["stock_name"]
    stock_price = ticker_data["stock_price"]
    sms_data = {"stock_name": stock_name, "stock_price": stock_price}
    print(sms_data)
    message = f"Hello the current stock price of {stock_name} is {stock_price}"
    # Check if the share price is at least Kes 38 and send a notification
    if int(float(stock_price)) >= 38:
        # Call the function passing the message and mobile_number as arguments
        print(message)
        stock_notification(message, mobile_number)
    else:
        print("No notification sent")
    client.close()
    return sms_data

stock_query()
```
The code above imports the `africastalking`, `os`, `dotenv`, and `pymongo` modules, then calls the `load_dotenv` function, which loads the environment variables from the `.env` file.
* We create the `at_username`, `at_api_key`, and `mobile_number` variables from their corresponding environment variables, and the `mongo_uri` variable from the `MONGODB_URI` environment variable.
* We initialize the `africastalking` SDK by passing the `at_username` and `at_api_key` variables as arguments, then create the `sms` variable set to `at.SMS` and the `account` variable set to `at.Application`.
* We create the `ticker_data` variable as an empty list and define the `stock_notification` function, which takes two arguments: `message` and `number`. It tries to send the message to the number, prints the application data and the response, and prints any exception it catches.
* We define the `stock_query` function. It creates a `pymongo.MongoClient` from `mongo_uri`, gets the `nse_data` database and the `stock_data` collection, and calls `collection.find_one` with a dictionary filter to fetch a record into `ticker_data`.
The `stock_name` variable is set to the `stock_name` key in the `ticker_data` document, and the `stock_price` variable to the `stock_price` key. The `sms_data` variable is then set to a dictionary containing both values.
The `message` variable is set to a string containing the stock name and price. We check if the stock price is greater than or equal to 38. If it is, we call the `stock_notification` function and pass the `message` and `mobile_number` variables as arguments. If it isn't, we print a message to the console.
Finally, the function closes the connection to the database and returns the `sms_data` variable, and we call `stock_query` at the bottom of the file.
We need to add the following code to the `afx_scraper.py` file:
```
# afx_scraper.py
from nse_scraper.stock_notification import stock_query
# ...
# Add the following code to the end of the file
stock_query()
```
If everything is set up correctly, you should see something like this:
## Data in MongoDB Atlas
We need to create a new cluster in MongoDB Atlas. We can do this by:
* Clicking on the `Build a Cluster` button.
* Selecting the `Shared Clusters` option.
* Selecting the `Free Tier` option.
* Selecting the `Cloud Provider & Region` option.
* Selecting the `AWS` option. (I selected the AWS Cape Town option.)
* Selecting the `Cluster Name` option.
* Giving the cluster a name. (We can call it `nse_data`.)
Let’s configure a user to access the cluster by following the steps below:
* Select the `Database Access` option.
* Click on the `Add New User` option.
* Give the user a username. (I used `nse_user`.)
* Give the user a password. (I used `nse_password`).
* Select the `Network Access` option.
* Select the `Add IP Address` option.
* Select the `Allow Access from Anywhere` option.
* Select the `Cluster` option. We'll then need to select the `Create Cluster` option.
Click on the `Collections` option and then on the `+ Create Database` button. Give the database a name. We can call it `nse_data`. Click on the `+ Create Collection` button. Give the collection a name. We can call it `stock_data`. If everything is set up correctly, you should see something like this:
*Database records displayed in MongoDB Atlas*
If you see an empty collection, rerun the project in the terminal to populate the values in MongoDB. In case of an error, read through the terminal output. Common issues include:
* The IP address was not added in the Atlas dashboard.
* Missing or incorrect credentials in your `.env` file.
* A syntax error in your code.
* A poor internet connection.
* A lack of appropriate permissions for your user.
## Metrics in MongoDB Atlas
Let's go through how to view metrics related to our database(s).
* Click on the `Metrics` option.
* Click on the `+ Add Metric` button.
* Select the `Database` option.
* Select the `nse_data` option.
* Select the `Collection` option.
* Select the `stock_data` option.
* Select the `Metric` option.
* Select the `Documents` option.
* Select the `Time Range` option.
* Select the `Last 24 Hours` option.
* Select the `Granularity` option.
* Select the `1 Hour` option.
* Click on the `Add Metric` button.
If everything is set up correctly, it will look like this:
## Charts in MongoDB Atlas
MongoDB Atlas offers charts that can be used to visualize the data in the database. Click on the `Charts` option. Then, click on the `+ Add Chart` button. Select the `Database` option. Below is a screenshot of sample charts for NSE data:
## Version control with Git and GitHub
Ensure you have Git installed on your machine, along with a GitHub account.
Run the following command in your terminal to initialize a git repository:
```
git init
```
Create a `.gitignore` file. We can do this by running the following command in our terminal:
```
touch .gitignore
```
Let’s add the .env file to the .gitignore file. Add the following code to the `.gitignore` file:
```
# .gitignore
.env
```
Add the files to the staging area by running the following command in our terminal:
```
git add .
```
Commit the files to the repository by running the following command in our terminal:
```
git commit -m "Initial commit"
```
Create a new repository on GitHub by clicking on the `+` icon on the top right of the page and selecting `New repository`. Give the repository a name. We can call it `nse-stock-scraper`. Select `Public` as the repository visibility. Select `Add a README file` and `Add .gitignore` and select `Python` from the dropdown. Click on the `Create repository` button.
Add the remote repository to our local repository by running the following command in your terminal:
```
git remote add origin
```
Push the files to the remote repository by running the following command in your terminal:
```
git push -u origin master
```
### CI/CD with GitHub Actions
Create a `.github` folder in the root directory of the project, with a `workflows` folder inside it. Inside the `workflows` folder, we'll need to create a new file called `scraper-test.yml`. We can create the folders and the file by running the following command in our terminal:
```
mkdir -p .github/workflows && touch .github/workflows/scraper-test.yml
```
Inside the scraper-test.yml file, we'll need to add the following code:
```
name: Scraper test with MongoDB

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.8, 3.9, "3.10"]
        mongodb-version: ['4.4', '5.0', '6.0']

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}
      - name: Set up MongoDB ${{ matrix.mongodb-version }}
        uses: supercharge/mongodb-github-action@1.8.0
        with:
          mongodb-version: ${{ matrix.mongodb-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Lint with flake8
        run: |
          pip install flake8
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: scraper-test
        run: |
          cd nse_scraper
          export MONGODB_URI=mongodb://localhost:27017
          export MONGO_DATABASE=nse_data
          scrapy crawl afx_scraper -a output_format=csv -a output_file=afx.csv
          scrapy crawl afx_scraper -a output_format=json -a output_file=afx.json
```
Let's break down the code above. We create a new workflow called `Scraper test with MongoDB` that runs on every `push`. It defines a single job, `build`, which runs on `ubuntu-latest` and uses a matrix strategy over `python-version` (3.8, 3.9, and 3.10) and `mongodb-version` (4.4, 5.0, and 6.0), so every combination gets tested.
The first step checks out the repository with `actions/checkout@v2`. The next steps set up Python with `actions/setup-python@v1` using `${{ matrix.python-version }}` and start MongoDB with `supercharge/mongodb-github-action@1.8.0` using `${{ matrix.mongodb-version }}`.
The `Install dependencies` step installs the project dependencies, the `Lint with flake8` step lints the code, and the `scraper-test` step runs the scraper against the local MongoDB instance to test it.
Commit the changes to the repository by running the following command in your terminal:
```
git add .
git commit -m "Add GitHub Actions"
git push
```
Go to the `Actions` tab on your repository. You should see something like this:
*Displaying the build process*
## Conclusion
In this tutorial, we built a stock price scraper using Python and Scrapy. We then used MongoDB to store the scraped data. We used Africas Talking to send SMS notifications. Finally, we implemented a CI/CD pipeline using GitHub Actions.
There are definite improvements that can be made to this project. For example, we can add more stock exchanges. We can also add more notification channels. This project should serve as a good starting point.
Thank you for reading this far. I hope you have gained insight or inspiration for your next project with MongoDB Atlas. Feel free to comment below or reach out for further improvements. We’d love to hear from you! This project is open source and available on GitHub: clone or fork it! I’m excited to see what you build. | md | {
"tags": [
"Atlas",
"Python"
],
"pageDescription": "A step-by-step guide on how to set up the development environment, create a Scrapy spider, parse the website, and store the data in MongoDB",
"contentType": "Tutorial"
} | Nairobi Stock Exchange Web Scraper | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/introducing-realm-flutter-sdk | created | # Introducing the Realm Flutter SDK
> This article discusses the alpha version of the Realm Flutter SDK, which is now in public preview with more features and functionality. Learn more here.
Today, we are pleased to announce the next installment of the Realm Flutter SDK – now with support for Windows, macOS, iOS, and Android. This release gives you the ability to use Realm in any of your Flutter or Dart projects regardless of the version.
Realm is a simple super-fast, object-oriented database for mobile applications that does not require an ORM layer or any glue code to work with your data layer. With Realm, working with your data is as simple as interacting with objects from your data model. Any updates to the underlying data store will automatically update your objects as soon as the state on disk has changed, enabling you to automatically refresh the view via StatefulWidgets and Streams.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now: Deploy Sample for Free!
## Introduction
Flutter has been a boon to developers across the globe, a framework designed for all platforms: iOS, Android, server, and desktop. It enables a developer to write code once and deploy to multiple platforms. Optimizing for performance across multiple platforms that use different rendering engines and creating a single hot reload that works across all platforms is not an easy feat, but the Flutter and Dart teams have done an amazing job. It’s not surprising, therefore, that Flutter support is our top request on GitHub.
Realm’s Core Database is platform-independent, meaning it is easily transferable to another environment. This has enabled us to build SDKs for the most popular mobile development frameworks: iOS with Swift, Android with Kotlin, React Native, Xamarin, Unity, and now Flutter.
Our initial version of the Flutter SDK was tied to a custom-built Flutter engine. It was version-specific and shipped as a means to gather feedback from the community on our Realm APIs. With this latest version, we worked closely with the Flutter and Dart team to integrate with the Dart FFI APIs. Now, developers can use Realm with any version of their Dart or Flutter projects. More importantly, though, this official integration will form the underpinning of all our future releases and includes full support for Dart’s null safety functionality. Moving forward, we will continue to closely partner with the Flutter and Dart team to follow best practices and ensure version compatibility.
## Why Realm
All of Realm’s SDKs are built on three core concepts:
* An object database that infers the schema from the developers’ class structure – making working with objects as easy as interacting with their data layer. No conversion code necessary
* Live objects so the developer has a simple way to update their UI – integrated with StatefulWidgets and Streams
* A columnar store so that query results return in lightning speed and directly integrate with an idiomatic query language the developer prefers
Realm is a database designed for mobile applications as a replacement for SQLite. It was written from the ground up in C++, so it is not a wrapper around SQLite or any other relational datastore. Designed with the mobile environment in mind, it is lightweight and optimizes for constraints like compute, memory, bandwidth, and battery that do not exist on the server side. Realm uses lazy loading and memory mapping with each object reference pointing directly to the location on disk where the state is stored. This exponentially increases lookup and query speed as it eliminates the loading of state pages of disk space into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer.
## Realm for Flutter Developers
Since Realm is an object database, your schema is defined in the same way you define your object classes. Additionally, Realm delivers a simple and intuitive string-based query system that will feel natural to Flutter developers. No more context switching to SQL to instantiate your schema or looking behind the curtain when an ORM fails to translate your calls into SQL. And because Realm objects are memory-mapped, a developer can bind an object or query directly to the UI. As soon as changes are made to the state store, they are immediately reflected in the UI. No need to write complex logic to continually recheck whether a state change affects objects or queries bound to the UI and therefore refresh the UI. Realm updates the UI for you.
```dart
// Import the Realm package and set your app file name
import 'package:realm_dart/realm.dart';
part 'test.g.dart'; // if this is test.dart
// Set your schema
@RealmModel()
class _Task {
  late String name;
  late String owner;
  late String status;
}

void main(List<String> arguments) {
  // Open a realm database instance. Be sure to run the Realm generator to generate your schema
  var config = Configuration([Task.schema]);
  var realm = Realm(config);

  // Create an instance of your Tasks object and persist it to disk
  var task = Task("Ship Flutter", "Lubo", "InProgress");
  realm.write(() {
    realm.add(task);
  });

  // Use a string-based query language to query the data
  var myTasks = realm.all<Task>().query("status == 'InProgress'");

  var newTask = Task("Write Blog", "Ian", "InProgress");
  realm.write(() {
    realm.add(newTask);
  });

  // Queries are kept live and auto-updating - the length here is now 2
  myTasks.length;
}
```
## Looking Ahead
The Realm Flutter SDK is free, open source and available for you to try out today. While this release is still in Alpha, our development team has done a lot of the heavy lifting to set a solid foundation – with a goal of moving rapidly into public preview and GA later this year. We will look to bring new notification APIs, a migration API, solidify our query system, helper functions for Streams integration, and of course Atlas Device Sync to automatically replicate data to MongoDB Atlas.
Give it a try today and let us know what you think! Check out our samples, read our docs, and follow our repo.
| md | {
"tags": [
"Realm",
"Dart",
"Flutter"
],
"pageDescription": "Announcing the next installment of the Realm Flutter SDK – now with support for Windows, macOS, iOS, and Android.",
"contentType": "News & Announcements"
} | Introducing the Realm Flutter SDK | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/streaming-data-apache-spark-mongodb | created | # Streaming Data with Apache Spark and MongoDB
MongoDB has released a version 10 of the MongoDB Connector for Apache Spark that leverages the new Spark Data Sources API V2 with support for Spark Structured Streaming.
## Why a new version?
The current version of the MongoDB Spark Connector was originally written in 2016 and is based upon V1 of the Spark Data Sources API. While this API version is still supported, Databricks has released an updated version of the API, making it easier for data sources like MongoDB to work with Spark. By having the MongoDB Spark Connector use V2 of the API, an immediate benefit is a tighter integration with Spark Structured Streaming.
*Note: With respect to the previous version of the MongoDB Spark Connector that supported the V1 API, MongoDB will continue to support this release until such a time as Databricks deprecates V1 of the Data Source API. While no new features will be implemented, upgrades to the connector will include bug fixes and support for the current versions of Spark only.*
## What version should I use?
The new MongoDB Spark Connector release (Version 10.1) is not intended to be a direct replacement for your applications that use the previous version of MongoDB Spark Connector.
The new Connector uses a different namespace with a short name, “mongodb” (full path is “com.mongodb.spark.sql.connector.MongoTableProvider”), versus “mongo” (full path of “com.mongodb.spark.DefaultSource”). Having a different namespace makes it possible to use both versions of the connector within the same Spark application! This is helpful in unit testing your application with the new Connector and making the transition on your timeline.
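As an illustration only, running both connectors side by side during a migration could look something like the sketch below; the connection URIs are placeholders, and the legacy connector's `uri` option follows its own documentation rather than anything shown in this article:
```
# Illustration only: reading the same collection through both connector namespaces
# in a single Spark application while migrating. Connection URIs are placeholders.
df_v10 = (spark.read.format("mongodb")            # new V10 connector
    .option("spark.mongodb.connection.uri", "<connection uri>")
    .option("spark.mongodb.database", "Stocks")
    .option("spark.mongodb.collection", "StockData")
    .load())

df_legacy = (spark.read.format("mongo")           # previous connector
    .option("uri", "<connection uri>/Stocks.StockData")
    .load())
```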
Also, we are changing how we version the MongoDB Spark Connector. The previous versions of the MongoDB Spark Connector aligned with the version of Spark that was supported—e.g., Version 2.4 of the MongoDB Spark Connector works with Spark 2.4. Keep in mind that going forward, this will not be the case. The MongoDB documentation will make this clear as to which versions of Spark the connector supports.
## Structured Streaming with MongoDB using continuous mode
Apache Spark comes with a stream processing engine called Structured Streaming, which is based on Spark's SQL engine and DataFrame APIs. Spark Structured Streaming treats each incoming stream of data as a micro-batch, continually appending each micro-batch to the target dataset. This makes it easy to convert existing Spark batch jobs into a streaming job. Structured Streaming has evolved over Spark releases and in Spark 2.3 introduced Continuous Processing mode, which took the micro-batch latency from over 100ms to about 1ms. Note this feature is still in experimental mode according to the official Spark Documentation. In the following example, we’ll show you how to stream data between MongoDB and Spark using Structured Streams and continuous processing. First, we’ll look at reading data from MongoDB.
### Reading streaming data from MongoDB
You can stream data from MongoDB to Spark using the new Spark Connector. Consider the following example that streams stock data from a MongoDB Atlas cluster. A sample document in MongoDB is as follows:
```
{
_id: ObjectId("624767546df0f7dd8783f300"),
company_symbol: 'HSL',
company_name: 'HUNGRY SYNDROME LLC',
price: 45.74,
tx_time: '2022-04-01T16:57:56Z'
}
```
In this code example, we will use the new MongoDB Spark Connector and read from the StockData collection. When the Spark Connector opens a streaming read connection to MongoDB, it opens the connection and creates a MongoDB Change Stream for the given database and collection. A change stream is used to subscribe to changes in MongoDB. As data is inserted, updated, and deleted, change stream events are created. It’s these change events that are passed back to the client in this case the Spark application. There are configuration options that can change the structure of this event message. For example, if you want to return just the document itself and not include the change stream event metadata, set “spark.mongodb.change.stream.publish.full.document.only” to true.
```
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
spark = SparkSession.\
builder.\
appName("streamingExampleRead").\
config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.12:10.1.1').\
getOrCreate()
query=(spark.readStream.format("mongodb")
.option('spark.mongodb.connection.uri', '')
.option('spark.mongodb.database', 'Stocks') \
.option('spark.mongodb.collection', 'StockData') \
.option('spark.mongodb.change.stream.publish.full.document.only','true') \
.option("forceDeleteTempCheckpointLocation", "true") \
.load())
query.printSchema()
```
The schema is inferred from the MongoDB collection. You can see from the printSchema command that our document structure is as follows:
```
root
 |-- _id: string (nullable = true)
 |-- company_name: string (nullable = true)
 |-- company_symbol: string (nullable = true)
 |-- price: double (nullable = true)
 |-- tx_time: string (nullable = true)
```
We can verify that the dataset is streaming with the isStreaming command.
```
query.isStreaming
```
Next, let’s read the data on the console as it gets inserted into MongoDB.
```
query2=(query.writeStream \
.outputMode("append") \
.option("forceDeleteTempCheckpointLocation", "true") \
.format("console") \
.trigger(continuous="1 second")
.start().awaitTermination());
```
When the above code was run through spark-submit, the output resembled the following:
```
… removed for brevity …
-------------------------------------------
Batch: 2
-------------------------------------------
+--------------------+--------------------+--------------+-----+-------------------+
| _id| company_name|company_symbol|price| tx_time|
+--------------------+--------------------+--------------+-----+-------------------+
|62476caa6df0f7dd8...| HUNGRY SYNDROME LLC| HSL|45.99|2022-04-01 17:20:42|
|62476caa6df0f7dd8...|APPETIZING MARGIN...| AMP|12.81|2022-04-01 17:20:42|
|62476caa6df0f7dd8...|EMBARRASSED COCKT...| ECC|38.18|2022-04-01 17:20:42|
|62476caa6df0f7dd8...|PERFECT INJURY CO...| PIC|86.85|2022-04-01 17:20:42|
|62476caa6df0f7dd8...|GIDDY INNOVATIONS...| GMI|84.46|2022-04-01 17:20:42|
+--------------------+--------------------+--------------+-----+-------------------+

… removed for brevity …
-------------------------------------------
Batch: 3
-------------------------------------------
+--------------------+--------------------+--------------+-----+-------------------+
| _id| company_name|company_symbol|price| tx_time|
+--------------------+--------------------+--------------+-----+-------------------+
|62476cab6df0f7dd8...| HUNGRY SYNDROME LLC| HSL|46.04|2022-04-01 17:20:43|
|62476cab6df0f7dd8...|APPETIZING MARGIN...| AMP| 12.8|2022-04-01 17:20:43|
|62476cab6df0f7dd8...|EMBARRASSED COCKT...| ECC| 38.2|2022-04-01 17:20:43|
|62476cab6df0f7dd8...|PERFECT INJURY CO...| PIC|86.85|2022-04-01 17:20:43|
|62476cab6df0f7dd8...|GIDDY INNOVATIONS...| GMI|84.46|2022-04-01 17:20:43|
+--------------------+--------------------+--------------+-----+-------------------+
```
### Writing streaming data to MongoDB
Next, let’s consider an example where we stream data from Apache Kafka to MongoDB. Here the source is a kafka topic “stockdata.Stocks.StockData.” As data arrives in this topic, it’s run through Spark with the message contents being parsed, transformed, and written into MongoDB. Here is the code listing with comments in-line:
```
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import *
from pyspark.sql.types import StructType,TimestampType, DoubleType, StringType, StructField
spark = SparkSession.\
builder.\
appName("streamingExampleWrite").\
config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector:10.1.1,org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.0').\
getOrCreate()
df = spark \
.readStream \
.format("kafka") \
.option("startingOffsets", "earliest") \
.option("kafka.bootstrap.servers", "KAFKA BROKER HOST HERE") \
.option("subscribe", "stockdata.Stocks.StockData") \
.load()
schemaStock = StructType([ \
StructField("_id",StringType(),True), \
StructField("company_name",StringType(), True), \
StructField("company_symbol",StringType(), True), \
StructField("price",StringType(), True), \
StructField("tx_time",StringType(), True)])
schemaKafka = StructType([ \
StructField("payload",StringType(),True)])
```
Note that the Kafka topic message arrives in this format -> key (binary), value (binary), topic (string), partition (int), offset (long), timestamp (long), timestamptype (int). See the Structured Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher) for more information on the Kafka and Spark integration.
To process the message for consumption into MongoDB, we want to pick out the value which is in binary format and convert it to JSON.
```
stockDF=df.selectExpr("CAST(value AS STRING)")
```
For reference, here is an example of an event (the value converted into a string) that is on the Kafka topic:
```
{
"schema": {
"type": "string",
"optional": false
},
"payload": "{\"_id\": {\"$oid\": \"6249f8096df0f7dd8785d70a\"}, \"company_symbol\": \"GMI\", \"company_name\": \"GIDDY INNOVATIONS\", \"price\": 87.57, \"tx_time\": \"2022-04-03T15:39:53Z\"}"
}
```
We want to isolate the payload field and convert it to a JSON representation leveraging the schemaStock defined above. For clarity, we have broken up the operation into multiple steps to explain the process. First, we want to convert the value into JSON.
```
stockDF=stockDF.select(from_json(col('value'),schemaKafka).alias("json_data")).selectExpr('json_data.*')
```
The dataset now contains data that resembles
```
…
{
_id: ObjectId("624c6206e152b632f88a8ee2"),
payload: '{"_id": {"$oid": "6249f8046df0f7dd8785d6f1"}, "company_symbol": "GMI", "company_name": "GIDDY MONASTICISM INNOVATIONS", "price": 87.62, "tx_time": "2022-04-03T15:39:48Z"}'
}, …
```
Next, we want to capture just the value of the payload field and convert that into JSON since it’s stored as a string.
```
stockDF=stockDF.select(from_json(col('payload'),schemaStock).alias("json_data2")).selectExpr('json_data2.*')
```
Now we can do whatever transforms we would like to do on the data. In this case, let’s convert the tx_time into a timestamp.
```
stockDF=stockDF.withColumn("tx_time",col("tx_time").cast("timestamp"))
```
The Dataset is in a format that’s ready for consumption into MongoDB, so let’s stream it out to MongoDB. To do this, use the writeStream method. Keep in mind there are various options to set. For example, when present, the “trigger” option processes the results in batches. In this example, it’s every 10 seconds. Removing the trigger field will result in continuous writing. For more information on options and parameters, check out the Structured Streaming Guide.
```
dsw = (
stockDF.writeStream
.format("mongodb")
.queryName("ToMDB")
.option("checkpointLocation", "/tmp/pyspark7/")
.option("forceDeleteTempCheckpointLocation", "true")
    .option('spark.mongodb.connection.uri', '<>')
.option('spark.mongodb.database', 'Stocks')
.option('spark.mongodb.collection', 'Sink')
.trigger(continuous="10 seconds")
.outputMode("append")
.start().awaitTermination());
```
## Structured Streaming with MongoDB using Microbatch mode
While continuous mode offers a lot of promise in terms of latency and performance characteristics, support for various popular sinks, such as AWS S3, is non-existent. Thus, you might end up using microbatch mode within your solution. The key difference between the two is how Spark handles obtaining the data from the stream. As mentioned previously, the data is batched and processed rather than continuously appended to a table. The noticeable difference is microbatch's advertised latency of around 100ms, which for most workloads might not be an issue.
### Reading streaming data from MongoDB using microbatch
Unlike when we specify a write, when we read from MongoDB, there is no special configuration to tell Spark to use microbatch or continuous. This behavior is determined only when you write. Thus, in our code example, to read from MongoDB is the same in both cases, e.g.:
```
query=(spark.readStream.format("mongodb").\
option('spark.mongodb.connection.uri', '<>').\
option('spark.mongodb.database', 'Stocks').\
option('spark.mongodb.collection', 'StockData').\
option('spark.mongodb.change.stream.publish.full.document.only','true').\
option("forceDeleteTempCheckpointLocation", "true").\
load())
```
Recall from the previous discussion on reading MongoDB data, when using `spark.readStream.format("mongodb")`, MongoDB opens a change stream and subscribes to changes as they occur in the database. With microbatch each microbatch event opens a new change stream cursor making this form of microbatch streaming less efficient than continuous streams. That said, some consumers of streaming data such as AWS S3 only support data from microbatch streams.
### Writing streaming data to MongoDB using microbatch
Consider the previous writeStream example code:
```
dsw = (
stockDF.writeStream
.format("mongodb")
.queryName("ToMDB")
.option("checkpointLocation", "/tmp/pyspark7/")
.option("forceDeleteTempCheckpointLocation", "true")
.option('spark.mongodb.connection.uri', '<>')
.option('spark.mongodb.database', 'Stocks')
.option('spark.mongodb.collection', 'Sink')
.trigger(continuous="10 seconds")
.outputMode("append")
.start().awaitTermination());
```
Here, the .trigger parameter was used to tell Spark to use continuous mode streaming. To use microbatch, simply remove the .trigger parameter.
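For example, the microbatch version of the write above is the same snippet with the trigger removed:
```
dsw = (
  stockDF.writeStream
    .format("mongodb")
    .queryName("ToMDB")
    .option("checkpointLocation", "/tmp/pyspark7/")
    .option("forceDeleteTempCheckpointLocation", "true")
    .option('spark.mongodb.connection.uri', '<>')
    .option('spark.mongodb.database', 'Stocks')
    .option('spark.mongodb.collection', 'Sink')
    .outputMode("append")
    .start().awaitTermination());
```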
## Go forth and stream!
Streaming data is a critical component of many types of applications. MongoDB has evolved over the years, continually adding features and functionality to support these types of workloads. With the MongoDB Spark Connector version 10.1, you can quickly stream data to and from MongoDB with a few lines of code.
For more information and examples on the new MongoDB Spark Connector version 10.1, check out the online documentation. Have questions about the connector or MongoDB? Post a question in the MongoDB Developer Community Connectors & Integrations forum. | md | {
"tags": [
"Python",
"Connectors",
"Spark",
"AI"
],
"pageDescription": "MongoDB has released a new spark connector, MongoDB Spark Connector V10. In this article, learn how to read from and write to MongoDB through Spark Structured Streaming.",
"contentType": "Article"
} | Streaming Data with Apache Spark and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/announcing-realm-cplusplus-sdk-alpha | created | # Announcing the Realm C++ SDK Alpha
Today, we are excited to announce the Realm C++ SDK Alpha and the continuation of the work toward a private preview. Our C++ SDK was built to address increasing demand — for seamless data management and on-device data storage solutions — from our developer community in industries such as automotive, healthcare, and retail. This interest tracks with the continued popularity of C++ as illustrated in the recent survey by Tiobe and the Language of the Year 2022 status by Tiobe.
This SDK was developed in collaboration with the Qt Company. Their example application showcases the functionality of Atlas Device Sync and Realm in an IoT scenario. Take a look at the companion blog post by the Qt Company.
The Realm C++ SDK allows developers to easily store data on devices for offline availability — and automatically sync data to and from the cloud — in an idiomatic way within their C++ applications. Realm is a modern data store, an alternative to SQLite, which is simple to use because it is an object-oriented database and does not require a separate mapping layer or ORM. In line with the mission of MongoDB’s developer data platform — designing technologies to make the development process for developers seamless — networking retry logic and sophisticated conflict merging functionality is built right into this technology, eliminating the need to write and maintain a large volume of code that would traditionally be required.
## Why Realm C++ SDK?
We consider the Realm C++ SDK to be especially well suited for areas such as embedded devices, IoT, and cross-platform applications:
1. Realm is a fully fledged object-oriented persistence layer for edge, mobile, and embedded devices that comes with out-of-the-box support for synchronizing to the MongoDB Atlas cloud back end. As devices become increasingly “smart” and connected, they require more data, such as historical data enabling automated decision making, and necessitate efficient persistence layer and real-time cloud-syncing technologies.
2. Realm is mature, feature-rich and enterprise-ready, with over 10 years of history. The technology is integrated with tens of thousands of applications in Google Play and the Apple App Store that have been downloaded by billions of users in the past six months alone.
3. Realm is designed and developed for resource constrained environments — it is lightweight and optimizes for constraints like compute, memory, bandwidth, and battery.
4. Realm can be embedded in the application code and does not require any additional deployment tasks or activities.
5. Realm is fully object-oriented, which makes data modeling straightforward and idiomatic. Alternative technologies like SQLite require an object-relational mapping library, which adds complexity and makes future development, maintenance, and debugging painful.
6. Updates to the underlying data store in Realm are reflected instantly in the objects which help drive reactive UI layers in different environments.
Let’s dive deeper into a concrete example of using Realm.
## Realm quick start example
The following Todo list example is borrowed from the quick start documentation. We start by showing how Realm infers the data schema directly from the class structure with no conversion code necessary:
```
#include <cpprealm/sdk.hpp>

struct Todo : realm::object {
    realm::persisted<realm::object_id> _id{realm::object_id::generate()};
    realm::persisted<std::string> name;
    realm::persisted<std::string> status;

    static constexpr auto schema = realm::schema("Todo",
        realm::property<&Todo::_id, true>("_id"),
        realm::property<&Todo::name>("name"),
        realm::property<&Todo::status>("status"));
};
```
Next, we’ll open a local Realm and store an object in it:
```
auto realm = realm::open<Todo>();

auto todo = Todo {
    .name = "Create my first todo item",
    .status = "In Progress"
};

realm.write([&realm, &todo] {
    realm.add(todo);
});
```
With the object stored, we are ready to fetch the object back from Realm and modify it:
```
// Fetch all Todo objects
auto todos = realm.objects<Todo>();

// Filter as per object state
auto todosInProgress = todos.where([](auto &todo) {
    return todo.status == "In Progress";
});

// Mark a Todo item as complete
auto todoToUpdate = todosInProgress[0];
realm.write([&realm, &todoToUpdate] {
    todoToUpdate.status = "Complete";
});

// Delete the Todo item
realm.write([&realm, &todoToUpdate] {
    realm.remove(todoToUpdate);
});
```
While the above query examples are simple, Realm’s rich query language enables developers to easily express queries even for complex use cases. Realm uses lazy loading and memory mapping with each object reference pointing directly to the location on disk where the state is stored. This increases lookup and query performance as it eliminates the loading of pages of state into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer.
The complete Realm C++ SDK documentation provides more complex examples for filtering and querying the objects and shows how to register an object change listener, which enables the developer to react to state changes automatically, something we leverage in the Realm with Qt and Atlas Device Sync example application.
## Realm with Qt and Atlas Device Sync
First a brief introduction to Qt:
*The Qt framework contains a comprehensive set of highly intuitive and modularized C++ libraries and cross-platform APIs to simplify UI application development. Qt produces highly readable, easily maintainable, and reusable code with high runtime performance and small footprint.*
The example provided together with Qt is a smart coffee machine application. We have integrated Realm and Atlas Device Sync into the coffee machine application by extending the existing coffee selection and brewing menu, and by adding local data storage and cloud-syncing — essentially turning the coffee machine into a fleet of machines. The image below clarifies:
This fleet could be operated and controlled remotely by an operator and could include separate applications for the field workers maintaining the machines. Atlas Device Sync makes it easy for developers to build reactive applications for multi-device scenarios by sharing the state in real-time with the cloud and local devices.
This is particularly compelling when combined with a powerful GUI framework such as Qt. The slots and signals mechanism in Qt sits naturally with Realm’s Object Change Listeners, emitting signals of changes to data from Atlas Device Sync so integration is a breeze.
In the coffee machine example, we integrated functionality such as configuring drink recipes in cloud, out of order sensing, and remote control logic. With Realm with Atlas Device Sync, we also get the resiliency for dropped network connections out of the box.
The full walkthrough of the example application is outside of this blog post and we point to the full source code and the more detailed walkthrough in our repository.
## Looking ahead
We are working hard to improve the Realm C++ SDK and will be moving quickly to private preview. We look forward to hearing feedback from our users and partners on applications they are looking to build and how the SDK might be extended to support their use case. In the private preview phase, we hope to deliver Windows support and package managers such as Conan, as well as continuing to close the gap when compared to other Realm SDKs. While we don’t anticipate major breaking changes, the API may change based on feedback from our community. We expect the ongoing private preview phase to finalize in the next few quarters and we are closely monitoring the feedback from the users via the GitHub project.
> **Want more information?**
> Interested in learning more before trying the product? Submit your information to get in touch.
>
> **Ready to get started now?**
> Use the C++ SDK by installing the SDK, read our docs, and follow our repo.
>
> Then, register for Atlas to connect to Atlas Device Sync, a fully-managed mobile backend as a service. Leverage out-of-the-box infrastructure, data synchronization capabilities, network handling, and much more to quickly launch enterprise-grade mobile apps.
>
> Finally, let us know what you think and get involved in our forums. See you there! | md | {
"tags": [
"Realm",
"C++"
],
"pageDescription": "Today, we are excited to announce the Realm C++ SDK Alpha and the continuation of the work toward a private preview.",
"contentType": "Article"
} | Announcing the Realm C++ SDK Alpha | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/guide-working-esg-data | created | # The 5-Minute Guide to Working with ESG Data on MongoDB
MongoDB makes it incredibly easy to work with environmental, social, and corporate governance (ESG) data from multiple providers, analyze that data, and then visualize it.
In this quick guide, we will show you how MongoDB can:
* Move ESG data from different data sources to the document model.
* Easily incorporate new ESG source feeds to the document data model.
* Run advanced, aggregated queries on ESG data.
* Visualize ESG data.
* Manage different data types in a single document.
* Integrate geospatial data.
Throughout this guide, we have sourced ESG data from MSCI.
>NOTE: An MSCI account and login is required to download the datasets linked to in this article. Dataset availability is dependent on MSCI product availability.
Our examples are drawn from real-life work with MongoDB clients in the financial services
industry. Screenshots (apart from code snippets) are taken from MongoDB Compass, MongoDB’s GUI for querying, optimizing, and analyzing data.
## Importing data into MongoDB
The first step is to download the MSCI dataset, and import the MSCI .csv file (Figure 1) into MongoDB.
Even though MSCI’s data is in tabular format, MongoDB’s document data model allows you to import the data directly into a database collection and apply the data types as needed.
*Figure 1. Importing the data using MongoDB’s Compass GUI*
With the MSCI data imported into MongoDB, we can start discovering, querying, and visualizing it.
## Scenario 1: Basic gathering and querying of ESG data using basic aggregations
**Source Data Set**: *MSCI ESG Accounting Governance Risk (AGR)*
**Collection**: `accounting_governance_risk_agr_ratings `
From MSCI - *“**ESG AGR** uses a quantitative approach to identify risks in the financial reporting practices and accounting governance of publicly listed companies. Metrics contributing to the score include traditional fundamental ratios used to evaluate corporate strength and profitability, as well as forensic ratios.”*
**Fields/Data Info:**
* **The AGR (Accounting & Governance Risk) Rating** consists of four groupings based on the AGR Percentile: Very Aggressive (1-10), Aggressive (11-35), Average (36-85), Conservative (86-100).
* **The AGR (Accounting & Governance Risk) Percentile** ranges from 1-100, with lower values representing greater risks.
### Step 1: Match and group AGR ratings per country of interest
In this example, we will count the number of AGR rated companies in Japan belonging to each AGR rating group (i.e., Very Aggressive, Aggressive, Average, and Conservative). To do this, we will use MongoDB’s aggregation pipeline to process multiple documents and return the results we’re after.
The aggregation pipeline presents a powerful abstraction for working with and analyzing data stored in the MongoDB database. The composability of the aggregation pipeline is one of the keys to its power. The design was actually modeled on the Unix pipeline, which allows developers to string together a series of processes that work together. This helps to simplify their application code by reducing logic, and when applied appropriately, a single aggregation pipeline can replace many queries and their associated network round trip times.
What aggregation stages will we use?
* The **$match** operator in MongoDB works as a filter. It filters the documents to pass only the documents that match the specified condition(s).
* The **$group** stage separates documents into groups according to a "group key," which, in this case, is the value of Agr_Rating.
* Additionally, at this stage, we can summarize the total count of those entities.
Combining the first two aggregation stages, we can filter the Issuer_Cntry_Domicile field to be equal to Japan — i.e., ”JP” — and group the AGR ratings.
As a final step, we will also sort the output of the total_count in descending order (hence the -1) and merge the results into another collection in the database of our choice, with the **$merge** operator.
```
[{
$match: {
Issuer_Cntry_Domicile: 'JP'
}
}, {
$group: {
_id: '$Agr_Rating',
total_count: {
$sum: 1
},
country: {
$first: '$Issuer_Cntry_Domicile'
}
}
}, {
$sort: {
total_count: -1
}
}, {
$merge: {
into: {
db: 'JP_DB',
coll: 'jp_agr_risk_ratings'
},
on: '_id',
whenMatched: 'merge',
whenNotMatched: 'insert'
}
}]
```
The result and output collection `'jp_agr_risk_ratings'` can be seen below.
![result and output collection
### Step 2: Visualize the output with MongoDB Charts
Next, let’s visualize the results of Step 1 with MongoDB Charts, which is integrated into MongoDB. With Charts, there’s no need for developers to worry about finding a compatible data visualization tool, dealing with data movement, or data duplication when creating or sharing data visualizations.
Using MongoDB Charts, in a few clicks we can visualize the results of our data in Figure 2.
*Figure 2. Distribution of AGR rating in Japan*
### Step 3: Visualize the output for multiple countries
Let’s go a step further and group the results for multiple countries. We can add more countries — for instance, Japan and Hong Kong — to the $match stage, and then $group and count the results for them, as shown in Figure 3.
*Figure 3. $match stage run in MongoDB Compass*
Moving back to Charts, we can easily display the results comparing governance risks for Hong Kong and Japan, as shown in Figure 4.
*Figure 4. Compared distribution of AGR ratings - Japan vs Hong Kong*
## Scenario 2: Joins and data analysis using an aggregation pipeline
**Source Data Set**: AGR Ratings
**Collection**: `accounting_governance_risk_agr_ratings`
**Data Set**: Country Fundamental Risk Indicators
**Collection**: `focus_risk_scores`
From MSCI - *“**GeoQuant's Country Fundamental Risk Indicators** fuses political and computer science to measure and predict political risk. GeoQuant's machine-learning software scrapes the web for large volumes of reputable data, news, and social media content. “*
**Fields/Data Info:**
* **Health (Health Risk)** - Quality of/access to health care, resilience to disease
* **IR (International Relations Risk)** - Prevalence/likelihood of diplomatic, military, and economic conflict with other countries
* **PolViol (Political Violence Risk)** - Prevalence/likelihood of civil war, insurgency, terrorism
With the basics of MongoDB’s query framework understood, let’s move on to more complex queries, again using MongoDB’s aggregation pipeline capabilities.
With MongoDB’s document data model, we can nest documents within a parent document. In addition, we are able to perform query operations over those nested fields.
Imagine a scenario where we have two separate collections of ESG data, and we want to combine information from one collection into another, fetch that data into the result array, and further filter and transform the data.
We can do this using an aggregation pipeline.
Let’s say we want more detailed results for companies located in a particular country — for instance, by combining data from `focus_risk_scores` with our primary collection: `accounting_governance_risk_agr_ratings`.
*Figure 5. accounting_governance_risk_agr_ratings collection in MongoDB Compass*
*Figure 6. focus_risk_scores collection in MongoDB Compass*
In order to do that, we use the **$lookup** stage, which adds a new array field to each input document. It contains the matching documents from the "joined" collection. This is similar to the joins used in relational databases. You may ask, "What does the $lookup syntax look like?"
To perform an equality match between a field from the input documents with a field from the documents of the "joined" collection, the $lookup stage has this syntax:
```
{
   $lookup:
     {
       from: <collection to join>,
       localField: <field from the input documents>,
       foreignField: <field from the documents of the "from" collection>,
       as: <output array field>
     }
}
```
In our case, we want to join and match the value of **Issuer_Cntry_Domicile** from the collection **accounting_governance_risk_agr_ratings** with the value of **Country** field from the collection **focus_risk_scores**, as shown in Figure 7.
*Figure 7. $lookup stage run in MongoDB Compass*
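Since Figure 7 shows the stage as a screenshot, here it is in plain aggregation syntax (it also appears in the full snippet at the end of this scenario):
```
{
  $lookup: {
    from: "focus_risk_scores",
    localField: "Issuer_Cntry_Domicile",
    foreignField: "Country",
    as: "result"
  }
}
```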
After performing the $lookup operation, we receive the matching documents in the ‘result’ array field.
Imagine that at this point, we decide only to display **Issuer_Name** and **Issuer_Cntry_Domicile** from the first collection. We can do so with the $project operator and define the fields that we want to keep, as shown in Figure 8.
*Figure 8. $project stage run in MongoDB Compass*
Additionally, we remove the **result._id** field that comes from the joined documents of the other collection, as we do not need it at this stage. This is where the handy **$unset** stage comes in.
*Figure 9. $unset stage run in MongoDB Compass*
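In plain aggregation syntax, the two stages from Figures 8 and 9 (also part of the full snippet at the end of this scenario) are:
```
{
  $project: {
    _id: 1,
    Issuer_Cntry_Domicile: 1,
    result: 1,
    Issuer_Name: 1
  }
},
{
  $unset: "result._id"
}
```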
With our data now cleaned up and viewable in one collection, we can go further and edit the data set with new custom fields and categories.
**Updating fields**
Let’s say we would like to set up new fields that categorize Health, IR, and PolViol lists separately.
To do so, we can use the $set operator. We use it to create new fields — health_risk, political_violence_risk, international_relations_risk — where each of the respective fields will consist of an array with only those elements that match the condition specified in the $filter operator.
**$filter** has the following syntax:
```
{
   $filter:
      {
         input: <array>,
         as: <string>,
         cond: <expression>
      }
}
```
**input** — An expression that resolves to an array.
**as** — A name for the variable that represents each individual element of the input array.
**cond** — An expression that resolves to a boolean value used to determine if an element should be included in the output array. The expression references each element of the input array individually with the variable name specified in as.
In our case, we perform the $filter stage with the “$result” array specified as the input.
Why the dollar sign in front of the field name?
A field name prefixed with a dollar sign ($) is used in aggregation expressions to access fields of the input documents — here, the result field produced by the previous stage.
Further, we name each individual element of that $result array “metric”.
To resolve the boolean, we define a conditional expression. In our case, we want an equality match on a particular metric, "$$metric.Risk" (the "$$<variable>.<field>" syntax accesses a specific field of the variable's object).
We then compare those elements against the appropriate value (“Health”, “PolViol”, or “IR”).
```
cond: {
  $eq: ["$$metric.Risk", "Health"],
}
```
The full query can be seen below in Figure 10.
*Figure 10. $set stage and $filter operator run in MongoDB Compass*
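For reference, here is the equivalent $set stage for the health_risk field in plain syntax — the political_violence_risk and international_relations_risk fields follow exactly the same pattern and appear in the full snippet at the end of this scenario:
```
{
  $set: {
    health_risk: {
      $filter: {
        input: "$result",
        as: "metric",
        cond: {
          $eq: ["$$metric.Risk", "Health"]
        }
      }
    }
  }
}
```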
After consolidating the fields we are interested in, we can remove the now-redundant result array by using the **$unset** operator once again, this time to remove the **result** field.
*Figure 11. $unset stage run in MongoDB Compass*
The next step is to calculate, for each category (Health, International Relations, Political Violence), the average risk across the company's country of origin (the “Country” field) and the other countries it operates in (the “Primary_Countries” field), using the $avg operator within a $set stage (as seen in Figure 12).
*Figure 12. $set stage run in MongoDB Compass*
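In plain syntax, that stage (also included in the full snippet below) is:
```
{
  $set: {
    health_risk_avg: {
      $avg: "$health_risk.risk_values"
    },
    political_risk_avg: {
      $avg: "$political_violence_risk.risk_values"
    },
    international_risk_avg: {
      $avg: "$international_relations_risk.risk_values"
    }
  }
}
```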
We then display only the companies whose average values are greater than 0, with a simple $match operation, as shown in Figure 13.
*Figure 13. $match stage run in MongoDB Compass*
Finally, let's save the data (merge it into another collection) and display the results in a chart.
Once again, we can use the $merge operator to save the result of the aggregation and then visualize it using MongoDB Charts, as shown in Figure 14.
*Figure 14. $merge stage run in MongoDB Compass*
Let’s take our data set and create a chart of the Average Political Risk for each company, as displayed in Figure 15.
*Figure 15. Average Political Risk per Company in MongoDB Atlas Charts*
We can also create Risk Charts per category of risk, as seen in Figure 16.
*Figure 16. average international risk per company in MongoDB Atlas Charts*
*Figure 17. average health risk per company in MongoDB Atlas Charts*
Below is a snippet with all the aggregation operators mentioned in Scenario 2:
```
[
  {
$lookup: {
from: "focus_risk_scores",
localField: "Issuer_Cntry_Domicile",
foreignField: "Country",
as: "result",
},
},
{
$project: {
_id: 1,
Issuer_Cntry_Domicile: 1,
result: 1,
Issuer_Name: 1,
},
},
{
$unset: "result._id",
},
{
$set: {
health_risk: {
$filter: {
input: "$result",
as: "metric",
cond: {
$eq: ["$$metric.Risk", "Health"],
},
},
},
political_violence_risk: {
$filter: {
input: "$result",
as: "metric",
cond: {
$eq: ["$$metric.Risk", "PolViol"],
},
},
},
international_relations_risk: {
$filter: {
input: "$result",
as: "metric",
cond: {
$eq: ["$$metric.Risk", "IR"],
},
},
},
},
},
{
$unset: "result",
},
{
$set: {
health_risk_avg: {
$avg: "$health_risk.risk_values",
},
political_risk_avg: {
$avg: "$political_violence_risk.risk_values",
},
international_risk_avg: {
$avg: "$international_relations_risk.risk_values",
},
},
},
{
$match: {
health_risk_avg: {
$gt: 0,
},
political_risk_avg: {
$gt: 0,
},
international_risk_avg: {
$gt: 0,
},
},
},
{
$merge: {
into: {
db: "testDB",
coll: "agr_avg_risks",
},
on: "_id",
},
},
]
```
## Scenario 3: Environmental indexes — integrating geospatial ESG data
**Data Set**: *Supply Chain Risks*
**Collection**: `supply_chain_risk_metrics`
From MSCI - *“Elevate’s Supply Chain ESG Risk Ratings aggregates data from its verified audit database to the country level. The country risk assessment includes an overall score as well as 38 sub-scores organized under labor, health and safety, environment, business ethics, and management systems.”*
ESG data processing requires the handling of a variety of structured and unstructured data consisting of financial, non-financial, and even climate-related geographical data. In this final scenario, we will combine data related to environmental scoring — especially wastewater, air, environmental indexes, and geo-locations data — and present them in a geo-spatial format to help business users quickly identify the risks.
MongoDB provides a flexible and powerful multimodel data management approach and includes the support of storing and querying geospatial data using GeoJSON objects or as legacy coordinate pairs. We shall see in this example how this can be leveraged for handling the often complex ESG data.
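As a quick, illustrative aside (the collection name below is hypothetical and not part of this scenario), a GeoJSON point stored in a `loc` field — like the one we will build later in this section — can be indexed and queried spatially like this:
```
// Create a 2dsphere index on the GeoJSON field
db.vietnam_risk_by_province.createIndex({ loc: "2dsphere" })

// Find documents within roughly 200 km of Hanoi
db.vietnam_risk_by_province.find({
  loc: {
    $near: {
      $geometry: { type: "Point", coordinates: [105.8, 21.02] },
      $maxDistance: 200000
    }
  }
})
```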
First, let's filter and group the data. Using the $match and $group operators, we can filter out the records we don't need and group the data per country and province, as shown in Figure 18 and Figure 19.
*Figure 18. $match stage run in MongoDB Compass*
*Figure 19. $group stage run in MongoDB Compass*
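Those two stages, as they appear in the full snippet at the end of this scenario, are:
```
{
  $match: {
    Country: { $ne: 'null' },
    Province: { $ne: 'All' }
  }
},
{
  $group: {
    _id: {
      country: '$Country',
      province: '$Province'
    },
    environment_management: { $max: '$Environment_Management_Index_Elevate' },
    air_emssion_index: { $max: '$Air_Emissions_Index_Elevate' },
    water_waste_index: { $max: '$Waste_Management_Index_Elevate' }
  }
}
```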
Now that we have the data broken out by region and country, in this case Vietnam, let’s display the information on a map.
It doesn’t matter that the original ESG data did not include comprehensive geospatial data or data in GeoJSON format, as we can simply augment our data set with the latitude and longitude for each region.
Using the $set operator, we can apply the logic for all regions of the data, as shown in Figure 20.
Leveraging the $switch operator, we evaluate a series of case expressions and set the coordinates of longitude and latitude for the particular province in Vietnam.
*Figure 20. $set stage and $switch operator run in MongoDB Compass*
Using MongoDB Charts’ built-in heatmap feature, we can now display the maximum air emission, environment management, and water waste metrics data for Vietnamese regions as a color-coded heat map.
*Figure 21. heatmaps of Environment, Air Emission, Water Waste Indexes in Vietnam in MongoDB Atlas Charts*
Below is a snippet with all the aggregation operators mentioned in Scenario 3:
```
[{
$match: {
Country: {
$ne: 'null'
},
Province: {
$ne: 'All'
}
}
}, {
$group: {
_id: {
country: '$Country',
province: '$Province'
},
environment_management: {
$max: '$Environment_Management_Index_Elevate'
},
air_emssion_index: {
$max: '$Air_Emissions_Index_Elevate'
},
water_waste_index: {
$max: '$Waste_Management_Index_Elevate'
}
}
}, {
$project: {
country: '$_id.country',
province: '$_id.province',
environment_management: 1,
air_emssion_index: 1,
water_waste_index: 1,
_id: 0
}
}, {
$set: {
loc: {
$switch: {
branches: [
{
'case': {
$eq: [
'$province',
'Southeast'
]
},
then: {
type: 'Point',
coordinates: [
105.8,
21.02
]
}
},
{
'case': {
$eq: [
'$province',
'North Central Coast'
]
},
then: {
type: 'Point',
coordinates: [
105.54,
18.2
]
}
},
{
'case': {
$eq: [
'$province',
'Northeast'
]
},
then: {
type: 'Point',
coordinates: [
105.51,
21.01
]
}
},
{
'case': {
$eq: [
'$province',
'Mekong Delta'
]
},
then: {
type: 'Point',
coordinates: [
105.47,
10.02
]
}
},
{
'case': {
$eq: [
'$province',
'Central Highlands'
]
},
then: {
type: 'Point',
coordinates: [
108.3,
12.4
]
}
},
{
'case': {
$eq: [
'$province',
'Northwest'
]
},
then: {
type: 'Point',
coordinates: [
103.1,
21.23
]
}
},
{
'case': {
$eq: [
'$province',
'South Central Coast'
]
},
then: {
type: 'Point',
coordinates: [
109.14,
13.46
]
}
},
{
'case': {
$eq: [
'$province',
'Red River Delta'
]
},
then: {
type: 'Point',
coordinates: [
106.3,
21.11
]
}
}
],
'default': null
}
}
}
}]
```
## Speed, performance, and flexibility
As we can see from the scenarios above, MongoDB’s out-of-the box tools and capabilities — including a powerful aggregation pipeline framework for simple or complex data processing, Charts for data visualization, geospatial data management, and native drivers — can easily and quickly combine different ESG-related resources and produce actionable insights.
MongoDB has a distinct advantage over relational databases when it comes to handling ESG data, negating the need to produce the ORM mapping for each data set.
Import any type of ESG data, model the data to fit your specific use case, and perform tests and analytics on that data with only a few commands.
To learn more about how MongoDB can help with your ESG needs, please visit our dedicated solution page. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "In this quick guide, we will show you how MongoDB can move ESG data from different data sources to the document model, and more!\n",
"contentType": "Tutorial"
} | The 5-Minute Guide to Working with ESG Data on MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/node-aggregation-framework-3-3-2 | created | # Aggregation Framework with Node.js 3.3.2 Tutorial
When you want to analyze data stored in MongoDB, you can use MongoDB's powerful aggregation framework to do so. Today, I'll give you a high-level overview of the aggregation framework and show you how to use it.
>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.
>
>Click here to see a newer version of this post that uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.
If you're just joining us in this Quick Start with MongoDB and Node.js series, welcome! So far, we've covered how to connect to MongoDB and perform each of the CRUD (Create, Read, Update, and Delete) operations. The code we write today will use the same structure as the code we built in the first post in the series; so, if you have any questions about how to get started or how the code is structured, head back to that first post.
And, with that, let's dive into the aggregation framework!
>If you are more of a video person than an article person, fear not. I've made a video just for you! The video below covers the same content as this article.
>
>:youtube[]{vid=iz37fDe1XoM}
>
>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
## What is the Aggregation Framework?
The aggregation framework allows you to analyze your data in real time. Using the framework, you can create an aggregation pipeline that consists of one or more stages. Each stage transforms the documents and passes the output to the next stage.
If you're familiar with the Linux pipe ( `|` ), you can think of the aggregation pipeline as a very similar concept. Just as output from one command is passed as input to the next command when you use piping, output from one stage is passed as input to the next stage when you use the aggregation pipeline.
The aggregation framework has a variety of stages available for you to use. Today, we'll discuss the basics of how to use $match, $group, $sort, and $limit. Note that the aggregation framework has many other powerful stages including $count, $geoNear, $graphLookup, $project, $unwind, and others.
## How Do You Use the Aggregation Framework?
I'm hoping to visit the beautiful city of Sydney, Australia soon. Sydney is a huge city with many suburbs, and I'm not sure where to start looking for a cheap rental. I want to know which Sydney suburbs have, on average, the cheapest one-bedroom Airbnb listings.
I could write a query to pull all of the one-bedroom listings in the Sydney area and then write a script to group the listings by suburb and calculate the average price per suburb. Or, I could write a single command using the aggregation pipeline. Let's use the aggregation pipeline.
There are a variety of ways you can create aggregation pipelines. You can write them manually in a code editor or create them visually inside of MongoDB Atlas or MongoDB Compass. In general, I don't recommend writing pipelines manually as it's much easier to understand what your pipeline is doing and spot errors when you use a visual editor. Since you're already set up to use MongoDB Atlas for this blog series, we'll create our aggregation pipeline in Atlas.
### Navigate to the Aggregation Pipeline Builder in Atlas
The first thing we need to do is navigate to the Aggregation Pipeline Builder in Atlas.
1. Navigate to Atlas and authenticate if you're not already authenticated.
2. In the **Organizations** menu in the upper-left corner, select the organization you are using for this Quick Start series.
3. In the **Projects** menu (located beneath the Organizations menu), select the project you are using for this Quick Start series.
4. In the right pane for your cluster, click **COLLECTIONS**.
5. In the list of databases and collections that appears, select **listingsAndReviews**.
6. In the right pane, select the **Aggregation** view to open the Aggregation Pipeline Builder.
The Aggregation Pipeline Builder provides you with a visual representation of your aggregation pipeline. Each stage is represented by a new row. You can put the code for each stage on the left side of a row, and the Aggregation Pipeline Builder will automatically provide a live sample of results for that stage on the right side of the row.
## Build an Aggregation Pipeline
Now we are ready to build an aggregation pipeline.
### Add a $match Stage
Let's begin by narrowing down the documents in our pipeline to one-bedroom listings in the Sydney, Australia market where the room type is "Entire home/apt." We can do so by using the $match stage.
1. On the row representing the first stage of the pipeline, choose **$match** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$match` operator in the code box for the stage.
2. Now we can input a query in the code box. The query syntax for `$match` is the same as the `findOne()` syntax that we used in a previous post. Replace the code in the `$match` stage's code box with the following:
``` json
{
bedrooms: 1,
"address.country": "Australia",
"address.market": "Sydney",
"address.suburb": { $exists: 1, $ne: "" },
room_type: "Entire home/apt"
}
```
Note that we will be using the `address.suburb` field later in the pipeline, so we are filtering out documents where `address.suburb` does not exist or is represented by an empty string.
The Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$match` stage is executed.
### Add a $group Stage
Now that we have narrowed our documents down to one-bedroom listings in the Sydney, Australia market, we are ready to group them by suburb. We can do so by using the $group stage.
1. Click **ADD STAGE**. A new stage appears in the pipeline.
2. On the row representing the new stage of the pipeline, choose **$group** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$group` operator in the code box for the stage.
3. Now we can input code for the `$group` stage. We will provide an `_id`, which is the field that the Aggregation Framework will use to create our groups. In this case, we will use `$address.suburb` as our `_id`. Inside of the $group stage, we will also create a new field named `averagePrice`. We can use the $avg aggregation pipeline operator to calculate the average price for each suburb. Replace the code in the $group stage's code box with the following:
``` json
{
_id: "$address.suburb",
averagePrice: {
"$avg": "$price"
}
}
```
The Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$group` stage is executed. Note that the documents have been transformed. Instead of having a document for each listing, we now have a document for each suburb. The suburb documents have only two fields: `_id` (the name of the suburb) and `averagePrice`.
### Add a $sort Stage
Now that we have the average prices for suburbs in the Sydney, Australia market, we are ready to sort them to discover which are the least expensive. We can do so by using the $sort stage.
1. Click **ADD STAGE**. A new stage appears in the pipeline.
2. On the row representing the new stage of the pipeline, choose **$sort** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$sort` operator in the code box for the stage.
3. Now we are ready to input code for the `$sort` stage. We will sort on the `$averagePrice` field we created in the previous stage. We will indicate we want to sort in ascending order by passing `1`. Replace the code in the `$sort` stage's code box with the following:
``` json
{
"averagePrice": 1
}
```
The Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$sort` stage is executed. Note that the documents have the same shape as the documents in the previous stage; the documents are simply sorted from least to most expensive.
### Add a $limit Stage
Now we have the average prices for suburbs in the Sydney, Australia market sorted from least to most expensive. We may not want to work with all of the suburb documents in our application. Instead, we may want to limit our results to the 10 least expensive suburbs. We can do so by using the $limit stage.
1. Click **ADD STAGE**. A new stage appears in the pipeline.
2. On the row representing the new stage of the pipeline, choose **$limit** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$limit` operator in the code box for the stage.
3. Now we are ready to input code for the `$limit` stage. Let's limit our results to 10 documents. Replace the code in the $limit stage's code box with the following:
``` json
10
```
The Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 10 documents that will be included in the results after the `$limit` stage is executed. Note that the documents have the same shape as the documents in the previous stage; we've simply limited the number of results to 10.
## Execute an Aggregation Pipeline in Node.js
Now that we have built an aggregation pipeline, let's execute it from inside of a Node.js script.
### Get a Copy of the Node.js Template
To make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.
1. Download a copy of template.js.
2. Open `template.js` in your favorite code editor.
3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.
4. Save the file as `aggregation.js`.
You can run this file by executing `node aggregation.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.
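If you'd rather not download the file, the template is essentially just a script that opens and closes a connection to your cluster. A rough equivalent is sketched below — the real template's structure, helper functions, and error handling may differ slightly:
``` js
const { MongoClient } = require('mongodb');

async function main() {
    // Replace the placeholder with the connection URI for your Atlas cluster
    const uri = "YOUR_ATLAS_CONNECTION_STRING";
    const client = new MongoClient(uri);

    try {
        await client.connect();

        // Make the appropriate DB calls

    } finally {
        await client.close();
    }
}

main().catch(console.error);
```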
### Create a Function
Let's create a function whose job it is to print the cheapest suburbs for a given market.
1. Continuing to work in `aggregation.js`, create an asynchronous function named `printCheapestSuburbs` that accepts a connected MongoClient, a country, a market, and the maximum number of results to print as parameters.
``` js
async function printCheapestSuburbs(client, country, market, maxNumberToPrint) {
}
```
2. We can execute a pipeline in Node.js by calling
Collection's
aggregate().
Paste the following in your new function:
``` js
const pipeline = [];
const aggCursor = client.db("sample_airbnb")
.collection("listingsAndReviews")
.aggregate(pipeline);
```
3. The first parameter for `aggregate()` is the pipeline: an array of stage objects. We could manually create the pipeline here. Since we've already created a pipeline inside of Atlas, let's export the pipeline from there. Return to the Aggregation Pipeline Builder in Atlas and click the **Export pipeline code to language** button.
4. The **Export Pipeline To Language** dialog appears. In the **Export Pipeline To** selection box, choose **NODE**.
5. In the Node pane on the right side of the dialog, click the **copy** button.
6. Return to your code editor and paste the `pipeline` in place of the empty array currently assigned to the `pipeline` constant.
``` js
const pipeline = [
{
'$match': {
'bedrooms': 1,
'address.country': 'Australia',
'address.market': 'Sydney',
'address.suburb': {
'$exists': 1,
'$ne': ''
},
'room_type': 'Entire home/apt'
}
}, {
'$group': {
'_id': '$address.suburb',
'averagePrice': {
'$avg': '$price'
}
}
}, {
'$sort': {
'averagePrice': 1
}
}, {
'$limit': 10
}
];
```
7. This pipeline would work fine as written. However, it is hardcoded to search for 10 results in the Sydney, Australia market. We should update this pipeline to be more generic. Make the following replacements in the pipeline definition:
1. Replace `'Australia'` with `country`
2. Replace `'Sydney'` with `market`
3. Replace `10` with `maxNumberToPrint`
8. `aggregate()` will return an AggregationCursor, which we are storing in the `aggCursor` constant. An AggregationCursor allows traversal over the aggregation pipeline results. We can use AggregationCursor's forEach() to iterate over the results. Paste the following inside `printCheapestSuburbs()` below the definition of `aggCursor`.
``` js
await aggCursor.forEach(airbnbListing => {
console.log(`${airbnbListing._id}: ${airbnbListing.averagePrice}`);
});
```
### Call the Function
Now we are ready to call our function to print the 10 cheapest suburbs in the Sydney, Australia market. Add the following call in the `main()` function beneath the comment that says `Make the appropriate DB calls`.
``` js
await printCheapestSuburbs(client, "Australia", "Sydney", 10);
```
Running aggregation.js results in the following output:
``` json
Balgowlah: 45.00
Willoughby: 80.00
Marrickville: 94.50
St Peters: 100.00
Redfern: 101.00
Cronulla: 109.00
Bellevue Hill: 109.50
Kingsgrove: 112.00
Coogee: 115.00
Neutral Bay: 119.00
```
Now I know what suburbs to begin searching as I prepare for my trip to Sydney, Australia.
## Wrapping Up
The aggregation framework is an incredibly powerful way to analyze your data. Learning to create pipelines may seem a little intimidating at first, but it's worth the investment. The aggregation framework can get results to your end-users faster and save you from a lot of scripting.
Today, we only scratched the surface of the aggregation framework. I highly recommend MongoDB University's free course specifically on the aggregation framework: M121: The MongoDB Aggregation Framework. The course has a more thorough explanation of how the aggregation framework works and provides detail on how to use the various pipeline stages.
This post included many code snippets that built on code written in the first post of this MongoDB and Node.js Quick Start series. To get a full copy of the code used in today's post, visit the Node.js Quick Start GitHub Repo.
Now you're ready to move on to the next post in this series all about change streams and triggers. In that post, you'll learn how to automatically react to changes in your database.
Questions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums. | md | {
"tags": [
"JavaScript",
"MongoDB"
],
"pageDescription": "Discover how to analyze your data using MongoDB's Aggregation Framework and Node.js.",
"contentType": "Quickstart"
} | Aggregation Framework with Node.js 3.3.2 Tutorial | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/crypto-news-website | created | # Building a Crypto News Website in C# Using the Microsoft Azure App Service and MongoDB Atlas
Who said creating a website has to be hard?
Writing the code, persisting news, hosting the website. A decade ago, this might have been a lot of work. These days, thanks to Microsoft Blazor, Microsoft Azure App Service, and MongoDB Atlas, you can get started in minutes. And finish it equally fast!
In this tutorial, I will walk you through:
* Setting up a new Blazor project.
* Creating a new page with a simple UI.
* Creating data in MongoDB Atlas.
* Displaying the news on the website.
* Making the website available by using Azure App Service to host it.
All you need is this tutorial and the following pre-requisites, but if you prefer to just read along for now, check out the GitHub repository for this tutorial where you can find the code and the tutorial.
## Pre-requisites for this tutorial
Before we get started, here is a list of everything you need while working through the tutorial. I recommend getting everything set up first so that you can seamlessly follow along.
* Download and install the .NET framework.
For this tutorial, I am using .NET 7.0.102 for Windows, but any .NET 6.0 or higher should do.
* Download and install Visual Studio.
I am using the 2022 Community edition, version 17.4.4, but any 2019 or 2022 edition will be okay. Make sure to install the `Azure development` workload as we will be deploying with this later. If you already have an installed version of Visual Studio, go into the Installer and click `modify` to find it.
* Sign up for a free Microsoft Azure account.
* Sign up for a free MongoDB Atlas account.
## Creating a new Microsoft Blazor project that will contain our crypto news website
Now that the pre-requisites are out of the way, let's start by creating a new project.
I have recently discovered Microsoft Blazor and I absolutely love it. Such an easy way to create websites quickly and easily. And you don't even have to write any JavaScript or PHP! Let's use it for this tutorial, as well. Search for `Blazor Server App` and click `Next`.
Choose a `Project name` and `Location` of your liking. I like to have the solution and project in the same directory, but you don't have to.
Choose your currently installed .NET framework (as described in `Pre-requisites`) and leave the rest on default.
Hit `Create` and you are good to go!
## Adding the MongoDB driver to the project to connect to the database
Before we start getting into the code, we need to add one NuGet package to the project: the MongoDB driver. The driver is a library that lets you easily access your MongoDB Atlas cluster and work with your database. Click on `Project` -> `Manage NuGet Packages...` and search for `MongoDB.Driver`.
During that process, you might have to install additional components, like the ones shown in the following screenshot. Confirm this installation as we will need some of those, as well.
Another message you come across might be the following license agreements, which you need to accept to be able to work with those libraries.
## Creating a new MongoDB Atlas cluster and database to host our crypto news
Now that we've installed the driver, let's go ahead and create a cluster and database to connect to.
When you register a new account, you will be presented with the selection of a cloud database to deploy. Open the `Advanced Configuration Options`.
For this tutorial, we only need the forever-free shared tier. Since the website will later be deployed to Azure, we also want the Atlas cluster deployed in Azure. And we also want both to reside in the same region. This way, we decrease the chance of having an additional latency as much as possible.
Here, you can choose any region. Just make sure to chose the same one later on when deploying the website to Azure. The remaining options can be left on their defaults.
The final step of creating a new cluster is to think about security measures by going through the `Security Quickstart`.
Choose a `Username` and `Password` for the database user that will access this cluster during the tutorial. For the `Access List`, we need to add `0.0.0.0/0` since we do not know the IP address of our Azure deployment yet. This is okay for development purposes and testing, but in production, you should restrict access to the specific IPs accessing Atlas.
Atlas also supports the use of network peering and private connections using the major cloud providers. This includes Azure Private Link or Azure Virtual Private Connection (VPC), if you are using an M10 or above cluster.
Now hit `Finish and Close`.
Creating a new shared cluster happens very, very fast and you should be able to start within minutes. As soon as the cluster is created, you'll see it in your list of `Database Deployments`.
Let's add some sample data for our website! Click on `Browse Collections` now.
If you've never worked with Atlas before, here are some vocabularies to get your started:
- A cluster consists of multiple nodes (for redundancy).
- A cluster can contain multiple databases (which are replicated onto all nodes).
- Each database can contain many collections, which are similar to tables in a relational database.
- Each collection can then contain many documents. Think rows, just better!
- Documents are super-flexible because each document can have its own set of properties. They are easy to read and super flexible to work with JSON-like structures that contain our data.
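To make that concrete, each news entry we create later in this tutorial ends up as a document that looks roughly like the following (the `title` and `_id` values shown here are just examples):
```
{
  "_id": { "$oid": "63cf9c0a2f3b4a6d8e9f0a1b" },
  "title": "Bitcoin hits a new all-time high",
  "date": { "$date": "2023-01-24T00:00:00.000Z" }
}
```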
## Creating some test data in Atlas
Since there is no data yet, you will see an empty list of databases and collections. Click on `Add My Own Data` to add the first entry.
The database name and collection name can be anything, but to be in line with the code we'll see later, call them `crypto-news-website` and `news` respectively, and hit `Create`.
This should lead to a new entry that looks like this:
Next, click on `INSERT DOCUMENT`.
There are a couple things going on here. The `_id` has already been created automatically. Each document contains one of those and they are of type `ObjectId`. It uniquely identifies the document.
By hovering over the line count on the left, you'll get a pop-up to add more fields. Add one called `title` and set its value to whatever you like. The screenshot shows an example you can use. Choose `String` as the type on the right. Next, add a `date` and choose `Date` as the type on the right.
Repeat the above process a couple times to get as much example data in there as you like. You may also just continue with one entry, though, if you like, and fill up your news when you are done.
## Creating a connection string to access your MongoDB Atlas cluster
The final step within MongoDB Atlas is to actually create access to this database so that the MongoDB driver we installed into the project can connect to it. This is done by using a connection string.
A connection string is a URI that contains username, password, and the host address of the database you want to connect to.
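A typical Atlas connection string (in its SRV form) has the following shape — the placeholders in angle brackets are replaced with your own values, and the host name is specific to your cluster:
```
mongodb+srv://<username>:<password>@<cluster-name>.<cluster-id>.mongodb.net/?retryWrites=true&w=majority
```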
Click on `Databases` on the left to get back to the cluster overview.
This time, hit the `Connect` button and then `Connect Your Application`.
If you haven't done so already, choose a username and password for the database user accessing this cluster during the tutorial. Also, add `0.0.0.0/0` as the IP address so that the Azure deployment can access the cluster later on.
Copy the connection string that is shown in the pop-up.
## Creating a new Blazor page
If you have never used Blazor before, just hit the `Run` button and have a look at the template that has been generated. It's a great start, and we will be reusing some parts of it later on.
Let's add our own page first, though. In your Solution Explorer, you'll see a `Pages` folder. Right-click it and add a `Razor Component`. Those are files that combine the HTML of your page with C# code.
Now, replace the content of the file with the following code. Explanations can be read inline in the code comments.
```csharp
@* The `page` attribute defines how this page can be opened. *@
@page "/news"
@* The `MongoDB` driver will be used to connect to your Atlas cluster. *@
@using MongoDB.Driver
@* `BSON` is a file format similar to JSON. MongoDB Atlas documents are BSON documents. *@
@using MongoDB.Bson
@* You need to add the `Data` folder as well. This is where the `News` class resides. *@
@using CryptoNewsApp.Data
@using Microsoft.AspNetCore.Builder
@* The page title is what your browser tab will be called. *@
<PageTitle>News</PageTitle>
@* Let's add a header to the page. *@
<h1>NEWS</h1>
@* And then some data. *@
@* This is just a simple table contains news and their date. *@
@if (_news != null)
{
    <table class="table">
        <thead>
            <tr>
                <th>News</th>
                <th>Date</th>
            </tr>
        </thead>
        <tbody>
            @* Blazor takes this data from the `_news` field that we will fill later on. *@
            @foreach (var newsEntry in _news)
            {
                <tr>
                    <td>@newsEntry.Title</td>
                    <td>@newsEntry.Date</td>
                </tr>
            }
        </tbody>
    </table>
}
@* This part defines the code that will be run when the page is loaded. It's basically *@
@* what would usually be PHP in a non-Blazor environment. *@
@code {
// The `_news` field will hold all our news. We will have a look at the `News`
// class in just a moment.
private List<News>? _news;
// `OnInitializedAsync()` gets called when the website is loaded. Our data
// retrieval logic has to be placed here.
protected override async Task OnInitializedAsync()
{
// First, we need to create a `MongoClient` which is what we use to
// connect to our cluster.
// The only argument we need to pass on is the connection string you
// retrieved from Atlas. Make sure to replace the password placeholder with your password.
var mongoClient = new MongoClient("YOUR_CONNECTION_STRING");
// Using the `mongoClient` we can now access the database.
var cryptoNewsDatabase = mongoClient.GetDatabase("crypto-news-website");
// Having a handle to the database we can furthermore get the collection data.
// Note that this is a generic function that takes `News` as its parameter
// to define what the documents in this collection look like.
var newsCollection = cryptoNewsDatabase.GetCollection<News>("news");
// Having access to the collection, we issue a `Find` call to find all documents.
// A `Find` takes a filter as an argument. This filter is written as a `BsonDocument`.
// Remember, `BSON` is really just a (binary) JSON.
// Since we don't want to filter anything and get all the news, we pass along an
// empty / new `BsonDocument`. The result is then transformed into a list with `ToListAsync()`.
_news = await newsCollection.Find(new BsonDocument()).Limit(10).ToListAsync();
// And that's it! It's as easy as that using the driver to access the data
// in your MongoDB Atlas cluster.
}
}
```
Above, you'll notice the `News` class, which still needs to be created.
In the `Data` folder, add a new C# class, call it `News`, and use the following code.
```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
namespace CryptoNewsApp.Data
{
public class News
{
// The attribute `BsonId` signals the MongoDB driver that this field
// should used to map the `_id` from the Atlas document.
// Remember to use the type `ObjectId` here as well.
[BsonId] public ObjectId Id { get; set; }
// The two other fields in each news are `title` and `date`.
// Since the C# coding style differs from the Atlas naming style, we have to map them.
// Thankfully there is another handy attribute to achieve this: `BsonElement`.
// It takes the document field's name and maps it to the classes field name.
[BsonElement("title")] public String Title { get; set; }
[BsonElement("date")] public DateTime Date { get; set; }
}
}
```
Now it's time to look at the result. Hit `Run` again.
The website should open automatically. Just add `/news` to the URL to see your new News page.
If you want to learn more about how to add the news page to the menu on the left, you can have a look at more of my Blazor-specific tutorials.
## Deploying the website to Azure App Service
So far, so good. Everything is running locally. Now to the fun part: going live!
Visual Studio makes this super easy. Just click onto your project and choose `Publish...`.
The `Target` is `Azure`, and the `Specific target` is `Azure App Service (Windows)`.
When you registered for Azure earlier, a free subscription should have already been created and chosen here. By clicking on `Create new` on the right, you can now create a new App Service.
The default settings are all totally fine. You can, however, choose a different region here if you want to. Finally, click `Create` and then `Finish`.
When ready, the following pop-up should appear. By clicking `Publish`, you can start the actual publishing process. It eventually shows the result of the publish.
The above summary will also show you the URL that was created for the deployment. My example: https://cryptonewsapp20230124021236.azurewebsites.net/
Again, add `/news` to it to get to the News page.
## What's next?
Go ahead and add some more data. Add more fields or style the website a bit more than this default table.
The combination of using Microsoft Azure and MongoDB Atlas makes it super easy and fast to create websites like this one. But it is only the start. You can learn more about Azure on the Learn platform and about Atlas on the MongoDB University.
And if you have any questions, please reach out to us at the MongoDB Forums or tweet @dominicfrei. | md | {
"tags": [
"C#",
"MongoDB",
".NET",
"Azure"
],
"pageDescription": "This article by Dominic Frei will lead you through creating your first Microsoft Blazor server application and deploying it to Microsoft Azure.",
"contentType": "Tutorial"
} | Building a Crypto News Website in C# Using the Microsoft Azure App Service and MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/building-remix-applications | created | # Building Remix Applications with the MongoDB Stack
The JavaScript ecosystem has stabilized over the years. There isn’t a new framework every other day, but some interesting projects are still emerging. Remix is one of those newer projects that is getting a lot of traction in the developer communities. Remix is based on top of React and lets you use the same code base between your back end and front end. The pages are server-side generated but also dynamically updated without full page reloads. This makes your web application much faster and even lets it run without JavaScript enabled. In this tutorial, you will learn how to use it with MongoDB using the new MongoDB-Remix stack.
## Requirements
For this tutorial, you will need:
* Node.js.
* A MongoDB (free) cluster with the sample data loaded.
## About the MongoDB-Remix stack
Remix uses stacks of technology to help you get started with your projects. This stack, similar to others provided by the Remix team, includes React, TypeScript, and Tailwind. As for the data persistence layer, it uses MongoDB with the native JavaScript driver.
## Getting started
Start by initializing a new project. This can be done with the `create-remix` tool, which can be launched with npx. Answer the questions, and you will have the basic scaffolding for your project. Notice how we use the `--template` parameter to load the MongoDB Remix stack (`mongodb-developer/remix`) from Github. The second parameter specifies the folder in which you want to create this project.
```
npx create-remix --template mongodb-developer/remix remix-blog
```
This will start downloading the necessary packages for your application. Once everything is downloaded, you can `cd` into that directory and do a first build.
```
cd remix-blog
npm run build
```
You’re almost ready to start your application. Go to your MongoDB Atlas cluster (loaded with the sample data), and get your connection string.
At the root of your project, create a `.env` file with the `CONNECTION_STRING` variable, and paste your connection string. Your file should look like this.
```
CONNECTION_STRING=mongodb+srv://user:pass@cluster0.abcde.mongodb.net
```
At this point, you should be able to point your browser to http://localhost:3000 and see the application running.
Voilà! You’ve got a Remix application that connects to your MongoDB database. You can see the movie list, which fetches data from the `sample_mflix` database. Clicking on a movie title will bring you to the movie details page, which shows the plot. You can even add new movies to the collection if you want.
## Exploring the application
You now have a running application, but you will likely want to connect to a database that shows something other than sample data. In this section, we describe the various moving parts of the sample application and how you can edit them for your purposes.
### Database connection
The database connection is handled for you in the `/app/utils/db.server.ts` file. If you’ve used other Remix stacks in the past, you will find this code very familiar. The MongoDB driver used here will manage the pool of connections. The connection string is read from an environment variable, so there isn’t much you need to do here.
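For reference, a minimal version of such a file could look like the sketch below — the actual template may structure or name things differently, so treat this only as an illustration of the idea:
```
import { MongoClient } from "mongodb";

const connectionString = process.env.CONNECTION_STRING;
if (!connectionString) {
  throw new Error("CONNECTION_STRING must be set in your .env file");
}

const client = new MongoClient(connectionString);

// Connect once and reuse the same client; the driver manages the connection pool.
const clientPromise = client.connect();

export const mongodb = {
  async db(name: string) {
    await clientPromise;
    return client.db(name);
  },
};
```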
### Movie list
In the sample code, we connect to the `sample_mflix` database and get the first 10 results from the collection. If you are familiar with Remix, you might already know that the code for this page is located in the `/app/routes/movies/index.tsx` file. The sample app uses the default naming convention from the Remix nested routes system.
In that file, you will see a loader at the top. This loader is used for the list of movies and the search bar on that page.
```
export async function loader({ request }: LoaderArgs) {
const url = new URL(request.url);
let db = await mongodb.db("sample_mflix");
let collection = await db.collection("movies");
let movies = await collection.find({}).limit(10).toArray();
// …
return json({movies, searchedMovies});
}
```
You can see that the application connects to the `sample_mflix` database and the `movies` collection. From there, it uses the find method to retrieve some records. It queries the collection with an empty/unfiltered request object with a limit of 10 to fetch the databases' first 10 documents. The MongoDB Query API provides many ways to search and retrieve data.
You can change these to connect to your own database and see the result. You will also need to change the `MovieComponent` (`/app/components/movie.tsx`) to accommodate the documents you fetch from your database.
### Movie details
The movie details page can be found in `/app/routes/movies/$movieId.tsx`. In there, you will find similar code, but this time, it uses the findOne method to retrieve only a specific movie.
```
export async function loader({ params }: LoaderArgs) {
const movieId = params.movieId;
let db = await mongodb.db("sample_mflix");
let collection = await db.collection("movies");
let movie = await collection.findOne({_id: new ObjectId(movieId)});
return json(movie);
}
```
Again, this code uses the Remix routing standards to pass the `movieId` to the loader function.
### Add movie
You might have noticed the _Add_ link on the left menu. This lets you create a new document in your collection. The code for adding the document can be found in the `/app/routes/movies/add.tsx` file. In there, you will see an action function. This function will get executed when the form is submitted. This is thanks to the Remix Form component that we use here.
```
export async function action({ request }: ActionArgs) {
const formData = await request.formData();
const movie = {
title: formData.get("title"),
year: formData.get("year")
}
const db = await mongodb.db("sample_mflix");
const collection = await db.collection("movies");
const result = await collection.insertOne(movie);
return redirect(`/movies/${result.insertedId}`);
}
```
The code retrieves the form data to build the new document and uses the insertOne method from the driver to add this movie to the collection. You will notice the redirect utility at the end. This will send the users to the newly created movie page after the entry was successfully created.
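Following the same pattern, a hypothetical delete action — for example, in a new `/app/routes/movies/delete.$movieId.tsx` route that is not part of the template — could look roughly like this (import paths and types are assumptions based on standard Remix conventions):
```
import { redirect } from "@remix-run/node";
import type { ActionArgs } from "@remix-run/node";
import { ObjectId } from "mongodb";
import { mongodb } from "~/utils/db.server";

export async function action({ params }: ActionArgs) {
  const movieId = params.movieId;
  const db = await mongodb.db("sample_mflix");
  const collection = await db.collection("movies");
  // Remove the document whose _id matches the route parameter
  await collection.deleteOne({ _id: new ObjectId(movieId) });
  return redirect("/movies");
}
```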
## Next steps
That’s it! You have a running application and know how to customize it to connect to your database. If you want to learn more about using the native driver, use the link on the left navigation bar of the sample app or go straight to the documentation. Try adding pages to update and delete an entry from your collection. It should be using the same patterns as you see in the template. If you need help with the template, please ask in our community forums; we’ll gladly help. | md | {
"tags": [
"MongoDB",
"JavaScript"
],
"pageDescription": "In this tutorial, you will learn how to use Remix with MongoDB using the new MongoDB-Remix stack.",
"contentType": "Article"
} | Building Remix Applications with the MongoDB Stack | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/intro-to-realm-sdk-for-unity3d | created | # Introduction to the Realm SDK for Unity3D
In this video, Dominic Frei, iOS engineer on the Realm team, will
introduce you to the Realm SDK for Unity3D. He will be showing you how
to integrate and use the SDK based on a Unity example created during the
video so that you can follow along.
The video is separated into the following sections:
- What is Realm and where to download it?
- Creating an example project
- Adding Realm to your project
- Executing simple CRUD operations
- Recap / Summary
>
>
>Introduction to the Realm SDK for Unity3D
>
>:youtube[]{vid=8jo_S02HLkI}
>
>
For those of you who prefer to read, below we have a full transcript of
the video too. Please be aware that this is verbatim and it might not be
sufficient to understand everything without the supporting video.
>
>
>If you have questions, please head to our developer community
>website where the MongoDB engineers and
>the MongoDB community will help you build your next big idea with
>MongoDB.
>
>
## Transcript
Hello and welcome to a new tutorial! Today we're not talking about
playing games but rather how to make them. More specifically: how to use
the Realm SDK to persist your data in Unity. I will show you where to
download Realm, how to add it to your project and how to write the
necessary code to save and load your data. Let's get started!
Realm is an open-source, cross-platform database available for many
different platforms. Since we will be working with Unity we'll be using
the Realm .NET SDK. This is not yet available through the Unity package
manager so we need to download it from the Github repository directly. I
will put a link to it into the description. When you go to Releases you
can find the latest release right on the top of the page. Within the
assets you'll find two Unity files. Make sure to choose the file that
says 'unity.bundle' and download it.
Before we actually start integrating Realm into our Unity project let's
create a small example. I'll be creating it from scratch so that you can
follow along all the way and see how easy it is to integrate Realm into
your project. We will be using the Unity Editor version 2021.2.0a10. We
need to use this alpha version because there is a bug in the current
Unity LTS version preventing Realm from working properly. I'll give it a
name and choose a 2D template for this example.
We won't be doing much in the Unity Editor itself, most of the example
will take place in code. All we need to do here is to add a Square
object. The Square will change its color when we click on it and - as
soon as we add Realm to the project - the color will be persisted to the
database and loaded again when we start the game again. The square needs
to be clickable, therefore we need to add a collider. I will choose a
'Box Collider 2D' in this case. Finally we'll add a script to the
square, call it 'Square' and open the script.
The first thing we're going to do before we actually implement the
square's behaviour is to add another class which will hold our data, the
color of our square. We'll call this 'ColorEntity'. All we need for now
are three properties for the colors red, green and blue. They will be of
type float to match the UnityEngine's color properties and we'll default
them to 0, giving us an initial black color. Back in the Square
MonoBehaviour I'll add a ColorEntity property since we'll need that in
several locations. During the Awake of the Square we'll create a new
ColorEntity instance and then set the color of the square to this newly
created ColorEntity by accessing it's SpriteRenderer and setting it's
color. When we go back to the Unity Editor and enter Play Mode we'll see
a black square.
Ok, let's add the color change. Since we added a collider to the square
we can use the OnMouseDown event to listen for mouse clicks. All we want
to do here is to assign three random values to our ColorEntity. We'll
use Random.Range and clamp it between 0 and 1. Finally we need to update
the square with these colors. To avoid duplicated code I'll grab the
line from Awake where we set the color and put it in it's own function.
Now we just call it in Awake and after every mouse click. Let's have a
look at the result.
Initially we get our default black color. And with every click the color
changes. When I stop and start again, we'll of course end up with the
initial default again since the color is not yet saved. Let's do that
next!
We go to Window, Package Manager. From here we click on the plus icon
and choose 'Add package from tarball'. Now you just have to choose the
tarball downloaded earlier. Keep in mind that Unity does not import the
package and save it within your project but uses exactly this file
wherever it is. If you move it, your project won't work anymore. I
recommend moving this file from your Downloads to the project folder
first. As soon as it is imported you should see it in the Custom section
of your package list. That's all we need to do in the Unity Editor,
let's get back to Visual Studio.
Let's start with our ColorEntity. First, we want to import the Realm
package by adding 'using Realms'. The way Realm knows which objects are
meant to be saved in the database is by subclassing 'RealmObjects'.
That's all we have to do here really. To make our life a little bit
easier though I'll also add some more things. First, we want to have a
primary key by which we can later find the object we're looking for
easily. We'll just use the 'ObjectName' for that and add an attribute on
top of it, called 'PrimaryKey'. Next we add a default initialiser to
create a new Realm object for this class and a convenience initialiser
that sets the ObjectName right away. Ok, back to our Square. We need to
import Realm here as well. Then we'll create a property for the Realm
itself. This will later let us access our database. And all we need to
do to get access to it is to instantiate it. We'll do this during awake
as well, since it only needs to be done once.
Now that we're done setting up our Realm we can go ahead and look at how
to perform some simple CRUD operations. First, we want to actually
create the object in the database. We do this by calling add. Notice
that I have put this into a block that is passed to the write function.
We need to do this to tell our Realm that we are about to change data.
If another process was changing data at the same time we could end up in
a corrupt database. The write function makes sure that every other
process is blocked from writing to the database at the time we're
performing this change.
Another thing I'd like to add is a check if the ColorEntity we just
created already exists. If so, we don't need to create it again and in
fact can't since primary keys have to be unique. We do this by asking
our Realm for the ColorEntity we're looking for, identified by it's
primary key. I'll just call it 'square' for now. Now I check if the
object could be found and only if not, we'll be creating it with exactly
the same primary key. Whenever we update the color and therefore update
the properties of our ColorEntity we change data in our database.
Therefore we also need to wrap our mouse click within a write block.
Let's see how that looks in Unity. When we start the game we still see
the initial black state. We can still randomly update the color by
clicking on the square. And when we stop and start Play Mode again, we
see the color persists now.
Let's quickly recap what we've done. We added the Realm package in Unity
and imported it in our script. We added the superclass RealmObject to
our class that's supposed to be saved. And then all we need to do is to
make sure we always start a write transaction when we're changing data.
Notice that we did not need any transaction to actually read the data
down here in the SetColor function.
Alright, that's it for this tutorial. I hope you've learned how to use
Realm in your Unity project to save and load data.
>
>
>If you have questions, please head to our developer community
>website where the MongoDB engineers and
>the MongoDB community will help you build your next big idea with
>MongoDB.
>
>
| md | {
"tags": [
"Realm",
"Unity"
],
"pageDescription": "In this video, Dominic Frei, iOS engineer on the Realm team, will introduce you to the Realm SDK for Unity3D",
"contentType": "News & Announcements"
} | Introduction to the Realm SDK for Unity3D | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/go/get-hyped-using-docker-go-mongodb | created | # Get Hyped: Using Docker + Go with MongoDB
In the developer community, ensuring your projects run consistently regardless of the environment can be a pain. Whether it’s trying to recreate a demo from an online tutorial or working on a code review, hearing the words, “Well, it works on my machine…” can be frustrating. Instead of spending hours debugging, we want to introduce you to a platform that will change your developer experience: Docker.
Docker is a great tool to learn because it provides developers with the ability for their applications to be used easily between environments, and it's resource-efficient in comparison to virtual machines. This tutorial will gently guide you through how to navigate Docker, along with how to integrate Go on the platform. We will be using this project to connect to our previously built MongoDB Atlas Search Cluster made for using Synonyms in Atlas Search. Stay tuned for a fun read on how to learn all the above while also expanding your Gen-Z slang knowledge from our synonyms cluster. Get hyped!
## The Prerequisites
There are a few requirements that must be met to be successful with this tutorial.
- An M0 or better MongoDB Atlas cluster
- Docker Desktop
To use MongoDB with the Golang driver, you only need a free M0 cluster. To create this cluster, follow the instructions listed on the MongoDB documentation. However, we’ll be making many references to a previous tutorial where we used Atlas Search with custom synonyms.
Since this is a Docker tutorial, you’ll need Docker Desktop. You don’t actually need to have Golang configured on your host machine because Docker can take care of this for us as we progress through the tutorial.
## Building a Go API with the MongoDB Golang Driver
Like previously mentioned, you don’t need Go installed and configured on your host computer to be successful. However, it wouldn’t hurt to have it in case you wanted to test things prior to creating a Docker image.
On your computer, create a new project directory, and within that project directory, create a **src** directory with the following files:
- go.mod
- main.go
The **go.mod** file is our dependency management file for Go modules. It could easily be created manually or by using the following command:
```bash
go mod init
```
The **main.go** file is where we’ll keep all of our project code.
Starting with the **go.mod** file, add the following lines:
```
module github.com/mongodb-developer/docker-golang-example
go 1.15
require go.mongodb.org/mongo-driver v1.7.0
require github.com/gorilla/mux v1.8.0
```
Essentially, we’re defining what version of Go to use and the modules that we want to use. For this project, we’ll be using the MongoDB Go driver as well as the Gorilla Web Toolkit.
This brings us into the building of our simple API.
Within the **main.go** file, add the following code:
```golang
package main
import (
"context"
"encoding/json"
"fmt"
"net/http"
"os"
"time"
"github.com/gorilla/mux"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
var client *mongo.Client
var collection *mongo.Collection
type Tweet struct {
ID int64 `json:"_id,omitempty" bson:"_id,omitempty"`
FullText string `json:"full_text,omitempty" bson:"full_text,omitempty"`
User struct {
ScreenName string `json:"screen_name" bson:"screen_name"`
} `json:"user,omitempty" bson:"user,omitempty"`
}
func GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}
func SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}
func main() {
fmt.Println("Starting the application...")
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("MONGODB_URI")))
defer func() {
if err = client.Disconnect(ctx); err != nil {
panic(err)
}
}()
collection = client.Database("synonyms").Collection("tweets")
router := mux.NewRouter()
router.HandleFunc("/tweets", GetTweetsEndpoint).Methods("GET")
router.HandleFunc("/search", SearchTweetsEndpoint).Methods("GET")
http.ListenAndServe(":12345", router)
}
```
There’s more to the code, but before we see the rest, let’s start breaking down what we have above to make sense of it.
You’ll probably notice our `Tweet` data structure:
```golang
type Tweet struct {
ID int64 `json:"_id,omitempty" bson:"_id,omitempty"`
FullText string `json:"full_text,omitempty" bson:"full_text,omitempty"`
User struct {
ScreenName string `json:"screen_name" bson:"screen_name"`
} `json:"user,omitempty" bson:"user,omitempty"`
}
```
Earlier in the tutorial, we mentioned that this example is heavily influenced by a previous tutorial that used Twitter data. We highly recommend you take a look at it. This data structure has some of the fields that represent a tweet that we scraped from Twitter. We didn’t map all the fields because it just wasn’t necessary for this example.
Next, you’ll notice the following:
```golang
func GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}
func SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}
```
These will be the functions that hold our API endpoint logic. We’re going to skip these for now and focus on understanding the connection and configuration logic.
As of now, most of what we’re interested in is happening in the `main` function.
The first thing we’re doing is connecting to MongoDB:
```golang
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("MONGODB_URI")))
defer func() {
if err = client.Disconnect(ctx); err != nil {
panic(err)
}
}()
collection = client.Database("synonyms").Collection("tweets")
```
You’ll probably notice the `MONGODB_URI` environment variable in the above code. It’s not a good idea to hard-code the MongoDB connection string in the application. This prevents us from being flexible and it could be a security risk. Instead, we’re using environment variables that we’ll pass in with Docker when we deploy our containers.
You can visit the MongoDB Atlas dashboard for your URI string.
The database we plan to use is `synonyms` and we plan to use the `tweets` collection, both of which we talked about in that previous tutorial.
After connecting to MongoDB, we focus on configuring the Gorilla Web Toolkit:
```golang
router := mux.NewRouter()
router.HandleFunc("/tweets", GetTweetsEndpoint).Methods("GET")
router.HandleFunc("/search", SearchTweetsEndpoint).Methods("GET")
http.ListenAndServe(":12345", router)
```
In this code, we are defining which endpoint path should route to which function. The functions are defined, but we haven’t yet added any logic to them. The application itself will be serving on port 12345.
As of now, the application has the necessary basic connection and configuration information. Let’s circle back to each of those endpoint functions.
We’ll start with the `GetTweetsEndpoint` because it will work fine with an M0 cluster:
```golang
func GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {
response.Header().Set("content-type", "application/json")
 var tweets []Tweet
ctx, _ := context.WithTimeout(context.Background(), 30*time.Second)
cursor, err := collection.Find(ctx, bson.M{})
if err != nil {
response.WriteHeader(http.StatusInternalServerError)
response.Write([]byte(`{ "message": "` + err.Error() + `" }`))
return
}
if err = cursor.All(ctx, &tweets); err != nil {
response.WriteHeader(http.StatusInternalServerError)
response.Write([]byte(`{ "message": "` + err.Error() + `" }`))
return
}
json.NewEncoder(response).Encode(tweets)
}
```
In the above code, we’re saying that we want to use the `Find` operation on our collection for all documents in that collection, hence the empty filter object.
If there were no errors, we can get all the results from our cursor, load them into a `Tweet` slice, and then JSON encode that slice for sending to the client. The client will receive JSON data as a result.
Now we can look at the more interesting endpoint function.
```golang
func SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {
response.Header().Set("content-type", "application/json")
queryParams := request.URL.Query()
var tweets []Tweet
ctx, _ := context.WithTimeout(context.Background(), 30*time.Second)
searchStage := bson.D{
{"$search", bson.D{
{"index", "synsearch"},
{"text", bson.D{
{"query", queryParams.Get("q")},
{"path", "full_text"},
{"synonyms", "slang"},
}},
}},
}
cursor, err := collection.Aggregate(ctx, mongo.Pipeline{searchStage})
if err != nil {
response.WriteHeader(http.StatusInternalServerError)
response.Write([]byte(`{ "message": "` + err.Error() + `" }`))
return
}
if err = cursor.All(ctx, &tweets); err != nil {
response.WriteHeader(http.StatusInternalServerError)
response.Write([]byte(`{ "message": "` + err.Error() + `" }`))
return
}
json.NewEncoder(response).Encode(tweets)
}
```
The idea behind the above function is that we want to use an aggregation pipeline for Atlas Search. It does use the synonym information that we outlined in the previous tutorial.
The first important thing in the above code to note is the following:
```golang
queryParams := request.URL.Query()
```
We’re obtaining the query parameters passed with the HTTP request. We’re expecting a `q` parameter to exist with the search query to be used.
To keep things simple, we make use of a single stage for the MongoDB aggregation pipeline:
```golang
searchStage := bson.D{
{"$search", bson.D{
{"index", "synsearch"},
{"text", bson.D{
{"query", queryParams.Get("q")},
{"path", "full_text"},
{"synonyms", "slang"},
}},
}},
}
```
In this stage, we are doing a text search with a specific index and a specific set of synonyms. The query that we use for our text search comes from the query parameter of our HTTP request.
Assuming that everything went well, we can load all the results from the cursor into a `Tweet` slice, JSON encode it, and return it to the client that requested it.
If you have Go installed and configured on your computer, go ahead and try to run this application. Just don’t forget to add the `MONGODB_URI` to your environment variables prior.
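If you do want to try it locally before containerizing it, something along these lines should work — note that the search term is only an example, and it assumes the `synsearch` index and `slang` synonym mapping from the previous tutorial are already in place:

```bash
# Run from the src directory (where main.go and go.mod live)
export MONGODB_URI="mongodb+srv://<username>:<password>@<cluster-url>/?retryWrites=true&w=majority"
go run .

# In another terminal: list all tweets, then run a synonym-aware search
curl "http://localhost:12345/tweets"
curl "http://localhost:12345/search?q=excited"
```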
If you want to learn more about API development with the Gorilla Web Toolkit and MongoDB, check out this tutorial on the subject.
## Configuring a Docker Image for Go with MongoDB
Let’s get started with Docker! If it’s a platform you’ve never used before, it might seem a bit daunting at first, but let us guide you through it, step by step. We will be showing you how to download Docker and get started with setting up your first Dockerfile to connect to our Gen-Z Synonyms Atlas Cluster.
First things first. Let’s download Docker. This can be done through their website in just a couple of minutes.
Once you have that up and running, it’s time to create your very first Dockerfile.
At the root of your project folder, create a new **Dockerfile** file with the following content:
```
#get a base image
FROM golang:1.16-buster
MAINTAINER anaiya raisinghani
WORKDIR /go/src/app
COPY ./src .
RUN go get -d -v
RUN go build -v
CMD ["./docker-golang-example"]
```
This format is what many Dockerfiles are composed of, and a lot of it is heavily customizable and can be edited to fit your project's needs.
The first step is to grab a base image that you’re going to use to build your new image. You can think of using Dockerfiles as layers to a cake. There are a multitude of different base images out there, or you can use `FROM scratch` to start from an entirely blank image. Since this project is using the programming language Go, we chose to start from the `golang` base image and add the tag `1.16` to represent the version of Go that we plan to use. Whenever you include a tag next to your base image, be sure to set it up with a colon in between, just like this: `golang:1.16`. To learn more about which tag will benefit your project the best, check out Docker’s documentation on the subject.
This site holds a lot of different tags that can be used on a Golang base image. Tags are important because they hold very valuable information about the base image you’re using such as software versions, operating system flavor, etc.
Let’s run through the rest of what will happen in this Dockerfile!
It's optional to include a `MAINTAINER` for your image, but it's good practice so that people viewing your Dockerfile know who created it. If you do include it, add your full name and your email address.
The `WORKDIR /go/src/app` command is crucial to include in your Dockerfile since `WORKDIR` specifies which working directory you’re in. All the commands after will be run through whichever directory you choose, so be sure to be aware of which directory you’re currently in.
The `COPY ./src .` command allows you to copy whichever files you want from the specified location on the host machine into the Docker image.
Now, we can use the `RUN` command to set up exactly what we want to happen at image build time before deploying as a container. The first command we have is `RUN go get -d -v`, which will download all of the Go dependencies listed in the **go.mod** file that was copied into the image.
Our second `RUN` command is `RUN go build -v`, which will build our project into an executable binary file.
The last step of this Dockerfile is to use a `CMD` command, `CMD ["./docker-golang-example"]`. This command will define what is run when the container is deployed rather than when the image is built. Essentially, we're saying that we want the built Go application to be run when the container is deployed.
Once you have this Dockerfile set up, you can build and run your project, passing in your full MongoDB URI. To build the Docker image and deploy the container, execute the following from the command line:
```bash
docker build -t docker-syn-image .
docker run -d -p 12345:12345 -e "MONGODB_URI=YOUR_URI_HERE" docker-syn-image
```
Following these instructions will allow you to run the project and access it from http://localhost:12345. **But**! It’s so tedious. What if we told you there was an easier way to run your application without having to write in the entire URI link? There is! All it takes is one extra step: setting up a Docker Compose file.
## Setting Up a Docker Compose File to Streamline Deployments
A Docker Compose file is a nice way to run all your containers and their dependencies through a simple command: `docker compose up`.
In order to set up this file, you need to establish a YAML configuration file first. Do this by creating a new file in the root of your project folder, naming it **docker-compose**, and adding **.yml** at the end. You can name it something else if you like, but this is the easiest since when running the `docker compose up` command, you won’t need to specify a file name. Once that is in your project folder, follow the steps below.
This is what your Docker Compose file will look like once you have it all set up:
```yaml
version: "3.9"
services:
web:
build: .
ports:
- "12345:12345"
environment:
MONGODB_URI: your_URI_here
```
Let’s run through it!
First things first. Determine which schema version you want to be running. You should be using the most recent version, and you can find this out through Docker’s documentation.
Next, define which services, otherwise known as containers, you want to be running in your project. We have included `web` since we are attaching to our Atlas Search cluster. The name isn’t important and it acts more as an identifier for that particular service. Next, specify that you are building your application, and put in your `ports` information in the correct spot. For the next step, we can set up our `environment` as our MongoDB URI and we’re done!
Now, run the command `docker compose up` and watch the magic happen. Your container should build, then run, and you’ll be able to connect to your port and see all the tweets!
## Conclusion
This tutorial has now left you equipped with the knowledge you need to build a Go API with the MongoDB Golang driver, create a Dockerfile, create a Docker Compose file, and connect your newly built container to a MongoDB Atlas Cluster.
Using these new platforms will allow you to take your projects to a whole new level.
If you’d like to take a look at the code used in our project, you can access it on GitHub.
Using Docker or Go, but have a question? Check out the MongoDB Community Forums! | md | {
"tags": [
"Go",
"Docker"
],
"pageDescription": "Learn how to create and deploy Golang-powered micro-services that interact with MongoDB using Docker.",
"contentType": "Tutorial"
} | Get Hyped: Using Docker + Go with MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/ionic-realm-web-app-convert-to-mobile-app | created | # Let’s Give Your Realm-Powered Ionic Web App the Native Treatment on iOS and Android!
Realm is an open-source, easy-to-use local database that helps mobile developers to build better apps, faster. It offers a data synchronization service—MongoDB Realm Sync—that makes it simple to move data between the client and MongoDB Atlas on the back end. Using Realm can save you from writing thousands of lines of code, and offers an intuitive way to work with your data.
The Ionic team posted a fantastic article on how you can use Ionic with Realm to build a React Web app quickly, taking advantage of Realm to easily persist your data in a MongoDB Atlas Database.
After cloning the repo and running `ionic serve`, you'll have a really simple task management web application. You can register (using any user/password combination, Realm takes care of your onboarding needs). You can log in, have a look at your tasks, and add new tasks.
| Login in the Web App | Browsing Tasks |
|--------------|-----------|
| | |
Let’s build on what the Ionic team created for the web, and expand it by building a mobile app for iOS and Android using one of the best features Ionic has: the _“Write Once, Run Anywhere”_ approach to coding. I’ll start with an iOS app.
## Prerequisites
To follow along with this post, you’ll need five things:
* A macOS-powered computer running Xcode (to develop for iOS). I’m using Xcode 13 Beta. You don’t have to risk your sanity.
* Ionic installed. You can follow the instructions here, but TL;DR it’s `npm install -g @ionic/cli`
* Clone the repo with the Ionic React Web App that we’ll turn into mobile.
* As we need an Atlas Database to store our data in the cloud, and a Realm app to make it easy to work with Atlas from mobile, set up a Free Forever MongoDB cluster and create and import a Realm app schema so everything is ready server-side.
* Once you have your Realm app created, copy the Realm app ID from the MongoDB admin interface for Realm, and paste it into `src/App.tsx`, in the line:
`export const APP_ID = '';`
Once your `APP_ID` is set, run:
```
$ npm run build
```
## The iOS app
To add iOS capabilities to our existing app, we need to open a terminal and run:
```bash
$ ionic cap add ios
```
This will create the iOS Xcode Project native developers know and love, with the code from our Ionic app. I ran into a problem doing that and it was that the version of Capacitor used in the repo was 3.1.2, but for iOS, I needed at least 3.2.0. So, I just changed `package.json` and ran `npm install` to update Capacitor.
`package.json` fragment:
```
...
"dependencies": {
"@apollo/client": "^3.4.5",
"@capacitor/android": "3.2.2",
"@capacitor/app": "1.0.2",
"@capacitor/core": "3.2.0",
"@capacitor/haptics": "1.0.2",
"@capacitor/ios": "3.2.2",
...
```
Now we have a new `ios` directory. If we enter that folder, we’ll see an `App` directory that has a CocoaPods-powered iOS app. To run this iOS app, we need to:
* Change to that directory with `cd ios`. You’ll find an `App` directory. `cd App`
* Install all CocoaPods with `pod repo update && pod install`, as usual in a native iOS project. This updates all libraries’ caches for CocoaPods and then installs the required libraries and dependencies in your project.
* Open the generated `App.xcworkspace` file with Xcode. From Terminal, you can just type `open App.xcworkspace`.
* Run the app from Xcode.
| Login in the iOS App | Browsing Tasks |
|--------------|-----------|
|| |
That’s it. Apart from updating Capacitor, we only needed to run one command to get our Ionic web project running on iOS!
## The Android App
How hard can it be to build our Ionic app for Android now that we have done it for iOS? Well, it turns out to be super-simple. Just `cd` back to the root of the project and type in a terminal:
```
ionic cap add android
```
This will create the Android project. Once it has finished, launch your app using:
```
ionic capacitor run android -l --host=10.0.1.81
```
In this case, `10.0.1.81` is my own IP address. As you can see, if you have more than one Emulator or even a plugged-in Android phone, you can select where you want to run the Ionic app.
Once running, you can register, log in, and add tasks in Android, just like you can do in the web and iOS apps.
| Adding a task in Android | Browsing Tasks in Android |
|--------------|-----------|
|||
The best part is that thanks to the synchronization happening in the MongoDB Realm app, every time we add a new task locally, it gets uploaded to the cloud to a MongoDB Atlas database behind the scenes. And **all other apps accessing the same MongoDB Realm app can show that data**!
## Automatically refreshing tasks
Realm SDKs are well known for their syncing capabilities. You change something in the server, or in one app, and other users with access to the same data will see the changes almost immediately. You don’t have to worry about invalidating caches, writing complex networking/multithreading code that runs in the background, listening to silent push notifications, etc. MongoDB Realm takes care of all that for you.
But in this example, we access data using the Apollo GraphQL Client for React. Using this client, we can log into our Realm app and run GraphQL queries—although, as it’s designed for the web, we don’t have access to the hard drive to store a .realm file. It’s a simpler way to use the otherwise awesome Apollo GraphQL Client with Realm, but it means we don’t have synchronization implemented. Luckily, Apollo GraphQL queries can automatically refresh themselves just by passing a `pollInterval` argument. I told you it was awesome. You set the time interval in milliseconds to refresh the data.
So, in `useTasks.ts`, our function to get all tasks will look like this, auto-refreshing our data every half second.
```typescript
function useAllTasksInProject(project: any) {
const { data, loading, error } = useQuery(
gql`
query GetAllTasksForProject($partition: String!) {
tasks(query: { _partition: $partition }) {
_id
name
status
}
}
`,
{ variables: { partition: project.partition }, pollInterval: 500 }
);
if (error) {
throw new Error(`Failed to fetch tasks: ${error.message}`);
}
// If the query has finished, return the tasks from the result data
// Otherwise, return an empty list
 const tasks = data?.tasks ?? [];
return { tasks, loading };
}
```
Now we can sync our actions: Adding a task in the Android Emulator gets propagated to the iOS and Web versions.
## Pull to refresh
Adding automatic refresh is nice, but in mobile apps, we’re used to also refreshing lists of data just by pulling them. To get this, we’ll need to add the Ionic component `IonRefresher` to our Home component:
```html
Tasks
Tasks
{loading ? : null}
{tasks.map((task: any) => (
))}
```
As we can see, an `IonRefresher` component will add the pull-to-refresh functionality with an included loading indicator tailored for each platform.
```html
<IonRefresher slot="fixed" onIonRefresh={doRefresh}>
  <IonRefresherContent />
</IonRefresher>
```
To refresh, we call `doRefresh` and there, we just reload the whole page.
```typescript
const doRefresh = (event: CustomEvent) => {
window.location.reload(); // reload the whole page
event.detail.complete(); // we signal the loading indicator to hide
};
```
## Deleting tasks
Right now, we can swipe tasks from right to left to change the status of our tasks. But I wanted to also add a left to right swipe so we can delete tasks. We just need to add the swiping control to the already existing `IonItemSliding` control. In this case, we want a swipe from the _start_ of the control. This way, we avoid any ambiguities with right-to-left vs. left-to-right languages. When the user taps on the new “Delete” button (which will appear red as we’re using the _danger_ color), `deleteTaskSelected` is called.
```html
{task.name}
Status
Delete
```
To delete the task, `deleteTaskSelected` closes the sliding menu and calls `deleteTask`, which is backed by a GraphQL mutation defined in `useTaskMutations.ts`:
```typescript
const deleteTaskSelected = () => {
slidingRef.current?.close(); // close sliding menu
deleteTask(task); // delete task
};
```
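For reference, the `deleteTask` function comes from an Apollo mutation hook in `useTaskMutations.ts`. A minimal sketch of what such a hook might look like is below — the mutation name `deleteOneTask` follows the naming Realm's generated GraphQL schema typically uses for a `Task` type, but the hook shape and variable names here are assumptions rather than a copy of the repo's code:

```typescript
import { gql, useMutation } from "@apollo/client";

// Sketch only: hook and variable names are assumptions, not taken from the repo.
const DeleteTaskMutation = gql`
  mutation DeleteTask($taskId: ObjectId!) {
    deletedTask: deleteOneTask(query: { _id: $taskId }) {
      _id
    }
  }
`;

export function useDeleteTask() {
  const [deleteTaskMutation] = useMutation(DeleteTaskMutation);

  const deleteTask = async (task: any) => {
    const { data } = await deleteTaskMutation({
      variables: { taskId: task._id },
    });
    return data.deletedTask;
  };

  return deleteTask;
}
```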
## Recap
In this post, we’ve seen how easy it is to start with an Ionic React web application and, with only a few lines of code, turn it into a mobile app running on iOS and Android. Then, we easily added some functionality to the three apps at the same time. Ionic makes it super simple to run your Realm-powered apps everywhere!
You can check out the code from this post in this branch of the repo, just by typing:
```
$ git clone https://github.com/mongodb-developer/ionic-realm-demo
$ git checkout observe-changes
```
But this is not the only way to integrate Realm in your Ionic apps. Using Capacitor and our native SDKs, we’ll show you how to use Realm from Ionic in a future follow-up post.
| md | {
"tags": [
"Realm",
"JavaScript",
"GraphQL",
"React"
],
"pageDescription": "We can convert a existing Ionic React Web App that saves data in MongoDB Realm using Apollo GraphQL into an iOS and Android app using a couple commands, and the three apps will share the same MongoDB Realm backend. Also, we can easily add functionality to all three apps, just modifying one code base.\n",
"contentType": "Tutorial"
} | Let’s Give Your Realm-Powered Ionic Web App the Native Treatment on iOS and Android! | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-flexible-sync-preview | created | # A Preview of Flexible Sync
> Atlas Device Sync's flexible sync mode is now GA. Learn more here.
## Introduction
When MongoDB acquired Realm in 2019, we knew we wanted to give developers the easiest and fastest way to synchronize data on-device with a backend in the cloud.
:youtube[]{vid=6WrQ-f0dcIA}
In an offline-first environment, edge-to-cloud data sync typically requires thousands of lines of complex conflict resolution and networking code, and leaves developers with code bloat that slows the development of new features in the long-term. MongoDB’s Atlas Device Sync simplifies moving data between the Realm Mobile Database and MongoDB Atlas. With huge amounts of boilerplate code eliminated, teams are able to focus on the features that drive 5-star app reviews and happy users.
Since bringing Atlas Device Sync to GA in February 2021, we’ve seen it transform the way developers are building data synchronization into their mobile applications. But we’ve also seen developers creating workarounds for complex sync use cases. With that in mind, we’ve been hard at work building the next iteration of Sync, which we’re calling Flexible Sync.
Flexible Sync takes into account a year’s worth of user feedback on partition-based sync, and aims to make syncing data to MongoDB Atlas a simple and idiomatic process by using a client-defined query to define the data synced to user applications.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
## How Flexible Sync Works
Flexible Sync lets developers start writing code that syncs data more quickly – allowing you to choose which data is synced via a language-native query and to change the queries that define your syncing data at any time.
With Flexible Sync, developers can enable devices to define a query on the client side using the Realm SDK’s query-language, which will execute on MongoDB Atlas to identify the set of data to Sync. Any documents that match the query will be translated to Realm Objects and saved to the client device’s local disk. The query will be maintained on the server, which will check in real-time to identify if new document insertions, updates, or deletions on Atlas change the query results. Relevant changes on the server-side will be replicated down to the client in real-time, and any changes from the client will be similarly replicated to Atlas.
## New Capabilities
Flexible Sync is distinctly different from the partition-based sync used by Device Sync today.
With partition-based sync, developers must configure a partition field for their Atlas database. This partition field lives on each document within the Atlas database that the operator wants to sync. Clients can then request access to different partitions of the Atlas database, using the different values of the partition key field. When a client opens a synchronized Realm they pass in the partition key value as a parameter. The sync server receives the value from the client, and sends any documents down to the client that match the partition key value. These documents are automatically translated as Realm Objects and stored on the client’s disk for offline access.
Partition-based sync works well for applications where data is static and compartmentalized, and where permissions models rarely need to change. With Flexible Sync, we’re making fine-grained and flexible permissioning possible, and opening up new app use cases through simplifying the syncing of data that requires ranged or dynamic queries.
## Flexible Permissions
Unlike with partition-based sync, Flexible Sync makes it seamless to implement the document-level permission model when syncing data - meaning synced fields can be limited based on a user’s role. We expect this to be available at preview, and with field-level permissions coming after that.
Consider a healthcare app, with different field-level permissions for Patients, Doctors, and Administrative staff using the application. A patient collection contains user data about the patient, their health history, procedures undergone, and prognosis. The patient accessing the app would only be able to see their full healthcare history, along with their own personal information. Meanwhile, a doctor using the app would be able to see any patients assigned to their care, along with healthcare history and prognosis. But doctors viewing patient data would be unable to view certain personal identifying information, like social security numbers. Administrative staff who handle billing would have another set of field-level permissions, seeing only the data required to successfully bill the patient.
Under the hood, this is made possible when Flexible Sync runs the query sent by the client, obtains the result set, and then subtracts any data from the result set sent down to the client based on the permissions. The server guards against clients receiving data they aren’t allowed to see, and developers can trust that the server will enforce compliance, even if a query is written with mistakes. In this way, Flexible Sync simplifies sharing subsets of data across groups of users and makes it easier for your application's permissions to mirror complex organizations and business requirements.
Flexible Sync also allows clients to share some documents but not others, based on the ResultSet of their query. Consider a company where teams typically share all the data within their respective teams, but not across teams. When a new project requires teams to collaborate, Flexible Sync makes this easy. The shared project documents could have a field called allowedTeams: [marketing, sales]. Each member of the team would have a client-side query, searching for all documents on allowedTeams matching marketing or sales using an $in operator, depending on what team that user was a member of.
## Ranged & Dynamic Queries
One of Flexible Sync's primary benefits is that it allows for simple synchronization of data that falls into a range – such as a time window – and automatically adds and removes documents as they fall in and out of range.
Consider an app used by a company’s workforce, where the users only need to see the last seven days of work orders. With partition-based sync, a time-based trigger needed to fire daily to move work orders in and out of the relevant partition. With Flexible Sync, a developer can write a ranged query that automatically includes and removes data as time passes and the 7-day window changes. By adding a time based range component to the query, code is streamlined. The sync resultset gets a built-in TTL, which previously had to be implemented by the operator on the server-side.
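Now that Flexible Sync is GA, here is a rough sketch of what that ranged subscription can look like with the Realm JS SDK. The `WorkOrder` model, its fields, and the App ID are placeholders for illustration, and the example assumes anonymous authentication is enabled:

```typescript
import Realm from "realm";

// Placeholder model for illustration only.
const WorkOrderSchema: Realm.ObjectSchema = {
  name: "WorkOrder",
  primaryKey: "_id",
  properties: { _id: "objectId", description: "string", createdAt: "date" },
};

async function openRecentWorkOrders(): Promise<Realm> {
  const app = new Realm.App({ id: "<your-app-id>" });
  const user = await app.logIn(Realm.Credentials.anonymous());

  const realm = await Realm.open({
    schema: [WorkOrderSchema],
    sync: { user, flexible: true },
  });

  // Only work orders created in the last seven days match the subscription.
  // Re-running the subscription periodically slides the window forward.
  const sevenDaysAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
  await realm.subscriptions.update((mutableSubs) => {
    mutableSubs.add(
      realm.objects("WorkOrder").filtered("createdAt >= $0", sevenDaysAgo),
      { name: "recentWorkOrders" }
    );
  });

  return realm;
}
```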
Flexible Sync also enables much more dynamic queries, based on user inputs. Consider a shopping app with millions of products in its Inventory collection. As users apply filters in the app – viewing only pants that are under $30 dollars and size large – the query parameters can be combined with logical ANDs and ORs to produce increasingly complex queries, and narrow down the search result even further. All of these query results are combined into a single realm file on the client’s device, which significantly simplifies code required on the client-side.
## Looking Ahead
Ultimately, our decision to build Flexible Sync is driven by the Realm team’s desire to eliminate every possible piece of boilerplate code for developers. We’re motivated by delivering a sync service that can fit any use case or schema design pattern you can imagine, so that you can spend your time building features rather than implementing workarounds.
The Flexible Sync project represents the next evolution of Atlas Device Sync. We’re working hard to get to a public preview by the end of 2021, and believe this query-based sync has the potential to become the standard for Sync-enabled applications. We won’t have every feature available on day one, but iterative releases over the course of 2022 will continuously bring you more query operators and permissions integrations.
Interested in joining the preview program? Sign up here and we'll let you know when Flexible Sync is available in preview.
| md | {
"tags": [
"Realm",
"React Native",
"Mobile"
],
"pageDescription": "Flexible Sync lets developers start writing code that syncs data more quickly – allowing you to choose which data is synced via a language-native query and to change the queries that define your syncing data at any time.",
"contentType": "Article"
} | A Preview of Flexible Sync | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/code-examples/python/python-quickstart-tornado | created | # Getting Started with MongoDB and Tornado
Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. Because Tornado uses non-blocking network I/O, it is ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.
Tornado also makes it very easy to create JSON APIs, which is how we're going to be using it in this example. Motor, the Python async driver for MongoDB, comes with built-in support for Tornado, making it as simple as possible to use MongoDB in Tornado regardless of the type of server you are building.
In this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your Tornado projects.
## Prerequisites
- Python 3.9.0
- A MongoDB Atlas cluster. Follow the "Get Started with Atlas" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.
## Running the Example
To begin, you should clone the example code from GitHub.
``` shell
git clone git@github.com:mongodb-developer/mongodb-with-tornado.git
```
You will need to install a few dependencies: Tornado, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.
``` shell
cd mongodb-with-tornado
pip install -r requirements.txt
```
It may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.
Once you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.
``` shell
export MONGODB_URL="mongodb+srv://:@/?retryWrites=true&w=majority"
```
Remember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.
The final step is to start your Tornado server.
``` shell
python app.py
```
Tornado does not output anything in the terminal when it starts, so as long as you don't have any error messages, your server should be running.
Once the application has started, you can view it in your browser at http://localhost:8000. There won't be much to see at the moment as you do not have any data! We'll look at each of the end-points a little later in the tutorial, but if you would like to create some data now to test, you need to send a `POST` request with a JSON body to the local URL.
``` shell
curl -X "POST" "http://localhost:8000/" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json; charset=utf-8' \
-d $'{
"name": "Jane Doe",
"email": "jdoe@example.com",
"gpa": "3.9"
}'
```
Try creating a few students via these `POST` requests, and then refresh your browser.
## Creating the Application
All the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.
### Connecting to MongoDB
One of the very first things we do is connect to our MongoDB database.
``` python
client = motor.motor_tornado.MotorClient(os.environ["MONGODB_URL"])
db = client.college
```
We're using the async motor driver to create our MongoDB client, and then we specify our database name `college`.
### Application Routes
Our application has four routes:
- POST / - creates a new student.
- GET / - view a list of all students or a single student.
- PUT /{id} - update a student.
- DELETE /{id} - delete a student.
Each of the routes corresponds to a method on the `MainHandler` class. Here is what that class looks like if we only show the method stubs:
``` python
class MainHandler(tornado.web.RequestHandler):
async def get(self, **kwargs):
pass
async def post(self):
pass
async def put(self, **kwargs):
pass
async def delete(self, **kwargs):
pass
```
As you can see, the method names correspond to the different `HTTP` methods. Let's walk through each method in turn.
#### POST - Create Student
``` python
async def post(self):
student = tornado.escape.json_decode(self.request.body)
student["_id"] = str(ObjectId())
new_student = await self.settings["db"]["students"].insert_one(student)
created_student = await self.settings["db"]["students"].find_one(
{"_id": new_student.inserted_id}
)
self.set_status(201)
return self.write(created_student)
```
Note how I am converting the `ObjectId` to a string before assigning it as the `_id`. MongoDB stores data as BSON, but we're encoding and decoding our data from JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId`, but JSON does not. Because of this, for simplicity, we convert ObjectIds to strings before storing them.
The route receives the new student data as a JSON string in the body of the `POST` request. We decode this string back into a Python object before passing it to our MongoDB client. Our client is available within the settings dictionary because we pass it to Tornado when we create the app. You can see this towards the end of the `app.py`.
``` python
app = tornado.web.Application(
    [
        (r"/", MainHandler),
        (r"/(?P<student_id>\w+)", MainHandler),
    ],
db=db,
)
```
The `insert_one` method response includes the `_id` of the newly created student. After we insert the student into our collection, we use the `inserted_id` to find the correct document and write it to our response. By default, Tornado will return an HTTP `200` status code, but in this instance, a `201` created is more appropriate, so we change the HTTP response status code with `set_status`.
#### GET - View Student Data
We have two different ways we may wish to view student data: either as a list of all students or a single student document. The `get` method handles both of these functions.
``` python
async def get(self, student_id=None):
if student_id is not None:
if (
student := await self.settings["db"]["students"].find_one(
{"_id": student_id}
)
) is not None:
return self.write(student)
else:
raise tornado.web.HTTPError(404)
else:
students = await self.settings["db"]["students"].find().to_list(1000)
return self.write({"students": students})
```
First, we check to see if the URL provided a path parameter of `student_id`. If it does, then we know that we are looking for a specific student document. We look up the corresponding student with `find_one` and the specified `student_id`. If we manage to locate a matching record, then it is written to the response as a JSON string. Otherwise, we raise a `404` not found error.
If the URL does not contain a `student_id`, then we return a list of all students.
Motor's `to_list` method requires a max document count argument. For this example, I have hardcoded it to `1000`; but in a real application, you would use the skip and limit parameters in find to paginate your results.
It's worth noting that as a defence against JSON hijacking, Tornado will not allow you to return an array as the root element. Most modern browsers have patched this vulnerability, but Tornado still errs on the side of caution. So, we must wrap the students array in a dictionary before we write it to our response.
#### PUT - Update Student
``` python
async def put(self, student_id):
    student = tornado.escape.json_decode(self.request.body)
    update_result = await self.settings["db"]["students"].update_one(
        {"_id": student_id}, {"$set": student}
    )

    if update_result.modified_count == 1:
        if (
            updated_student := await self.settings["db"]["students"].find_one(
                {"_id": student_id}
            )
        ) is not None:
            return self.write(updated_student)

    if (
        existing_student := await self.settings["db"]["students"].find_one(
            {"_id": student_id}
        )
    ) is not None:
        return self.write(existing_student)

    raise tornado.web.HTTPError(404)
```
The update route is like a combination of the create student and the student detail routes. It receives the id of the document to update `student_id` as well as the new data in the JSON body.
We attempt to `$set` the new values in the correct document with `update_one`, and then check to see if it correctly modified a single document. If it did, then we find that document that was just updated and return it.
If the `modified_count` is not equal to one, we still check to see if there is a document matching the id. A `modified_count` of zero could mean that there is no document with that id, but it could also mean that the document does exist, but it did not require updating because the current values are the same as those supplied in the `PUT` request.
Only after that final find fails, we raise a `404` Not Found exception.
#### DELETE - Remove Student
``` python
async def delete(self, student_id):
delete_result = await db["students"].delete_one({"_id": student_id})
if delete_result.deleted_count == 1:
self.set_status(204)
return self.finish()
raise tornado.web.HTTPError(404)
```
Our final route is `delete`. Again, because this is acting upon a single document, we have to supply an id, `student_id` in the URL. If we find a matching document and successfully delete it, then we return an HTTP status of `204` or No Content. In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified `student_id`, then instead, we return a `404`.
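To exercise these last two endpoints yourself, you can reuse an `_id` returned by one of your earlier `POST` requests — the id below is just a placeholder:

```bash
# Replace <student_id> with an _id returned from your own POST request
curl -X "PUT" "http://localhost:8000/<student_id>" \
  -H 'Content-Type: application/json; charset=utf-8' \
  -d $'{ "gpa": "4.0" }'

curl -X "DELETE" "http://localhost:8000/<student_id>"
```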
## Wrapping Up
I hope you have found this introduction to Tornado with MongoDB useful. Now is a fascinating time for Python developers as more and more frameworks—both new and old—begin taking advantage of async.
If you would like to know more about how you can use MongoDB with Tornado and WebSockets, please read my other tutorial, Subscribe to MongoDB Change Streams Via WebSockets.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python"
],
"pageDescription": "Getting Started with MongoDB and Tornado",
"contentType": "Code Example"
} | Getting Started with MongoDB and Tornado | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/querying-price-book-data-federation | created | # Querying the MongoDB Atlas Price Book with Atlas Data Federation
For a DevOps engineer or team, keeping up with the cost changes of a continuously evolving cloud service like MongoDB Atlas can be a daunting task. Manual monitoring of pricing information can be laborious, prone to mistakes, and may result in delays in strategic decisions. In this article, we will demonstrate how to leverage Atlas Data Federation to query and visualize the MongoDB Atlas price book as a real-time data source that can be incorporated into your DevOps processes and application infrastructure.
Atlas Data Federation is a distributed query engine that allows users to combine, transform, and move data across multiple data sources without complex integrations. Users can efficiently and cost-effectively query data from different sources, such as your Atlas clusters, cloud object storage buckets, Atlas Data Lake datasets, and HTTP endpoints with the MongoDB Query Language and the aggregation framework, as if it were all in the same place and format.
While using HTTP endpoints as a data source in Atlas Data Federation may not be suitable for large-scale production workloads, it’s a great option for small businesses or startups that want a quick and easy way to analyze pricing data or to use for testing, development, or small-scale analysis. In this guide, we will use the JSON returned by https://cloud.mongodb.com/billing/pricing?product=atlas as an HTTP data source for a federated database.
## Step 1: Create a new federated database
Let's create a new federated database in MongoDB Atlas by clicking on Data Federation in the left-hand navigation and clicking “set up manually” in the "create new federated database" dropdown in the top right corner of the UI. A federated database is a virtual database that enables you to combine and query data from multiple sources.
## Step 2: Add a new HTTP data source
The HTTP data source allows you to query data from any web API that returns data in JSON, BSON, CSV, TSV, Avro, Parquet, and ORC formats, such as the MongoDB Atlas price book.
## Step 3: Drag and drop the source into the right side, rename as desired
Create a mapping between the HTTP data source and your federated database instance by dragging and dropping the HTTP data source into your federated database. Then, rename the cluster, database, and collection as desired by using the pencil icon.
## Step 4: Add a view to transform the response into individual documents
Atlas Data Federation allows you to transform the raw source data by using the powerful MongoDB Aggregation Framework. We’ll create a view that will reshape the price book into individual documents, each to represent a single price item.
First, create a view:
Then, name the view and paste the following pipeline:
```
[
  {
    "$unwind": {
      "path": "$resource"
    }
  },
  {
    "$replaceRoot": {
      "newRoot": "$resource"
    }
  }
]
```
This pipeline will unwind the "resource" field, which contains an array of pricing data, and replace the root document with the contents of the "resource" array.
## Step 5: Save and copy the connection string
Now, let's save the changes and copy the connection string for our federated database instance. This connection string will allow you to connect to your federated database.
Select 'Connect' to connect to your federated database.
Atlas Data Federation supports connection methods varying from tools like MongoDB Shell and Compass, any application supporting MongoDB connection, and even a SQL connection using Atlas SQL.
## Step 6: Connect using Compass
Let’s now connect to the federated database instance using MongoDB Compass. By connecting with Compass, we will then be able to use the MongoDB Query Language and aggregation framework to start querying and analyzing the pricing data, if desired.
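If you'd rather query the view from code than from Compass, the same connection string works with any MongoDB driver. Here's a small sketch using the Node.js driver — the database name (`pricing`), view name (`priceItems`), and environment variable are assumptions; substitute whatever names you chose in the earlier steps:

```typescript
import { MongoClient } from "mongodb";

// Assumptions: a federated database named "pricing", a view named "priceItems",
// and the connection string from Step 5 stored in ATLAS_FEDERATED_URI.
const client = new MongoClient(process.env.ATLAS_FEDERATED_URI as string);

async function showM50Prices(): Promise<void> {
  await client.connect();
  const view = client.db("pricing").collection("priceItems");

  // Find the five cheapest regions for the AWS M50 SKU
  const prices = await view
    .aggregate([
      { $match: { sku: { $regex: "NDS_AWS_INSTANCE_M50" } } },
      { $unwind: "$pricing" },
      { $sort: { "pricing.unitPrice": 1 } },
      { $limit: 5 },
      { $project: { _id: 0, sku: 1, "pricing.region": 1, "pricing.unitPrice": 1 } },
    ])
    .toArray();

  console.log(prices);
}

showM50Prices().finally(() => client.close());
```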
## Step 7: Visualize using charts
We’ll use MongoDB Atlas Charts for visualization of the Atlas price book. Atlas Charts allows you to create interactive charts and dashboards that can be embedded in your applications or shared with your team.
Once in Charts, you can create new dashboards and add a chart. Then, select the view we created as a data source:
As some relevant data fields are embedded within the sku field, such as NDS_AWS_INSTANCE_M50, we can use calculated fields to help us extract those, such as provider and instanceType:
Use the following value expression:
- Provider
`{$arrayElemAt: [{$split: ["$sku", "_"]}, 1]}`
- InstanceType
`{$arrayElemAt: [{$split: ["$sku", "_"]}, 3]}`
- additonalProperty
`{$arrayElemAt: [{$split: ["$sku", "_"]}, 4]}`
Now, by using a chart type like a heatmap, we can visualize the different pricing items in a color-coded format:
1. Drag and drop the “sku” field to the X axis of the chart.
2. Drag and drop the “pricing.region” to the Y axis (choose “Unwind array” for array reduction).
3. Drag and drop the “pricing.unitPrice” to Intensity (choose “Unwind array” for array reduction).
4. Drag and drop the “provider”, “instanceType”, and “additionalProperty” fields to filter and choose the desired values.
The final result: A heatmap showing the pricing data for the selected providers, instance types, and additional properties, broken down by region. Hovering over each of the boxes will present its exact price in a tooltip. Because our federated database is built on an HTTP data source, the chart visualizes the actual live prices returned from the HTTP endpoint, not subject to any ETL delay.
## Summary
With Atlas Data Federation, DevOps teams, developers, and data engineers can generate insights to power real-time applications or downstream analytics. Incorporating live data from sources such as HTTP, MongoDB Clusters, or Cloud Object Storage reduces the effort, time-sink, and complexity of pipelines and ETL tools.
Have questions or comments? Visit our Community Forums.
Ready to get started? Try Atlas Data Federation today! | md | {
"tags": [
"Atlas"
],
"pageDescription": "In this article, we will demonstrate how to leverage Atlas Data Federation to query and visualize the MongoDB Atlas price book as a real-time data source.",
"contentType": "Article"
} | Querying the MongoDB Atlas Price Book with Atlas Data Federation | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/automate-automation-mongodb-atlas | created | # Automate the Automation on MongoDB Atlas
MongoDB Atlas is an awesome Cloud Data Platform providing an immense amount of automation to set up your databases, data lakes, charts, full-text search indexes, and more across all major cloud providers around the globe. Through the MongoDB Atlas GUI, you can easily deploy a fully scalable global cluster across several regions and even across different cloud providers in a matter of minutes. That's what I call automation. Using the MongoDB GUI is super intuitive and great, but how can I manage all these features in my own way?
The answer is simple, and you probably already know it: **APIs**!
MongoDB Atlas has a full featured API which allows users to programmatically manage all Atlas has to offer.
The main idea is to let you integrate Atlas with every other aspect of your Software Development Life Cycle (SDLC), giving your DevOps team the ability to build automation into their existing processes across all environments (Dev, Test/QA, UAT, Prod).
One example would be DevOps teams leveraging the APIs to create ephemeral databases for running CI/CD processes in lower environments for test purposes. Once a run is done, you would just terminate the database deployment.
Another example we have seen is DevOps teams incorporating database creation into their developer portals. The idea is to give developers a self-service experience: They start a project by using a portal to provide all project characteristics (tech stack according to their coding language, app templates, etc.), and the portal creates everything needed, such as a new code repo, a CI/CD job template, dev application servers, and a MongoDB database. So, they can start coding as soon as possible!
Even though the MongoDB Atlas API Resources documentation is great with lots of examples using cURL, we thought developers would appreciate it if they could also have all these in one of their favorite tools to work with APIs. I am talking about Postman, an API platform for building and using APIs. So, we did it! Below you will find step-by-step instructions on how to use it.
### Step 1: Configure your workstation/laptop
* Download and install Postman on your workstation/laptop.
* Training on Postman is available if you need a refresher on how to use it.
### Step 2: Configure MongoDB Atlas
* Create a free MongoDB Atlas account to have access to a free cluster to play around in. Make sure you create an organization and a project. Don't skip that step. Here is a coupon code—**GOATLAS10**—for some credits to explore more features (valid as of August 2021). Watch this video to learn how to add these credits to your account.
* Create an API key with Organization Owner privileges and save the public/private key to use when calling APIs. Also, don't forget to add your laptop/workstation IP to the API access list.
* Create a database deployment (cluster) via the Atlas UI or the MongoDB CLI (check out the MongoDB CLI Atlas Quick Start for detailed instructions). Note that a free database deployment will allow you to run most of the API calls. Use an M10 database deployment or higher if you want to have full access to all of the APIs. Feel free to explore all of the other database deployment options, but the default options should be fine for this example.
* Navigate to your Project Settings and retrieve your Project ID so it can be used in one of our examples below.
### Step 3: Configure and use Postman
* Fork or Import the MongoDB Atlas Collection to your Postman Workspace:
![Run in Postman](https://god.gw.postman.com/run-collection/17637161-25049d75-bcbc-467b-aba0-82a5c440ee02?action=collection%2Ffork&collection-url=entityId%3D17637161-25049d75-bcbc-467b-aba0-82a5c440ee02%26entityType%3Dcollection%26workspaceId%3D8355a86e-dec2-425c-9db0-cb5e0c3cec02#?env%5BAtlas%5D=W3sia2V5IjoiYmFzZV91cmwiLCJ2YWx1ZSI6Imh0dHBzOi8vY2xvdWQubW9uZ29kYi5jb20iLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6InZlcnNpb24iLCJ2YWx1ZSI6InYxLjAiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlByb2plY3RJRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJDTFVTVEVSLU5BTUUiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiZGF0YWJhc2VOYW1lIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6ImRiVXNlciIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJPUkctSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiQVBJLWtleS1wd2QiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiQVBJLWtleS11c3IiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiSU5WSVRBVElPTl9JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJJTlZPSUNFLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlBST0pFQ1RfTkFNRSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJURUFNLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlVTRVItSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiUFJPSi1JTlZJVEFUSU8tSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiVEVBTS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlNBTVBMRS1EQVRBU0VULUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkNMT1VELVBST1ZJREVSIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkNMVVNURVItVElFUiIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJJTlNUQU5DRS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkFMRVJULUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkFMRVJULUNPTkZJRy1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJEQVRBQkFTRS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkNPTExFQ1RJT04tTkFNRSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJJTkRFWC1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJTTkFQU0hPVC1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJKT0ItSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiUkVTVE9SRS1KT0ItSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoicmVzdG9yZUpvYklkIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlRBUkdFVC1DTFVTVEVSLU5BTUUiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiVEFSR0VULUdST1VQLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6InRhcmdldEdyb3VwSWQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiY2x1c3Rlck5hbWUiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiUkVTVE9SRS1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJBUkNISVZFLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkNPTlRBSU5FUi1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJQRUVSLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkVORFBPSU5ULVNFUlZJQ0UtSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiRU5EUE9JTlQtSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiQVBJLUtFWS1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJBQ0NFU1MtTElTVC1FTlRSWSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJJUC1BRERSRVNTIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlBST0NFU1MtSE9TVCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJQUk9DRVNTLVBPUlQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiRElTSy1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkhPU1ROQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkxPRy1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlVTRVItTkFNRSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJST0xFLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkVWRU5ULUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkRBV
EEtTEFLRS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlZBTElEQVRJT04tSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiTElWRS1NSUdSQVRJT04tSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiUk9MRS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfV0=)
* Click on the MongoDB Atlas Collection. Under the Authorization tab, choose the Digest Auth Type and use the *public key* as the *user* and the *private key* as your *password*.
* Open up the **Get All Clusters** API call under the cluster folder.
* Make sure you select the Atlas environment variables and update the Postman variable ProjectID value to your **Project ID** captured in the previous steps.
* Execute the API call by hitting the Send button. You should get a response containing a list of all your clusters (database deployments) along with their details, such as whether backup is enabled or whether the cluster is running.
Now explore all the APIs available to create your own automation.
One last tip: Once you have tested all the API calls for your automation, Postman lets you export them as code snippets in your favorite programming language.
Please always refer to the online documentation for any changes or new resources. Also, feel free to make pull requests to update the project with new API resources, fixes, and enhancements.
Hope you enjoyed it! Please share this with your team and community. It might be really helpful for everyone!
Here are some other great posts related to this subject:
* Programmatic API Management of Your MongoDB Atlas Database Clusters
* Programmatic API Management of Your MongoDB Atlas Database Clusters - Part II
* Calling the MongoDB Atlas API - How to Do it from Node, Python, and Ruby
*A subset of API endpoints is supported in (free) M0, M2, and M5 clusters.*
Public Repo - https://github.com/cassianobein/mongodb-atlas-api-resources
Atlas API Documentation - https://docs.atlas.mongodb.com/api/
Postman MongoDB Public Workspace - https://www.postman.com/mongodb-devrel/workspace/mongodb-public/overview | md | {
"tags": [
"Atlas",
"Postman API"
],
"pageDescription": "Build your own automation with MongoDB Atlas API resources.",
"contentType": "Article"
} | Automate the Automation on MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/bucket-pattern | created | # Building with Patterns: The Bucket Pattern
In this edition of the *Building with Patterns* series, we're going to
cover the Bucket Pattern. This pattern is particularly effective when
working with Internet of Things (IoT), Real-Time Analytics, or
Time-Series data in general. By *bucketing* data together we make it
easier to organize specific groups of data, increasing the ability to
discover historical trends or provide future forecasting and optimize
our use of storage.
## The Bucket Pattern
With data coming in as a stream over a period of time (time series data)
we may be inclined to store each measurement in its own document.
However, this inclination is a very relational approach to handling the
data. If we have a sensor taking the temperature and saving it to the
database every minute, our data stream might look something like:
``` javascript
{
sensor_id: 12345,
timestamp: ISODate("2019-01-31T10:00:00.000Z"),
temperature: 40
}
{
sensor_id: 12345,
timestamp: ISODate("2019-01-31T10:01:00.000Z"),
temperature: 40
}
{
sensor_id: 12345,
timestamp: ISODate("2019-01-31T10:02:00.000Z"),
temperature: 41
}
```
This can pose some issues as our application scales in terms of data and
index size. For example, we could end up having to index `sensor_id` and
`timestamp` for every single measurement to enable rapid access at the
cost of RAM. By leveraging the document data model though, we can
"bucket" this data, by time, into documents that hold the measurements
from a particular time span. We can also programmatically add additional
information to each of these "buckets".
By applying the Bucket Pattern to our data model, we get some benefits
in terms of index size savings, potential query simplification, and the
ability to use that pre-aggregated data in our documents. Taking the
data stream from above and applying the Bucket Pattern to it, we would
wind up with:
``` javascript
{
sensor_id: 12345,
start_date: ISODate("2019-01-31T10:00:00.000Z"),
end_date: ISODate("2019-01-31T10:59:59.000Z"),
   measurements: [
{
timestamp: ISODate("2019-01-31T10:00:00.000Z"),
temperature: 40
},
{
timestamp: ISODate("2019-01-31T10:01:00.000Z"),
temperature: 40
},
...
{
timestamp: ISODate("2019-01-31T10:42:00.000Z"),
temperature: 42
}
],
transaction_count: 42,
sum_temperature: 2413
}
```
By using the Bucket Pattern, we have "bucketed" our data to, in this
case, a one hour bucket. This particular data stream would still be
growing as it currently only has 42 measurements; there's still more
measurements for that hour to be added to the "bucket". When they are
added to the `measurements` array, the `transaction_count` will be
incremented and `sum_temperature` will also be updated.
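As a rough sketch of what that write path could look like in the shell, a single upsert can append the new reading and keep the pre-aggregated fields in sync (the `sensor_buckets` collection name and the new reading are assumptions for illustration):

``` javascript
// Hypothetical update: push the latest reading into the current bucket
// and keep the pre-aggregated fields in step with it.
db.sensor_buckets.updateOne(
  {
    sensor_id: 12345,
    start_date: ISODate("2019-01-31T10:00:00.000Z"),
    end_date: ISODate("2019-01-31T10:59:59.000Z")
  },
  {
    $push: {
      measurements: {
        timestamp: ISODate("2019-01-31T10:43:00.000Z"),
        temperature: 43
      }
    },
    $inc: { transaction_count: 1, sum_temperature: 43 }
  },
  { upsert: true }
)
```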
With the pre-aggregated `sum_temperature` value, it then becomes
possible to easily pull up a particular bucket and determine the average
temperature (`sum_temperature / transaction_count`) for that bucket.
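As a small illustration, an aggregation along these lines (again assuming a `sensor_buckets` collection) could surface that average without touching the individual measurements:

``` javascript
// Sketch: compute the per-bucket average from the pre-aggregated fields.
db.sensor_buckets.aggregate([
  { $match: { sensor_id: 12345 } },
  {
    $project: {
      start_date: 1,
      end_date: 1,
      avg_temperature: { $divide: ["$sum_temperature", "$transaction_count"] }
    }
  }
])
```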
When working with time-series data it is frequently more interesting and
important to know what the average temperature was from 2:00 to 3:00 pm
in Corning, California on 13 July 2018 than knowing what the temperature
was at 2:03 pm. By bucketing and doing pre-aggregation we're more able
to easily provide that information.
Additionally, as we gather more and more information we may determine
that keeping all of the source data in an archive is more effective. How
frequently do we need to access the temperature for Corning from 1948,
for example? Being able to move those buckets of data to a data archive
can be a large benefit.
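One possible approach, sketched here with assumed collection names, is to copy old buckets into an archive collection with `$merge`, after which they could be removed from the active collection:

``` javascript
// Sketch: copy buckets that ended before a cutoff date into an archive
// collection (both collection names are assumptions).
db.sensor_buckets.aggregate([
  { $match: { end_date: { $lt: ISODate("2019-01-01T00:00:00.000Z") } } },
  { $merge: { into: "sensor_buckets_archive" } }
])
```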
## Sample Use Case
One example of making time-series data valuable in the real world comes
from an IoT implementation by
Bosch. They are using MongoDB
and time-series data in an automotive field data app. The app captures
data from a variety of sensors throughout the vehicle allowing for
improved diagnostics of the vehicle itself and component performance.
Other examples include major banks that have incorporated this pattern
in financial applications to group transactions together.
## Conclusion
When working with time-series data, using the Bucket Pattern in MongoDB
is a great option. It reduces the overall number of documents in a
collection, improves index performance, and by leveraging
pre-aggregation, it can simplify data access.
The Bucket Design pattern works great for many cases. But what if there
are outliers in our data? That's where the next pattern we'll discuss,
the Outlier Design
Pattern, comes into play.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.",
"contentType": "Tutorial"
} | Building with Patterns: The Bucket Pattern | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/python/song-recommendations-example-app | created | # A Spotify Song and Playlist Recommendation Engine
## Creators
Lucas De Oliveira, Chandrish Ambati, and Anish Mukherjee from University of San Francisco contributed this amazing project.
## Background to the Project
In 2018, Spotify organized an Association for Computing Machinery (ACM) RecSys Challenge where they posted a dataset of one million playlists, challenging participants to recommend a list of 500 songs given a user-created playlist.
As both music lovers and data scientists, we were naturally drawn to this challenge. Right away, we agreed that combining song embeddings with some nearest-neighbors method for recommendation would likely produce very good results. Importantly, we were curious about how we could solve this recommendation task at scale with over 4 billion user-curated playlists on Spotify, a number that keeps growing. This realization raised serious questions about how to train a decent model since all that data would likely not fit in memory or a single server.
## What We Built
This project resulted in a scalable ETL pipeline utilizing
* Apache Spark
* MongoDB
* Amazon S3
* Databricks (PySpark)
These were used to train a deep learning Word2Vec model to build song and playlist embeddings for recommendation. We followed up with data visualizations we created on Tensorflow’s Embedding Projector.
## The Process
### Collecting Lyrics
The most tedious task of this project was collecting as many lyrics for the songs in the playlists as possible. We began by isolating the unique songs in the playlist files by their track URI; in total we had over 2 million unique songs. Then, we used the track name and artist name to look up the lyrics on the web. Initially, we used simple Python requests to pull in the lyrical information but this proved too slow for our purposes. We then used asyncio, which allowed us to make requests concurrently. This sped up the process significantly, reducing the downloading time of lyrics for 10k songs from 15 mins to under a minute. Ultimately, we were only able to collect lyrics for 138,000 songs.
### Pre-processing
The original dataset contains 1 million playlists spread across 1 thousand JSON files totaling about 33 GB of data. We used PySpark in Databricks to preprocess these separate JSON files into a single SparkSQL DataFrame and then joined this DataFrame with the lyrics we saved.
While the aforementioned data collection and preprocessing steps are time-consuming, the model also needs to be re-trained and re-evaluated often, so it is critical to store data in a scalable database. In addition, we’d like to consider a database that is schemaless for future expansion in data sets and supports various data types. Considering our needs, we concluded that MongoDB would be the optimal solution as a data and feature store.
Check out the Preprocessing.ipynb notebook to see how we preprocessed the data.
### Training Song Embeddings
For our analyses, we read our preprocessed data from MongoDB into a Spark DataFrame and grouped the records by playlist id (pid), aggregating all of the songs in a playlist into a list under the column song_list.
Using the Word2Vec model in Spark MLlib we trained song embeddings by feeding lists of track IDs from a playlist into the model much like you would send a list of words from a sentence to train word embeddings. As shown below, we trained song embeddings in only 3 lines of PySpark code:
```
from pyspark.ml.feature import Word2Vec
word2Vec = Word2Vec(vectorSize=32, seed=42, inputCol="song_list").setMinCount(1)
word2Vec.setMaxIter(10)
model = word2Vec.fit(df_play)
```
We then saved the song embeddings down to MongoDB for later use. Below is a snapshot of the song embeddings DataFrame that we saved:
Check out the Song_Embeddings.ipynb notebook to see how we train song embeddings.
### Training Playlists Embeddings
Finally, we extended our recommendation task beyond simple song recommendations to recommending entire playlists. Given an input playlist, we would return the k closest or most similar playlists. We took a “continuous bag of songs” approach to this problem by calculating playlist embeddings as the average of all song embeddings in that playlist.
This workflow started by reading back the song embeddings from MongoDB into a SparkSQL DataFrame. Then, we calculated a playlist embedding by taking the average of all song embeddings in that playlist and saved them in MongoDB.
Check out the Playlist_Embeddings.ipynb notebook to see how we did this.
### Training Lyrics Embeddings
Are you still reading? Whew!
We trained lyrics embeddings by loading in a song's lyrics, separating the words into lists, and feeding those words to a Word2Vec model to produce 32-dimensional vectors for each word. We then took the average embedding across all words as that song's lyrical embedding. Ultimately, our analytical goal here was to determine whether users create playlists based on common lyrical themes by seeing if the pairwise song embedding distance and the pairwise lyrical embedding distance between two songs were correlated. Unsurprisingly, it appears they are not.
Check out the Lyrical_Embeddings.ipynb notebook to see our analysis.
## Notes on our Approach
You may be wondering why we used a language model (Word2Vec) to train these embeddings. Why not use a Pin2Vec or custom neural network model to predict implicit ratings? For practical reasons, we wanted to work exclusively in the Spark ecosystem and deal with the data in a distributed fashion. This was a constraint set on the project ahead of time and challenged us to think creatively.
However, we found Word2Vec an attractive candidate model for theoretical reasons as well. The Word2Vec model uses a word’s context to train static embeddings by training the input word’s embeddings to predict its surrounding words. In essence, the embedding of any word is determined by how it co-occurs with other words. This had a clear mapping to our own problem: by using a Word2Vec model the distance between song embeddings would reflect the songs’ co-occurrence throughout 1M playlists, making it a useful measure for a distance-based recommendation (nearest neighbors). It would effectively model how people grouped songs together, using user behavior as the determinant factor in similarity.
Additionally, the Word2Vec model accepts input in the form of a list of words. For each playlist we had a list of track IDs, which made working with the Word2Vec model not only conceptually but also practically appealing.
## Data Visualizations with Tensorflow and MongoDB
After all of that, we were finally ready to visualize our results and make some interactive recommendations. We decided to represent our embedding results visually using Tensorflow’s Embedding Projector which maps the 32-dimensional song and playlist embeddings into an interactive visualization of a 3D embedding space. You have the choice of using PCA or tSNE for dimensionality reduction and cosine similarity or Euclidean distance for measuring distances between vectors.
Click here for the song embeddings projector for the full 2 million songs, or here for a less crowded version with a random sample of 100k songs (shown below):
The neat thing about using Tensorflow’s projector is that it gives us a beautiful visualization tool and distance calculator all in one. Try searching on the right panel for a song and if the song is part of the original dataset, you will see the “most similar” songs appear under it.
## Using MongoDB for ML/AI
We were impressed by how easy it was to use MongoDB to reliably store and load our data. Because we were using distributed computing, it would have been infeasible to run our pipeline from start to finish any time we wanted to update our code or fine-tune the model. MongoDB allowed us to save our incremental results for later processing and modeling, which collectively saved us hours of waiting for code to re-run.
It worked well with all the tools we use everyday and the tooling we chose - we didn't have any areas of friction.
We were also surprised by how well this method of training embeddings actually worked. While the 2 million song embedding projector is crowded visually, we see that the recommendations it produces are actually quite good at grouping songs together.
Consider the embedding recommendation for The Beatles’ “A Day In The Life”:
Or the recommendation for Jay Z’s “Heart of the City (Ain’t No Love)”:
Fan of Taylor Swift? Here are the recommendations for “New Romantics”:
We were delighted to find naturally occurring clusters in the playlist embeddings. Most notably, we see a cluster containing mostly Christian rock, one with Christmas music, one for reggaeton, and one large cluster where genres span its length rather continuously and intuitively.
Note also that when we select a playlist, we have many recommended playlists with the same names. This in essence validates our song embeddings. Recall that playlist embeddings were created by taking the average embedding of all its songs; the name of the playlists did not factor in at all. The similar names only conceptually reinforce this fact.
## Next Steps?
We felt happy with the conclusion of this project but there is more that could be done here.
1. We could use these trained song embeddings in other downstream tasks and see how effective they are. Also, you could download the song embeddings we trained here: Embeddings | Meta Info
2. We could look at other methods of training these embeddings using some recurrent neural networks and enhanced implementation of this Word2Vec model.
| md | {
"tags": [
"Python",
"MongoDB",
"Spark",
"AI"
],
"pageDescription": "Python code example application for Spotify playlist and song recommendations using spark and tensorflow",
"contentType": "Code Example"
} | A Spotify Song and Playlist Recommendation Engine | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/farm-stack-fastapi-react-mongodb | created | # Introducing FARM Stack - FastAPI, React, and MongoDB
When I got my first ever programming job, the LAMP (Linux, Apache, MySQL, PHP) stack—and its variations—ruled supreme. I used WAMP at work, DAMP at home, and deployed our customers to SAMP. But now all the stacks with memorable acronyms seem to be very JavaScript forward. MEAN (MongoDB, Express, Angular, Node.js), MERN (MongoDB, Express, React, Node.js), MEVN (MongoDB, Express, Vue, Node.js), JAM (JavaScript, APIs, Markup), and so on.
As much as I enjoy working with React and Vue, Python is still my favourite language for building back end web services. I wanted the same benefits I got from MERN—MongoDB, speed, flexibility, minimal boilerplate—but with Python instead of Node.js. With that in mind, I want to introduce the FARM stack; FastAPI, React, and MongoDB.
## What is FastAPI?
The FARM stack is in many ways very similar to MERN. We've kept MongoDB and React, but we've replaced the Node.js and Express back end with Python and FastAPI. FastAPI is a modern, high-performance, Python 3.6+ web framework. As far as web frameworks go, it's incredibly new. The earliest git commit I could find is from December 5th, 2018, but it is a rising star in the Python community. It is already used in production by the likes of Microsoft, Uber, and Netflix.
And it is speedy. Benchmarks show that it's not as fast as golang's chi or fasthttp, but it's faster than all the other Python frameworks tested and beats out most of the Node.js ones too.
## Getting Started
If you would like to give the FARM stack a try, I've created an example TODO application you can clone from GitHub.
``` shell
git clone git@github.com:mongodb-developer/FARM-Intro.git
```
The code is organised into two directories: back end and front end. The back end code is our FastAPI server. The code in this directory interacts with our MongoDB database, creates our API endpoints, and, thanks to OAS3 (OpenAPI Specification 3), generates our interactive documentation.
## Running the FastAPI Server
Before I walk through the code, try running the FastAPI server for yourself. You will need Python 3.8+ and a MongoDB database. A free Atlas Cluster will be more than enough. Make a note of your MongoDB username, password, and connection string as you'll need those in a moment.
### Installing Dependencies
``` shell
cd FARM-Intro/backend
pip install -r requirements.txt
```
### Configuring Environment Variables
``` shell
export DEBUG_MODE=True
export DB_URL="mongodb+srv://<username>:<password>@<cluster-url>/<database>?retryWrites=true&w=majority"
export DB_NAME="farmstack"
```
Once you have everything installed and configured, you can run the server with `python main.py` and visit the interactive documentation in your browser.
This interactive documentation is automatically generated for us by FastAPI and is a great way to try your API during development. You can see we have the main elements of CRUD covered. Try adding, updating, and deleting some Tasks and explore the responses you get back from the FastAPI server.
## Creating a FastAPI Server
We initialise the server in `main.py`; this is where we create our app.
``` python
app = FastAPI()
```
Attach our routes, or API endpoints.
``` python
app.include_router(todo_router, tags=["tasks"], prefix="/task")
```
Start the async event loop and ASGI server.
``` python
if __name__ == "__main__":
uvicorn.run(
"main:app",
host=settings.HOST,
reload=settings.DEBUG_MODE,
port=settings.PORT,
)
```
And it is also where we open and close our connection to our MongoDB server.
``` python
@app.on_event("startup")
async def startup_db_client():
app.mongodb_client = AsyncIOMotorClient(settings.DB_URL)
app.mongodb = app.mongodb_client[settings.DB_NAME]
@app.on_event("shutdown")
async def shutdown_db_client():
app.mongodb_client.close()
```
Because FastAPI is an async framework, we're using Motor to connect to our MongoDB server. Motor is the officially maintained async Python driver for MongoDB.
When the app startup event is triggered, I open a connection to MongoDB and ensure that it is available via the app object so I can access it later in my different routers.
### Defining Models
Many people think of MongoDB as being schema-less, which is wrong. MongoDB has a flexible schema. That is to say that collections do not enforce document structure by default, so you have the flexibility to make whatever data-modelling choices best match your application and its performance requirements. So, it's not unusual to create models when working with a MongoDB database.
The models for the TODO app are in `backend/apps/todo/models.py`, and it is these models which help FastAPI create the interactive documentation.
``` python
class TaskModel(BaseModel):
id: str = Field(default_factory=uuid.uuid4, alias="_id")
name: str = Field(...)
completed: bool = False
class Config:
allow_population_by_field_name = True
schema_extra = {
"example": {
"id": "00010203-0405-0607-0809-0a0b0c0d0e0f",
"name": "My important task",
"completed": True,
}
}
```
I want to draw attention to the `id` field on this model. MongoDB uses `_id`, but in Python, underscores at the start of attributes have special meaning. If you have an attribute on your model that starts with an underscore, pydantic—the data validation framework used by FastAPI—will assume that it is a private variable, meaning you will not be able to assign it a value! To get around this, we name the field `id` but give it an `alias` of `_id`. You also need to set `allow_population_by_field_name` to `True` in the model's `Config` class.
You may notice I'm not using MongoDB's ObjectIds. You can use ObjectIds with FastAPI; there is just more work required during serialisation and deserialisation. Still, for this example, I found it easier to generate the UUIDs myself, so they're always strings.
``` python
class UpdateTaskModel(BaseModel):
name: Optional[str]
completed: Optional[bool]
class Config:
schema_extra = {
"example": {
"name": "My important task",
"completed": True,
}
}
```
When users are updating tasks, we do not want them to change the id, so the `UpdateTaskModel` only includes the name and completed fields. I've also made both fields optional so that you can update either of them independently. Making both of them optional did mean that all fields were optional, which caused me to spend far too long deciding on how to handle a `PUT` request (an update) where the user did not send any fields to be changed. We'll see that next when we look at the routers.
### FastAPI Routers
The task routers are within `backend/apps/todo/routers.py`.
To cover the different CRUD (Create, Read, Update, and Delete) operations, I needed the following endpoints:
- POST /task/ - creates a new task.
- GET /task/ - view all existing tasks.
- GET /task/{id}/ - view a single task.
- PUT /task/{id}/ - update a task.
- DELETE /task/{id}/ - delete a task.
#### Create
``` python
@router.post("/", response_description="Add new task")
async def create_task(request: Request, task: TaskModel = Body(...)):
task = jsonable_encoder(task)
new_task = await request.app.mongodb["tasks"].insert_one(task)
created_task = await request.app.mongodb["tasks"].find_one(
{"_id": new_task.inserted_id}
)
return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_task)
```
The create_task router accepts the new task data in the body of the request as a JSON string. We write this data to MongoDB, and then we respond with an HTTP 201 status and the newly created task.
#### Read
``` python
@router.get("/", response_description="List all tasks")
async def list_tasks(request: Request):
tasks = []
for doc in await request.app.mongodb["tasks"].find().to_list(length=100):
tasks.append(doc)
return tasks
```
The list_tasks router is overly simplistic. In a real-world application, you are at the very least going to need to include pagination. Thankfully, there are packages for FastAPI which can simplify this process.
``` python
@router.get("/{id}", response_description="Get a single task")
async def show_task(id: str, request: Request):
if (task := await request.app.mongodb["tasks"].find_one({"_id": id})) is not None:
return task
raise HTTPException(status_code=404, detail=f"Task {id} not found")
```
While FastAPI supports Python 3.6+, it is my use of assignment expressions in routers like this one, which is why this sample application requires Python 3.8+.
Here, I'm raising an exception if we cannot find a task with the correct id.
#### Update
``` python
@router.put("/{id}", response_description="Update a task")
async def update_task(id: str, request: Request, task: UpdateTaskModel = Body(...)):
task = {k: v for k, v in task.dict().items() if v is not None}
if len(task) >= 1:
update_result = await request.app.mongodb["tasks"].update_one(
{"_id": id}, {"$set": task}
)
if update_result.modified_count == 1:
if (
updated_task := await request.app.mongodb["tasks"].find_one({"_id": id})
) is not None:
return updated_task
if (
existing_task := await request.app.mongodb["tasks"].find_one({"_id": id})
) is not None:
return existing_task
raise HTTPException(status_code=404, detail=f"Task {id} not found")
```
We don't want to update any of our fields to empty values, so first of all, we remove those from the update document. As mentioned above, because all values are optional, an update request with an empty payload is still valid. After much deliberation, I decided that in that situation, the correct thing for the API to do is to return the unmodified task and an HTTP 200 status.
If the user has supplied one or more fields to be updated, we attempt to `$set` the new values with `update_one`, before returning the modified document. However, if we cannot find a document with the specified id, our router will raise a 404.
#### Delete
``` python
@router.delete("/{id}", response_description="Delete Task")
async def delete_task(id: str, request: Request):
delete_result = await request.app.mongodb["tasks"].delete_one({"_id": id})
if delete_result.deleted_count == 1:
return JSONResponse(status_code=status.HTTP_204_NO_CONTENT)
raise HTTPException(status_code=404, detail=f"Task {id} not found")
```
The final router does not return a response body on success, as the requested document no longer exists as we have just deleted it. Instead, it returns an HTTP status of 204 which means that the request completed successfully, but the server doesn't have any data to give you.
## The React Front End
The React front end does not change as it is only consuming the API and is therefore somewhat back end agnostic. It is mostly the standard files generated by `create-react-app`. So, to start our React front end, open a new terminal window—keeping your FastAPI server running in the existing terminal—and enter the following commands inside the front end directory.
``` shell
npm install
npm start
```
These commands may take a little while to complete, but afterwards, a new browser window should open showing the React app.
![Screenshot of Timeline in browser
The React front end is just a view of our task list, but you can update
your tasks via the FastAPI documentation and see the changes appear in
React!
The bulk of our front end code is in `frontend/src/App.js`
``` javascript
useEffect(() => {
const fetchAllTasks = async () => {
const response = await fetch("/task/")
const fetchedTasks = await response.json()
setTasks(fetchedTasks)
}
const interval = setInterval(fetchAllTasks, 1000)
return () => {
clearInterval(interval)
}
}, [])
```
When our component mounts, we start an interval which runs each second and gets the latest list of tasks before storing them in our state. The function returned at the end of the hook will be run whenever the component dismounts, cleaning up our interval.
``` javascript
useEffect(() => {
const timelineItems = tasks.reverse().map((task) => {
    return task.completed ? (
      <Timeline.Item
        color="green"
        style={{ textDecoration: "line-through", color: "green" }}
      >
        {task.name} ({task._id})
      </Timeline.Item>
    ) : (
      <Timeline.Item
        color="blue"
        style={{ textDecoration: "initial" }}
      >
        {task.name} ({task._id})
      </Timeline.Item>
    )
})
setTimeline(timelineItems)
}, [tasks])
```
The second hook is triggered whenever the task list in our state changes. This hook creates a `Timeline Item` component for each task in our list.
``` javascript
<>
{timeline}
</>
```
The last part of `App.js` is the markup to render the tasks to the page. If you have worked with MERN or another React stack before, this will likely seem very familiar.
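The sample front end only reads tasks. If you wanted to create tasks from React as well, a minimal sketch (not part of the sample app; the helper name is made up) could call the same POST endpoint the FastAPI server exposes:

``` javascript
// Hypothetical helper: create a task through the POST /task/ endpoint.
const createTask = async (name) => {
  const response = await fetch("/task/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: name, completed: false }),
  })
  return response.json()
}
```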
## Wrapping Up
I'm incredibly excited about the FARM stack, and I hope you are now too. We're able to build highly performant, async, web applications using my favourite technologies! In my next article, we'll look at how you can add authentication to your FARM applications.
In the meantime, check out the FastAPI and Motor documentation, as well as the other useful packages and links in this Awesome FastAPI list.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python",
"JavaScript",
"FastApi"
],
"pageDescription": "Introducing FARM Stack - FastAPI, React, and MongoDB",
"contentType": "Article"
} | Introducing FARM Stack - FastAPI, React, and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/improve-your-apps-search-results-with-auto-tuning | created | # Improve Your App's Search Results with Auto-Tuning
Historically, the only way to improve your app’s search query relevance has been manual intervention. For example, you can introduce score boosting to multiply a base relevance score when particular fields are present, so that matches in those fields weigh higher than others. This is, however, fixed by nature: the results are dynamic, but the logic itself doesn’t change.
The following project will showcase how to leverage synonyms to create a feedback loop that is self-tuning, in order to deliver incrementally more relevant search results to your users—*all without complex machine learning models!*
## Example
We have a food search application where a user searches for “Romanian Food.” Assuming that we’re logging every user's clickstream data (their step-by-step interaction with our application), we can take a look at this “sequence” and compare it to other results that have yielded a strong CTA (call-to-action): a successful checkout.
Another user searched for “German Cuisine” and had a very similar clickstream sequence. We can build a script that analyzes both these users’ (and other users’) clickstreams, identifies the similarities, and appends the related terms to a synonyms document that contains “German,” “Romanian,” and other more common cuisines, like “Hungarian.”
Here’s a workflow of what we’re looking to accomplish:
## Tutorial
### Step 1: Log user’s clickstream activity
In our app tier, as events are fired, we log them to a clickstreams collection, like:
```
[
{
"session_id": "1",
"event_id": "search_query",
"metadata": {
"search_value": "romanian food"
},
"timestamp": "1"
},
{
"session_id": "1",
"event_id": "add_to_cart",
"product_category":"eastern european cuisine",
"timestamp": "2"
},
{
"session_id": "1",
"event_id": "checkout",
"timestamp": "3"
},
{
"session_id": "1",
"event_id": "payment_success",
"timestamp": "4"
},
{
"session_id": "2",
"event_id": "search_query",
"metadata": {
"search_value": "hungarian food"
},
"timestamp": "1"
},
{
"session_id": "2",
"event_id": "add_to_cart",
"product_category":"eastern european cuisine",
"timestamp": "2"
}
]
```
In this simplified list of events, we can conclude that {"session_id":"1"} searched for “romanian food,” which led to a higher conversion rate, payment_success, compared to {"session_id":"2"}, who searched “hungarian food” and stalled after the add_to_cart event.
You can import this data yourself using sample_data.json.
Let’s prepare the data for our search_tuner script.
### Step 2: Create a view that groups by session_id, then filters on the presence of searches
By the way, it’s no problem that only some documents have a metadata field. Our $group operator can intelligently identify the ones that do vs don’t.
```
[
# first we sort by timestamp to get everything in the correct sequence of events,
# as that is what we'll be using to draw logical correlations
{
'$sort': {
'timestamp': 1
}
},
# next, we'll group by a unique session_id, include all the corresponding events, and begin
# the filter for determining if a search_query exists
{
'$group': {
'_id': '$session_id',
'events': {
'$push': '$$ROOT'
},
'isSearchQueryPresent': {
'$sum': {
'$cond': [
{
'$eq': [
'$event_id', 'search_query'
]
}, 1, 0
]
}
}
}
},
# we hide session_ids where there is no search query
# then create a new field, an array called searchQuery, which we'll use to parse
{
'$match': {
'isSearchQueryPresent': {
'$gte': 1
}
}
},
{
'$unset': 'isSearchQueryPresent'
},
{
'$set': {
'searchQuery': '$events.metadata.search_value'
}
}
]
```
Let’s create the view by building the query, then going into Compass and adding it as a new collection called group_by_session_id_and_search_query:
![screenshot of creating a view in compass
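If you prefer the shell over Compass, a roughly equivalent sketch is to create the view with `db.createView`, using the clickstreams collection as the source:

```
// Sketch: create the same view from mongosh instead of Compass.
// Paste the full aggregation pipeline from above where indicated.
db.createView(
  "group_by_session_id_and_search_query",
  "clickstreams",
  [ /* the aggregation pipeline shown above */ ]
)
```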
Here’s what it will look like:
```
[
{
"session_id": "1",
"events": [
{
"event_id": "search_query",
"search_value": "romanian food"
},
{
"event_id": "add_to_cart",
"context": {
"cuisine": "eastern european cuisine"
}
},
{
"event_id": "checkout"
},
{
"event_id": "payment_success"
}
],
"searchQuery": "romanian food"
}, {
"session_id": "2",
"events": [
{
"event_id": "search_query",
"search_value": "hungarian food"
},
{
"event_id": "add_to_cart",
"context": {
"cuisine": "eastern european cuisine"
}
},
{
"event_id": "checkout"
}
],
"searchQuery": "hungarian food"
},
{
"session_id": "3",
"events": [
{
"event_id": "search_query",
"search_value": "italian food"
},
{
"event_id": "add_to_cart",
"context": {
"cuisine": "western european cuisine"
}
}
],
"searchQuery": "sad food"
}
]
```
### Step 3: Build a scheduled job that compares similar clickstreams and pushes the resulting synonyms to the synonyms collection
```
// Provide a success indicator to determine which session we want to
// compare any incomplete sessions with
const successIndicator = "payment_success"
// what percentage similarity between two sets of click/event streams
// we'd accept to be determined as similar enough to produce a synonym
// relationship
const acceptedConfidence = .9
// boost the confidence score when the following values are present
// in the eventstream
const eventBoosts = {
successIndicator: .1
}
/**
* Enrich sessions with a flattened event list to make comparison easier.
* Determine if the session is to be considered successful based on the success indicator.
* @param {*} eventList List of events in a session.
* @returns {any} Calculated values used to determine if an incomplete session is considered to
* be related to a successful session.
*/
const enrichEvents = (eventList) => {
return {
eventSequence: eventList.map(event => { return event.event_id }).join(';'),
isSuccessful: eventList.some(event => { return event.event_id === successIndicator })
}
}
/**
* De-duplicate common tokens in two strings
* @param {*} str1
* @param {*} str2
* @returns Returns an array with the provided strings with the common tokens removed
*/
const dedupTokens = (str1, str2) => {
const splitToken = ' '
const tokens1 = str1.split(splitToken)
const tokens2 = str2.split(splitToken)
const dupedTokens = tokens1.filter(token => { return tokens2.includes(token)});
const dedupedStr1 = tokens1.filter(token => { return !dupedTokens.includes(token)});
const dedupedStr2 = tokens2.filter(token => { return !dupedTokens.includes(token)});
return [ dedupedStr1.join(splitToken), dedupedStr2.join(splitToken) ]
}
const findMatchingIndex = (synonyms, results) => {
let matchIndex = -1
for(let i = 0; i < results.length; i++) {
for(const synonym of synonyms) {
if(results[i].synonyms.includes(synonym)){
matchIndex = i;
break;
}
}
}
return matchIndex;
}
/**
* Inspect the context of two matching sessions.
* @param {*} successfulSession
* @param {*} incompleteSession
*/
const processMatch = (successfulSession, incompleteSession, results) => {
console.log(`=====\nINSPECTING POTENTIAL MATCH: ${ successfulSession.searchQuery} = ${incompleteSession.searchQuery}`);
let contextMatch = true;
// At this point we can assume that the sequence of events is the same, so we can
// use the same index when comparing events
for(let i = 0; i < incompleteSession.events.length; i++) {
// if we have a context, let's compare the kv pairs in the context of
// the incomplete session with the successful session
if(incompleteSession.events[i].context){
const eventWithContext = incompleteSession.events[i]
const contextKeys = Object.keys(eventWithContext.context)
try {
for(const key of contextKeys) {
if(successfulSession.events[i].context[key] !== eventWithContext.context[key]){
// context is not the same, not a match, let's get out of here
contextMatch = false
break;
}
}
} catch (error) {
contextMatch = false;
console.log(`Something happened, probably successful session didn't have a context for an event.`);
}
}
}
// Update results
if(contextMatch){
console.log(`VALIDATED`);
const synonyms = dedupTokens(successfulSession.searchQuery, incompleteSession.searchQuery, true)
const existingMatchingResultIndex = findMatchingIndex(synonyms, results)
if(existingMatchingResultIndex >= 0){
const synonymSet = new Set([...synonyms, ...results[existingMatchingResultIndex].synonyms])
results[existingMatchingResultIndex].synonyms = Array.from(synonymSet)
}
else{
const result = {
"mappingType": "equivalent",
"synonyms": synonyms
}
results.push(result)
}
}
else{
console.log(`NOT A MATCH`);
}
return results;
}
/**
* Compare the event sequence of incomplete and successful sessions
* @param {*} successfulSessions
* @param {*} incompleteSessions
* @returns
*/
const compareLists = (successfulSessions, incompleteSessions) => {
let results = []
for(const successfulSession of successfulSessions) {
for(const incompleteSession of incompleteSessions) {
// if the event sequence is the same, let's inspect these sessions
// to validate that they are a match
if(successfulSession.enrichments.eventSequence.includes(incompleteSession.enrichments.eventSequence)){
processMatch(successfulSession, incompleteSession, results)
}
}
}
return results
}
const processSessions = (sessions) => {
// console.log(`Processing the following list:`, JSON.stringify(sessions, null, 2));
// enrich sessions for processing
const enrichedSessions = sessions.map(session => {
return { ...session, enrichments: enrichEvents(session.events)}
})
// separate successful and incomplete sessions
const successfulEvents = enrichedSessions.filter(session => { return session.enrichments.isSuccessful})
const incompleteEvents = enrichedSessions.filter(session => { return !session.enrichments.isSuccessful})
return compareLists(successfulEvents, incompleteEvents);
}
/**
* Main Entry Point
*/
const main = () => {
const results = processSessions(eventsBySession);
console.log(`Results:`, results);
}
main();
module.exports = processSessions;
```
Run the script yourself.
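The documents the script produces follow the Atlas Search synonym mapping shape (`mappingType` plus `synonyms`). As a rough sketch, assuming a source collection named `synonyms` that the index's `similarCuisines` mapping points at, you could persist the results like this:

```
// Sketch: write the generated synonym documents into the source
// collection backing the "similarCuisines" mapping (the collection
// name "synonyms" is an assumption).
const results = [
  { mappingType: "equivalent", synonyms: ["romanian", "hungarian"] }
];
db.synonyms.insertMany(results);
```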
### Step 4: Enhance our search query with the newly appended synonyms
```
[
{
'$search': {
'index': 'synonym-search',
'text': {
'query': 'hungarian',
'path': 'cuisine-type'
},
'synonyms': 'similarCuisines'
}
}
]
```
See the synonyms tutorial.
## Next Steps
There you have it, folks. We’ve taken raw data recorded from our application server and put it to use by building a feedback loop that encourages positive user behavior.
By measuring this feedback loop against your KPIs, you can build a simple A/B test against certain synonyms and user patterns to optimize your application! | md | {
"tags": [
"Atlas"
],
"pageDescription": "This blog will cover how to leverage synonyms to create a feedback loop that is self-tuning, in order to deliver incrementally more relevant search results to your users.",
"contentType": "Tutorial"
} | Improve Your App's Search Results with Auto-Tuning | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/time-series-candlestick-sma-ema | created | # Currency Analysis with Time Series Collections #2 — Simple Moving Average and Exponential Moving Average Calculation
## Introduction
In the previous post, we learned how to group currency data based on given time intervals to generate candlestick charts to perform trend analysis. In this article, we’ll learn how the moving average can be calculated on time-series data.
Moving average is a well-known financial technical indicator that is commonly used either alone or in combination with other indicators. Additionally, the moving average is included as a parameter of other financial technical indicators like MACD. The main reason for using this indicator is to smooth out the price updates to reflect recent price changes accordingly. There are many types of moving averages but here we’ll focus on two of them: Simple Moving Average (SMA) and Exponential Moving Average (EMA).
## Simple Moving Average (SMA)
This is the average price value of a currency/stock within a given period.
Let’s calculate the SMA for the BTC-USD currency over the last three data intervals, including the current data. Remember that each stick in the candlestick chart represents five-minute intervals. Therefore, for every interval, we would look for the previous three intervals.
First we’ll group the BTC-USD currency data for five-minute intervals:
```js
db.ticker.aggregate([
{
$match: {
symbol: "BTC-USD",
},
},
{
$group: {
_id: {
symbol: "$symbol",
time: {
$dateTrunc: {
date: "$time",
unit: "minute",
binSize: 5
},
},
},
high: { $max: "$price" },
low: { $min: "$price" },
open: { $first: "$price" },
close: { $last: "$price" },
},
},
{
$sort: {
"_id.time": 1,
},
},
]);
```
And, we will have the following candlestick chart:
![Candlestick chart
We have four metrics for each interval and we will choose the close price as the numeric value for our moving average calculation. We are only interested in `_id` (a nested field that includes the symbol and time information) and the close price. Therefore, since we are not interested in high, low, open prices for SMA calculation, we will exclude it from the aggregation pipeline with the `$project` aggregation stage:
```js
{
$project: {
_id: 1,
price: "$close",
},
}
```
After we grouped and trimmed, we will have the following dataset:
```js
{"_id": {"time": ISODate("20210101T17:00:00"), "symbol" : "BTC-USD"}, "price": 35050}
{"_id": {"time": ISODate("20210101T17:05:00"), "symbol" : "BTC-USD"}, "price": 35170}
{"_id": {"time": ISODate("20210101T17:10:00"), "symbol" : "BTC-USD"}, "price": 35280}
{"_id": {"time": ISODate("20210101T17:15:00"), "symbol" : "BTC-USD"}, "price": 34910}
{"_id": {"time": ISODate("20210101T17:20:00"), "symbol" : "BTC-USD"}, "price": 35060}
{"_id": {"time": ISODate("20210101T17:25:00"), "symbol" : "BTC-USD"}, "price": 35150}
{"_id": {"time": ISODate("20210101T17:30:00"), "symbol" : "BTC-USD"}, "price": 35350}
```
Once we have the above dataset, we want to enrich our data with the simple moving average indicator as shown below. Every interval in every symbol will have one more field (sma) to represent the SMA indicator by including the current and last three intervals:
```js
{"_id": {"time": ISODate("20210101T17:00:00"), "symbol" : "BTC-USD"}, "price": 35050, "sma": ?}
{"_id": {"time": ISODate("20210101T17:05:00"), "symbol" : "BTC-USD"}, "price": 35170, "sma": ?}
{"_id": {"time": ISODate("20210101T17:10:00"), "symbol" : "BTC-USD"}, "price": 35280, "sma": ?}
{"_id": {"time": ISODate("20210101T17:15:00"), "symbol" : "BTC-USD"}, "price": 34910, "sma": ?}
{"_id": {"time": ISODate("20210101T17:20:00"), "symbol" : "BTC-USD"}, "price": 35060, "sma": ?}
{"_id": {"time": ISODate("20210101T17:25:00"), "symbol" : "BTC-USD"}, "price": 35150, "sma": ?}
{"_id": {"time": ISODate("20210101T17:30:00"), "symbol" : "BTC-USD"}, "price": 35350, "sma": ?}
```
How is it calculated? For the time `17:00:00`, the calculation of SMA is very simple. Since we don’t have the three previous data points, we take the existing price (35050) at that time as the average. Whenever fewer than three previous data points are available, we take whatever prices we do have and divide by the number of data points.
The harder part comes when we have more than three previous data points. If we have more than three previous data points, we need to remove the older ones. And, we have to keep doing this as we have more data for a single symbol. Therefore, we will calculate the average by considering only up to three previous data points. The below table represents the calculation step by step for every interval:
| Time | SMA Calculation for the window (3 previous + current data points) |
| --- | --- |
| 17:00:00 | 35050/1 |
| 17:05:00 | (35050+35170)/2 |
| 17:10:00 | (35050+35170+35280)/3 |
| 17:15:00 | (35050+35170+35280+34910)/4 |
| 17:20:00 | (35170+35280+34910+35060)/4 (oldest price data, 35050, discarded from the calculation) |
| 17:25:00 | (35280+34910+35060+35150)/4 (oldest price data, 35170, discarded from the calculation) |
| 17:30:00 | (34910+35060+35150+35350)/4 (oldest price data, 35280, discarded from the calculation) |
As you see above, the window for the average calculation is moving as we have more data.
## Window Functions
Until now, we learned the theory of moving average calculation. How can we use MongoDB to do this calculation for all of the currencies?
MongoDB 5.0 introduced a new aggregation stage, `$setWindowFields`, to perform operations on a specified range of documents (window) in the defined partitions. Because it also supports average calculation on a window through `$avg` operator, we can easily use it to calculate Simple Moving Average:
```js
{
$setWindowFields: {
partitionBy: "_id.symbol",
sortBy: { "_id.time": 1 },
output: {
sma: {
$avg: "$price",
        window: { documents: [-3, 0] },
},
},
},
}
```
We chose the symbol field as partition key. For every currency, we have a partition, and each partition will have its own window to process that specific currency data. Therefore, when we’d like to process sequential data of a single currency, we will not mingle the other currency’s data.
After we set the partition field, we apply sorting to process the data in an ordered way. The partition field provides processing of single currency data together. However, we want to process data as ordered by time. As we see in how SMA is calculated on the paper, the order of the data matters and therefore, we need to specify the field for ordering.
After partitions are set and sorted, then we can process the data for each partition. We generate one more field, “`sma`”, and we define the calculation method of this derived field. Here we set three things:
- The operator that is going to be executed (`$avg`).
- The field (`$price`) where the operator is going to be executed on.
- The boundaries of the window (`[-3,0]`).
- `[-3`: “start from 3 previous data points”.
- `0]`: “end up with including current data point”.
- We can also set the second parameter of the window as “`current`” to include the current data point rather than giving numeric value.
Moving the window on the partitioned and sorted data will look like the following. For every symbol, we’ll have a partition, and all the records belonging to that partition will be sorted by the time information:
![Calculation process
Then we will have the `sma` field calculated for every document in the input stream. You can apply `$round` operator to trim to the specified decimal place in a `$set` aggregation stage:
```js
{
$set: {
    sma: { $round: ["$sma", 2] },
},
}
```
If we bring all the aggregation stages together, we will end-up with this aggregation pipeline:
```js
db.ticker.aggregate([
{
$match: {
symbol: "BTC-USD",
},
},
{
$group: {
_id: {
symbol: "$symbol",
time: {
$dateTrunc: {
date: "$time",
unit: "minute",
binSize: 5,
},
},
},
high: { $max: "$price" },
low: { $min: "$price" },
open: { $first: "$price" },
close: { $last: "$price" },
},
},
{
$sort: {
"_id.time": 1,
},
},
{
$project: {
_id: 1,
price: "$close",
},
},
{
$setWindowFields: {
partitionBy: "_id.symbol",
sortBy: { "_id.time": 1 },
output: {
sma: {
$avg: "$price",
window: { documents: [-3, 0] },
},
},
},
},
{
$set: {
sma: { $round: ["$sma", 2] },
},
},
]);
```
You may want to add more calculated fields with different options. For example, you can have two SMA calculations with different parameters. One of them could include the last three points as we have done already, and the other one could include the last 10 points, and you may want to compare both. Find the query below:
```js
{
$setWindowFields: {
partitionBy: "_id.symbol",
sortBy: { "_id.time": 1 },
output: {
sma_3: {
$avg: "$price",
window: { documents: [-3, 0] },
},
sma_10: {
$avg: "$price",
window: { documents: [-10, 0] },
},
},
},
}
```
Here in the above code, we set two derived fields. The `sma_3` field represents the moving average for the last three data points, and the `sma_10` field represents the moving average for the 10 last data points. Furthermore, you can compare these two moving averages to take a position on the currency or use it for a parameter for your own technical indicator.
The below chart shows two moving average calculations. The line with blue color represents the simple moving average with the window `[-3,0]`. The line with the turquoise color represents the simple moving average with the window `[-10,0]`. As you can see, when the window is bigger, reaction to price change gets slower:
![Candlestick chart
You can even enrich it further with the additional operations such as covariance, standard deviation, and so on. Check the full supported options here. We will cover the Exponential Moving Average here as an additional operation.
## Exponential Moving Average (EMA)
EMA is a kind of moving average. However, it weighs the recent data higher. In the calculation of the Simple Moving Average, we equally weight all the input parameters. However, in the Exponential Moving Average, based on the given parameter, recent data gets more important. Therefore, Exponential Moving Average reacts faster than Simple Moving Average to recent price updates within the similar size window.
`$expMovingAvg` has been introduced in MongoDB 5.0. It takes two parameters: the field name that includes numeric value for the calculation, and `N` or `alpha` value. We’ll set the parameter `N` to specify how many previous data points need to be evaluated while calculating the moving average and therefore, recent records within the `N` data points will have more weight than the older data. You can refer to the documentation for more information:
```js
{
$expMovingAvg: {
input: "$price",
N: 5
}
}
```
In the below diagram, SMA is represented with the blue line and EMA is represented with the red line, and both are calculated by five recent data points. You can see how the Simple Moving Average reacts slower to the recent price updates than the Exponential Moving Average even though they both have the same records in the calculation:
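If you want to compute the two side by side, a minimal sketch of a single `$setWindowFields` stage with both operators could look like this:

```js
{
  $setWindowFields: {
    partitionBy: "_id.symbol",
    sortBy: { "_id.time": 1 },
    output: {
      // Simple moving average over the current and four previous documents
      sma_5: {
        $avg: "$price",
        window: { documents: [-4, 0] },
      },
      // Exponential moving average weighted over the last five data points
      ema_5: {
        $expMovingAvg: { input: "$price", N: 5 },
      },
    },
  },
}
```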
## Conclusion
MongoDB 5.0, with the introduction of Windowing Function, makes calculations much easier over a window. There are many aggregation operators that can be executed over a window, and we have seen `$avg` and `$expMovingAvg` in this article.
Here in the given examples, we set the window boundaries by including the positional documents. In other words, we start to include documents from three previous data points to current data point (`documents: [-3, 0]`). You can also set a range of documents rather than defining position.
For example, if the window is sorted by time, you can include the last 30 minutes of data (whatever number of documents that covers) by specifying the range option as follows: `range: [-30, 0], unit: "minute"`. Now, we may have hundreds of documents in the window, but we know that we only include the documents that are no more than 30 minutes older than the current data point.
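A minimal sketch of such a time-based window, reusing the same partition and sort keys as above, could look like this:

```js
{
  $setWindowFields: {
    partitionBy: "_id.symbol",
    sortBy: { "_id.time": 1 },
    output: {
      sma_30m: {
        $avg: "$price",
        // Include every document from the last 30 minutes up to the
        // current one, however many documents that turns out to be.
        window: { range: [-30, 0], unit: "minute" },
      },
    },
  },
}
```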
You can also materialize the query output into another collection through [`$out` or `$merge` aggregation stages. And furthermore, you can enable change streams or Database Triggers on the materialized view to automatically trigger buy/sell actions based on the result of technical indicator changes. | md | {
"tags": [
"MongoDB",
"JavaScript"
],
"pageDescription": "Time series collections part 2: How to calculate Simple Moving Average and Exponential Moving Average \n\n",
"contentType": "Tutorial"
} | Currency Analysis with Time Series Collections #2 — Simple Moving Average and Exponential Moving Average Calculation | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/auto-pausing-inactive-clusters | created | # Auto Pausing Inactive Clusters
# Auto Pausing Inactive Clusters
## Introduction
A couple of years ago I wrote an article on how to pause and/or scale clusters using scheduled triggers. This article represents a twist on that concept, adding a wrinkle that will pause clusters across an entire organization based on inactivity. Specifically, I’m looking at the Database Access History to determine activity.
It is important to note this logging limitation:
_If a cluster experiences an activity spike and generates an extremely large quantity of log messages, Atlas may stop collecting and storing new logs for a period of time._
Therefore, this script could get a false positive that a cluster is inactive when indeed quite the opposite is happening. Given, however, that the intent of this script is for managing lower, non-production environments, I don’t see the false positives as a big concern.
## Architecture
The implementation uses a Scheduled Trigger. The trigger calls a series of App Services Functions, which use the Atlas Administration APIs to iterate over the organization’s projects and their associated clusters, testing the cluster inactivity (as explained in the introduction) and finally pausing the cluster if it is indeed inactive.
## API Keys
In order to call the Atlas Administrative APIs, you'll first need an API Key with the Organization Owner role. API Keys are created in the Access Manager, which you'll find in the Organization menu on the left:
or the menu bar at the top:
Click **Create API Key**. Give the key a description and be sure to set the permissions to **Organization Owner**:
When you click **Next**, you'll be presented with your Public and Private keys. **Save your private key as Atlas will never show it to you again**.
As an extra layer of security, you also have the option to set an IP Access List for these keys. I'm skipping this step, so my key will work from anywhere.
## Deployment
### Create a Project for Automation
Since this solution works across your entire Atlas organization, I like to host it in its own dedicated Atlas Project.
### Create a App Services Application
Atlas App Services provide a powerful application development backend as a service. To begin using it, just click the App Services tab.
You'll see that App Services offers a bunch of templates to get you started. For this use case, just select the first option to **Build your own App**:
You'll then be presented with options to link a data source, name your application and choose a deployment model. The current iteration of this utility doesn't use a data source, so you can ignore that step (App Services will create a free cluster for you). You can also leave the deployment model at its default (Global), unless you want to limit the application to a specific region.
I've named the application **Atlas Cluster Automation**:
At this point in our journey, you have two options:
1. Simply import the App Services application and adjust any of the functions to fit your needs.
2. Build the application from scratch (skip to the next section).
## Import Option
### Step 1: Store the API Secret Key.
The extract has a dependency on the API Secret Key, thus the import will fail if it is not configured beforehand.
Use the `Values` menu on the left to Create a Secret named `AtlasPrivateKeySecret` containing the private key you created earlier (the secret is not in quotes):
### Step 2: Install the Atlas App Services CLI (realm-cli)
Realm CLI is available on npm. To install version 2 of the Realm CLI on your system, ensure that you have Node.js installed and then run the following command in your shell:
```npm install -g mongodb-realm-cli```
### Step 3: Extract the Application Archive
Download and extract the AtlasClusterAutomation.zip.
### Step 4: Log into Atlas
To configure your app with realm-cli, you must log in to Atlas using your API keys:
```zsh
✗ realm-cli login --api-key="<Public API Key>" --private-api-key="<Private API Key>"
Successfully logged in
```
### Step 5: Get the App Services Application ID
Select the `App Settings` menu and copy your Application ID:
### Step 6: Import the Application
Run the following `realm-cli push` command from the directory where you extracted the export:
```zsh
realm-cli push --remote="<Your App ID>"
...
A summary of changes
...
? Please confirm the changes shown above Yes
Creating draft
Pushing changes
Deploying draft
Deployment complete
Successfully pushed app up:
```
After the import, replace the `AtlasPublicKey` value with your API public key.
### Review the Imported Application
The imported application includes 5 Atlas Functions:
And the Scheduled Trigger which calls the **pauseInactiveClusters** function:
The trigger is scheduled to fire every 30 minutes. Note, the **pauseInactiveClusters** function that the trigger calls currently only logs cluster activity. This is so you can monitor and verify that the function behaves as you desire. When ready, uncomment the line that calls the **pauseCluster** function:
```Javascript
if (!is_active) {
 console.log(`Pausing ${project.name}:${cluster.name} because it has been inactive for more than ${minutesInactive} minutes`);
//await context.functions.execute("pauseCluster", project.id, cluster.name, pause);
```
In addition, the **pauseInactiveClusters** function can be configured to exclude projects (such as those dedicated to production workloads):
```javascript
 /*
 * These project names are just an example.
 * The same concept could be used to exclude clusters or even
 * configure different inactivity intervals by project or cluster.
 * These configuration options could also be stored and read from
 * an Atlas database.
 */
 excludeProjects = ['PROD1', 'PROD2'];
```
Now that you have reviewed the draft, as a final step go ahead and deploy the App Services application.
## Build it Yourself Option
To understand what's included in the application, here are the steps to build it yourself from scratch.
### Step 1: Store the API Keys
The functions we need to create will call the Atlas Administration API, so we need to store our API Public and Private Keys, which we will do using Values & Secrets. The sample code I provide references these values as `AtlasPublicKey` and `AtlasPrivateKey`, so use those same names unless you want to change the code where they’re referenced.
You'll find `Values` under the Build menu:
First, create a Value, `AtlasPublicKey`, for your public key (note, the key is in quotes):
Create a Secret, `AtlasPrivateKeySecret`, containing your private key (the secret is not in quotes):
The Secret cannot be accessed directly, so create a second Value, `AtlasPrivateKey`, that links to the secret:
### Step 2: Create the Functions
The four functions that need to be created are pretty self-explanatory, so I’m not going to provide a bunch of additional explanations here.
#### getProjects
This standalone function can be test run from the App Services console to see the list of all the projects in your organization.
```Javascript
/*
* Returns an array of the projects in the organization
* See https://docs.atlas.mongodb.com/reference/api/project-get-all/
*
* Returns an array of objects, e.g.
*
* {
* "clusterCount": {
* "$numberInt": "1"
* },
* "created": "2021-05-11T18:24:48Z",
* "id": "609acbef1b76b53fcd37c8e1",
* "links":
* {
* "href": "https://cloud.mongodb.com/api/atlas/v1.0/groups/609acbef1b76b53fcd37c8e1",
* "rel": "self"
* }
* ],
* "name": "mg-training-sample",
* "orgId": "5b4e2d803b34b965050f1835"
* }
*
*/
exports = async function() {
// Get stored credentials...
const username = await context.values.get("AtlasPublicKey");
const password = await context.values.get("AtlasPrivateKey");
const arg = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: 'api/atlas/v1.0/groups',
username: username,
password: password,
headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
digestAuth:true,
};
// The response body is a BSON.Binary object. Parse it and return.
response = await context.http.get(arg);
return EJSON.parse(response.body.text()).results;
};
```
#### getProjectClusters
After `getProjects` is called, the trigger iterates over the results, passing the `projectId` to this `getProjectClusters` function.
_To test this function, you need to supply a `projectId`. By default, the Console supplies ‘Hello world!’, so I test for that input and provide some default values for easy testing._
```Javascript
/*
* Returns an array of the clusters for the supplied project ID.
* See https://docs.atlas.mongodb.com/reference/api/clusters-get-all/
*
* Returns an array of objects. See the API documentation for details.
*
*/
exports = async function(project_id) {
if (project_id == "Hello world!") { // Easy testing from the console
project_id = "5e8f8268d896f55ac04969a1"
}
// Get stored credentials...
const username = await context.values.get("AtlasPublicKey");
const password = await context.values.get("AtlasPrivateKey");
const arg = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: `api/atlas/v1.0/groups/${project_id}/clusters`,
username: username,
password: password,
headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
digestAuth:true,
};
// The response body is a BSON.Binary object. Parse it and return.
response = await context.http.get(arg);
return EJSON.parse(response.body.text()).results;
};
```
#### clusterIsActive
This function contains the logic that determines if the cluster can be paused.
Most of the work in this function is manipulating the timestamp in the database access log so it can be compared to the current time and lookback window.
In addition to returning true (active) or false (inactive), the function logs its findings, for example:

`Checking if cluster 'SA-SHARED-DEMO' has been active in the last 60 minutes`
```ZSH
Wed Nov 03 2021 19:52:31 GMT+0000 (UTC) - job is being run
Wed Nov 03 2021 18:52:31 GMT+0000 (UTC) - cluster inactivity before this time will be reported inactive
Wed Nov 03 2021 19:48:45 GMT+0000 (UTC) - last logged database access
Cluster is Active: Username 'brian' was active in cluster 'SA-SHARED-DEMO' 4 minutes ago.
```
Like `getProjectClusters`, there’s a block you can use to provide some test project ID and cluster names for easy testing from the App Services console.
```Javascript
/*
 * Uses the database access history to determine if the cluster is in active use.
* See https://docs.atlas.mongodb.com/reference/api/access-tracking-get-database-history-clustername/
*
* Returns true (active) or false (inactive)
*
*/
exports = async function(project_id, clusterName, minutes) {
if (project_id == 'Hello world!') { // We're testing from the console
project_id = "5e8f8268d896f55ac04969a1";
clusterName = "SA-SHARED-DEMO";
minutes = 60;
} /*else {
console.log (`project_id: ${project_id}, clusterName: ${clusterName}, minutes: ${minutes}`)
}*/
// Get stored credentials...
const username = await context.values.get("AtlasPublicKey");
const password = await context.values.get("AtlasPrivateKey");
const arg = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: `api/atlas/v1.0/groups/${project_id}/dbAccessHistory/clusters/${clusterName}`,
//query: {'authResult': "true"},
username: username,
password: password,
headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
digestAuth:true,
};
// The response body is a BSON.Binary object. Parse it and return.
response = await context.http.get(arg);
accessLogs = EJSON.parse(response.body.text()).accessLogs;
now = Date.now();
const MS_PER_MINUTE = 60000;
 var durationInMinutes = (minutes < 30) ? 30 : minutes; // The log granularity is 30 minutes.
var idleStartTime = now - (durationInMinutes * MS_PER_MINUTE);
nowString = new Date(now).toString();
idleStartTimeString = new Date(idleStartTime).toString();
console.log(`Checking if cluster '${clusterName}' has been active in the last ${durationInMinutes} minutes`)
console.log(` ${nowString} - job is being run`);
console.log(` ${idleStartTimeString} - cluster inactivity before this time will be reported inactive`);
clusterIsActive = false;
accessLogs.every(log => {
if (log.username != 'mms-automation' && log.username != 'mms-monitoring-agent') {
// Convert string log date to milliseconds
logTime = Date.parse(log.timestamp);
logTimeString = new Date(logTime);
console.log(` ${logTimeString} - last logged database access`);
var elapsedTimeMins = Math.round((now - logTime)/MS_PER_MINUTE, 0);
if (logTime > idleStartTime ) {
console.log(`Cluster is Active: Username '${log.username}' was active in cluster '${clusterName}' ${elapsedTimeMins} minutes ago.`);
clusterIsActive = true;
return false;
} else {
// The first log entry is older than our inactive window
console.log(`Cluster is Inactive: Username '${log.username}' was active in cluster '${clusterName}' ${elapsedTimeMins} minutes ago.`);
clusterIsActive = false;
return false;
}
}
return true;
});
return clusterIsActive;
};
```
#### pauseCluster
Finally, if the cluster is inactive, we pass the project Id and cluster name to `pauseCluster`. This function can also resume a cluster, although that feature is not utilized for this use case.
```Javascript
/*
* Pauses the named cluster
* See https://docs.atlas.mongodb.com/reference/api/clusters-modify-one/
*
*/
exports = async function(projectID, clusterName, pause) {
// Get stored credentials...
const username = await context.values.get("AtlasPublicKey");
const password = await context.values.get("AtlasPrivateKey");
const body = {paused: pause};
const arg = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: `api/atlas/v1.0/groups/${projectID}/clusters/${clusterName}`,
username: username,
password: password,
headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
digestAuth:true,
body: JSON.stringify(body)
};
// The response body is a BSON.Binary object. Parse it and return.
response = await context.http.patch(arg);
return EJSON.parse(response.body.text());
};
```
### pauseInactiveClusters
This function will be called by a trigger. As it's not possible to pass a parameter to a scheduled trigger, it uses a hard-coded lookback window of 60 minutes that you can change to meet your needs. You could even store the value in an Atlas database and build a UI to manage its setting :-).
The function will evaluate all projects and clusters in the organization where it’s hosted. Understanding that there are likely projects or clusters that you never want paused, the function also includes an excludeProjects array, where you can specify a list of project names to exclude from evaluation.
Finally, you’ll notice the call to `pauseCluster` is commented out. I suggest you run this function for a couple of days and review the Trigger logs to verify it behaves as you’d expect.
```Javascript
/*
* Iterates over the organizations projects and clusters,
* pausing clusters inactive for the configured minutes.
*/
exports = async function() {
minutesInactive = 60;
/*
* These project names are just an example.
* The same concept could be used to exclude clusters or even
* configure different inactivity intervals by project or cluster.
* These configuration options could also be stored and read from
 * an Atlas database.
*/
excludeProjects = ['PROD1', 'PROD2'];
const projects = await context.functions.execute("getProjects");
projects.forEach(async project => {
if (excludeProjects.includes(project.name)) {
console.log(`Project '${project.name}' has been excluded from pause.`)
} else {
console.log(`Checking project '${project.name}'s clusters for inactivity...`);
const clusters = await context.functions.execute("getProjectClusters", project.id);
clusters.forEach(async cluster => {
 if (cluster.providerSettings.providerName != "TENANT") { // It's a dedicated cluster that can be paused
if (cluster.paused == false) {
is_active = await context.functions.execute("clusterIsActive", project.id, cluster.name, minutesInactive);
if (!is_active) {
 console.log(`Pausing ${project.name}:${cluster.name} because it has been inactive for more than ${minutesInactive} minutes`);
//await context.functions.execute("pauseCluster", project.id, cluster.name, true);
} else {
console.log(`Skipping pause for ${project.name}:${cluster.name} because it has active database users in the last ${minutesInactive} minutes.`);
}
}
}
});
}
});
return true;
};
```
### Step 3: Create the Scheduled Trigger
Yes, we’re still using a scheduled trigger, but this time the trigger will run periodically to check for cluster inactivity. Now, your developers working late into the night will no longer have the cluster paused underneath them.
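If you're building the trigger by hand, it only needs the trigger type (**Scheduled**), a CRON schedule, and the function to call. For reference, an exported trigger configuration looks roughly like the sketch below — the trigger name and the every-30-minutes schedule are assumptions, so use whatever cadence suits you:

```json
{
  "name": "pauseInactiveClustersTrigger",
  "type": "SCHEDULED",
  "disabled": false,
  "config": {
    "schedule": "*/30 * * * *"
  },
  "function_name": "pauseInactiveClusters"
}
```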
### Step 4: Deploy
As a final step you need to deploy the App Services application.
## Summary
The genesis for this article was a customer who, when presented with my previous article on scheduling cluster pauses, asked if the same could be achieved based on inactivity. It’s my belief that with the Atlas APIs, anything can be achieved. The only question was: what constitutes inactivity? Given the heartbeat and replication that naturally occur, there’s always some “activity” on the cluster. Ultimately, I settled on database access as the guide. Over time, that metric may be combined with additional metrics or changed to something else altogether, but the bones of the process are here.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "One of Atlas' many great features is that it provides you the ability to pause clusters that are not currently needed, which primarily includes non-prod environments. This article shows you how to automatically pause clusters that go unused for a any period of time that you desire.",
"contentType": "Article"
} | Auto Pausing Inactive Clusters | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/document-swift-powered-frameworks-using-docc | created | # Document our Realm-Powered Swift Frameworks using DocC
## Introduction
In the previous post of this series we added Realm to a really simple Binary Tree library. The idea was to create a Package that, using Realm, allowed you to define binary trees and store them locally.
Now we have that library, but how will anyone know how to use it? We can write a manual, a blog post, or FAQs, but luckily Xcode has allowed us to add Documentation Comments since forever. And at WWDC 21, Apple announced the new Documentation Compiler, DocC, which takes all our documentation comments and creates a nice, powerful documentation site for our libraries and frameworks.
Let’s try it documenting our library!
## Documentation Comments
Comments are part of any language. You use regular comments to explain an especially complicated piece of code, to record why some code was constructed in a certain way, or just to organise a really big file or function (by the way, if this happens, it's better to split it). These comments start with `//` for single line comments or with `/*` for block comments.
And please, please please don't use comments for things like
```swift
// i is now 0
i = 0
```
For example, these are line comments from `Package.swift`:
```swift
// swift-tools-version:5.5
// The swift-tools-version declares the minimum version of Swift required to build this package.
```
While this is a regular block comment:
```swift
/*
File.swift
Created by The Realm Team on 16/6/21.
*/
```
We’ve had Documentation Comments in Swift (and Objective C) since forever. This is how they look:
```swift
/// Single-line documentation comment starts with three /
```
```swift
/**
Multi-line documentation
Comment
Block starts with /**
*/
```
These are similar in syntax, although they have two major differences:
* You can write Markup in documentation comments and Xcode will render it
* These comments explain _what_ something is and _how it’s used_, not how it’s coded.
Documentation comments are perfect to explain what that class does, or how to use this function. If you can’t put it in plain words, you probably don’t understand what they do and need to think about it a bit more. Also, having clearly stated what a function receives as parameters, what it returns, its edge cases, and its possible side effects helps you a lot while writing unit tests. It's simple to test something when you’ve just written down how it should be used, what behaviour it will exhibit, and which values it should return.
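As a quick illustration, here's what a documented function could look like — a made-up example, not part of the Binary Tree library, but it shows the `- Parameter` and `- Returns` callouts that Xcode renders in Quick Help:

```swift
/// Returns the arithmetic mean of the given values.
///
/// - Parameter values: The numbers to average. Must not be empty.
/// - Returns: The sum of `values` divided by their count.
/// - Precondition: `values` is not empty.
func average(of values: [Double]) -> Double {
    precondition(!values.isEmpty, "values must not be empty")
    // reduce(0, +) sums the array; dividing by the count gives the mean
    return values.reduce(0, +) / Double(values.count)
}
```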
## DocC
Apple announced the Documentation Compiler, DocC, during WWDC21. This new tool, integrated with Xcode 13 allows us to generate a Documentation bundle that can be shared, with beautiful web pages containing all our symbols (classes, structs, functions, etc.)
With DocC we can generate documentation for our libraries and frameworks. It won’t work for Apps, as the idea of these comments is to explain how to use a piece of code and that works perfectly with libraries.
DocC allows for much more than just generating a web site from our code. It can host tutorials, and any pages we want to add. Let’s try it!
## Generating Documentation with DocC
First, grab the code for the Realm Binary Tree library from this repository. In order to do that, run the following commands from a Terminal:
```bash
$ git clone https://github.com/mongodb-developer/realm-binary-tree
$ cd realm-binary-tree
```
If you want to follow along and make these changes, just checkout the tag `initial-state` with `git checkout initial-state`.
Then open the project by double-clicking the `Package.swift` file. Once Xcode finishes fetching all necessary dependencies (`Realm-Swift` is the main one), we can generate the documentation by clicking the menu option `Product > Build Documentation` or using the associated keyboard shortcut `⌃⇧⌘D`. This will open the Documentation Browser with our library’s documentation in it.
As we can see, all of our public symbols (in this case, the `BinaryTree` class and the `TreeTraversable` protocol) are there, with their documentation comments nicely showing. This is how it looks for `TreeTraversable::mapInOrder(tree:closure:)`
## Adding an About Section and Articles
This is nice, but Xcode 13 now allows us to create a new type of file: a **Documentation Catalog**. This can host Articles, Tutorials and Images. Let’s start by selecting the `Sources > BinaryTree` folder and typing ⌘N to add a new File. Then scroll down to the Documentation section and select `Documentation Catalog`. Give it the name `BinaryTree.docc`. We can rename this resource later as any other file/group in Xcode. We want a name that identifies it clearly when we create an exported documentation package.
Let’s start by renaming the `Documentation.md` file into `BinaryTree.md`. As this has the same name as our Doc Package, everything we put inside this file will appear in the Documentation node of the Framework itself.
We can add images to our Documentation Catalog simply by dragging them into `Resources`. Then, we can reference those images using the usual Markdown syntax `![alt text](image-name.png)`. This is how our framework’s main page looks now:
Inside this documentation package we can add Articles. Articles are just Markdown pages where we can explain a subject in written longform. Select the Documentation Package `BinaryTree.docc` and add a new file, using ⌘N. Choose `Article File` from `Documentation`. A new Markdown file will be created. Now write your awesome content to explain how your library works, some concepts you need to know before using it, etc.
## Tutorials
Tutorials are step by step instructions on how to use your library or framework. Here you can explain, for example, how to initialize a class that needs several parameters injected when calling the `init` method, or how a certain threading problem can be handled.
In our case, we want to explain how we can create a Tree, and how we can traverse it.
So first we need a Tutorial File. Go to your `Tutorials` folder and create a new File. Select Documentation > Tutorial File. A Tutorial file describes the steps in a tutorial, so while you scroll through it related code appears, as you can see here in action.
We need two things: our code snippets and the tutorial file. The tutorial file looks like this:
```swift
@Tutorial(time: 5) {
@Intro(title: "Creating Trees") {
How to create Trees
@Image(source: seed-tree.jpg, alt: "This is an image of a Tree")
}
@Section(title: "Creating trees") {
@ContentAndMedia() {
Let's create some trees
}
@Steps {
@Step {
Import `BinaryTree`
@Code(name: "CreateTree.swift", file: 01-create-tree.swift)
}
@Step {
Create an empty Tree object
@Code(name: "CreateTree.swift", file: 02-create-tree.swift)
}
@Step {
Add left and right children. These children are also of type `RealmBinaryTree`
@Code(name: "CreateTree.swift", file: 03-create-tree.swift)
}
}
}
}
```
As you can see, we have a first `@Tutorial(time: 5)` line where we put the estimated time to complete this tutorial, followed by some introduction text and images, and one `@Section`. We can create as many sections as we need. When the documentation is rendered, each section will correspond to a new page of the tutorial and can be selected from a dropdown picker. As a tutorial is a step-by-step explanation, we then add each step, which has some text telling you what the code will do, plus the code itself you need to enter.
That code is stored in Resources > code as regular Swift files, so if you have five steps, you’ll need five files. Each step shows what’s in the associated snippet file, so to make the code appear as you advance, each step should include the previous step’s code. My approach to code snippets is to work backwards: first I write the final snippet with the complete sample code, then I duplicate it as many times as there are steps in the tutorial, and finally I delete code from each file as needed.
## Recap
In this post we’ve seen how to add developer documentation to our code, how to generate a DocC package including sample code and tutorials.
This will help us explain to others how to use our code, how to test it, its limitations and a better understanding of our own code. Explaining how something works is the quickest way to master it.
In the next post we’ll have a look at how we can host this package online!
If you have any questions or comments on this post (or anything else Realm-related), then please raise them on our [community forum](https://www.mongodb.com/community/forums/c/realm-sdks/58). To keep up with the latest Realm news, follow @realm on Twitter and join the Realm global community.
## Reference Materials
### Sample Repo
Source code repo: https://github.com/mongodb-developer/realm-binary-tree
### Apple DocC documentation
Documentation about DocC
### WWDC21 Videos
* Meet DocC documentation in Xcode
* Build interactive tutorials using DocC
* Elevate your DocC documentation in Xcode
* Host and automate your DocC documentation
| md | {
"tags": [
"Realm",
"Swift"
],
"pageDescription": "Learn how to use the new Documentation Compiler from Apple, DocC, to create outstanding tutorials, how-tos and explain how your Frameworks work.",
"contentType": "Article"
} | Document our Realm-Powered Swift Frameworks using DocC | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-javascript-v12 | created | # What to Expect from Realm JavaScript v12
The Realm JavaScript team has been working on Realm JavaScript version 12 for a while. We have released a number of prereleases to gain confidence in our approach, and we will continue to do so as we uncover and fix issues. We cannot give a date for when we will have the final release, but we would like to give you a brief introduction to what to expect.
## Changes to the existing API
You will continue to see version 11 releases as bugs are fixed in Realm Core — our underlying database. All of our effort is now focused on version 12, so we don’t expect to fix any more SDK bugs on version 11, and all new bug reports will be verified against version 12. Moreover, we do not plan any new functionality on version 11.
You might expect many breaking changes as we are bumping the major version, but we are actually planning to have as few breaking changes as possible. The reason is that the next major version is more breaking for us than you. In reality, it is a complete rewrite of the SDK internals.
We are changing our collection classes a bit. Today, they derive from a common Collection class that is modeled over ReadonlyArray. It is problematic for Realm.Dictionary as there is no natural ordering. Furthermore, we are deprecating our namespaced API since we find it out of touch with modern TypeScript and JavaScript development. We are dropping support for Atlas push notifications (they were deprecated some time ago). Other changes might come along during the development process and we will document them carefully.
The goal of the rewrite is to keep the public API as it is, and change the internal implementation. To ensure that we are keeping the API mostly untouched, we are either reusing or rewriting the tests we have written over the years. We implemented the ported tests in JavaScript and rewrote them in TypeScript to help us verify the new TypeScript types.
## Issues with the old architecture
Realm JavaScript has historically been a mixture of C++ and vanilla JavaScript. TypeScript definitions and API documentation have been added on the side. A good portion of the API does not touch a single line of JavaScript code but goes directly to an implementation in C++. This makes it difficult to quickly add new functionality, as you have to decide if it can be implemented in JavaScript, C++, or a mixture of both. Moreover, you need to remember to update TypeScript definitions and API documentation. Consequently, over the years, we have seen issues where either API documentation or TypeScript definitions are not consistent with the implementation.
## Our new architecture
Realm JavaScript builds on Realm Core, which is composed of a storage engine, query engine, and sync client connecting your client device with MongoDB Atlas. Realm Core is a C++ library, and the vast majority of Realm JavaScript’s C++ code in our old architecture calls into Realm Core. Another large portion of our old C++ code is interfacing with the different JavaScript engines we are supporting (currently using NAPI [Node.js and Electron] and JSI [JavaScriptCore and Hermes]).
Our rewrite will create two separated layers: i) a handcrafted SDK layer and ii) a generated binding layer. The binding layer is interfacing the JavaScript engines and Realm Core. It is generated code, and our code generator (or binding generator) will read a specification of the Realm Core API and generate C++ code and TypeScript definitions. The generated C++ code can be called from JavaScript or TypeScript.
On top of the binding layer, we implement a hand-crafted SDK layer. It is an implementation of the Realm JavaScript API as you know it. It is implemented by using classes and methods in the binding layer as building blocks. We have chosen to use TypeScript as the implementation language.
We see a number of benefits from this rewrite:
**Deliver new features faster**
First, our hypothesis is that we are able to deliver new functionality faster. We don’t have to write so much C++ boilerplate code as we have done in the past.
**Provide a TypeScript-first experience**
Second, we are implementing the SDK in TypeScript, which guarantees that the TypeScript definitions will be accurate and consistent with the implementation. If you are a TypeScript developer, this is for you. Likely, your editor will guide you through integrating with Realm, and it will be possible to do static type checking and analysis before deploying your app in production. We are also moving from JSDoc to TSDoc so the API documentation will coexist with the SDK implementation. Again, it will help you and your editor in your day-to-day work, as well as eliminating the previously seen inconsistencies between the API documentation and TypeScripts definitions.
**Facilitate community contributions**
Third, we are lowering the bar for you to contribute. In the past, you likely had to have a good understanding of C++ to open a pull request with either a bug fix or a new feature. Many features can now be implemented in TypeScript alone by using the building blocks found in the binding layer. We are looking forward to seeing contributions from you.
**Generate more optimal code**
Last but not least, we hope to be able to generate more optimal code for the supported JavaScript engines. In the past, we had to write C++ code which was working across multiple JavaScript engines. Our early measurements indicate that many parts of the API will be a little faster, and in a few places, it will be much faster.
## New features
As mentioned earlier, all new functionality will only be released on version 12 and above. Some new functionality has already been merged and released, and more will follow. Let us briefly introduce some highlights to you.
First, a new unified logging mechanism has been introduced. It means that you can get more insights into what the storage engine, query engine, and sync client are doing. The goal is to make it easier for you to debug. You provide a callback function to the global logger, and log messages will be captured by calling your function.
```typescript
type Log = {
message: string;
level: string;
};
const logs: Log[] = [];
Realm.setLogger((level, message) => {
logs.push({ level, message });
});
Realm.setLogLevel("all");
```
Second, full-text search will be supported. You can mark a string property to be indexed for full-text search, and Realm Query Language allows you to query your Realm. Currently, the feature is limited to European alphabets. Advanced functionality like stemming and spanning across properties will be added later.
```typescript
interface IStory {
title: string;
content?: string;
}
class Story extends Realm.Object implements IStory {
title: string;
content?: string;
static schema: ObjectSchema = {
name: "Story",
properties: {
title: { type: "string" },
content: { type: "string", indexed: "full-text", optional: true },
},
primaryKey: "title",
};
}
// ... initialize your app and open your Realm
let amazingStories = realm.objects(Story).filtered("content TEXT 'amazing'");
```
Last, a new subscription API for flexible sync will be added. The aim is to make it easier to subscribe and unsubscribe by providing `subscribe()` and `unsubscribe()` methods directly on the query result.
```typescript
const peopleOver20 = await realm
.objects("Person")
.filtered("age > 20")
.subscribe({
name: "peopleOver20",
behavior: WaitForSync.FirstTime, // Default
timeout: 2000,
});
// …
peopleOver20.unsubscribe();
```
## A better place
While Realm JavaScript version 12 will not bring major changes for you as a developer, we believe that the code base will be at a better place. The code base is easier to work with, and it is an open invitation to you to contribute.
The new features are additive, and we hope that they will be useful for you. Logging is likely most useful while developing your app, and full-text search can be useful in many use cases. The new flexible sync subscription API is experimental, and we might change it as we get feedback from you. | md | {
"tags": [
"Realm",
"TypeScript",
"JavaScript"
],
"pageDescription": "The Realm JavaScript team has been working on Realm JavaScript version 12 for a while, and we'd like to give you a brief introduction to what to expect.",
"contentType": "Article"
} | What to Expect from Realm JavaScript v12 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/build-your-own-function-retry-mechanism-with-realm | created | # Build Your Own Function Retry Mechanism with Realm
## What is Realm?
Wondering what it's all about? Realm is an object-oriented data model database that will persist data on disk, doesn’t need an ORM, and lets you write less code with full offline capabilities… but Realm is also a fully-managed back-end service that helps you deliver best-in-class apps across Android, iOS, and Web.
To leverage the full BaaS capabilities, Functions allow you to define and execute server-side logic for your application. You can call functions from your client applications as well as from other functions and in JSON expressions throughout Realm.
Functions are written in modern JavaScript (ES6+) and execute in a serverless manner. When you call a function, you can dynamically access components of the current application as well as information about the request to execute the function and the logged-in user that sent the request.
By default, Realm Functions have no Node.js modules available for import. If you would like to make use of any such modules, you can upload external dependencies to make them available to import into your Realm Functions.
## Motivation
This tutorial was born to show how we can create a retry mechanism for our functions. We have to keep in mind that triggers have their own internal automatic retry mechanism that ensures they are executed. However, functions lack such a mechanism. Realm functions are executed as HTTP requests, so it is our responsibility to create a mechanism to retry if they fail.
Next, we will show how we can achieve this mechanism in a simple way that could be applied to any project.
## Flow Diagram
This mechanism will be based on states. We will contemplate **four different states**:
* **0: Not tried**: Initial state. When creating a new event that will need to be processed, it will be assigned the initial status **0**.
* **1: Success**: Successful status. When an event is successfully executed through our function, it will be assigned this status so that it will not be necessary to retry again.
* **2: Failed**: Failed status. When, after executing an event, it results in an error, it will be necessary to retry and therefore it will be assigned a status **2 or failed**.
* **3: Error**: It is important to note that we cannot always retry. We must have a limit of retries. When this limit is exhausted, the status will change to **error or 3**.
The algorithm that will define the passage between states will be the following:
Flow diagram
## System Architecture
The system is based on two collections and a trigger. The trigger will be defined as a **database trigger** that will react each time there is an insert or update in a specific collection. The collection will keep track of the events that need to be processed. Each time this trigger is activated, the event is processed in a function linked to it. The function, when processing the event, may or may not fail, and we need to capture the failure to retry.
When the function fails, the event state is updated in the event collection, and as the trigger reacts on inserts and updates, it will call the function again to reprocess the same.
A maximum number of retries will be defined so that, once exhausted, the event will not be reprocessed and will be marked as an error in the **error** collection.
## Sequence Diagram
The following diagram shows the use cases contemplated for this scenario.
## Use Case 1:
A new document is inserted in the collection of events to be processed. Its initial state is **0 (new)** and the number of retries is **0**. The trigger is activated and executes the function for this event. The function is executed successfully and the event status is updated to **1 (success).**
## Use Case 2:
A new document is inserted into the collection of events to be processed. Its initial state is **0 (new)** and the number of retries is **0.** The trigger is activated and executes the function for this event. The function fails and the event status is updated to **2 (failed)** and the number of retries is increased to **1**.
## Use Case 3:
A document is updated in the collection of events to be processed. Its initial status is **2 (failed)** and the number of retries is less than the maximum allowed. The trigger is activated and executes the function for this event. The function fails, the status remains at **2 (failed),** and the counter increases. If the counter for retries is greater than the maximum allowed, the event is sent to the **error** collection and deleted from the event collection.
## Use Case 4:
A document is updated in the event collection to be processed. Its initial status is **2 (failed)** and the number of retries is less than the maximum allowed. The trigger is activated and executes the function for this event. The function is executed successfully, and the status changes to **1 (success).**
Sequence Diagram
## Project Example Repository
We can find a simple project that illustrates the above here.
This project uses a trigger, **newEventsGenerator**, to generate a new document every two minutes through a cron job in the **Events** collection. This will simulate the creation of events to be processed.
The trigger **eventsProcessor** will be in charge of processing the events inserted or updated in the **Events** collection. To simulate a failure, a function is used that generates a random number and returns whether it is divisible or not by two. In this way, both states can be simulated.
```
function getFailOrSuccess() {
// Random number between 1 and 10
const number = Math.floor(Math.random() * 10) + 1;
return ((number % 2) === 0);
}
```
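The full trigger function lives in the repository, but a condensed sketch of the core state-machine logic could look like the following. The collection and data source names, the `MAX_RETRIES` value, and the field names (`status`, `retries`) are assumptions here — check the example project for the exact implementation — and it assumes the trigger is configured with Full Document enabled so that `changeEvent.fullDocument` is populated:

```javascript
exports = async function (changeEvent) {
  const MAX_RETRIES = 5; // assumed limit; tune it to your needs

  const event = changeEvent.fullDocument;
  // Ignore events that already succeeded (1) or permanently failed (3)
  if (event.status === 1 || event.status === 3) return;

  const db = context.services.get("mongodb-atlas").db("RetryExample");
  const events = db.collection("Events");

  // Simulate the work that may fail (the example repo uses the divisible-by-two check)
  const success = await context.functions.execute("getFailOrSuccess");

  if (success) {
    // Success: state 1, no further retries needed
    await events.updateOne({ _id: event._id }, { $set: { status: 1 } });
  } else if (event.retries + 1 < MAX_RETRIES) {
    // Failure: state 2; the update re-fires this trigger, causing the retry
    await events.updateOne(
      { _id: event._id },
      { $set: { status: 2 }, $inc: { retries: 1 } }
    );
  } else {
    // Retries exhausted: record the error and remove the event
    event.status = 3;
    await db.collection("Error").insertOne(event);
    await events.deleteOne({ _id: event._id });
  }
};
```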
## Conclusion
This tutorial illustrates in a simple way how we can create our own retry mechanism to increase the reliability of our application. Realm allows us to create our application completely serverless, and thanks to the Realm functions, we can define and execute the server-side logic for our application in the cloud.
We can use the functions to handle low-latency, short-lived connection logic, and other server-side interactions. Functions are especially useful when we want to work with multiple services, behave dynamically based on the current user, or abstract the implementation details of our client applications.
This retry mechanism we have just created will allow us to handle interaction with other services in a more robust way, letting us know that the action will be reattempted in case of failure. | md | {
"tags": [
"Atlas"
],
"pageDescription": "This tutorial is born to show how we can create a retry mechanism for our functions. Realm Functions allow you to define and execute server-side logic for your application. You can call functions from your client applications as well as from other functions and in JSON expressions throughout Realm. ",
"contentType": "Tutorial"
} | Build Your Own Function Retry Mechanism with Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/farm-stack-authentication | created | # Adding Authentication to Your FARM Stack App
>If you have not read my Introduction to FARM stack tutorial, I would urge you to do that now and then come back. This guide assumes you have already read and understood the previous article so some things might be confusing or opaque if you have not.
An important part of many web applications is user management, which can be complex with lots of different scenarios to cover: registration, logging in, logging out, password resets, protected routes, and so on. In this tutorial, we will look at how you can integrate the FastAPI Users package into your FARM stack.
## Prerequisites
- Python 3.9.0
- A MongoDB Atlas cluster. Follow the "Get Started with Atlas" guide to create your account and MongoDB cluster. Keep a note of your database username, password, and connection string as you will need those later.
- A MongoDB Realm App connected to your cluster. Follow the "Create a Realm App (Realm UI)" guide and make a note of your Realm App ID.
## Getting Started
Let's begin by cloning the sample code source from GitHub
``` shell
git clone git@github.com:mongodb-developer/FARM-Auth.git
```
Once you have cloned the repository, you will need to install the dependencies. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active. The requirements.txt file is within the `backend` folder.
``` shell
cd FARM-Auth/backend
pip install -r requirements.txt
```
It may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.
You'll need two new configuration values for this tutorial. To get them, log into Atlas and create a new Realm App by selecting the Realm tab at the top of the page, and then clicking on "Create a New App" on the top-right of the page.
Configure the Realm app to connect to your existing cluster:
You should see your Realm app's ID at the top of the page. Copy it and keep it somewhere safe. It will be used for your application's `REALM_APP_ID` value.
Click on the "Authentication" option on the left-hand side of the page. Then select the "Edit" button next to "Custom JWT Authentication". Ensure the first option, "Provider Enabled" is set to "On". Check that the Signing Algorithm is set to "HS256". Now you need to create a signing key, which is just a set of 32 random bytes. Fortunately, Python has a quick way to securely create random bytes! In your console, run the following:
``` shell
python -c 'import secrets; print(secrets.token_hex(32))'
```
Running that line of code will print out some random characters to the console. Type "signing_key" into the "Signing Key (Secret Name)" text box and then click "Create 'signing_key'" in the menu that appears underneath. A new text box will appear for the actual key bytes. Paste in the random bytes you generated above. Keep the random bytes safe for the moment. You'll need them for your application's "JWT_SECRET_KEY" configuration value.
Now you have all your configuration values, you need to set the following environment variables (make sure that you substitute your actual credentials).
``` shell
export DEBUG_MODE=True
export DB_URL="mongodb+srv://:@/?retryWrites=true&w=majority"
export DB_NAME="farmstack"
export JWT_SECRET_KEY=""
export REALM_APP_ID=""
```
Set these values appropriately for your environment, ensuring that `REALM_APP_ID` and `JWT_SECRET_KEY` use the values from above. Remember, anytime you start a new terminal session, you will need to set these environment variables again. I use direnv to make this process easier. Storing and loading these values from a .env file is another popular alternative.
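If you go the `.env` route, the file contains the same variables without the `export` keyword — something like the sketch below (substitute your own values). Note that the sample configuration reads environment variables directly, so you'd need something like `python-dotenv`, direnv, or Pydantic's `env_file` support to actually load it:

```
DEBUG_MODE=True
DB_URL=mongodb+srv://<username>:<password>@<cluster-url>/<database>?retryWrites=true&w=majority
DB_NAME=farmstack
JWT_SECRET_KEY=<your-signing-key>
REALM_APP_ID=<your-realm-app-id>
```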
The final step is to start your FastAPI server.
``` shell
uvicorn main:app --reload
```
Once the application has started, you can view it in your browser at http://localhost:8000/docs.
You may notice that we now have a lot more endpoints than we did in the FARM stack Intro. These routes are all provided by the FastAPI `Users` package. I have also updated the todo app routes so that they are protected. This means that you can no longer access these routes, unless you are logged in.
If you try to access the `List Tasks` route, for example, it will fail with a 401 Unauthorized error. In order to access any of the todo app routes, we need to first register as a new user and then authenticate. Try this now. Use the `/auth/register` and `/auth/jwt/login` routes to create and authenticate as a new user. Once you are successfully logged in, try accessing the `List Tasks` route again. It should now grant you access and return an HTTP status of 200. Use the Atlas UI to check the new `farmstack.users` collection and you'll see that there's now a document for your new user.
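If you'd rather exercise this flow from code than from the interactive docs, here's a rough sketch using the `requests` package. The email, password, and the `/task` prefix for the protected routes are assumptions — adjust them to your setup — and note that FastAPI Users' login route expects form-encoded `username`/`password` fields, while the cookie-based backend used here returns the JWT in a cookie that the session carries for you:

```python
import requests

BASE_URL = "http://localhost:8000"  # default uvicorn host/port

credentials = {"email": "user@example.com", "password": "super-secret"}
session = requests.Session()

# Register a new user
session.post(f"{BASE_URL}/auth/register", json=credentials)

# Log in; the route expects form data with "username" and "password" fields
session.post(
    f"{BASE_URL}/auth/jwt/login",
    data={"username": credentials["email"], "password": credentials["password"]},
)

# The session now carries the auth cookie, so protected routes should return 200
print(session.get(f"{BASE_URL}/task/").status_code)
```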
## Integrating FastAPI Users
The routes and models for our users are within the `/backend/apps/user` folder. Lets walk through what it contains.
### The User Models
The FastAPI `Users` package includes some basic `User` mixins with the following attributes:
- `id` (`UUID4`) – Unique identifier of the user. Default to a UUID4.
- `email` (`str`) – Email of the user. Validated by `email-validator`.
- `is_active` (`bool`) – Whether or not the user is active. If not, login and forgot password requests will be denied. Default to `True`.
- `is_superuser` (`bool`) – Whether or not the user is a superuser. Useful to implement administration logic. Default to `False`.
``` python
from fastapi_users.models import BaseUser, BaseUserCreate, BaseUserUpdate, BaseUserDB
class User(BaseUser):
pass
class UserCreate(BaseUserCreate):
pass
class UserUpdate(User, BaseUserUpdate):
pass
class UserDB(User, BaseUserDB):
pass
```
You can use these as-is for your User models, or extend them with whatever additional properties you require. I'm using them as-is for this example.
### The User Routers
The FastAPI Users routes can be broken down into four sections:
- Registration
- Authentication
- Password Reset
- User CRUD (Create, Read, Update, Delete)
``` python
def get_users_router(app):
users_router = APIRouter()
def on_after_register(user: UserDB, request: Request):
print(f"User {user.id} has registered.")
def on_after_forgot_password(user: UserDB, token: str, request: Request):
print(f"User {user.id} has forgot their password. Reset token: {token}")
users_router.include_router(
app.fastapi_users.get_auth_router(jwt_authentication),
prefix="/auth/jwt",
tags="auth"],
)
users_router.include_router(
app.fastapi_users.get_register_router(on_after_register),
prefix="/auth",
tags=["auth"],
)
users_router.include_router(
app.fastapi_users.get_reset_password_router(
settings.JWT_SECRET_KEY, after_forgot_password=on_after_forgot_password
),
prefix="/auth",
tags=["auth"],
)
users_router.include_router(
app.fastapi_users.get_users_router(), prefix="/users", tags=["users"]
)
return users_router
```
You can read a detailed description of each of the routes in the FastAPI Users' documentation, but there are a few interesting things to note in this code.
#### The on_after Functions
These functions are called after a new user registers and after the forgotten password endpoint is triggered.
The `on_after_register` is a convenience function allowing you to send a welcome email, add the user to your CRM, notify a Slack channel, and so on.
The `on_after_forgot_password` is where you would send the password reset token to the user, most likely via email. The FastAPI Users package does not send the token to the user for you. You must do that here yourself.
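For example, a minimal sketch of emailing the token with the standard library might look like this — the sender address and the local SMTP relay on port 1025 (e.g., a development tool like MailHog) are assumptions; in production you'd plug in your actual mail provider:

```python
import smtplib
from email.message import EmailMessage


def on_after_forgot_password(user: UserDB, token: str, request: Request):
    message = EmailMessage()
    message["From"] = "noreply@example.com"
    message["To"] = user.email
    message["Subject"] = "Reset your password"
    message.set_content(f"Use this token to reset your password: {token}")

    # Assumes a local development SMTP relay; swap in your provider's settings
    with smtplib.SMTP("localhost", 1025) as smtp:
        smtp.send_message(message)
```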
#### The get_users_router Wrapper
In order to create our routes we need access to the `fastapi_users` object, which is part of our `app` object. Because app is defined in `main.py`, and `main.py` imports these routers, we wrap them within a `get_users_router` function to avoid creating a cyclic import.
## Creating a Custom Realm JWT
Currently, Realm's user management functionality is only supported in the various JavaScript SDKs. However, Realm does support custom JWTs for authentication, allowing you to use the over the wire protocol support in the Python drivers to interact with some Realm services.
The available Realm services, as well as how you would interact with them via the Python driver, are out of scope for this tutorial, but you can read more in the documentation for Users & Authentication, Custom JWT Authentication, and MongoDB Wire Protocol.
Realm expects the custom JWT tokens to be structured in a certain way. To ensure the JWT tokens we generate with FastAPI Users are structured correctly, within `backend/apps/user/auth.py` we define `MongoDBRealmJWTAuthentication` which inherits from the FastAPI Users' `CookieAuthentication` class.
``` python
class MongoDBRealmJWTAuthentication(CookieAuthentication):
def __init__(self, *args, **kwargs):
super(MongoDBRealmJWTAuthentication, self).__init__(*args, **kwargs)
self.token_audience = settings.REALM_APP_ID
async def _generate_token(self, user):
data = {
"user_id": str(user.id),
"sub": str(user.id),
"aud": self.token_audience,
"external_user_id": str(user.id),
}
return generate_jwt(data, self.lifetime_seconds, self.secret, JWT_ALGORITHM)
```
Most of the authentication code stays the same. However we define a new `_generate_token` method which includes the additional data Realm expects.
## Protecting the Todo App Routes
Now we have our user models, routers, and JWT token ready, we can modify the todo routes to restrict access only to authenticated and active users.
The todo app routers are defined in `backend/apps/todo/routers.py` and are almost identical to those found in the Introducing FARM Stack tutorial, with one addition. Each router now depends upon `app.fastapi_users.get_current_active_user`.
``` python
@router.post(
"/",
response_description="Add new task",
)
async def create_task(
request: Request,
user: User = Depends(app.fastapi_users.get_current_active_user),
task: TaskModel = Body(...),
):
task = jsonable_encoder(task)
 new_task = await request.app.db["tasks"].insert_one(task)
created_task = await request.app.db["tasks"].find_one(
{"_id": new_task.inserted_id}
)
return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_task)
```
Because we have declared this as a dependency, if an unauthenticated or inactive user attempts to access any of these URLs, they will be denied. This does mean, however, that our todo app routers now must also have access to the app object, so as we did with the user routers we wrap it in a function to avoid cyclic imports.
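The wrapper follows the same pattern as `get_users_router` — roughly like the sketch below. The import paths and the single list route shown here are illustrative; see `backend/apps/todo/routers.py` for the real implementation:

```python
from fastapi import APIRouter, Depends, Request

from apps.user.models import User  # assumed import path


def get_todo_router(app):
    router = APIRouter()

    @router.get("/", response_description="List all tasks")
    async def list_tasks(
        request: Request,
        user: User = Depends(app.fastapi_users.get_current_active_user),
    ):
        # Motor returns an async cursor; to_list() materialises the documents
        return await request.app.db["tasks"].find().to_list(length=100)

    return router
```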
## Creating Our FastAPI App and Including the Routers
The FastAPI app is defined within `backend/main.py`. This is the entry point to our FastAPI server and has been quite heavily modified from the example in the previous FARM stack tutorial, so let's go through it section by section.
``` python
@app.on_event("startup")
async def configure_db_and_routes():
app.mongodb_client = AsyncIOMotorClient(
settings.DB_URL, uuidRepresentation="standard"
)
app.db = app.mongodb_client[settings.DB_NAME]
user_db = MongoDBUserDatabase(UserDB, app.db["users"])
app.fastapi_users = FastAPIUsers(
user_db,
[jwt_authentication],
User,
UserCreate,
UserUpdate,
UserDB,
)
app.include_router(get_users_router(app))
app.include_router(get_todo_router(app))
```
This function is called whenever our FastAPI application starts. Here, we connect to our MongoDB database, configure FastAPI Users, and include our routers. Your application won't start receiving requests until this event handler has completed.
``` python
@app.on_event("shutdown")
async def shutdown_db_client():
app.mongodb_client.close()
```
The shutdown event handler does not change. It is still responsible for closing the connection to our database.
## Wrapping Up
In this tutorial we have covered one of the ways you can add user authentication to your FARM stack application. There are several other packages available which you might also want to try. You can find several of them in the awesome FastAPI list.
Or, for a more in-depth look at the FastAPI Users package, please check their documentation.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python",
"JavaScript",
"FastApi"
],
"pageDescription": "Adding Authentication to a FARM stack application",
"contentType": "Tutorial"
} | Adding Authentication to Your FARM Stack App | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/building-space-shooter-game-syncs-unity-mongodb-realm | created | # Building a Space Shooter Game in Unity that Syncs with Realm and MongoDB Atlas
When developing a game, in most circumstances you're going to need to store some kind of data. It could be the score, it could be player inventory, it could be where they are located on a map. The possibilities are endless and it's more heavily dependent on the type of game.
Need to sync that data between devices and your remote infrastructure? That is a whole different scenario.
If you managed to catch MongoDB .Live 2021, you'll know that the first stable release of the Realm .NET SDK for Unity was made available. This means that you can use Realm in your Unity game to store and sync data with only a few lines of code.
In this tutorial, we're going to build a nifty game that explores some storage and syncing use-cases.
To get a better idea of what we plan to accomplish, take a look at the following animated image:
In the above example, we have a space shooter style game. Waves of enemies are coming at you and as you defeat them your score increases. In addition to keeping track of score, the player has a set of enabled blasters. What you don't see in the above example is what's happening behind the scenes. The score is synced to and from the cloud and likewise are the blasters.
## The Requirements
There are a lot of moving pieces for this particular gaming example. To be successful with this tutorial, you'll need to have the following ready to go:
- Unity 2021.2.0b3 or newer
- A MongoDB Atlas M0 cluster or better
- A web application pointed at the Atlas cluster
- Game media assets
This is heavily a Unity example. While older or newer versions of Unity might work, I was personally using 2021.2.0b3 when I developed it. You can check to see what version of Unity is available to you using the Unity Hub software.
Because we are going to be introducing a synchronization feature to the game, we're going to need an Atlas cluster as well as an Atlas App Services application. Both of these can be configured for free here. Don't worry about the finer details of the configuration because we'll get to those as we progress in the tutorial.
As much as I'd like to take credit for the space shooter assets used within this game, I can't. I actually downloaded them from the Unity Asset Store. Feel free to download what I used or create your own.
If you're looking for a basic getting started tutorial for Unity with Realm, check out my previous tutorial on the subject.
## Designing the Scenes and Interfaces for the Unity Game
The game we're about to build is not a small and quick project. There will be many game objects and a few scenes that we have to configure, but none of it is particularly difficult.
To get an idea of what we need to create, make note of the following breakdown:
- LoginScene
- Camera
- LoginController
- RealmController
- Canvas
- UsernameField
- PasswordField
- LoginButton
- MainScene
- GameController
- RealmController
- Background
- Player
- Canvas
- HighScoreText
- ScoreText
- BlasterEnabled
- SparkBlasterEnabled
- CrossBlasterEnabled
- Blaster
- CrossBlast
- Enemy
- SparkBlast
The above list represents our two scenes with each of the components that live within the scene.
Let's start by configuring the **LoginScene** with each of the components. Don't worry, we'll explore the logic side of things for this scene later.
Within the Unity IDE, create a **LoginScene** and within the **Hierarchy** choose to create a new **UI -> Input Field**. You'll need to do this twice because this is how we're going to create the **UsernameField** and the **PasswordField** that we defined in the list above. You're also going to want to create a **UI -> Button** which will represent our **LoginButton** to submit the form.
For each of the UI game objects, position them on the screen how you want them. Mine looks like the following:
Within the **Hierarchy** of your scene, create two empty game objects. The first game object, **LoginController**, will eventually hold a script for managing the user input and interactions with the UI components we had just created. The second game object, **RealmController**, will eventually have a script that contains any Realm interactions. For now, we're going to leave these as empty game objects and move on.
Now let's move onto our next scene.
Create a **MainScene** if you haven't already and start adding **UI -> Text** to represent the current score and the high score.
Since we probably don't want a solid blue background in our game, we should add a background image. Add an empty game object to the **Hierarchy** and then add a **Sprite Renderer** component to that object using the inspector. Add whatever image you want to the **Sprite** field of the **Sprite Renderer** component.
Since we're going to give the player a few different blasters to choose from, we want to show them which blasters they have at any given time. For this, we should add some simple sprites with blaster images on them.
Create three empty game objects and add a **Sprite Renderer** component to each of them. For each **Sprite** field, add the image that you want to use. Then position the sprites to a section on the screen that you're comfortable with.
If you've made it this far, you might have a scene that looks like the following:
This might be hard to believe, but the visual side of things is almost complete. With just a few more game objects, we can move onto the more exciting logic things.
Like with the **LoginScene**, the **GameController** and **RealmController** game objects will remain empty. There's a small change though. Even though the **RealmController** will eventually exist in the **MainScene**, we're not going to create it manually. Instead, just create an empty **GameController** game object.
This leaves us with the player, enemies, and various blasters.
Starting with the player, create an empty game object and add a **Sprite Renderer**, **Rigidbody 2D**, and **Box Collider 2D** component to the game object. For the **Sprite Renderer**, add the graphic you want to use for your ship. The **Rigidbody 2D** and **Box Collider 2D** have to do with physics and collisions. We're not going to burden ourselves with gravity for this example, so make sure the **Body Type** for the **Rigidbody 2D** is **Kinematic** and the **Is Trigger** for the **Box Collider 2D** is enabled. Within the inspector, tag the player game object as "Player."
The blasters and enemies will have the same setup as our player. Create new game objects for each, just like you did the player, only this time select a different graphic for them and give them the tags of "Weapon" or "Enemy" in the inspector.
This is where things get interesting.
We know that there will be more than one enemy in circulation and likewise with your blaster bullets. Rather than creating a bunch of each, take the game objects you used for the blasters and enemies and drag them into your **Assets** directory. This will convert the game objects into prefabs that can be recycled as many times as you want. Once the prefabs are created, the objects can be removed from the **Hierarchy** section of your scene. As we progress, we'll be instantiating these prefabs through code.
We're ready to start writing code to give our game life.
## Configuring MongoDB Atlas and Atlas Device Sync for Data Synchronization
For this game, we're going to rely on a cloud and synchronization aspect, so there is some additional configuration that we'll need to take care of. However, before we worry about the cloud configurations, let's install the Realm .NET SDK for Unity.
Within Unity, select **Window -> Package Manager** and then click the little cog icon to find the **Advanced Project Settings** area.
Here you're going to want to add a new registry with the following information:
```
name: NPM
url: https://registry.npmjs.org
scope(s): io.realm.unity
```
Even though we're working with Unity, the best way to get the Realm SDK is through NPM, hence the custom registry that we're going to use.
With the registry added, we can add an entry for Realm in the project's **Packages/manifest.json** file. Within the **manifest.json** file, add the following to the `dependencies` object:
```
"io.realm.unity": "10.3.0"
```
You can swap the version of Realm with whatever you plan to use.
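If you'd rather edit the file by hand, the relevant parts of **Packages/manifest.json** might end up looking something like the following sketch. Your file will also contain the `com.unity.*` packages your project already depends on, and the version number here is just the one used in this tutorial:

```json
{
  "scopedRegistries": [
    {
      "name": "NPM",
      "url": "https://registry.npmjs.org",
      "scopes": [
        "io.realm.unity"
      ]
    }
  ],
  "dependencies": {
    "io.realm.unity": "10.3.0"
  }
}
```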
From a Unity perspective, Realm is ready to be used. Now we just need to configure Device Sync and Atlas in the cloud.
Within MongoDB Atlas, assuming you already have a cluster to work with, click the **App Services** tab and then **Create a New App** to create a new application.
Name the application whatever you'd like. The MongoDB Atlas cluster requires no special configuration to work with App Services, only that such a cluster exists. App Services will create the necessary databases and collections when the time comes.
Before we start configuring your app, take note of your **App ID** in the top left corner of the screen:
The **App ID** will be very important within the Unity project because it tells the SDK where to sync and authenticate with.
Next you'll want to define what kind of authentication is allowed for your Unity game and the users that are allowed to authenticate. Within the dashboard, click the **Authentication** tab followed by the **Authentication Providers** tab. Enable **Email / Password** if it isn't already enabled. After email and password authentication is enabled for your application, click the **Users** tab and choose to **Add New User** with the email and password information of your choice.
The users can be added through an API request, but for this example we're just going to focus on adding them manually.
With the user information added, we need to define the collections and schemas to sync with our game. Click the **Schema** tab within the dashboard and choose to create a new database and collection if you don't already have a **space_shooter** database and a **PlayerProfile** collection.
The schema for the **PlayerProfile** collection should look like the following:
```json
{
"title": "PlayerProfile",
"bsonType": "object",
"required":
"high_score",
"spark_blaster_enabled",
"cross_blaster_enabled",
"score",
"_partition"
],
"properties": {
"_id": {
"bsonType": "string"
},
"_partition": {
"bsonType": "string"
},
"high_score": {
"bsonType": "int"
},
"score": {
"bsonType": "int"
},
"spark_blaster_enabled": {
"bsonType": "bool"
},
"cross_blaster_enabled": {
"bsonType": "bool"
}
}
}
```
In the above schema, we're defining the fields of a player profile along with their types. These fields will eventually be mapped to C# objects within the Unity game. The field to pay the most attention to is `_partition`. It's the most valuable when it comes to sync because it determines which data gets synchronized, rather than attempting to synchronize the entire MongoDB Atlas collection.
In our example, the `_partition` field should hold user email addresses because they are unique and the user will provide them when they log in. With this, we can specify that we only want to sync data for the user's email address.
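As a purely hypothetical example of what that looks like in practice, a synced document for the player used in this tutorial might resemble the following, where every value is a placeholder:

```json
{
  "_id": "61f0c0ffee0000000000cafe",
  "_partition": "nic.raboy@mongodb.com",
  "high_score": 1337,
  "score": 0,
  "spark_blaster_enabled": false,
  "cross_blaster_enabled": false
}
```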
With the schema defined, now we can enable Atlas Device Sync.
Within the dashboard, click on the **Sync** tab. Specify the cluster and the field to be used as the partition key. You should specify `_partition` as the partition key in this example, although the actual field name doesn't matter if you wanted to call it something else. Leaving the permissions as the default will give users read and write permissions.
> Atlas Device Sync will only sync collections that have a defined schema. You could have other collections in your MongoDB Atlas cluster, but they won't sync automatically unless you have schemas defined for them.
At this point, we can now focus on the actual game development.
## Defining the Data Model and Usage Logic
When it comes to data, your Atlas App Services app is going to manage all of it. We need to create a data model that matches the schema that we had just created for synchronization and we need to create the logic for our **RealmController** game object.
Let's start by creating the model to be used.
Within the **Assets** folder of your project, create a **Scripts** folder with a **PlayerProfile.cs** script in it. The **PlayerProfile.cs** script should contain the following C# code:
```csharp
using Realms;
using Realms.Sync;
public class PlayerProfile : RealmObject {
[PrimaryKey]
[MapTo("_id")]
public string UserId { get; set; }
[MapTo("high_score")]
public int HighScore { get; set; }
[MapTo("score")]
public int Score { get; set; }
[MapTo("spark_blaster_enabled")]
public bool SparkBlasterEnabled { get; set; }
[MapTo("cross_blaster_enabled")]
public bool CrossBlasterEnabled { get; set; }
public PlayerProfile() {}
public PlayerProfile(string userId) {
this.UserId = userId;
this.HighScore = 0;
this.Score = 0;
this.SparkBlasterEnabled = false;
this.CrossBlasterEnabled = false;
}
}
```
What we're doing is we are defining object fields and how they map to a remote document in a MongoDB collection. While our C# object looks like the above, the BSON that we'll see in MongoDB Atlas will look like the following:
```json
{
"_id": "12345",
"high_score": 1337,
"score": 0,
"spark_blaster_enabled": false,
"cross_blaster_enabled": false
}
```
It's important to note that the documents in Atlas might have more fields than what we see in our game. We'll only be able to use the mapped fields in our game, so if we have, for example, an email address in our document, we won't see it in the game because it isn't mapped.
With the model in place, we can focus on syncing, querying, and writing our data.
Within the **Assets/Scripts** directory, add a **RealmController.cs** script. This script should contain the following C# code:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Realms;
using Realms.Sync;
using Realms.Sync.Exceptions;
using System.Threading.Tasks;
public class RealmController : MonoBehaviour {
public static RealmController Instance;
public string RealmAppId = "YOUR_REALM_APP_ID_HERE";
private Realm _realm;
private App _realmApp;
private User _realmUser;
void Awake() {
DontDestroyOnLoad(gameObject);
Instance = this;
}
void OnDisable() {
if(_realm != null) {
_realm.Dispose();
}
}
public async Task<string> Login(string email, string password) {}
public PlayerProfile GetPlayerProfile() {}
public void IncreaseScore() {}
public void ResetScore() {}
public bool IsSparkBlasterEnabled() {}
public bool IsCrossBlasterEnabled() {}
}
```
The above code is incomplete, but it gives you an idea of where we are going.
First, take notice of the `RealmAppId` variable. You're going to want to replace the placeholder with your own App Services **App ID** so sync can happen based on how you've configured everything. This also applies to the authentication rules that are in place for your particular application.
The `RealmController` class is going to be used as a singleton object between scenes. The goal is to make sure it cannot be destroyed and everything we do is through a static instance of itself.
In the `Awake` method, we are saying that the game object that the script is attached to should not be destroyed and that we are setting the static variable to itself. In the `OnDisable`, we are doing cleanup which should really only happen when the game is closed.
Most of the magic will happen in the `Login` function:
```csharp
public async Task<string> Login(string email, string password) {
if(email != "" && password != "") {
_realmApp = App.Create(new AppConfiguration(RealmAppId) {
MetadataPersistenceMode = MetadataPersistenceMode.NotEncrypted
});
try {
if(_realmUser == null) {
_realmUser = await _realmApp.LogInAsync(Credentials.EmailPassword(email, password));
_realm = await Realm.GetInstanceAsync(new SyncConfiguration(email, _realmUser));
} else {
_realm = Realm.GetInstance(new SyncConfiguration(email, _realmUser));
}
} catch (ClientResetException clientResetEx) {
if(_realm != null) {
_realm.Dispose();
}
clientResetEx.InitiateClientReset();
}
return _realmUser.Id;
}
return "";
}
```
In the above code, we are defining our application based on the application ID. Next we are attempting to log into the application using email and password authentication, something we had previously configured in the web dashboard. If successful, we are getting an instance of our Realm to work with going forward. The data to be synchronized is based on our partition field which in this case is the email address. This means we're only synchronizing data for this particular email address.
If all goes smoothly with the login, the ID for the user is returned.
At some point in time, we're going to need to load the player data. This is where the `GetPlayerProfile` function comes in:
```csharp
public PlayerProfile GetPlayerProfile() {
PlayerProfile _playerProfile = _realm.Find<PlayerProfile>(_realmUser.Id);
if(_playerProfile == null) {
_realm.Write(() => {
_playerProfile = _realm.Add(new PlayerProfile(_realmUser.Id));
});
}
return _playerProfile;
}
```
What we're doing is taking the current Realm instance and finding a particular player profile based on the user ID. If one does not exist, then we create one using that ID. In the end, we return a player profile, whether it's one that already existed or a fresh one.
We know that we're going to be working with score data in our game. We need to be able to increase the score, reset the score, and calculate the high score for a player.
Starting with the `IncreaseScore`, we have the following:
```csharp
public void IncreaseScore() {
PlayerProfile _playerProfile = GetPlayerProfile();
if(_playerProfile != null) {
_realm.Write(() => {
_playerProfile.Score++;
});
}
}
```
First we get the player profile, and then we take whatever score is associated with it and increase it by one. With Realm, we can work with our objects like native C# objects. The exception is that when we want to write, we have to wrap the change in a `Write` block; reads don't require one.
Next let's look at the `ResetScore` function:
```csharp
public void ResetScore() {
PlayerProfile _playerProfile = GetPlayerProfile();
if(_playerProfile != null) {
_realm.Write(() => {
if(_playerProfile.Score > _playerProfile.HighScore) {
_playerProfile.HighScore = _playerProfile.Score;
}
_playerProfile.Score = 0;
});
}
}
```
In the end we want to zero out the score, but we also want to see if our current score is the highest score before we do. We can do all this within the `Write` block and it will synchronize to the server.
Finally we have our two functions to tell us if a certain blaster is available to us:
```csharp
public bool IsSparkBlasterEnabled() {
PlayerProfile _playerProfile = GetPlayerProfile();
return _playerProfile != null ? _playerProfile.SparkBlasterEnabled : false;
}
```
The reason our blasters are data dependent is that we may want to unlock them based on points or through a micro-transaction. In either case, the unlock ends up as a simple field change in the player profile, and Device Sync takes care of delivering it to the game.
The `IsCrossBlasterEnabled` function isn't much different:
```csharp
public bool IsCrossBlasterEnabled() {
PlayerProfile _playerProfile = GetPlayerProfile();
return _playerProfile != null ? _playerProfile.CrossBlasterEnabled : false;
}
```
The difference is we are using a different field from our data model.
With the Realm logic in place for the game, we can focus on giving the other game objects life through scripts.
## Developing the Game-Play Logic Scripts for the Space Shooter Game Objects
Almost every game object that we've created will be receiving a script with logic. To keep the flow appropriate, we're going to add logic in a natural progression. This means we're going to start with the **LoginScene** and each of the game objects that live in it.
For the **LoginScene**, only two game objects will be receiving scripts:
- LoginController
- RealmController
Since we already have a **RealmController.cs** script file, go ahead and attach it to the **RealmController** game object as a component.
Next up, we need to create an **Assets/Scripts/LoginController.cs** file with the following C# code:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.SceneManagement;
public class LoginController : MonoBehaviour {
public Button LoginButton;
public InputField UsernameInput;
public InputField PasswordInput;
void Start() {
UsernameInput.text = "nic.raboy@mongodb.com";
PasswordInput.text = "password1234";
LoginButton.onClick.AddListener(Login);
}
async void Login() {
if(await RealmController.Instance.Login(UsernameInput.text, PasswordInput.text) != "") {
SceneManager.LoadScene("MainScene");
}
}
void Update() {
if(Input.GetKey("escape")) {
Application.Quit();
}
}
}
```
There's not a whole lot going on since the backbone of this script is in the **RealmController.cs** file.
What we're doing in the **LoginController.cs** file is we're defining the UI components which we'll link through the Unity IDE. When the script starts, we're going to default the values of our input fields and we're going to assign a click event listener to the button.
When the button is clicked, the `Login` function from the **RealmController.cs** file is called and we pass the provided email and password. If we get an id back, we know we were successful so we can switch to the next scene.
The `Update` method isn't a complete necessity, but if you want to be able to quit the game with the escape key, that is what this particular piece of logic does.
Attach the **LoginController.cs** script to the **LoginController** game object as a component and then drag each of the corresponding UI game objects into the script via the game object inspector. Remember, we defined public variables for each of the UI components. We just need to tell Unity what they are by linking them in the inspector.
The **LoginScene** logic is complete. Can you believe it? This is because the Realm .NET SDK for Unity is doing all the heavy lifting for us.
The **MainScene** has a lot more going on, but we'll break down what's happening.
Let's start with something you don't actually see but that controls all of our prefab instances. I'm talking about the object pooling script.
In short, creating and destroying game objects on-demand is resource intensive. Instead, we should create a fixed amount of game objects when the game loads and hide them or show them based on when they are needed. This is what an object pool does.
Create an **Assets/Scripts/ObjectPool.cs** file with the following C# code:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class ObjectPool : MonoBehaviour
{
public static ObjectPool SharedInstance;
private List<GameObject> pooledEnemies;
private List<GameObject> pooledBlasters;
private List<GameObject> pooledCrossBlasts;
private List<GameObject> pooledSparkBlasts;
public GameObject enemyToPool;
public GameObject blasterToPool;
public GameObject crossBlastToPool;
public GameObject sparkBlastToPool;
public int amountOfEnemiesToPool;
public int amountOfBlastersToPool;
public int amountOfCrossBlastsToPool;
public int amountOfSparkBlastsToPool;
void Awake() {
SharedInstance = this;
}
void Start() {
pooledEnemies = new List<GameObject>();
pooledBlasters = new List<GameObject>();
pooledCrossBlasts = new List<GameObject>();
pooledSparkBlasts = new List<GameObject>();
GameObject tmpEnemy;
GameObject tmpBlaster;
GameObject tmpCrossBlast;
GameObject tmpSparkBlast;
for(int i = 0; i < amountOfEnemiesToPool; i++) {
tmpEnemy = Instantiate(enemyToPool);
tmpEnemy.SetActive(false);
pooledEnemies.Add(tmpEnemy);
}
for(int i = 0; i < amountOfBlastersToPool; i++) {
tmpBlaster = Instantiate(blasterToPool);
tmpBlaster.SetActive(false);
pooledBlasters.Add(tmpBlaster);
}
for(int i = 0; i < amountOfCrossBlastsToPool; i++) {
tmpCrossBlast = Instantiate(crossBlastToPool);
tmpCrossBlast.SetActive(false);
pooledCrossBlasts.Add(tmpCrossBlast);
}
for(int i = 0; i < amountOfSparkBlastsToPool; i++) {
tmpSparkBlast = Instantiate(sparkBlastToPool);
tmpSparkBlast.SetActive(false);
pooledSparkBlasts.Add(tmpSparkBlast);
}
}
public GameObject GetPooledEnemy() {
for(int i = 0; i < amountOfEnemiesToPool; i++) {
if(pooledEnemies[i].activeInHierarchy == false) {
return pooledEnemies[i];
}
}
return null;
}
public GameObject GetPooledBlaster() {
for(int i = 0; i < amountOfBlastersToPool; i++) {
if(pooledBlasters[i].activeInHierarchy == false) {
return pooledBlasters[i];
}
}
return null;
}
public GameObject GetPooledCrossBlast() {
for(int i = 0; i < amountOfCrossBlastsToPool; i++) {
if(pooledCrossBlasts[i].activeInHierarchy == false) {
return pooledCrossBlasts[i];
}
}
return null;
}
public GameObject GetPooledSparkBlast() {
for(int i = 0; i < amountOfSparkBlastsToPool; i++) {
if(pooledSparkBlasts[i].activeInHierarchy == false) {
return pooledSparkBlasts[i];
}
}
return null;
}
}
```
The above object pooling logic is not code optimized because I wanted to keep it readable. If you want to see an optimized version, check out a previous tutorial I wrote on the subject.
So let's break down what we're doing in this object pool.
We have four different game objects to pool:
- Enemies
- Spark Blasters
- Cross Blasters
- Regular Blasters
These need to be pooled because there could be more than one of the same object at any given time. We're using public variables for each of the game objects and quantities so that we can properly link them to actual game objects in the Unity IDE.
Like with the **RealmController.cs** script, this script will also act as a singleton to be used as needed.
In the `Start` method, we instantiate game objects, per the quantities defined through the Unity IDE, and add them to lists. Ideally, each linked game object should be one of the prefabs that we previously defined. The lists of instantiated game objects represent our pools. We have four object pools to pull from.
Pulling from the pool is as simple as creating a function for each pool and seeing what's available. Take the `GetPooledEnemy` function for example:
```csharp
public GameObject GetPooledEnemy() {
for(int i = 0; i < amountOfEnemiesToPool; i++) {
if(pooledEnemies[i].activeInHierarchy == false) {
return pooledEnemies[i];
}
}
return null;
}
```
In the above code, we loop through each object in our pool, in this case enemies. If an object is inactive it means we can pull it and use it. If our pool is depleted, then we either defined too small of a pool or we need to wait until something is available.
I like to pool about 50 of each game object even if I only ever plan to use 10. Doesn't hurt to have excess as it's still less resource-heavy than creating and destroying game objects as needed.
The **ObjectPool.cs** file should be attached as a component to the **GameController** game object. After attaching, make sure you assign your prefabs and the pooled quantities using the game object inspector within the Unity IDE.
The **ObjectPool.cs** script isn't the only script we're going to attach to the **GameController** game object. We need to create a script that will control the flow of our game. Create an **Assets/Scripts/GameController.cs** file with the following C# code:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
public class GameController : MonoBehaviour {
public float timeUntilEnemy = 1.0f;
public float minTimeUntilEnemy = 0.25f;
public float maxTimeUntilEnemy = 2.0f;
public GameObject SparkBlasterGraphic;
public GameObject CrossBlasterGraphic;
public Text highScoreText;
public Text scoreText;
private PlayerProfile _playerProfile;
void OnEnable() {
_playerProfile = RealmController.Instance.GetPlayerProfile();
highScoreText.text = "HIGH SCORE: " + _playerProfile.HighScore.ToString();
scoreText.text = "SCORE: " + _playerProfile.Score.ToString();
}
void Update() {
highScoreText.text = "HIGH SCORE: " + _playerProfile.HighScore.ToString();
scoreText.text = "SCORE: " + _playerProfile.Score.ToString();
timeUntilEnemy -= Time.deltaTime;
if(timeUntilEnemy <= 0) {
GameObject enemy = ObjectPool.SharedInstance.GetPooledEnemy();
if(enemy != null) {
enemy.SetActive(true);
}
timeUntilEnemy = Random.Range(minTimeUntilEnemy, maxTimeUntilEnemy);
}
if(_playerProfile != null) {
SparkBlasterGraphic.SetActive(_playerProfile.SparkBlasterEnabled);
CrossBlasterGraphic.SetActive(_playerProfile.CrossBlasterEnabled);
}
if(Input.GetKey("escape")) {
Application.Quit();
}
}
}
```
There's a diverse set of things happening in the above script, so let's break them down.
You'll notice the following public variables:
```csharp
public float timeUntilEnemy = 1.0f;
public float minTimeUntilEnemy = 0.25f;
public float maxTimeUntilEnemy = 2.0f;
```
We're going to use these variables to define when a new enemy should be activated.
The `timeUntilEnemy` variable represents how much time remains from the current moment until a new enemy should be pulled from the object pool. The `minTimeUntilEnemy` and `maxTimeUntilEnemy` will be used for randomizing what the `timeUntilEnemy` value should become after an enemy is pooled. It's boring to have all enemies appear after a fixed amount of time, so the minimum and maximum values keep things interesting.
```csharp
public GameObject SparkBlasterGraphic;
public GameObject CrossBlasterGraphic;
public Text highScoreText;
public Text scoreText;
```
Remember those UI components and sprites to represent enabled blasters we had created earlier in the Unity IDE? When we attach this script to the **GameController** game object, you're going to want to assign the other components in the game object inspector.
This brings us to the `OnEnable` method:
```csharp
void OnEnable() {
_playerProfile = RealmController.Instance.GetPlayerProfile();
highScoreText.text = "HIGH SCORE: " + _playerProfile.HighScore.ToString();
scoreText.text = "SCORE: " + _playerProfile.Score.ToString();
}
```
The `OnEnable` method is where we're going to get our current player profile and then update the score values visually based on the data stored in the player profile. The `Update` method will continuously update those score values for as long as the scene is showing.
```csharp
void Update() {
highScoreText.text = "HIGH SCORE: " + _playerProfile.HighScore.ToString();
scoreText.text = "SCORE: " + _playerProfile.Score.ToString();
timeUntilEnemy -= Time.deltaTime;
if(timeUntilEnemy <= 0) {
GameObject enemy = ObjectPool.SharedInstance.GetPooledEnemy();
if(enemy != null) {
enemy.SetActive(true);
}
timeUntilEnemy = Random.Range(minTimeUntilEnemy, maxTimeUntilEnemy);
}
if(_playerProfile != null) {
SparkBlasterGraphic.SetActive(_playerProfile.SparkBlasterEnabled);
CrossBlasterGraphic.SetActive(_playerProfile.CrossBlasterEnabled);
}
if(Input.GetKey("escape")) {
Application.Quit();
}
}
```
Every time the `Update` method is called, we subtract the delta time from our `timeUntilEnemy` variable. When the value reaches zero, we attempt to get a new enemy from the object pool and then reset the timer. Outside of the object pooling, we're also checking to see if the other blasters have become enabled. If they have been, we update the active state of the corresponding sprite game objects. This allows us to easily show and hide these sprites.
If you haven't already, attach the **GameController.cs** script to the **GameController** game object. Remember to update any values for the script within the game object inspector.
If we were to run the game, every enemy would have the same position and they would not be moving. We need to assign logic to the enemies.
Create an **Assets/Scripts/Enemy.cs** file with the following C# code:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Enemy : MonoBehaviour {
public float movementSpeed = 5.0f;
void OnEnable() {
float randomPositionY = Random.Range(-4.0f, 4.0f);
transform.position = new Vector3(10.0f, randomPositionY, 0);
}
void Update() {
transform.position += Vector3.left * movementSpeed * Time.deltaTime;
if(transform.position.x < -10.0f) {
gameObject.SetActive(false);
}
}
void OnTriggerEnter2D(Collider2D collider) {
if(collider.tag == "Weapon") {
gameObject.SetActive(false);
RealmController.Instance.IncreaseScore();
}
}
}
```
When the enemy is pulled from the object pool, the game object becomes enabled. So the `OnEnable` method picks a random y-axis position for the game object. For every frame, the `Update` method will move the game object along the x-axis. If the game object goes off the screen, we can safely add it back into the object pool.
The `OnTriggerEnter2D` method is for our collision detection. We're not doing physics collisions, so this method just tells us if the objects have touched. If the current game object, in this case the enemy, has collided with a game object tagged as a weapon, then the enemy is deactivated, returning it to the object pool, and the score is increased.
Attach the **Enemy.cs** script to your enemy prefab.
By now, your game probably looks something like this, minus the animations:
We won't be worrying about animations in this tutorial. Consider that part of your extracurricular challenge after completing this tutorial.
So we have a functioning enemy pool. Let's look at the blaster logic since it is similar.
Create an **Assets/Scripts/Blaster.cs** file with the following C# logic:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Blaster : MonoBehaviour {
public float movementSpeed = 5.0f;
public float decayRate = 2.0f;
private float timeToDecay;
void OnEnable() {
timeToDecay = decayRate;
}
void Update() {
timeToDecay -= Time.deltaTime;
transform.position += Vector3.right * movementSpeed * Time.deltaTime;
if(transform.position.x > 10.0f || timeToDecay <= 0) {
gameObject.SetActive(false);
}
}
void OnTriggerEnter2D(Collider2D collider) {
if(collider.tag == "Enemy") {
gameObject.SetActive(false);
}
}
}
```
Does it look familiar? It's quite similar to the enemy script.
We need to first define how fast each blaster should move and how quickly the blaster should disappear if it hasn't hit anything.
In the `Update` method, we subtract the elapsed frame time from our blaster decay timer. The blaster will continue to move along the x-axis until it has either gone off screen or decayed. In either scenario, the blaster is added back into the object pool. If the blaster collides with a game object tagged as an enemy, the blaster is also added back into the pool. Remember, the blaster will likely be tagged as a weapon, so the **Enemy.cs** script will take care of adding the enemy back into the object pool.
Attach the **Blaster.cs** script to your blaster prefab and apply any value settings as necessary with the Unity IDE in the inspector.
To make the game interesting, we're going to add some very slight differences to the other blasters.
Create an **Assets/Scripts/CrossBlast.cs** script with the following C# code:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class CrossBlast : MonoBehaviour {
public float movementSpeed = 5.0f;
void Update() {
transform.position += Vector3.right * movementSpeed * Time.deltaTime;
if(transform.position.x > 10.0f) {
gameObject.SetActive(false);
}
}
void OnTriggerEnter2D(Collider2D collider) { }
}
```
At a high level, this blaster behaves the same. However, if it collides with an enemy, it keeps going. It only goes back into the object pool when it goes off the screen. So there is no decay and it isn't a one enemy per blast weapon.
Let's look at an **Assets/Scripts/SparkBlast.cs** script:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class SparkBlast : MonoBehaviour {
public float movementSpeed = 5.0f;
void Update() {
transform.position += Vector3.right * movementSpeed * Time.deltaTime;
if(transform.position.x > 10.0f) {
gameObject.SetActive(false);
}
}
void OnTriggerEnter2D(Collider2D collider) {
if(collider.tag == "Enemy") {
gameObject.SetActive(false);
}
}
}
```
The minor difference in the above script is that it has no decay, but it can only ever destroy one enemy.
Make sure you attach these scripts to the appropriate blaster prefabs.
We're almost done! We have one more script and that's for the actual player!
Create an **Assets/Scripts/Player.cs** file and add the following code:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Player : MonoBehaviour
{
public float movementSpeed = 5.0f;
public float respawnSpeed = 8.0f;
public float weaponFireRate = 0.5f;
private float nextBlasterTime = 0.0f;
private bool isRespawn = true;
void Update() {
if(isRespawn == true) {
transform.position = Vector2.MoveTowards(transform.position, new Vector2(-6.0f, -0.25f), respawnSpeed * Time.deltaTime);
if(transform.position == new Vector3(-6.0f, -0.25f, 0.0f)) {
isRespawn = false;
}
} else {
if(Input.GetKey(KeyCode.UpArrow) && transform.position.y < 4.0f) {
transform.position += Vector3.up * movementSpeed * Time.deltaTime;
} else if(Input.GetKey(KeyCode.DownArrow) && transform.position.y > -4.0f) {
transform.position += Vector3.down * movementSpeed * Time.deltaTime;
}
if(Input.GetKey(KeyCode.Space) && Time.time > nextBlasterTime) {
nextBlasterTime = Time.time + weaponFireRate;
GameObject blaster = ObjectPool.SharedInstance.GetPooledBlaster();
if(blaster != null) {
blaster.SetActive(true);
blaster.transform.position = new Vector3(transform.position.x + 1, transform.position.y);
}
}
if(RealmController.Instance.IsCrossBlasterEnabled()) {
if(Input.GetKey(KeyCode.B) && Time.time > nextBlasterTime) {
nextBlasterTime = Time.time + weaponFireRate;
GameObject crossBlast = ObjectPool.SharedInstance.GetPooledCrossBlast();
if(crossBlast != null) {
crossBlast.SetActive(true);
crossBlast.transform.position = new Vector3(transform.position.x + 1, transform.position.y);
}
}
}
if(RealmController.Instance.IsSparkBlasterEnabled()) {
if(Input.GetKey(KeyCode.V) && Time.time > nextBlasterTime) {
nextBlasterTime = Time.time + weaponFireRate;
GameObject sparkBlast = ObjectPool.SharedInstance.GetPooledSparkBlast();
if(sparkBlast != null) {
sparkBlast.SetActive(true);
sparkBlast.transform.position = new Vector3(transform.position.x + 1, transform.position.y);
}
}
}
}
}
void OnTriggerEnter2D(Collider2D collider) {
if(collider.tag == "Enemy" && isRespawn == false) {
RealmController.Instance.ResetScore();
transform.position = new Vector3(-10.0f, -0.25f, 0.0f);
isRespawn = true;
}
}
}
```
Looking at the above script, we have a few variables to keep track of:
```csharp
public float movementSpeed = 5.0f;
public float respawnSpeed = 8.0f;
public float weaponFireRate = 0.5f;
private float nextBlasterTime = 0.0f;
private bool isRespawn = true;
```
We want to define how fast the player can move, how long it takes for the respawn animation to happen, and how fast you're allowed to fire blasters.
In the `Update` method, we first check to see if we are currently respawning:
```csharp
transform.position = Vector2.MoveTowards(transform.position, new Vector2(-6.0f, -0.25f), respawnSpeed * Time.deltaTime);
if(transform.position == new Vector3(-6.0f, -0.25f, 0.0f)) {
isRespawn = false;
}
```
If we are respawning, then we need to smoothly move the player game object towards a particular coordinate position. When the game object has reached that new position, then we can disable the respawn indicator that prevents us from controlling the player.
If we're not respawning, we can check to see if the movement keys were pressed:
```csharp
if(Input.GetKey(KeyCode.UpArrow) && transform.position.y < 4.0f) {
transform.position += Vector3.up * movementSpeed * Time.deltaTime;
} else if(Input.GetKey(KeyCode.DownArrow) && transform.position.y > -4.0f) {
transform.position += Vector3.down * movementSpeed * Time.deltaTime;
}
```
When pressing a key, as long as we haven't moved outside our y-axis boundary, we can adjust the position of the player. Since this is in the `Update` method, the movement should be smooth for as long as you are holding a key.
Using a blaster isn't too different:
```csharp
if(Input.GetKey(KeyCode.Space) && Time.time > nextBlasterTime) {
nextBlasterTime = Time.time + weaponFireRate;
GameObject blaster = ObjectPool.SharedInstance.GetPooledBlaster();
if(blaster != null) {
blaster.SetActive(true);
blaster.transform.position = new Vector3(transform.position.x + 1, transform.position.y);
}
}
```
If the particular blaster key is pressed and our rate limit isn't exceeded, we can update our `nextBlasterTime` based on the rate limit, pull a blaster from the object pool, and let the blaster do its magic based on the **Blaster.cs** script. All we're doing in the **Player.cs** script is checking to see if we're allowed to fire and, if we are, pulling from the pool.
The data dependent spark and cross blasters follow the same rules, the exception being that we first check to see if they are enabled in our player profile.
Finally, we have our collisions:
```csharp
void OnTriggerEnter2D(Collider2D collider) {
if(collider.tag == "Enemy" && isRespawn == false) {
RealmController.Instance.ResetScore();
transform.position = new Vector3(-10.0f, -0.25f, 0.0f);
isRespawn = true;
}
}
```
If our player collides with a game object tagged as an enemy and we're not currently respawning, then we can reset the score and trigger the respawn.
Make sure you attach this **Player.cs** script to your **Player** game object.
If everything worked out, the game should be functional at this point. If something isn't working correctly, double check the following:
- Make sure each of your game objects is properly tagged.
- Make sure the scripts are attached to the proper game object or prefab.
- Make sure the values on the scripts have been defined through the Unity IDE inspector.
Play around with the game and try setting values within MongoDB Atlas.
## Conclusion
You just saw how to create a space shooter type game with Unity that syncs with MongoDB Atlas by using the Realm .NET SDK for Unity and Atlas Device Sync. Realm only played a small part in this game because that is the beauty of Realm. You can get data persistence and sync with only a few lines of code.
Want to give this project a try? I've uploaded all of the source code to GitHub. You just need to clone the project, replace my App ID with yours, and build the project. Of course you'll still need to have properly configured Atlas and Device Sync in the cloud.
If you're looking for a slightly slower introduction to Realm with Unity, check out a previous tutorial that I wrote on the subject.
If you'd like to connect with us further, don't forget to visit the community forums. | md | {
"tags": [
"Realm",
"C#",
"Unity",
".NET"
],
"pageDescription": "Learn how to build a space shooter game that synchronizes between clients and the cloud using MongoDB, Unity, and Atlas Device Sync.",
"contentType": "Tutorial"
} | Building a Space Shooter Game in Unity that Syncs with Realm and MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/migrate-to-realm-kotlin-sdk | created | # Migrating Android Apps from Realm Java SDK to Kotlin SDK
## Introduction
So, it's here! The engineering team has released a major milestone of the Kotlin
SDK. The preview is available for you to try and to share comments and suggestions on.
Until now, if you were using Realm in Android, you were using the Java version of the SDK. The purpose of the Realm
Kotlin SDK is to be the evolution of the Java one and eventually replace it. So, you might be wondering if and when you
should migrate to it. But even more important for your team and your app is what it provides that the Java SDK
doesn't. The Kotlin SDK has been written from scratch to combine what the engineering team has learned through years of
SDK development, with the expressivity and fluency of the Kotlin language. They have been successful at that and the
resulting SDK provides a first-class experience that I would summarize in the following points:
- The Kotlin SDK allows you to use expressions that are Kotlin idiomatic—i.e., more natural to the language.
- It uses Kotlin coroutines and flows to make concurrency easier and more efficient.
- It has been designed and developed with Kotlin Multiplatform in mind.
- It has removed the thread-confinement restriction on Java and it directly integrates with the Android lifecycle hooks
so the developer doesn't have to spin up and tear down a realm instance on every activity lifecycle.
- It's the way forward. MongoDB is not discontinuing the Java SDK anytime soon, but Kotlin provides the engineering
team more resources to implement cooler things going forward. A few of them have been implemented already. Why wouldn't
you want to benefit from them?
Are you on board? I hope you are, because through the rest of this article, I'm going to tell you how to upgrade your
projects to use the Realm Kotlin SDK and take advantage of those benefits that I have just mentioned and some more. You
can also find a complete code example in this repo.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now: Deploy Sample for Free!
## Gradle build files
First things first. You need to make some changes to your `build.gradle` files to get access to the Realm Kotlin SDK
within your project, instead of the Realm Java SDK that you were using. The Realm Kotlin SDK uses a gradle plugin that
has been published in the Gradle Plugin Portal, so the preferred way
to add it to your project is using the plugins section of the build configuration of the module —i.e.,
`app/build.gradle`— instead of the legacy method of declaring the dependency in the `buildscript` block of the
project `build.gradle`.
After replacing the plugin in the module configuration with the Kotlin SDK one, you need to add an implementation
dependency to your module. If you want to use Sync with your MongoDB
cluster, then you should use `'io.realm.kotlin:library-sync'`, but if you just want to have local persistence, then
`'io.realm.kotlin:library-base'` should be enough. Also, it's no longer needed to have a `realm` dsl section in the
`android` block to enable sync.
### `build.gradle` Comparison
#### Java SDK
```kotlin
buildscript {
// ...
dependencies {
classpath "com.android.tools.build:gradle:$agp_version"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
// Realm Plugin
classpath "io.realm:realm-gradle-plugin:10.10.1"
}
}
```
#### Kotlin SDK
```kotlin
buildscript {
// ...
dependencies {
classpath "com.android.tools.build:gradle:$agp_version"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
}
}
```
### `app/build.gradle` Comparison
#### Java SDK
```kotlin
plugins {
id 'com.android.application'
id 'org.jetbrains.kotlin.android'
id 'kotlin-kapt'
id 'realm-android'
}
android {
// ...
realm {
syncEnabled = true
}
}
dependencies {
// ...
}
```
#### Kotlin SDK
```kotlin
plugins {
id 'com.android.application'
id 'org.jetbrains.kotlin.android'
id 'kotlin-kapt'
id 'io.realm.kotlin' version '0.10.0'
}
android {
// ...
}
dependencies {
// ...
implementation("io.realm.kotlin:library-base:0.10.0")
}
```
If you have more than one module in your project and want to pin the version number of the plugin for all of them, you
can define the plugin in the project `build.gradle` with the desired version and the attribute `apply false`. Then, the
`build.gradle` files of the modules should use the same plugin id, but without the version attribute.
### Multimodule Configuration
#### Project build.gradle
```kotlin
plugins {
// ...
id 'io.realm.kotlin' version '0.10.0' apply false
}
```
#### Modules build.gradle
```kotlin
plugins {
// ...
id 'io.realm.kotlin'
}
```
## Model classes
Kotlin scope functions (i.e., `apply`, `run`, `with`, `let`, and `also`) make object creation and manipulation easier.
That was already available when using the Java SDK from kotlin, because they are provided by the Kotlin language itself.
Defining a model class is even easier with the Realm Kotlin SDK. You are not required to make the model class `open`
anymore. The Java SDK was using the Kotlin Annotation Processing Tool to derive proxy classes that took care of
interacting with the persistence. Instead, the Kotlin SDK uses the `RealmObject` interface as a marker for the plugin.
In the construction process, the plugin identifies the objects that are implementing the marker interface and injects the
required functionality to interact with the persistence. So, that's another change that you have to put in place:
instead of making your model classes extend —i.e., inherit— from `RealmObject`, you just have to implement the interface
with the same name. In practical terms, this means using `RealmObject` in the class declaration without parentheses.
### Model Class Definition
#### Java SDK
```kotlin
open class ExpenseInfo : RealmObject() {
@PrimaryKey
var expenseId: String = UUID.randomUUID().toString()
var expenseName: String = ""
var expenseValue: Int = 0
}
```
#### Kotlin SDK
```kotlin
class ExpenseInfo : RealmObject {
@PrimaryKey
var expenseId: String = UUID.randomUUID().toString()
var expenseName: String = ""
var expenseValue: Int = 0
}
```
There are also changes in terms of the type system. `RealmList` that was used in the Java SDK to model one-to-many
relationships is extended in the Kotlin SDK to benefit from the typesystem and allow expressing nullables in those
relationships. So, now you can go beyond `RealmList<String>` and use `RealmList<String?>`. You will get all the
benefits of the syntax sugar to mean that the strings the object is related to might be null. You can check this and
the rest of the supported types in the documentation of the Realm Kotlin SDK.
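As a hedged sketch of what that could look like on the model from this article, the `tags` field below is purely
illustrative, and the `realmListOf()` initializer is the list helper available in recent releases of the SDK
(imports omitted, matching the listings above), so double-check the exact API of the version you're on:

```kotlin
class ExpenseInfo : RealmObject {
    @PrimaryKey
    var expenseId: String = UUID.randomUUID().toString()
    var expenseName: String = ""
    var expenseValue: Int = 0
    // A one-to-many relationship whose elements are allowed to be null
    var tags: RealmList<String?> = realmListOf()
}
```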
## Opening (and closing) the realm
Using Realm is even easier now. The explicit initialization of the library that was required by the Realm Java SDK is
not needed for the Realm Kotlin SDK. You did that invoking `Realm.init()` explicitly. And in order to ensure that you
did that once and at the beginning of the execution of your app, you normally put that line of code in the `onCreate()`
method of the `Application` subclass. You can forget about that chore for good.
The configuration of the Realm in the Kotlin SDK requires passing the list of object model classes that conform the
schema, so the `builder()` static method has that as the argument. The Realm Kotlin SDK also allows setting the logging
level per configuration, should you use more than one. The rest of the configuration options remain the same.
It's also different the way you get an instance of a Realm when you have defined the configuration that you want to
use. With the Java SDK, you had to get access to a thread singleton using one of the static methods
`Realm.getInstance()` or `Realm.getDefaultInstance()` (the latter when a default configuration was being set and used).
In most cases, that instance was used and released, by invoking its `close()` method, at the end of the
Activity/Fragment lifecycle. The Kotlin SDK allows you to use the static method `open()` to get a single instance of a
Realm per configuration. Then you can inject it and use it everywhere you need it. This change takes the burden of
Realm lifecycle management off from the shoulders of the developer. That is huge! Lifecycle management is often
painful and sometimes difficult to get right.
### Realm SDK Initialization
#### Java SDK
```kotlin
class ExpenseApplication : Application() {
override fun onCreate() {
super.onCreate()
Realm.init(this)
val config = RealmConfiguration.Builder()
.name("expenseDB.db")
.schemaVersion(1)
.deleteRealmIfMigrationNeeded()
.build()
Realm.setDefaultConfiguration(config)
// Realms can now be obtained with Realm.getDefaultInstance()
}
}
```
#### Kotlin SDK
```kotlin
class ExpenseApplication : Application() {
lateinit var realm: Realm
override fun onCreate() {
super.onCreate()
val config = RealmConfiguration
.Builder(schema = setOf(ExpenseInfo::class))
.name("expenseDB.db")
.schemaVersion(1)
.deleteRealmIfMigrationNeeded()
.log(LogLevel.ALL)
.build()
realm = Realm.open(configuration = config)
// This realm can now be injected everywhere
}
}
```
Objects in the Realm Kotlin SDK are now frozen to directly integrate seamlessly into Kotlin coroutine and flows. That
means that they are not live as they used to be in the Realm Java SDK and don't update themselves when they get changed
in some other part of the application or even in the cloud. Instead, you have to modify them within a write
transaction, i.e., within a `write` or `writeBlocking` block. When the scope of the block ends, the objects are frozen
again.
Even better, the realms aren't confined to a thread. No more thread singletons. Instead, realms are thread-safe, so
they can safely be shared between threads. That means that you don't need to be opening and closing realms for the
purpose of using them within a thread. Get your Realm and use it everywhere in your app. Say goodbye to all those
lifecycle management operations for the realms!
Finally, if you are injecting dependencies of your application, with the Realm Kotlin SDK, you can have a singleton for
the Realm and let the dependency injection framework do its magic and inject it in every view-model. That's much easier
and more efficient than having to create one each time —using a factory, for example— and ensuring that the
close method was called wherever it was injected.
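For instance, if you happened to be using Hilt, a module along the lines of the following sketch would expose the
single realm to the rest of the app. The module name is made up, and the configuration simply reuses the one from
the earlier example:

```kotlin
@Module
@InstallIn(SingletonComponent::class)
object RealmModule {

    @Provides
    @Singleton
    fun provideRealm(): Realm {
        // One configuration, one realm, shared by every consumer that injects it
        val config = RealmConfiguration
            .Builder(schema = setOf(ExpenseInfo::class))
            .name("expenseDB.db")
            .build()
        return Realm.open(configuration = config)
    }
}
```

A view-model can then simply take the `Realm` as a constructor parameter instead of opening and closing instances
itself.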
## Writing data
It took a while, but Kotlin brought coroutines to Android and we have learned to use them and enjoy how much easier
they make doing asynchronous things. Now, it seems that coroutines are _the way_ to do those things and we would like
to use them to deal with operations that might affect the performance of our apps, such as dealing with the persistence
of our data.
Support for coroutines and flows is built-in in the Realm Kotlin SDK as a first-class citizen of the API. You no longer
need to insert write operations in suspending functions to benefit from coroutines. The `write {}` method of a realm is
a suspending method itself and can only be invoked from within a coroutine context. No worries here, since the compiler
will complain if you try to do it outside of a context. But with no extra effort on your side, you will be performing
all those expensive IO operations asynchronously. Ain't that nice?
You can still use the `writeBlocking {}` of a realm, if you need to perform a synchronous operation. But, beware that,
as the name states, the operation will block the current thread. Android might not be very forgiving if you block the
main thread for a few seconds, and it'll present the user with the undesirable "Application Not Responding" dialog. Please,
be mindful and use this **only when you know it is safe**.
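To make the suspending variant concrete, here is a minimal sketch of inserting a new `ExpenseInfo`. The
`copyToRealm` call is what persists an unmanaged object inside the transaction, and the expense values are made up:

```kotlin
viewModelScope.launch(Dispatchers.IO) {
    realm.write {
        // copyToRealm persists the unmanaged object and returns the managed copy
        copyToRealm(ExpenseInfo().apply {
            expenseName = "Coffee"
            expenseValue = 3
        })
    }
}
```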
Another advantage of the Realm Kotlin SDK is that, thanks to having the objects frozen in the realm, we can
make asynchronous transactions easier. In the Java SDK, we had to find the object we wanted to modify again inside of
the transaction block, so it was obtained from the realm that we were using on that thread. The Kotlin SDK makes that
much simpler by using `findLatest()` with that object to get its instance in the mutable realm and then apply the
changes to it.
### Asynchronous Transaction Comparison
#### Java SDK
```kotlin
realm.executeTransactionAsync { bgRealm ->
val result = bgRealm.where(ExpenseInfo::class.java)
.equalTo("expenseId", expenseInfo.expenseId)
.findFirst()
result?.let {
result.deleteFromRealm()
}
}
```
#### Kotlin SDK
```kotlin
viewModelScope.launch(Dispatchers.IO) {
realm.write {
findLatest(expenseInfo)?.also {
delete(it)
}
}
}
```
## Queries and listening to updates
One thing where Realm shines is when you have to retrieve information from it. Data is obtained concatenating three
operations:
1. Creating a RealmQuery for the object class that you are interested in.
2. Optionally adding constraints to that query, like expected values or acceptable ranges for some attributes.
3. Executing the query to get the results from the realm. Those results can be actual objects from the realm, or
aggregations of them, like the number of matches in the realm that you get when you use `count()`.
The Realm Kotlin SDK offers you a new query system where each of those steps has been simplified.
The queries in the Realm Java SDK used filters on the collections returned by the `where` method. The Kotlin SDK offers
the `query` method instead. This method takes a type parameter using generics, instead of the explicit type parameter
taken as an argument of `where` method. That is easier to read and to write.
The constraints that allow you to narrow down the query to the results you care about are implemented using a predicate
as the optional argument of the `query()` method. That predicate can have multiple constraints concatenated with
logical operators like `AND` or `OR`, and even subqueries, which are a major superpower that will boost your ability to
query the data.
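As a hedged illustration of such a predicate, using the model from this article and made-up values, a query
combining two constraints with positional arguments could be built like this:

```kotlin
val query = realm.query<ExpenseInfo>(
    "expenseName == $0 AND expenseValue > $1",
    "Coffee",
    3
)
```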
Finally, you will execute the query to get the data. In most cases, you will want that to happen in the background so
you are not blocking the main thread. If you also want to be aware of changes on the results of the query, not just the
initial results, it's better to get a flow. That required two steps in the Java SDK. First, you had to use
`findAllAsync()` on the query, to get it to work in the background, and then convert the results into a flow with the
`toFlow()` method. The new system simplifies things greatly, providing you with the `asFlow()` method that is a
suspending function of the query. There is no other step. Coroutines and flows are built-in from the beginning in the
new query system.
### Query Comparison
#### Java SDK
```kotlin
private fun getAllExpense(): Flow<RealmResults<ExpenseInfo>> =
realm.where(ExpenseInfo::class.java).greaterThan("expenseValue", 0).findAllAsync().toFlow()
```
#### Kotlin SDK
```kotlin
private fun getAllExpense(): Flow<RealmResults<ExpenseInfo>> =
realm.query("expenseValue > 0").asFlow()
```
As it was the case when writing to the Realm, you can also use blocking operations when you need them, invoking `find()`
on the query. And also in this case, use it **only when you know it is safe**.
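To close the loop, here is a minimal sketch of how the flow returned by `getAllExpense()` might be consumed from a
view-model so the UI keeps receiving updates. What you do with each emission is up to your own state handling:

```kotlin
viewModelScope.launch {
    getAllExpense().collect { results ->
        // React to every update here, e.g., by pushing `results` into a
        // LiveData or MutableStateFlow that your UI observes
    }
}
```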
## Conclusion
You're probably not reading this, because if I were you, I would be creating a branch in my project and trying the
Realm Kotlin SDK already and benefiting from all these wonderful changes. But just in case you are, let me summarize the
most relevant changes that the Realm Kotlin SDK provides you with:
- The configuration of your project to use the Realm Kotlin SDK is easier, uses more up-to-date mechanisms, and is more
explicit.
- Model classes are simpler to define and more idiomatic.
- Working with the realm is much simpler because it requires less ceremonial steps that you have to worry about and
plays better with coroutines.
- Working with the objects is easier even when doing things asynchronously, because they're frozen, and that helps you
to do things safely.
- Querying is enhanced with simpler syntax, predicates, and suspending functions and even flows.
Time to code!
| md | {
"tags": [
"Realm",
"Kotlin"
],
"pageDescription": "This is a guide to help you migrate you apps that are using the Realm Java SDK to the newer Realm Kotlin SDK. It covers the most important changes that you need to put in place to use the Kotlin SDK.",
"contentType": "Article"
} | Migrating Android Apps from Realm Java SDK to Kotlin SDK | 2024-05-20T17:32:23.500Z |